Robot dogs policing the streets of New York. Artificial intelligence platforms that “hallucinate.” Company executives insisting that they want to be part of the solution, not the problem.
We can’t help but wonder: Haven’t we seen this movie before?
As artificial intelligence (AI) moves forward in sometimes promising, sometimes terrifying leaps and bounds, it feels as if we’re in the early scenes of a Terminator movie. And if you think that’s an overreaction, just listen to some of the people at the forefront of this industry and its recent developments.
“I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that,” Sam Altman, the CEO of OpenAI, said at a recent congressional hearing. “We want to work with the government to prevent that from happening.”
Altman’s company is behind the much-publicized ChatGPT, an AI chat tool that can answer questions and generate content. His candor about the potential for things to go wrong with AI technology is appreciated. But the American public, and Congress in particular, should be hesitant to give him and other industry leaders kudos for this obvious recognition. And they certainly shouldn’t hand the industry the keys and let it guide the much-needed conversation about how (not whether) to more tightly regulate this technology.
“His company is the one that put this out there,” Suresh Venkatasubramanian, a professor of computer science and data science at Brown University, told the Daily Beast after the hearing. “He doesn’t get to opine on the dangers of AI. Let’s not pretend that he’s the person we should be listening to on how to regulate.”
As Venkatasubramanian succinctly put it, “We don’t ask arsonists to be in charge of the fire department.”
The dizzying number of AI applications comes with a dizzying number of ways the technology could go wrong — whether through misuse by criminal, corporate or government entities, or an all-out sci-fi machine takeover a la Terminator or The Matrix.
That science fiction seems closer to reality each day. But vague fears about a robot apocalypse don’t need to drive AI skepticism. There is already enough real-world evidence to do that. Artificial intelligence is further weaponizing disinformation. For example, fake photos showing a bombing at the Pentagon, seemingly AI-generated, confused people this week and may even have caused a temporary dip in the stock market.
Political organizations are using AI to blur the line between reality and rhetorical projection. After President Joe Biden announced he is running for reelection, the Republican National Committee released a video that used AI-generated images to depict a bleak, imagined future should Biden be reelected. The images were fake, but they looked real.
Companies are looking at ways AI can replace human workers. Heck, AI might even have been able to write this editorial more convincingly than we did.
So, where do we go from here? We hardly understand some of the technology involved, so we aren’t exactly the people to be designing a stronger regulatory structure for it. But we are quite sure that allowing the industry to shape the rules of the road for itself is not the way to go.
Congress must take care not to reward the arsonists by giving them control of the fire department.