Top Tech Talent Warns of AI's Threat to Human Existence in Open Letter

Elon Musk, Steve Wozniak, and Andrew Yang are among more than 1,000 tech leaders asking for time to establish human safety parameters around AI.

OpenAI, ChatGPT illustration
Source: SOPA Images Limited via Alamy Stock Photo

More than 1,000 of technology's most prominent names, including Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and politician Andrew Yang, have signed an open letter urging AI pioneers to pump the brakes on the AI development race because of its potential danger to humanity.

"Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable," the open letter, published on the Future of Life Institute site said, in part. "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

The potential danger of unchecked training of large language models (LLMs), as outlined in the letter, is nothing less than humans being fully replaced by more intelligent AI systems.

Real-Life Skynet? Parsing AI's Danger to Humanity

"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" the letter asks. "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?"

Possible harm from advanced AI is a worry even for its proponents: Greg Brockman, co-founder and president of OpenAI, the company behind ChatGPT, recently told a crowd at SXSW that he was concerned about AI's ability to both spread disinformation and launch cyberattacks. But those worries are a far cry from fears that AI could become sentient.

To be clear, the six-month pause is intended to give policymakers and AI safety researchers time to put safety parameters around the technology, the letter explained. The "Pause Giant AI Experiments" letter stresses that the group is not calling for a halt to all AI development; rather, it wants developers to stop rushing to roll out new capabilities without fully understanding their potential harm.

Even so, skeptics might look at CEOs like Musk, who has potential commercial interests at stake in slowing OpenAI's development of GPT-5, and dismiss the Pause AI Open Letter as little more than a public relations ploy.

"We have to be a little suspicious of the intentions here — many of the authors of the letter have commercial interests in their own companies getting a chance to catch up with OpenAI's progress," Chris Doman, CTO of Cado Security said in a statement provided to Dark Reading. "Frankly, it's likely that the only company currently training an AI system more powerful than GPT-4 is OpenAI, as they are currently training GPT-5."

Will the "Pause AI" Open Letter Do Anything?

Beyond the celebrity names, the varied backgrounds and public points of view of the signatories make the letter worth taking seriously, according to Dan Shiebler, researcher with Abnormal Security. Indeed, the signatories include some of the brightest academic minds in the AI field, including John Hopfield, professor emeritus at Princeton University and inventor of associative neural networks, and Max Tegmark, professor of physics at MIT's Center for Artificial Intelligence & Fundamental Interactions.

"The interesting thing about this letter is how diverse the signers and their motivations are," Shiebler said in a statement to Dark Reading. "Elon Musk has been pretty vocal that he believes [artificial general intelligence] (computers figuring out how to make themselves better and therefore exploding in capability) to be an imminent danger, whereas AI skeptics like Gary Marcus are clearly coming to this letter from a different angle." 

Ultimately, however, Shiebler doesn't expect the letter to do anything to slow AI development.

"The cat is out of the bag on these models," Shiebler said. "The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to prepare businesses to use these models safely and securely, not try to stop the clock on their development."

Still, shining a light on the safety and ethics considerations is a good thing, according to John Bambenek, principal threat hunter at Netenrich.

"While it's doubtful that anyone is going to pause anything, there is a growing awareness that consideration of the ethical implications of AI projects is lagging far behind the speed of development," he said via email. "I think it is good to reassess what we are doing and the profound impacts it will have."

About the Author

Becky Bracken, Senior Editor, Dark Reading


Becky Bracken is a veteran multimedia journalist covering cybersecurity for Dark Reading.
