Leaders in AI concerned tech could cause risk of 'human extinction'

Zach Fuentes
Wednesday, May 31, 2023

SAN FRANCISCO (KGO) -- There's a new warning Tuesday that says artificial intelligence is raising the risk of extinction. That is according to a statement released by a Bay Area nonprofit, signed by some of the leading figures in AI technology.

"We're concerned that AI could potentially cause the risk of human extinction," said Dan Hendrycks, executive director of Center for AI Safety.

The people at the Bay Area-based nonprofit aren't the only ones who share that concern.


They drafted a one-sentence statement that said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Making the statement even more powerful are the big names who signed it in agreement, including Sam Altman, CEO of OpenAI, the company behind ChatGPT, and Geoffrey Hinton, who has been called the "godfather of artificial intelligence."

"After we got Geoff Hinton, then it became a lot easier," Hendrycks said of garnering support for the statement. "We weren't even anticipating getting many of the industry leaders."

Those industry leaders include high-level executives at Microsoft and Google, along with hundreds of other experts in tech.


Italy, meanwhile, announced that it is temporarily banning the latest version of ChatGPT from San Francisco-based company OpenAI.

"We are in an AI arms race," Hendrycks said. "The companies are needing to compete with each other, they're needing to develop AI as quickly as possible, and they've then put development of AI and making it more powerful over making it safe and understandable and transparent."

But Hendrycks says there are other sources of risk as well.

"Another possibility is that we automate so much of the economy that basically the world is more autonomously run, and we don't know how to do things," he said.

"We become very dependent on them and we're completely subject to them always doing our bidding. If they turn in another direction or pursue something else, then we would be powerless to correct it."

San Jose State professor and tech expert Ahmed Banafa says he also signed the statement, adding that it makes sense that some big names decided to do the same, as they may not want to bear sole responsibility for the technology's risks.

"One big problem here is when you have this kind of technology in the hands of few, when you have few companies controlling this, then those people are going to control the world," Banafa said. "This is why you see the CEO of OpenAI, the one from Google also, and all those startups, that say, 'We don't want to be the only one in the picture. We'd like to get everybody together.' Because if it's in the hands of few, those people can direct technology, direct society, and have tremendous power on how the world is functioning."

Hendrycks says the hope is that the statement and its support will get other leaders and policymakers to recognize the severity of the risks as the technology grows.

"If we treat it like a global priority, and cooperate as we did with nuclear weapons," he said, "we could make the risks substantially more manageable, but we'll need to be proactive about that."
