Local computer science experts examine consequences of artificial intelligence

Tuesday, May 12, 2015
Several prominent computer experts have warned that the possibility of machines taking over the world is no longer just science fiction and could have grave consequences.

BERKELEY, Calif. (KGO) -- Research into artificial intelligence is moving fast and this has some of the smartest people in the world worried. In recent weeks, several prominent computer experts have warned that the possibility of machines taking over the world is no longer just science fiction.

Two top movies right now imagine a future with artificial intelligence: machines that out-think people. In "Ex Machina," it's a beautiful robot, and in "Avengers: Age of Ultron," it's a scary-looking creature originally designed to save the world. The future probably won't look like the movies, but the danger that a computer programmed to help us could harm us instead is increasingly real.

"We'll try to make them only do good things, but once we let them free and give them a little bit of leeway, if they ever get smart and start thinking on their own, there's no way to write software perfectly. It's all over, game over," said Apple's co-founder Steve Wozniak. "It's scary because I can see a lot of the thinking I have done in my life is no longer even done by humans."

Wozniak is the latest prominent technology expert to sound the alarm. He joins Microsoft co-founder Bill Gates, physicist Stephen Hawking and Tesla CEO Elon Musk. Musk is so concerned that he donated $10 million to the Future of Life Institute, which was created in part to make sure machines behave themselves.

University of California, Berkeley computer science professor Stuart Russell is working on the problem: "To figure out how to make sure in a mathematically provable way that the robots will be on our side," Russell said.

Russell is one of the world's leading experts on artificial intelligence. He points to the Johnny Depp movie "Transcendence" as an example of what the world might be like with super intelligent machines.

"The machine is able to figure out how to cure blindness and resurrect the dead and cure the planet's environment just in the space of a few weeks," Russell said.

But then the computer connects to the Internet and carries its goals too far. Russell says that scenario is a serious risk in the future. He believes the way to prevent it is to create computers that somehow learn human values and apply them in context.

"Even before we reach the level of a super intelligent system, we will have systems that can read everything that's ever been written and extract information from it," Russell said.

That's still a way off, but various videos show types of machine learning that are already possible. One video shows a robotic helicopter that taught itself very technical maneuvers simply by watching other helicopters controlled by people. Another robot was taught to tie a knot, then built on that to learn more complicated knots. It's not as tough as deciphering human values, but it's a next step in learning how to keep artificial intelligence under control.

"It's a very serious question, not just because of the potential risk, but because of the potential benefits," Russell said.

What was once the stuff of science fiction is now a real-world issue with potentially dangerous consequences, facing scientists who know they won't be bailed out by superheroes. Most experts agree it's impossible to know exactly how long it will be until there are computers with full artificial intelligence. The history of this kind of research is filled with sudden breakthroughs, and there's a growing consensus that it's best to prepare now to ensure machines won't get out of control in the future.

Written and produced by Jennifer Olney.
