
Monday, November 04, 2019

AI Heading off a Cliff?

Thoughts from a well-known AI and computing expert.

Warning! AI Is Heading for a Cliff. By California Magazine via CACM

Asked if the race to achieve superhuman artificial intelligence (AI) was inevitable, Stuart Russell, University of California Berkeley professor of computer science and a leading expert on AI, says yes.

"The idea of intelligent machines is kind of irresistible," he says, and the desire to make intelligent machines dates back thousands of years. Aristotle himself imagined a future in which "the plectrum could pluck itself" and "the loom could weave the cloth." But the stakes of this future are incredibly high. As Russell told his audience during a talk he gave in London in 2013, "Success would be the biggest event in human history … and perhaps the last event in human history."

For better or worse, we're drawing ever closer to that vision. Services like Google Maps and the recommendation engines that drive online shopping sites like Amazon may seem innocuous, but advanced versions of those same algorithms are enabling AI that is more nefarious. (Think doctored news videos and targeted political propaganda.)

AI devotees assure us that we will never be able to create machines with superhuman intelligence. But Russell, who runs Berkeley's Center for Human-Compatible Artificial Intelligence and wrote Artificial Intelligence: A Modern Approach, the standard text on the subject, says we're hurtling toward disaster. In his forthcoming book, Human Compatible: Artificial Intelligence and the Problem of Control, he compares AI optimists to the bus driver who, as he accelerates toward a cliff, assures the passengers they needn't worry—he'll run out of gas before they reach the precipice.

"I think this is just dishonest," Russell says. "I don't even believe that they believe it. It's just a defensive maneuver to avoid having to think about the direction that they're heading."

The problem isn't AI itself, but the way it's designed. Algorithms are inherently Machiavellian; they will use any means to achieve their objective. With the wrong objective, Russell says, the consequences can be disastrous. "It's bad engineering."

Proposing a solution to AI's fundamental "design error" is the goal of Professor Russell's new book, which comes out in October. In advance of publication, we sat down to discuss the state of AI and how we can avoid plunging off the edge.

This conversation has been edited for length and clarity. ...
