
Monday, March 11, 2019

On the Limits of Deep Learning

Very perceptive. Deep learning is not AI, and many challenges still exist.

What are the limits of deep learning? The original paper is linked below.

Proceedings of the National Academy of Sciences, by M. Mitchell Waldrop

The much-ballyhooed artificial intelligence approach boasts impressive feats but still falls short of human brainpower. Researchers are determined to figure out what’s missing.

There’s no mistaking the image: It’s a banana — a big, ripe, bright-yellow banana. Yet the artificial intelligence (AI) identifies it as a toaster, even though it was trained with the same powerful and oft-publicized deep-learning techniques that have produced a white-hot revolution in driverless cars, speech understanding, and a multitude of other AI applications. That means the AI was shown several thousand photos of bananas, slugs, snails, and similar-looking objects, like so many flash cards, and then drilled on the answers until it had the classification down cold. And yet this advanced system was quite easily confused — all it took was a little Day-Glo sticker, digitally pasted in one corner of the image.

Apparent shortcomings in deep-learning approaches have raised concerns among researchers and the general public as technologies such as driverless cars, which use deep-learning techniques to navigate, get involved in well-publicized mishaps.  

This example of what deep-learning researchers call an “adversarial attack,” discovered by the Google Brain team in Mountain View, CA, highlights just how far AI still has to go before it remotely approaches human capabilities. “I initially thought that adversarial examples were just an annoyance,” says Geoffrey Hinton, a computer scientist at the University of Toronto and one of the pioneers of deep learning. “But I now think they’re probably quite profound. They tell us that we’re doing something wrong.”
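As a rough illustration of the kind of fragility Hinton is describing, here is a minimal sketch of a one-step, gradient-based adversarial perturbation (the "fast gradient sign method"), a simpler cousin of the sticker-style patch attack mentioned above, not the Google Brain team's actual method. The toy model, the random "banana" input, the epsilon value, and the fgsm_perturb helper are all hypothetical placeholders for illustration; the sketch assumes PyTorch is available.

# Minimal sketch of a one-step gradient-based adversarial perturbation (FGSM).
# Assumptions: PyTorch is installed; the model, input, and epsilon below are
# hypothetical placeholders, not the setup described in the article.
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy image classifier (untrained, for illustration only).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1000),  # 1000 ImageNet-style output classes
)
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.03):
    """Return an adversarial image nudged one small step in the
    direction that increases the classification loss (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()
    # Shift each pixel slightly along the sign of its gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical "banana" image and label (random data for this sketch;
# 954 is the ImageNet class index for "banana").
banana = torch.rand(1, 3, 224, 224)
label = torch.tensor([954])

adv = fgsm_perturb(banana, label)
print("clean prediction:", model(banana).argmax(dim=1).item())
print("adversarial prediction:", model(adv).argmax(dim=1).item())

Even a perturbation this small, applied to every pixel or confined to a patch, is often enough to flip a trained classifier's label, which is why researchers like Hinton now see adversarial examples as profound rather than a mere annoyance.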

That’s a widely shared sentiment among AI practitioners, any of whom can easily rattle off a long list of deep learning’s drawbacks. In addition to its vulnerability to spoofing, for example, there is its gross inefficiency. “For a child to learn to recognize a cow,” says Hinton, “it’s not like their mother needs to say ‘cow’ 10,000 times” — a number that’s often required for deep-learning systems. Humans generally learn new concepts from just one or two examples.

Then there’s the opacity problem. Once a deep-learning system has been trained, it’s not always clear how it’s making its decisions. “In many contexts that’s just not acceptable, even if it gets the right answer,” says David Cox, a computational neuroscientist who heads the MIT-IBM Watson AI Lab in Cambridge, MA. Suppose a bank uses AI to evaluate your credit-worthiness and then denies you a loan: “In many states there are laws that say you have to explain why,” he says.

And perhaps most importantly, there’s the lack of common sense. Deep-learning systems may be wizards at recognizing patterns in the pixels, but they can’t understand what the patterns mean, much less reason about them. “It’s not clear to me that current systems would be able to see that sofas and chairs are for sitting,” says Greg Wayne, an AI researcher at DeepMind, a London-based subsidiary of Google’s parent company, Alphabet. ... "

https://www.pnas.org/content/116/4/1074
