The problem is the term AI itself, and the assumption that it is far more than it is. That does not mean you should not think about how smarter capabilities could be inserted into code to augment our own. For a while we used the term 'Cognitive Systems' to indicate methods closer to, or even mimicking, human perception and abilities. It would probably be better to ditch 'AI' and go with 'Cognitive', though even the latter requires too much explanation and can be over-emphasized. Our Cognitive Systems Institute, mentioned here, attempts to emphasize the cognitive aspects. Beware over-marketing.
Artificial intelligence is often overhyped—and here’s why that’s dangerous
AI has huge potential to transform our lives, but the term itself is being abused in very worrying ways, says Zachary Lipton, an assistant professor at Carnegie Mellon University.
by Martin Giles
To those with long memories, the hype surrounding artificial intelligence is becoming ever more reminiscent of the dot-com boom.
Billions of dollars are being invested in AI startups and AI projects at giant companies. The trouble, says Zachary Lipton, is that the opportunity is being overshadowed by opportunists making overblown claims about the technology’s capabilities.
During a talk at MIT Technology Review’s EmTech conference today, Lipton warned that the hype is blinding people to its limitations. “It’s getting harder and harder to distinguish what’s a real advance and what is snake oil,” he said.
AI technology known as deep learning has proved very powerful at performing tasks like image recognition and voice translation, and it’s now helping to power everything from self-driving cars to translation apps on smartphones.
But the technology still has significant limitations. Many deep-learning models only work well when fed vast amounts of data, and they often struggle to adapt to fast-changing real-world conditions.
In his presentation, Lipton also highlighted the tendency of AI boosters to claim human-like capabilities for the technology. The risk is that the AI bubble will lead people to place too much faith in algorithms governing things like autonomous vehicles and clinical diagnoses.
“Policymakers don’t read the scientific literature,” warned Lipton, “but they do read the clickbait that goes around.” The media business, he says, is complicit here because it’s not doing a good enough job of distinguishing between real advances in the field and PR fluff.
Lipton isn’t the only academic sounding the alarm: in a recent blog post, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Michael Jordan, a professor at the University of California, Berkeley, says that AI is all too often bandied about as “an intellectual wildcard,” and this makes it harder to think critically about the technology’s potential impact.