
Monday, December 20, 2021

What is Practical AI Understanding?

 Looking forward to reading this; Quanta Magazine is usually good and technically mid-level in its information delivery ... Yes ... understanding in context is the most important thing, and thus precise calibration to the need at hand is important.

What Does It Mean for AI to Understand?

By Quanta Magazine, December 20, 2021

Even simple chatbots, such as Joseph Weizenbaum's 1960s ersatz psychotherapist Eliza, have fooled people into believing they were conversing with an understanding being, even when they knew that their conversation partner was a machine.

Remember IBM's Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed, "Watson understands natural language with all its ambiguity and complexity." However, as we saw when Watson subsequently failed spectacularly in its quest to "revolutionize medicine with artificial intelligence," a veneer of linguistic facility is not the same as actually comprehending human language.

Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction or anything else humans might write. This approach, as Watson showed, was futile — it's impossible to write down all the unwritten facts, rules and assumptions required for understanding text. More recently, a new paradigm has been established: Instead of building in explicit knowledge, we let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words. The result is what researchers call a language model. When based on large neural networks, like OpenAI's GPT-3, such models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning.
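To make the "learning to predict words" idea concrete, here is a minimal illustrative sketch in Python. It is not GPT-3 or any model discussed in the article; it is just a toy bigram predictor over a made-up corpus (the names corpus, next_counts, predict_next, and generate are invented for this example). Real language models use enormous neural networks, but the training signal is the same in miniature: ingest raw text and learn which words tend to follow which.

    # Toy sketch of the language-model idea: count which word follows which,
    # then use those counts to predict or generate the next word.
    import random
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    )

    # "Training": tally how often each word follows each preceding word.
    tokens = corpus.split()
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        next_counts[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent next word seen after `word` in training."""
        counts = next_counts.get(word)
        return counts.most_common(1)[0][0] if counts else None

    def generate(start, length=8):
        """Generate text by repeatedly sampling a likely next word."""
        out = [start]
        for _ in range(length):
            counts = next_counts.get(out[-1])
            if not counts:
                break
            words, weights = zip(*counts.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(predict_next("cat"))   # e.g. "sat" or "chased"
    print(generate("the"))       # fluent-looking but shallow text

Even this trivial model produces locally plausible word sequences, which is exactly the article's point: statistical fluency at predicting words is not the same thing as understanding what the words mean.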

But has GPT-3 — trained on text from thousands of websites, books and encyclopedias — transcended Watson's veneer? Does it really understand the language it generates and ostensibly reasons about? This is a topic of stark disagreement in the AI research community. Such discussions used to be the purview of philosophers, but in the past decade AI has burst out of its academic bubble into the real world, and its lack of understanding of that world can have real and sometimes devastating consequences. In one study, IBM's Watson was found to propose "multiple examples of unsafe and incorrect treatment recommendations." Another study showed that Google's machine translation system made significant errors when used to translate medical instructions for non-English-speaking patients. ... (full article at the link below)

From Quanta Magazine

View Full Article
