More on the recently announced Google chatbot. As I have said before, we often experience intelligence in our day-to-day world as conversation: person to person, person to machine, person to assistant, even person to document. We adjust our expectations based on 'who' is communicating. But there is also the matter of context; if it is not well understood, the apparent intelligence can be poor. Consider 'making sense' to be a primary measure of achieving a conversational goal. Making sense, common or otherwise, needs a firm contextual basis. Evidence here:
Artificial intelligence: Does another huge language model prove anything? By Ben Dickson in TechTalks
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
This week, Google introduced Meena, a chatbot that can “chat about… anything.” Meena is the latest of many efforts by large tech companies trying to solve one of the toughest challenges of artificial intelligence: language.
“Current open-domain chatbots have a critical flaw — they often don’t make sense. They sometimes say things that are inconsistent with what has been said so far, or lack common sense and basic knowledge about the world,” Google’s researchers wrote in a blog post.
They’re right. Making sense of language and engaging in conversations is one of the most complicated functions of the human brain. Until now, most efforts to create AI that can understand language, engage in meaningful conversations, and generate coherent excerpts of text have yielded poor-to-modest results. ...
Tuesday, February 11, 2020