
Saturday, March 25, 2023

Large Language Models: A Cognitive and Neuroscience Perspective

Irving provides an excellent review, with links to much work on large language models (LLMs) and other topics that are now much in the news. An introduction is below. I plan to read all the articles pointed to at the link. A considerable weakness in the current directions? I will provide implications of all this.

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects. By Irving Wladawsky-Berger, March 23, 2023

Large Language Models: A Cognitive and Neuroscience Perspective

Over the past few decades, powerful AI systems have matched or surpassed human levels of performance in a number of tasks such as image and speech recognition, skin cancer classification, breast cancer detection, and highly complex games like Go. These AI breakthroughs have been based on increasingly powerful and inexpensive computing technologies, innovative deep learning (DL) algorithms, and huge amounts of data on almost any subject. More recently, the advent of large language models (LLMs) is taking AI to the next level. And, for many technologists like me, LLMs and their associated chatbots have introduced us to the fascinating world of human language and cognition.

I recently learned the difference between form, communicative intent, meaning, and understanding from “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data,” a 2020 paper by linguistics professors Emily Bender and Alexander Koller. These linguistic concepts helped me understand the authors’ argument that “in contrast to some current hype, meaning cannot be learned from form alone. This means that even large language models such as BERT do not learn meaning; they learn some reflection of meaning into the linguistic form which is very useful in applications.”
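To make the form-versus-meaning point concrete, here is a minimal sketch of what a model like BERT actually does: predict a masked word from the surrounding text alone. It assumes the Hugging Face transformers library and the pretrained bert-base-uncased checkpoint; the example sentence is my own illustration, not taken from the paper.

from transformers import pipeline

# BERT is trained only to predict masked tokens from surrounding text,
# that is, from linguistic form, with no grounded access to meaning.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The completions look meaningful, but they are scored purely from
# distributional patterns in the training text.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))

Fluent output from such a pipeline is exactly what Bender and Koller describe: a useful reflection of meaning in linguistic form, not meaning itself.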

A few weeks ago, I came across another interesting paper, “Dissociating Language and Thought in Large Language Models: a Cognitive Perspective,” published in January 2023 by principal authors linguist Kyle Mahowald and cognitive neuroscientist Anna Ivanova, along with four additional co-authors. The paper nicely explains how the study of human language, cognition, and neuroscience sheds light on the potential capabilities of LLMs and chatbots. Let me briefly discuss what I learned.

“Today’s large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text,” said the paper’s abstract. “This achievement has led to speculation that these networks are — or will soon become — thinking machines, capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: formal linguistic competence, which includes knowledge of rules and patterns of a given language, and functional linguistic competence, a host of cognitive abilities required for language understanding and use in the real world.”
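The paper’s formal/functional distinction can be probed empirically. As a hedged sketch, the snippet below reuses the same Hugging Face fill-mask pipeline (the test sentence is my own illustration) to check a piece of formal competence: whether BERT prefers the verb form that agrees with the subject, a grammatical pattern that is learnable from form alone.

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Restrict scoring to two candidate verb forms; formal linguistic
# competence predicts a higher score for the agreeing form "are"
# (the subject is "keys", not the nearer noun "cabinet").
for candidate in fill_mask(
    "The keys to the cabinet [MASK] on the table.",
    targets=["are", "is"],
):
    print(candidate["token_str"], round(candidate["score"], 3))

Functional competence, by contrast, involves the reasoning and real-world knowledge the abstract describes, and the authors argue it cannot be read off such form-based tests.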

The authors point out that there’s a tight relationship between language and thought in humans. When we hear or read a sentence, we typically assume that it was produced by a rational person based on their real-world knowledge, critical thinking, and reasoning abilities. We generally view other people’s statements not just as a reflection of their linguistic skills, but as a window into their mind. ...

