
Tuesday, May 09, 2023

Theory of Mind (ToM) Responses Essential, Improved

Theory of Mind (ToM) means the degree to which you can determine the mental state of the person (or thing) you are talking to.  It comes into play even in interactions driven by simple decision trees.   What is the caller's goal?  What technical terms do they, or can they, understand?    What is the cost or danger of a particular recommendation?  ...  all as these relate to the continuing interaction.
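As a rough illustration of the idea (my sketch, not from the article quoted below): even a scripted decision tree can keep a tiny model of the caller and update it as the interaction proceeds. The class name and keyword heuristics here are invented for the example.

```python
# Illustrative sketch: a minimal "caller model" a decision tree might maintain,
# mirroring the ToM questions above. Names and heuristics are hypothetical.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class CallerModel:
    goal: Optional[str] = None                           # What is the caller's goal?
    known_terms: Set[str] = field(default_factory=set)   # Terms they understand
    risk_tolerance: str = "unknown"                      # Cost/danger sensitivity

    def update(self, utterance: str) -> None:
        # Naive keyword matching stands in for real inference here.
        text = utterance.lower()
        if "refund" in text:
            self.goal = "refund"
        if "router" in text:
            self.known_terms.add("router")
        if "cheapest" in text:
            self.risk_tolerance = "cost-sensitive"

model = CallerModel()
model.update("My router keeps dropping; I want the cheapest fix or a refund.")
print(model)  # goal='refund', known_terms={'router'}, risk_tolerance='cost-sensitive'
```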

A New AI Research from Johns Hopkins Explains How AI Can Perform Better at Theory of Mind Tests than Actual Humans   By Aneesh Tickoo, May 3, 2023

One might wonder what kinds of everyday situations large language models (LLMs) can reason about. Although LLMs have achieved great success on many tasks, they still struggle with tasks that call for reasoning. So-called “theory of mind” (ToM) reasoning, which entails keeping track of an agent’s mental state, including their goals and knowledge, is one area of particular interest. Language models’ ability to answer common questions correctly has improved substantially; their theory-of-mind performance, however, remains subpar.

In this study, researchers from Johns Hopkins University test the idea that proper prompting can improve LLMs’ ToM performance. 
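To make concrete what a ToM test item looks like (the paper’s own materials are not reproduced here), the classic false-belief setup can be posed to a model as a plain prompt. The scenario below is an illustrative example in that style, not taken from the study.

```python
# A classic false-belief item of the kind ToM benchmarks use (illustrative only).
scenario = (
    "Anna puts her chocolate in the drawer and leaves the room. "
    "While she is gone, Ben moves the chocolate to the cupboard. "
    "Anna comes back to get her chocolate."
)
question = "Where will Anna look for the chocolate first?"

# Answering "the drawer" requires tracking Anna's now-false belief,
# not the chocolate's actual location -- the crux of a ToM test.
prompt = f"{scenario}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```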

For several reasons, LLMs must be capable of reliable ToM reasoning:

ToM is a crucial component of social knowledge, enabling individuals to take part in complex social interactions and foresee the actions or reactions of others.

ToM is a complicated cognitive ability, most highly developed in humans and a few other species, perhaps because ToM relies on structured relational information. The ability to infer the thoughts and beliefs of agents will be useful for models that interact with social data and with people.

Inferential reasoning is frequently used in ToM tasks. 


Approaches to in-context learning can improve LLMs’ ability to reason. For instance, to succeed at ToM, LLMs must reason over unobservable information (such as actors’ concealed mental states) that must be inferred from context rather than parsed from the surface text (such as an explicit statement of a situation’s attributes). Evaluating and enhancing these models’ performance on ToM tasks may therefore provide insight into their potential for inferential reasoning more broadly. Researchers have shown that for sufficiently large language models (100B+ parameters), performance can be improved by supplying just a small number of task demonstrations entirely through the model’s input (i.e., at inference time, without weight updates).
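A minimal sketch of what few-shot prompting amounts to in practice: the task demonstrations live entirely in the input string, and no weights change. The demonstration, query, and helper function below are invented for illustration.

```python
# Few-shot prompting sketch: demonstrations are supplied purely in the input
# (inference time only, no weight updates). All content here is hypothetical.
demonstrations = [
    ("Mia puts her keys in her coat. Leo moves them to the bowl while she "
     "showers. Where will Mia look for her keys?",
     "In her coat."),
]

def build_few_shot_prompt(demos, query):
    # Concatenate Q/A demonstrations, then append the unanswered query.
    parts = [f"Q: {q}\nA: {a}" for q, a in demos]
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

query = ("Anna puts her chocolate in the drawer. Ben moves it to the "
         "cupboard while she is out. Where will Anna look first?")
print(build_few_shot_prompt(demonstrations, query))
```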

The term “few-shot learning” is commonly used to describe this kind of performance improvement. Later studies demonstrated that LLMs’ capacity for complex reasoning improved when the few-shot examples in the prompt included the steps taken to reach the conclusion (“chain-of-thought reasoning”). Furthermore, it has been demonstrated that instructing language models to think “step by step” improves their reasoning even without exemplar demonstrations. The benefits of these various prompting strategies are not yet understood theoretically, but several recent studies have investigated how the compositional structure and local dependencies of training data affect the efficacy of these methods.
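The two strategies named above can be sketched as plain prompt strings: a chain-of-thought demonstration spells out the intermediate reasoning steps, while the zero-shot “step-by-step” variant simply instructs the model to reason aloud with no demonstrations at all. Both prompts are invented examples, not items from the paper.

```python
# (1) Chain-of-thought few-shot: the demonstration includes reasoning steps.
cot_demo = (
    "Q: Mia puts her keys in her coat. Leo moves them to the bowl while she "
    "showers. Where will Mia look for her keys?\n"
    "A: Mia last saw the keys in her coat. She did not see Leo move them, "
    "so her belief is unchanged. She will look in her coat."
)
query = ("Anna puts her chocolate in the drawer. Ben moves it to the "
         "cupboard while she is out. Where will Anna look first?")
cot_prompt = f"{cot_demo}\n\nQ: {query}\nA:"

# (2) Zero-shot step-by-step: no demonstrations, just an instruction to reason.
step_by_step_prompt = f"Q: {query}\nA: Let's think step by step."

print(cot_prompt)
print("---")
print(step_by_step_prompt)
```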
