
Monday, December 12, 2022

A Wearable Reasoner

Reasonable thought, but how well does it work in varying contexts? Does it need to be wearable? Good demo included.

Wearable Reasoner: Towards Enhanced Human Rationality through a Wearable AI Assistant, in the MIT Media Lab

We present "Wearable Reasoner," a proof-of-concept wearable system capable of analyzing if an argument is stated with supporting evidence or not to prompt people to question and reflect on the justification of their own beliefs and the arguments of others.  

In an experimental study, we explored the impact of argumentation mining and the explainability of the AI feedback on users through a verbal statement evaluation task. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements supported by evidence and those without. When assisted by an AI system with explainable feedback, users rated claims given with reasons or evidence as significantly more reasonable than those without. Qualitative interviews reveal users' internal processes of reflecting on and integrating the new information into their judgment and decision making; participants said they were happy to have a second opinion present and emphasized the improved evaluation of presented arguments.
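The paper does not publish the model behind this claim/evidence classification, but the core idea, labeling a heard statement as supported or unsupported, can be sketched with an off-the-shelf zero-shot classifier. The snippet below is a hypothetical approximation; the model name, labels, and output format are assumptions for illustration, not the authors' implementation.

from transformers import pipeline

# Hypothetical sketch: a zero-shot classifier stands in for the paper's
# (unpublished) argument-mining model. Model choice and labels are assumptions.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["claim supported by evidence", "claim without evidence"]

def assess_statement(statement: str) -> dict:
    """Label a transcribed statement as supported or unsupported, with a score."""
    result = classifier(statement, candidate_labels=LABELS)
    return {"label": result["labels"][0], "score": round(result["scores"][0], 3)}

if __name__ == "__main__":
    print(assess_statement("We should expand the bus network because ridership "
                           "rose 25% in cities that added routes."))
    print(assess_statement("Everyone knows this proposal is a disaster."))

In a wearable setting, something like this would run on speech transcribed from the device's microphone and drive the just-in-time feedback described above.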

Left: Overview of the Wearable Reasoner system. Right: Envisioned use cases of the Wearable Reasoner for detecting empty claims (arguments presented without evidence) in political speech, advertisements, or high-stakes meetings.

Based on recent advances in artificial intelligence (AI), argument mining, and computational linguistics, we envision the possibility of having an AI assistant as a symbiotic counterpart to the biological human brain. As a "second brain," the AI serves as an extended, rational reasoning organ that assists the individual and can teach them to become more rational over time by making them aware of biased and fallacious information through just-in-time feedback. To ensure the transparency of the AI system and prevent it from becoming an AI "black box," it is important for the AI to be able to explain how it generates its classifications. This Explainable AI additionally allows the person to speculate about, internalize, and learn from the AI system, and prevents over-reliance on the technology. ...
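To make the idea of explainable feedback concrete, here is a toy, purely illustrative heuristic that not only labels a statement but also points to the cue it relied on (a discourse marker or a statistic). The marker list and rules are assumptions for illustration, not the explanation method used in the paper.

import re

# Toy evidence cues; real argument mining is far more sophisticated.
EVIDENCE_MARKERS = ["because", "according to", "studies show", "for example", "since"]

def explain(statement: str) -> str:
    """Return a label plus the cue that produced it, as explainable feedback."""
    lowered = statement.lower()
    for marker in EVIDENCE_MARKERS:
        if marker in lowered:
            return f"Supported claim: found evidence marker '{marker}'."
    if re.search(r"\d+\s*%|\b\d{4}\b", statement):
        return "Supported claim: contains a statistic or year."
    return "Empty claim: no reason or evidence detected."

if __name__ == "__main__":
    print(explain("Crime fell 12% after the curfew, so it is working."))
    print(explain("This is obviously the right choice."))

Surfacing the cue rather than only the verdict is what lets the user question, internalize, and learn from the feedback instead of deferring to a black box.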
