
Thursday, February 23, 2023

AI Tool Guides Users Away from Incendiary Language

Cleaning up language.


By Cornell Chronicle, February 16, 2023

Cornell University researchers have developed an artificial intelligence tool that can track online conversations in real time, detect when tensions are escalating, and nudge users away from incendiary language.

The research shows promising signs that conversational forecasting methods within the field of natural language processing could prove useful in helping both moderators and users proactively lessen vitriol and maintain healthy, productive debate forums.

The work is detailed in two papers, "Thread With Caution," and "Proactive Moderation of Online Discussions," presented virtually at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).

The first study suggests that AI-powered feedback can be effective in raising awareness of existing tension in a conversation and guiding users toward language that elevates constructive debate, the researchers say.
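To make the idea concrete, here is a minimal Python sketch of that kind of feedback loop. It is only an illustration under stated assumptions: the estimate_tension keyword heuristic, the 0.4 risk threshold, and the nudge wording are all placeholders standing in for the trained conversational forecasting model and interface described in the research, not the researchers' actual implementation.

from dataclasses import dataclass, field

# Placeholder heuristic vocabulary; a real deployment would call a trained
# conversational forecasting model rather than matching keywords.
HEATED_TERMS = {"idiot", "stupid", "liar", "shut up", "nonsense"}

NUDGE = (
    "Heads up: this conversation looks like it may be getting tense. "
    "Consider rephrasing before you post."
)


def estimate_tension(messages: list[str]) -> float:
    """Placeholder scorer: fraction of recent messages containing heated terms.

    Stands in for a forecaster that predicts whether a conversation is about
    to derail.
    """
    recent = messages[-5:]  # only score the most recent context
    if not recent:
        return 0.0
    hits = sum(any(term in m.lower() for term in HEATED_TERMS) for m in recent)
    return hits / len(recent)


@dataclass
class Conversation:
    messages: list[str] = field(default_factory=list)
    threshold: float = 0.4  # assumed cutoff; would be tuned empirically

    def submit_draft(self, draft: str) -> str | None:
        """Score the thread plus the user's draft; return a nudge if risky."""
        risk = estimate_tension(self.messages + [draft])
        if risk >= self.threshold:
            return NUDGE  # surface feedback before the reply is posted
        self.messages.append(draft)  # low risk: post the reply as written
        return None


if __name__ == "__main__":
    convo = Conversation(messages=["I disagree with your data.",
                                   "Your source is nonsense."])
    print(convo.submit_draft("You're a liar and an idiot."))  # prints the nudge

The point of the sketch is the proactive ordering: the draft is scored together with the preceding thread before it is posted, so the user sees the feedback while there is still a chance to rephrase.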

From Cornell Chronicle

