
Sunday, December 18, 2022

Trust in Online Content Moderation Depends on Moderator

See also a previous note on this topic.

Fairly obvious, if the background of the moderators is accurately disclosed.

By Cornell Chronicle, November 3, 2022

[Image: A human and an artificial intelligence providing content moderation. Credit: Analytics India Magazine]

An interdisciplinary research team at Cornell University found that an individual's trust in online content moderation systems and decisions depends on whether the moderator is a human or an artificial intelligence (AI), and on the type of harassing content involved.

The study involved a custom social media site and a simulation engine that used preprogrammed bots to mimic the behavior of other users. Nearly 400 participants were asked to beta test the new platform, logging in at least twice a day for two days; each was randomly assigned to one of six experimental conditions that varied both the type of content moderation system and the type of harassing comment they saw.
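The six conditions amount to a simple factorial design. As a minimal sketch, assuming the study crossed three moderator types with two comment types (the article does not name the exact levels, so the labels below are hypothetical), random assignment might look like this in Python:

import random

# Hypothetical factor levels -- the article says only that six conditions
# crossed the type of moderation system with the type of harassing comment.
MODERATOR_TYPES = ["human", "ai", "human_plus_ai"]   # assumed 3 levels
COMMENT_TYPES = ["ambiguous", "clearly_harassing"]   # assumed 2 levels

# The full crossing yields the six experimental conditions: 3 x 2 = 6.
CONDITIONS = [(m, c) for m in MODERATOR_TYPES for c in COMMENT_TYPES]

rng = random.Random(42)  # fixed seed so assignments are reproducible

# Randomly assign ~400 participants to one of the six conditions.
assignments = {pid: rng.choice(CONDITIONS) for pid in range(400)}

print(assignments[0])  # e.g. ('ai', 'ambiguous')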

For inherently ambiguous content, the researchers found that users were more likely to question decisions made by AI moderators. For clearly harassing comments, however, trust was about the same across all types of moderation.

From Cornell Chronicle | View Full Article
