
Tuesday, January 01, 2019

MIT Researchers on Detecting and Addressing AI Bias

Good job of addressing the issues:

MIT researchers show how to detect and address AI bias without loss in accuracy By Khari Johnson

Bias in AI may mean only poor search results or a degraded user experience when a predictive model is deployed in social media, but it can seriously and negatively affect human lives when AI is used for things like health care, autonomous vehicles, criminal justice, or the predictive policing tactics used by law enforcement.

In an age when AI is deployed virtually everywhere, this could lead to ongoing systematic discrimination.

That’s why researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a method to reduce bias in AI without reducing the accuracy of predictive results.

“We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” said MIT professor David Sontag in a statement shared with VentureBeat. The paper was written by Sontag together with Ph.D. student Irene Chen and postdoctoral associate Fredrik D. Johansson.

The key, Sontag said, is often to get more data from underrepresented groups. For example, the researchers found that in one case an AI model was twice as likely to label women as low-income and men as high-income. By increasing the representation of women in the dataset by a factor of 10, the number of inaccurate results was reduced by 40 percent.
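The idea of collecting more data from an underrepresented group is easy to try on synthetic data. The sketch below is not the researchers' code; the dataset, the logistic-regression model, and the specific group shift are illustrative assumptions. It trains once on an imbalanced sample and once with 10x more minority-group data, then compares the minority group's test error.

```python
# Minimal sketch (assumptions, not the MIT method): does adding data for an
# underrepresented group shrink that group's error?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Hypothetical income-classification data for one demographic group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > shift * 5).astype(int)
    return X, y

def group_error(X_tr, y_tr, X_te, y_te):
    """Test error of a logistic regression trained on (X_tr, y_tr)."""
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_te, y_te)

# Majority group is 10x larger than the minority group in the raw data.
X_maj, y_maj = make_group(5000, shift=0.0)
X_min, y_min = make_group(500, shift=0.5)

Xmaj_tr, Xmaj_te, ymaj_tr, ymaj_te = train_test_split(X_maj, y_maj, random_state=0)
Xmin_tr, Xmin_te, ymin_tr, ymin_te = train_test_split(X_min, y_min, random_state=0)

# Baseline: train on the imbalanced data.
X_tr = np.vstack([Xmaj_tr, Xmin_tr])
y_tr = np.concatenate([ymaj_tr, ymin_tr])
print("minority error (imbalanced):",
      group_error(X_tr, y_tr, Xmin_te, ymin_te))

# "Collect more data": simulate a 10x larger minority sample.
X_min10, y_min10 = make_group(5000, shift=0.5)
X_tr10 = np.vstack([Xmaj_tr, X_min10])
y_tr10 = np.concatenate([ymaj_tr, y_min10])
print("minority error (10x minority data):",
      group_error(X_tr10, y_tr10, Xmin_te, ymin_te))
```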

Traditional methods may suggest randomizing the datasets related to a majority population as a way to resolve unequal results for different populations, but this approach can mean trading away predictive accuracy to achieve fairness across all populations.

“In this work, we argue that the fairness of predictions should be evaluated in context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model,” reads the paper titled “Why is my classifier discriminatory?”

Differences in predictive accuracy can sometimes be explained by a lack of data or unpredictable outcomes. The researchers suggest that AI models be analyzed for model bias, model variance, and outcome noise before being critiqued against fairness criteria. ...
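One of those checks, per-group variance, can be sketched quickly. The snippet below is an illustrative assumption, not the paper's estimators: it bootstraps a logistic-regression model and measures how much its predictions for a given group fluctuate from fit to fit. A group with high prediction variance likely needs more data rather than a fairness constraint imposed on the model.

```python
# Sketch (an assumption, not the paper's exact procedure): estimate per-group
# prediction variance by retraining the model on bootstrap resamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_prediction_variance(X_train, y_train, X_eval, n_boot=30, seed=0):
    """Mean per-example variance of predicted probabilities on X_eval
    across models fit to bootstrap resamples of the training data."""
    rng = np.random.default_rng(seed)
    preds = []
    n = len(X_train)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of training rows
        clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
        preds.append(clf.predict_proba(X_eval)[:, 1])
    return np.var(np.stack(preds), axis=0).mean()

# Usage (assumes the training set and per-group test sets from the previous
# sketch, e.g. X_tr, y_tr, Xmaj_te, Xmin_te):
# print("variance, majority:", bootstrap_prediction_variance(X_tr, y_tr, Xmaj_te))
# print("variance, minority:", bootstrap_prediction_variance(X_tr, y_tr, Xmin_te))
```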
