
Tuesday, October 08, 2019

Removing Bias from Predictive Modeling

A podcast of interest from Wharton on bias; the podcast itself is available at the link.

Wharton's James Johndrow discusses his research on removing human bias from predictive modeling.

Predictive modeling is supposed to be neutral, a way to help remove personal prejudices from decision-making. But the algorithms are packed with the same biases that are built into the real-world data used to create them. Wharton statistics professor James Johndrow has developed a method to remove those biases. His latest research, “An Algorithm for Removing Sensitive Information: Application to Race-independent Recidivism Prediction,” focuses on removing information on race in data that predicts recidivism, but the method can be applied beyond the criminal justice system. He spoke to Knowledge@Wharton about his paper, which is co-authored with his wife, Kristian Lum, lead statistician with the Human Rights Data Analysis Group. (Listen to the podcast at the top of this page.)
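To make the idea of "removing information on race" concrete, here is a minimal illustrative sketch of one common approach: replace each predictor with its residual after regressing it on the sensitive attribute, then fit the predictive model on the adjusted features. This is only a simplified stand-in under that assumption, not necessarily the procedure in Johndrow and Lum's paper, and the data and variable names below are synthetic placeholders.

```python
# Sketch: reduce a sensitive attribute's (linear) influence by residualizing
# each feature on it, then training the predictor on the residuals.
# Not the authors' exact method; purely illustrative, synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 1000
race = rng.integers(0, 2, size=n)             # hypothetical binary sensitive attribute
x1 = 0.8 * race + rng.normal(size=n)          # feature correlated with the attribute
x2 = rng.normal(size=n)                       # feature unrelated to the attribute
X = np.column_stack([x1, x2])
y = (0.5 * x1 + 0.5 * x2 + rng.normal(size=n) > 0).astype(int)  # synthetic outcome

# Regress each feature on the sensitive attribute and keep only the residuals,
# so the adjusted features carry no linear information about it.
Z = race.reshape(-1, 1)
X_adj = np.empty_like(X)
for j in range(X.shape[1]):
    fit = LinearRegression().fit(Z, X[:, j])
    X_adj[:, j] = X[:, j] - fit.predict(Z)

# Train the predictive model on the adjusted features.
clf = LogisticRegression().fit(X_adj, y)
print("correlation of adjusted x1 with race:", np.corrcoef(X_adj[:, 0], race)[0, 1])
```

In this toy setup the adjusted first feature is essentially uncorrelated with the sensitive attribute, which is the intuition behind "race-independent" prediction; the full method in the paper addresses more general dependence than this linear residualization captures.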

An edited transcript of the conversation follows.

Knowledge@Wharton: Predictive modeling is becoming an increasingly popular way to assist human decision-makers, but it’s not perfect. What are some of the drawbacks?

James Johndrow: There has been a lot more attention lately about it, partly because things are being automated so much. There’s just more and more interest in having automatic scoring, automatic decision-making, or at least partly automatic decision-making. The area that I have been especially interested in — and this is a lot of work that I do with my wife — is criminal justice.  .... "
