Intriguing effort. We worked with systems that ultimately required high-level executive agreement. How might those decisions be trained for fairness? And consider the inherent risk involved in such approaches. Uncertainty is always present in such methods; how is it integrated?
Predict Responsibly: Fairness Needed in Algorithmic Decision-Making, U of T Experts Say (U of T News, by Nina Haikara)
David Madras at the University of Toronto (U of T) in Canada believes machine learning algorithms could handle uncertainty better by incorporating fairness into their decision-making processes. Madras worked with U of T professors Toniann Pitassi and Richard Zemel to develop an algorithmic model that includes fairness. The researchers note that in situations with a degree of uncertainty, an algorithm must have the option to admit its lack of certainty and defer the decision to a human user. "In order to train up our model, we have to use historical decisions that are made by decision-makers," Zemel says. "The outcomes of those decisions, created by existing decision-makers, can be themselves biased or in a sense incomplete." Madras thinks greater focus on algorithmic fairness, alongside issues of privacy, security, and safety, will help make machine learning more suitable for high-stakes applications. ...
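The deferral idea can be illustrated with a short sketch. What follows is a minimal reject-option rule, not the authors' full learning-to-defer training objective: a classifier outputs its own label only when its predicted probability is far enough from 0.5, and otherwise passes the case to a human. The scikit-learn model, the synthetic data, and the 0.7 confidence threshold are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of prediction-with-deferral: a plain reject-option rule,
# not the authors' learning-to-defer training objective. The model choice,
# the synthetic data, and the 0.7 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: two features, noisy binary label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def predict_or_defer(model, X, confidence=0.7):
    """Return the model's label where it is confident enough, and -1
    (defer to the human decision-maker) where the predicted probability
    is too close to 0.5 to trust."""
    proba = model.predict_proba(X)[:, 1]
    labels = (proba >= 0.5).astype(int)
    confident = np.maximum(proba, 1.0 - proba) >= confidence
    return np.where(confident, labels, -1)

decisions = predict_or_defer(model, X)
print(f"model decided {np.mean(decisions != -1):.1%} of cases, "
      f"deferred on {np.mean(decisions == -1):.1%}")
```

In the paper's setting, training would also account for the human decision-maker's own, possibly biased, historical decisions; here the threshold simply controls how often the model abstains.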
Sunday, May 06, 2018
Fairness in Decision Making: Deferring to Humans
Labels: Algorithms, Bias, decisions, Fairness, Toronto, uncertainty