
Tuesday, March 03, 2020

Google Fairness Gym

A considerable effort, reported on below, to experiment with the broad idea of fairness in machine learning via the notion of a 'gym' for exercising choices and outcomes with varying data. The article below gives quite a bit of detail on what this tool is trying to be.

ML-fairness-gym: A Tool for Exploring Long-Term Impacts of Machine Learning Systems
Wednesday, February 5, 2020
Posted by Hansa Srinivasan, Software Engineer, Google Research

"Machine learning systems have been increasingly deployed to aid in high-impact decision-making, such as determining criminal sentencing, child welfare assessments, who receives medical attention and many other settings. Understanding whether such systems are fair is crucial, and requires an understanding of models’ short- and long-term effects. Common methods for assessing the fairness of machine learning systems involve evaluating disparities in error metrics on static datasets for various inputs to the system. Indeed, many existing ML fairness toolkits (e.g., AIF360, fairlearn, fairness-indicators, fairness-comparison) provide tools for performing such error-metric based analysis on existing datasets. While this sort of analysis may work for systems in simple environments, there are cases (e.g., systems with active data collection or significant feedback loops) where the context in which the algorithm operates is critical for understanding its impact. In these cases, the fairness of algorithmic decisions ideally would be analyzed with greater consideration for the environmental and temporal context than error metric-based techniques allow. ..."
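As a rough illustration of the contrast drawn above, the sketch below first computes a per-group false positive rate on a static dataset (the error-metric style of analysis the post describes), and then runs a toy gym-style loop in which each decision feeds back into the next round's data. All names, numbers, and the ToyLendingEnv class are made up for illustration; this is not the ml-fairness-gym API, only a minimal sketch of the kind of feedback dynamics the tool is meant to study.

import numpy as np

rng = np.random.default_rng(0)

# --- Static, error-metric style analysis (hypothetical data) ---
y_true = rng.integers(0, 2, size=1000)   # true outcomes
y_pred = rng.integers(0, 2, size=1000)   # model decisions
group = rng.integers(0, 2, size=1000)    # group membership: 0 or 1

def false_positive_rate(y_true, y_pred):
    # Fraction of true negatives that the model labeled positive.
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

for g in (0, 1):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.3f}")

# --- Dynamic, simulation-style analysis (toy gym-like loop) ---
# A toy environment in which today's lending decision shifts tomorrow's
# applicant pool, so a policy that looks fair on a static snapshot can
# still widen group disparities over time.
class ToyLendingEnv:
    def __init__(self):
        # Mean "creditworthiness" per group; decisions feed back into these.
        self.means = np.array([0.6, 0.4])

    def step(self, threshold):
        scores = rng.normal(self.means, 0.1)
        approved = scores > threshold
        # Feedback: approval nudges a group's future mean up, rejection down.
        self.means += np.where(approved, 0.01, -0.01)
        return approved, self.means.copy()

env = ToyLendingEnv()
for t in range(50):
    approved, means = env.step(threshold=0.5)
print("group means after 50 steps:", np.round(means, 3))

Running the static check once tells you nothing about the second half: the gap between the two group means grows or shrinks only as the simulated decisions accumulate, which is the kind of long-term, environment-dependent effect the fairness gym is built to expose.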
