From the CSIG talk given today:
An instructive toolkit, released today by IBM for use and experimentation. The slides, instructive by themselves, are here. The complete audio and video of the presentation will be placed here shortly. The comments on the presentation also point to other related work and efforts underway. Given the complexity of the problem, and its broad regulatory and even philosophical underpinnings, there is some doubt that a universal solution can easily be determined. Note also that this concerns machine-learning-trained models, not necessarily human decision making; still, it would be useful to detect whether some artifact of ML, such as sampling, is involved. The need to integrate clear explanatory capabilities was also mentioned. The examples shown were still too technically complex for typical decision makers.
Nicely done and well worth examining. I understand anyone can experiment with this; instructions are in the talk.
Talk: “AI Fairness 360”
Speaker: Kush Varshney, IBM
Talk Description:
Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Biases in training data, due to either prejudice in labels or under-/over-sampling, yield models with unwanted bias. In this presentation, we introduce AI Fairness 360, a new Python package that includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. We have developed the package with extensibility in mind and encourage the contribution of your metrics, explainers, and debiasing algorithms. Please join the community to get started as a contributor. ... "
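To make the idea of a "fairness metric" concrete, here is a minimal sketch (plain Python, not the AI Fairness 360 API itself) of two group-fairness measures of the kind the package provides: statistical parity difference and disparate impact, computed on toy outcome data. The group labels and numbers are invented for illustration.

```python
# Illustrative sketch only -- not the AIF360 API. Two common group-fairness
# metrics computed on toy binary outcomes (1 = favorable decision).

def selection_rate(labels):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(labels) / len(labels)

def statistical_parity_difference(privileged, unprivileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Ratio of selection rates; values well below 1 suggest the
    unprivileged group is systematically disadvantaged."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical decision outcomes for two groups.
priv = [1, 1, 1, 0, 1, 1, 0, 1]      # privileged group: 6/8 favorable
unpriv = [1, 0, 0, 1, 0, 0, 1, 0]    # unprivileged group: 3/8 favorable

print(statistical_parity_difference(priv, unpriv))  # -0.375
print(disparate_impact(priv, unpriv))               # 0.5
```

In the actual package, metrics like these are exposed through dataset and classifier metric classes, and debiasing algorithms (pre-, in-, and post-processing) aim to move such values toward their fair baselines (0 and 1 respectively here).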