Via DSC; the intro is below, and there is much more at the link:
The Ethical AI Application Pyramid
Posted by Bill Schmarzo

In the blog “How Can Your Organization Manage AI Model Biases?”, I wrote:
“In a world more and more driven by AI models, Data Scientists cannot effectively ascertain on their own the costs associated with the unintended consequences of False Positives and False Negatives. Mitigating unintended consequences requires the collaboration across a diverse set of stakeholders in order to identify the metrics against which the AI Utility Function will seek to optimize.”
I’ve been fortunate enough to have had some interesting conversations since publishing that blog, especially with an organization that is championing data ethics and “Responsible AI” (love that term). As was so well covered in Cathy O’Neil’s book “Weapons of Math Destruction”, the biases built into many of the AI models being used to approve loans and mortgages, hire job applicants, and decide university admissions are yielding unintended consequences that severely impact both individuals and society.
AI models only optimize against the metrics they have been programmed to optimize. If an AI model yields unintended consequences, that’s not the AI model’s fault. It’s the fault of the data science team and the operational stakeholders who are responsible for defining the AI Utility Function against which the AI model will judge progress and success (see Figure 1). ...
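The AI Utility Function idea is concrete enough to sketch in code. Below is a minimal, hypothetical Python example (not from Schmarzo's post) of the underlying point: stakeholders assign explicit costs to False Positives and False Negatives, and the model's decision threshold is then chosen to minimize total expected cost rather than a generic accuracy metric. All names, cost values, and toy data here are illustrative assumptions.

```python
# A minimal sketch of an "AI Utility Function" for a binary classifier.
# The costs and toy scores are illustrative assumptions; in practice,
# a diverse set of stakeholders would supply the real cost figures.

COST_FALSE_POSITIVE = 1.0   # e.g., cost of approving a loan that defaults
COST_FALSE_NEGATIVE = 5.0   # e.g., cost of wrongly denying a good applicant

def expected_cost(scores, labels, threshold):
    """Total cost of FP/FN errors at a given decision threshold."""
    cost = 0.0
    for score, label in zip(scores, labels):
        predicted = score >= threshold
        if predicted and label == 0:        # false positive
            cost += COST_FALSE_POSITIVE
        elif not predicted and label == 1:  # false negative
            cost += COST_FALSE_NEGATIVE
    return cost

def best_threshold(scores, labels, candidates):
    """Pick the threshold that minimizes total expected cost."""
    return min(candidates, key=lambda t: expected_cost(scores, labels, t))

if __name__ == "__main__":
    # Toy model scores and true outcomes (1 = positive class).
    scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
    labels = [1,   1,   0,   1,   0,   1,   0,   0]
    candidates = [i / 10 for i in range(1, 10)]
    t = best_threshold(scores, labels, candidates)
    print(f"threshold={t:.1f}, cost={expected_cost(scores, labels, t):.1f}")
```

The key design choice is that the cost constants live outside the model: changing what the organization considers a harmful error changes the chosen threshold without retraining anything, which is exactly where cross-stakeholder collaboration enters.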