Sunday, October 31, 2021

Making Decision Makers Use and Understand the Value of Models

I have often had to consider how to get key decision makers to use the results of analytical models. This article touches on that in some ways. I'd like to consider further how this could be done consistently.

Making machine learning more useful to high-stakes decision makers

A visual analytics tool helps child welfare specialists understand machine learning predictions that can assist them in screening cases.

Adam Zewe | MIT News Office

The U.S. Centers for Disease Control and Prevention estimates that one in seven children in the United States experienced abuse or neglect in the past year. Child protective services agencies around the nation receive a high number of reports each year (about 4.4 million in 2019) of alleged neglect or abuse. With so many cases, some agencies are implementing machine learning models to help child welfare specialists screen cases and determine which to recommend for further investigation.

But these models don’t do any good if the humans they are intended to help don’t understand or trust their outputs.

Researchers at MIT and elsewhere launched a research project to identify and tackle machine learning usability challenges in child welfare screening. In collaboration with a child welfare department in Colorado, the researchers studied how call screeners assess cases, with and without the help of machine learning predictions. Based on feedback from the call screeners, they designed a visual analytics tool that uses bar graphs to show how specific factors of a case contribute to the predicted risk that a child will be removed from their home within two years. ...
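The article doesn't publish the tool itself, but the core idea of a per-factor contribution bar graph is easy to sketch. Below is a minimal illustrative example in Python, not the MIT system: it assumes a simple logistic-regression risk model on synthetic data, and the feature names are invented placeholders, not the Colorado agency's actual factors. For a linear model, a single case's prediction decomposes into per-feature terms that can be plotted directly.

```python
# A minimal sketch (not the MIT tool): for a linear risk model, each feature's
# contribution to one case's prediction can be approximated as
# coefficient * (feature value - population mean) and shown as a bar chart.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical screening factors -- names invented for illustration only.
feature_names = ["prior_referrals", "caregiver_age",
                 "household_size", "days_since_last_report"]

# Synthetic data standing in for historical case records.
X = rng.normal(size=(500, len(feature_names)))
y = (X @ np.array([1.2, -0.4, 0.3, -0.8]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one case: each feature's contribution to its risk score (log-odds).
case = X[0]
contributions = model.coef_[0] * (case - X.mean(axis=0))

plt.barh(feature_names, contributions)
plt.xlabel("Contribution to predicted risk (log-odds)")
plt.title("Why did the model score this case as it did?")
plt.tight_layout()
plt.show()
```

For nonlinear models, the same kind of display is typically driven by attribution methods such as SHAP values rather than raw coefficients, but the screener-facing output is the same: one bar per case factor.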
