
Monday, March 25, 2019

Dark Side of AI in Healthcare

This is an example of deceptive predictions driven by goals. Expressing goals and being transparent about them is good, but goals can become problematic in context for any kind of analytic method.

 Warnings of a Dark Side to AI in Health Care 
The New York Times
By Cade Metz; Craig S. Smith

Harvard University and Massachusetts Institute of Technology (MIT) researchers warn in a recently published study that new artificial intelligence (AI) technology designed to enhance healthcare is vulnerable to misuse, with "adversarial attacks" that can deceive the system into making misdiagnoses being one example. A more likely scenario involves doctors, hospitals, and other organizations manipulating the AI in billing or insurance software in an attempt to maximize revenue. The researchers said software developers and regulators must consider such possibilities as they build and evaluate AI technologies in the years to come. MIT's Samuel Finlayson said, "The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information." Changes doctors make to medical scans or other patient data in an effort to satisfy the AI used by insurance firms also could wind up in a patient's permanent record. ...
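For readers unfamiliar with the term, the "adversarial attack" the study describes works roughly like this: a tiny, carefully chosen perturbation is added to an input (a scan, a record) so that the model's output shifts toward a desired answer while the input looks essentially unchanged to a human. Below is a minimal sketch of one standard technique, the fast gradient sign method (FGSM), in PyTorch. The toy classifier, random "scan," and target label are illustrative assumptions, not the models or data from the study.

```python
# Sketch of an adversarial perturbation (FGSM): nudge a classifier's output
# with a small, crafted change to the input. ToyScanClassifier and the random
# "scan" below are hypothetical placeholders, not the study's actual system.
import torch
import torch.nn as nn

class ToyScanClassifier(nn.Module):
    """Stand-in for a diagnostic image model (hypothetical)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, num_classes),
        )
    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, x, target_label, epsilon=0.01):
    """Shift x slightly so the model's prediction moves toward target_label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), target_label)
    loss.backward()
    # Step against the loss gradient, toward the attacker's desired label.
    return (x - epsilon * x.grad.sign()).detach()

if __name__ == "__main__":
    model = ToyScanClassifier().eval()
    scan = torch.rand(1, 1, 64, 64)    # placeholder "medical scan"
    wanted = torch.tensor([1])         # diagnosis the attacker wants
    adv_scan = fgsm_perturb(model, scan, wanted)
    print("original prediction: ", model(scan).argmax(1).item())
    print("perturbed prediction:", model(adv_scan).argmax(1).item())
    print("max pixel change:    ", (adv_scan - scan).abs().max().item())
```

The point of the sketch is the scale of the change: with a small epsilon the perturbation is imperceptible, which is why the study's authors worry that billing or insurance models could be gamed with "very subtle bits of information."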
