Always worth looking at the data and context involved when algorithms fail. Like any kind of predictive approach, it's rarely perfect. The same goes for classical analytics, for machine-learning-based methods, and for that matter for human prediction as well. And performance can change as data and context change over time. So test examples like this one are useful.
Algorithm That Predicts Deadly Infections Is Often Flawed
By Wired
A study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic Systems' early warning system for sepsis infections performs poorly.
An algorithm designed by U.S. electronic health record provider Epic Systems to forecast sepsis infections is significantly lacking in accuracy, according to an analysis of data on about 30,000 patients in University of Michigan (U-M) hospitals.
U-M researchers said the program overlooked two-thirds of the approximately 2,500 sepsis cases in the data, rarely detected cases missed by medical staff, and was prone to false alarms.
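As a back-of-envelope check, the miss rate quoted above translates directly into a sensitivity figure. This is a minimal sketch using only the rounded numbers in the summary, not the study's exact counts:

# Back-of-envelope sketch from the figures quoted above:
# roughly 2,500 sepsis cases, two-thirds of them missed.
total_cases = 2500
missed = round(total_cases * 2 / 3)   # roughly 1,667 cases overlooked
detected = total_cases - missed       # roughly 833 cases flagged
sensitivity = detected / total_cases
print(f"sensitivity = {sensitivity:.0%}")  # about 33%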
The researchers said Epic tells customers its sepsis alert system can correctly distinguish a patient with sepsis from one without at least 76% of the time, but they determined it does so only 63% of the time.
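The "accuracy" figure quoted here is the standard interpretation of the AUROC: the probability that a randomly chosen sepsis patient receives a higher risk score than a randomly chosen non-sepsis patient. Here is a minimal sketch of how that pairwise number is computed; the scores and labels are made up for illustration and have nothing to do with the Epic model itself:

# Toy illustration of AUROC as pairwise ranking accuracy.
import itertools

# Hypothetical risk scores; label 1 = developed sepsis, 0 = did not.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.4, 0.3, 0.8, 0.5, 0.2, 0.1, 0.05]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# Count correctly ordered (sepsis, non-sepsis) pairs; ties get half credit.
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
           for p, n in itertools.product(pos, neg))
auroc = wins / (len(pos) * len(neg))
print(f"AUROC = {auroc:.2f}")  # 0.73 on this toy data; the study found 0.63 for Epic's model

An AUROC of 0.5 means the scores rank pairs no better than a coin flip, which is why a drop from a claimed 0.76 to a measured 0.63 is a substantial loss of discriminative power.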
U-M's Karandeep Singh said the study highlights wider shortcomings with proprietary algorithms increasingly used in healthcare, noting that the lack of published science on these models is "shocking."
From Wired