Failures can be useful if you understand why they happened. Did we learn nothing beyond that? And if so, why did we learn nothing?
Hundreds of AI tools have been built to catch covid. None of them helped.
Some have been used in hospitals, despite not being properly tested. But the pandemic could help make medical AI better.
By Will Douglas Heaven, MIT Technology Review, July 30, 2021
When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.
But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”
It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.
In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.