Caution in implementation.
By Scientific American, October 25, 2022
[Image: A doctor wears virtual reality goggles depicting a futuristic health care scenario.]
Adopting a more holistic approach to developing and testing clinical AI models will lead to more nuanced discussions about how well these models can work and what their limitations are.
Mistakes by artificial intelligence (AI) models that support doctors’ clinical decisions can mean life or death. It is therefore critical that we understand how well these models work before deploying them. Published reports of this technology currently paint an overly optimistic picture of its accuracy, and the scientific papers detailing such advances may become foundations for new companies, new investments and lines of research, and large-scale implementations in hospital systems.
However, in most cases, the technology is not ready for deployment. Why? As researchers feed more data into AI models, the models are expected to become more accurate, or at least not get worse. Yet our work and the work of others have found the opposite: the accuracy reported in published models decreases as dataset size increases.
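As a point of reference for the expectation described above, the sketch below shows one common way to check how held-out accuracy changes with training-set size, using scikit-learn's learning-curve utility on a synthetic dataset. It is not the authors' methodology; the model, dataset, and parameter choices are illustrative assumptions only.

```python
# Minimal sketch (illustrative only, not the article's method) of measuring
# how held-out accuracy changes as the training set grows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a clinical dataset (assumed parameters).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Evaluate accuracy at increasing training-set sizes with 5-fold cross-validation.
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
    scoring="accuracy",
)

# If the model and evaluation are sound, held-out accuracy should not degrade
# as the training set grows; a downward trend would warrant scrutiny.
for n, score in zip(train_sizes, test_scores.mean(axis=1)):
    print(f"train size {n:5d}: mean held-out accuracy {score:.3f}")
```

In this kind of check, a curve that trends downward as more data is added is a warning sign that something in the model or the evaluation, rather than the data itself, deserves closer inspection.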
From Scientific American
View Full Article (May Require Paid Registration)