In related work we performed risk analyses on solutions of many types, including AI, machine learning, and classical analytics, even those that could be considered less than 'critical'. These typically used elements of predictive analysis. Prediction could also be used to produce test sets for failure and recovery analyses, and simulation is essential as well. So the approach here is quite interesting; I can see the methodologies being intertwined.
AI researchers devise failure detection method for safety-critical machine learning
Researchers from MIT, Stanford University, and the University of Pennsylvania have devised a method for predicting rare failures of safety-critical machine learning systems and efficiently estimating how often they occur. Safety-critical machine learning systems make decisions for automated technology like self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and planes. Unlike AI that helps you write an email or recommends a song, safety-critical system failures can result in serious injury or death. Problems with such machine learning systems can also cause financially costly events like SpaceX missing its landing pad.
Researchers say their neural bridge sampling method gives regulators, academics, and industry experts a common reference for discussing the risks associated with deploying complex machine learning systems in safety-critical environments. In a paper titled “Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems,” recently published on arXiv (https://arxiv.org/abs/2008.10581), the authors assert their approach can satisfy both the public’s right to know that a system has been rigorously tested and an organization’s desire to treat AI models like trade secrets. In fact, some AI startups and Big Tech companies refuse to grant access to raw models for testing and verification out of fear that such inspections could reveal proprietary information ...
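To make the underlying problem concrete: the core difficulty is estimating the probability of a very rare failure without running the system millions of times. The sketch below is not the paper's neural bridge sampling method; it is a minimal adaptive multilevel splitting estimator, a simpler relative of the same rare-event sampling family, run on a toy system. The safety_margin function, the parameter choices, and the single Metropolis refresh step per level are all hypothetical simplifications of my own for illustration.

```python
import numpy as np

# Hypothetical stand-in for a closed-loop simulation of a safety-critical
# system: each run is driven by random environment parameters x, and
# failure is defined as the safety margin dropping below zero. With
# x ~ N(0, I_2) this failure has probability exp(-8), about 3.4e-4.
def safety_margin(x):
    return 4.0 - np.linalg.norm(x, axis=-1)

def naive_monte_carlo(n_samples, dim=2, seed=0):
    """Baseline: sample environments i.i.d. and count failures.
    Needs on the order of 1/p runs to observe a failure of probability p."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))
    return np.mean(safety_margin(x) < 0.0)

def splitting_estimate(n=2000, dim=2, q=0.1, seed=0, max_levels=50):
    """Adaptive multilevel splitting: move a particle population through
    intermediate failure levels and multiply the conditional survival
    probabilities, reaching rare failures with far fewer simulator runs."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    log_p = 0.0
    for _ in range(max_levels):
        m = safety_margin(x)
        level = np.quantile(m, q)          # next intermediate level
        if level <= 0.0:                   # failure region reached
            return np.exp(log_p) * np.mean(m < 0.0)
        keep = m <= level
        log_p += np.log(keep.mean())       # P(reach level | reached previous)
        # Resample the survivors, then diversify them with one Metropolis
        # random-walk step targeting N(0, I) restricted to {margin <= level}.
        # (A single step is a crude refresh; real implementations mix longer.)
        x = x[rng.choice(np.flatnonzero(keep), size=n)]
        prop = x + 0.5 * rng.standard_normal(x.shape)
        log_ratio = 0.5 * (np.sum(x**2, axis=1) - np.sum(prop**2, axis=1))
        accept = (safety_margin(prop) <= level) & (np.log(rng.random(n)) < log_ratio)
        x[accept] = prop[accept]
    raise RuntimeError("failure level not reached; increase max_levels")

print(f"naive MC : {naive_monte_carlo(200_000):.2e}")
print(f"splitting: {splitting_estimate():.2e}")  # exact value is exp(-8) ~ 3.4e-4
```

The point of the comparison: the naive estimator spends 200,000 simulated runs to see a handful of failures, while the splitting estimator reaches the failure region through a few easy intermediate levels with a few thousand runs. The paper's contribution layers learned (neural) proposal distributions and bridge-sampling estimates on top of this general idea.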