
Wednesday, May 20, 2020

Like People, AI Will Also Fail.

A very nicely done, non-technical, and usefully skeptical view of AI. It makes the case that even if your AI solves a problem today, it is likely to fail tomorrow, as time and context drift and shift, just as human problem solvers fail to find solutions to every problem. In our own progress in this space, we encountered many of these experiences; sometimes we had to wait decades for better solutions. My advice: know the risk, and understand that it too is shifting. Build to solve useful problems. Check your data and recheck your results.


What to Do When AI Fails  

By Andrew Burt and Patrick Hall in O'Reilly

These are unprecedented times, at least by information age standards. Much of the U.S. economy has ground to a halt, and social norms about our data and our privacy have been thrown out the window throughout much of the world. Moreover, things seem likely to keep changing until a vaccine or effective treatment for COVID-19 becomes available. All this change could wreak havoc on artificial intelligence (AI) systems. Garbage in, garbage out still holds in 2020. The most common types of AI systems are still only as good as their training data. If there’s no historical data that mirrors our current situation, we can expect our AI systems to falter, if not fail. 

To date, at least 1,200 reports of AI incidents have been recorded in various public and research databases. That means that now is the time to start planning for AI incident response, or how organizations react when things go wrong with their AI systems. While incident response is a field that’s well developed in the traditional cybersecurity world, it has no clear analogue in the world of AI.  What is an incident when it comes to an AI system? When does AI create liability that organizations need to respond to? This article answers these questions, based on our combined experience as both a lawyer and a data scientist responding to cybersecurity incidents, crafting legal frameworks to manage the risks of AI, and building sophisticated interpretable models to mitigate risk. Our aim is to help explain when and why AI creates liability for the organizations that employ it, and to outline how organizations should react when their AI causes major problems.  ... " 
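As a concrete illustration of the "garbage in, garbage out" point above, here is a minimal sketch (my own, not from the Burt and Hall article) of one common way to check whether live data has drifted away from a model's training data: a two-sample Kolmogorov-Smirnov test per feature. The synthetic data, feature, and 0.05 threshold are all assumptions for illustration.

# Minimal drift check: compare a feature's training distribution to
# recent production data with a two-sample Kolmogorov-Smirnov test.
# Illustrative sketch only; the data, the single-feature setup, and the
# 0.05 threshold are assumptions, not the article's method.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for historical training data and current production data.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.6, scale=1.3, size=1_000)  # shifted world

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): "
          "review or retrain the model before trusting its output.")
else:
    print("No significant drift detected for this feature.")

In practice you would run a check like this per feature on a schedule; a flagged feature is exactly the "no historical data that mirrors our current situation" condition the authors warn about.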
