
Thursday, June 30, 2022

Examining AI Liability

A good overview of the topic and related liability issues. We examined this in early AI and analytical applications; it is now especially applicable to automated vehicles.

Who Is Liable When AI Kills?

We need to change rules and institutions while still promoting innovation to protect people from faulty AI
By George Maliha and Ravi B. Parikh, June 29, 2022, in SCIAM

Who is responsible when AI harms someone?

A California jury may soon have to decide. In December 2019, the driver of a Tesla equipped with an artificial intelligence driving system killed two people in Gardena. The driver faces several years in prison. In light of this and other incidents, both the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board are investigating Tesla crashes, and NHTSA has recently broadened its probe to explore how drivers interact with Tesla systems. On the state front, California is considering curtailing the use of Tesla autonomous driving features.

Our current liability system, our system for determining responsibility and payment for injuries, is completely unprepared for AI. Liability rules were designed for a time when humans caused the majority of mistakes or injuries. Thus, most liability frameworks place punishments on the end user: the doctor, driver or other human who caused an injury. But with AI, errors may occur without any human input at all. The liability system needs to adjust accordingly. Bad liability policy will harm patients, consumers and AI developers. ... '

