
Monday, January 29, 2018

Humans Gaming the System

Sometimes you need to include outlier scenarios in training. But is that an opening for gaming? Ultimately we will also have to model multiple kinds of human intent.

Are AI Learning Scenarios Unpredictable Enough? By Sam Ransbotham in MIT Sloan

A fender bender heard around the AI world happened last week in Las Vegas when a self-driving shuttle was involved in a minor collision during its first hour of service. It is ironic that this happened in Vegas, a city based on games. How would you score this match between humans and machines? Is it 1-0 in favor of the humans, a victory for the home team?

Not so fast.

In the aftermath of the “calamity,” sensational headlines played to our default thinking that the machine was to blame. Perhaps we humans reveled a tiny bit in the schadenfreude of seeing our emerging computer overlords beaten so quickly when practice was over and the real game started.

But in this incident, the autonomous electric vehicle was shuttling eight passengers around Las Vegas’ Fremont East entertainment district when a human-operated delivery truck backed into the front bumper of the shuttle. Recognizing the oncoming truck, the shuttle stopped to avoid an accident. The human driving the truck, however, did not stop. We instead need to score this matchup as 0-1 in favor of AI.

Worse, this accident illustrates a crucial challenge in the interplay between AI and humans. Systems are typically configured in contexts without nefarious actors, where players are instead well-intentioned and follow the rules. After all, the first step is to get something working.

How does design for the “well-intentioned” manifest itself here? Consider how the situation unfolded: The shuttle seems to have accurately recognized the situation and predicted an imminent collision. This is a current strength of AI — processing input from many signals quickly to build an accurate short-term estimate of what will happen.
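To make that concrete, here is a minimal sketch in Python of the kind of short-term estimate involved. The function name, inputs, and numbers are illustrative assumptions, not the shuttle's actual software:

# Illustrative sketch only, not the shuttle's real perception code.
# Assumes sensor fusion has already produced an estimate of the
# other vehicle's distance and closing speed.
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Rough time-to-collision in seconds; infinite if not converging."""
    if closing_speed_mps <= 0:
        return float("inf")  # vehicles are not getting closer
    return gap_m / closing_speed_mps

# Example: a truck backing up at 1.5 m/s from 3 meters away
# leaves roughly 2 seconds to react.
print(time_to_collision(3.0, 1.5))  # 2.0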

Given the prediction, the next step was more difficult. Should the shuttle have honked? That seems fairly risk-free. Reversed and backed away from the approaching truck? That seems more difficult and riskier than a honk. In this case, the shuttle stopped and did nothing — when in doubt, first do no harm. For imperfect AI, faced with uncertainty, a reasonable default is to stop and do nothing.
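A sketch of that "when in doubt, do nothing" default, again in Python and again purely illustrative: the candidate actions, argument names, and confidence threshold are assumptions for the example, not the vendor's actual policy.

# Illustrative decision policy, assuming the planner chooses among a
# few discrete actions and falls back to the safest one under uncertainty.
def choose_action(collision_predicted: bool,
                  prediction_confidence: float,
                  reverse_path_clear: bool,
                  min_confidence: float = 0.9) -> str:
    """Return 'proceed', 'stop', 'honk', or 'reverse'."""
    if not collision_predicted:
        return "proceed"
    if prediction_confidence < min_confidence:
        return "stop"      # uncertain: first, do no harm
    if reverse_path_clear:
        return "reverse"   # riskier escape, only if clearly safe
    return "honk"          # low-risk warning while holding position

# One plausible way such a policy degrades to stopping in place:
print(choose_action(True, 0.7, False))  # 'stop'

A fixed, conservative default like this is exactly what an adversary can learn and exploit, which is the point the article makes next.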

But this incident should show businesses that thinking about well-intentioned actors won’t be enough. The first law of robotics doesn’t stop with “a robot may not injure a human being”; it continues with, “or, through inaction, allow a human being to come to harm.”

Now that we know that the shuttle will stop, we have to think nefariously. For example, I live near a busy street, so there is no way that I would step out in front of traffic; there are way too many distracted drivers focused on their mobile devices and not on me. But now that I know that vehicles will behave like the shuttle, why not step out whenever I feel like crossing the road? I can rely on excellent sensors, prediction, and lightning-fast braking to protect me. Going further, could I create traffic chaos on demand by jumping out unexpectedly? This scenario is not dissimilar to denial-of-service attacks on computer systems, where attackers can shut down systems and hold them for ransom. ...
