
Monday, May 24, 2021

Explaining AI in Context

Not that I am leaning on the autonomous-car connection in particular, but I am very much interested in how we explain with AI in all sorts of contexts. We built some AI systems in our early days that could have used very precise explanatory capabilities, in order to keep their credibility over many maintenance cycles, but that could only be done crudely at the time. Here is a nice case study in the here and now.

The Rocky Road Toward Explainable AI (XAI) For AI Autonomous Cars 

The AI systems piloting autonomous cars will need to provide explanations to curious passengers about the route being taken.

By Lance Eliot, the AI Trends Insider

Our lives are filled with explanations. You go to see your primary physician due to a sore shoulder. The doctor tells you to rest your arm and avoid any heavy lifting. In addition, a prescription is given. You immediately wonder why you would need to take medication and also are undoubtedly interested in knowing what the medical diagnosis and overall prognosis are. 

So, you ask for an explanation. 

In a sense, you have just opened a bit of Pandora’s box, at least in regard to the nature of the explanation that you might get. For example, the medical doctor could rattle off a lengthy, jargon-filled account of shoulder anatomy and dive deeply into the chemical properties of the medication that has been prescribed. That’s probably not the explanation you were seeking.

It used to be that physicians did not expect patients to ask for explanations. Whatever the doctor said was considered sacrosanct. The very act of asking for an explanation was tantamount to questioning the veracity of a revered medical opinion. Some doctors would gruffly tell you to simply do as they had instructed (no questions permitted) or might utter something rather insipid, such as “your shoulder needs help and this is the best course of action.” Period, end of story.

Nowadays, medical doctors are aware of the need for viable explanations. There is specialized “bedside manner” training that takes place in medical schools. Hospitals have their own in-house courses. Medical students are graded on how they interact with patients. And so on.

Though that certainly has opened the door to improved interaction with patients, it does not fully solve the explanation problem.

Knowing how to best provide an explanation is both art and science. You need to consider that there is the explainer who will be providing the explanation, and there is a person who will be the recipient of the explanation. ...
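To make that explainer/recipient idea concrete, here is a minimal sketch in Python of how an autonomous-car stack might tailor the same route decision to different audiences. Everything in it is hypothetical, made up for illustration: the RouteDecision record, the reason codes, and the explain() function are my assumptions, not anything from Eliot's article.

from dataclasses import dataclass

# Hypothetical decision record an autonomous-car planner might log.
@dataclass
class RouteDecision:
    action: str        # e.g., "a detour via Elm St"
    reason_code: str   # internal cause, e.g., "congestion_ahead"
    confidence: float  # planner's confidence in the decision

# Illustrative mapping from internal reason codes to plain language.
PLAIN_LANGUAGE = {
    "congestion_ahead": "heavy traffic was detected on the planned route",
    "road_closure": "the planned road is closed",
}

def explain(decision: RouteDecision, audience: str) -> str:
    """Tailor the explanation to the recipient: a passenger gets plain
    language, while an engineer gets the raw internals."""
    if audience == "engineer":
        return (f"action={decision.action} reason={decision.reason_code} "
                f"confidence={decision.confidence:.2f}")
    # Default: passenger-facing explanation in everyday terms.
    reason = PLAIN_LANGUAGE.get(decision.reason_code, "conditions changed")
    return f"We are taking {decision.action} because {reason}."

if __name__ == "__main__":
    d = RouteDecision("a detour via Elm St", "congestion_ahead", 0.87)
    print(explain(d, "passenger"))  # plain-language version
    print(explain(d, "engineer"))   # internals, for maintenance and debugging

The point is simply that one internal decision yields different explanations depending on who is asking, which is the art-and-science balance the article describes.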
