
Sunday, September 08, 2019

(Update) AI Explainability Toolkit Talk and Technology

From last week's talk on the just-released open source explainability toolkit. This can be seen as a fundamental part of most conversations. When we interact with colleagues or with professionals and get recommendations, we often have to ask the question 'Why?'. This is an attempt at preloading AI-originating answers to that question, based on a number of common templates.
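To make the idea of template-based answers concrete, here is a minimal, hypothetical sketch: it turns a linear model's per-feature contributions into a canned "Why?" sentence, in the spirit of what explanation toolkits automate. All names here (the features, weights, and the template wording) are illustrative assumptions, not part of the AIX360 API.

```python
def explain_decision(weights, x, feature_names, top_k=2):
    """Return a templated 'Why?' answer naming the top contributing features.

    Assumes a simple linear model where each feature's contribution is
    weight * value; real toolkits support far richer explanation methods.
    """
    contributions = [(name, w * v) for name, w, v in zip(feature_names, weights, x)]
    # Rank features by the magnitude of their contribution to the score.
    contributions.sort(key=lambda pair: abs(pair[1]), reverse=True)
    reasons = ", ".join(f"{name} (contribution {c:+.2f})"
                        for name, c in contributions[:top_k])
    return f"The decision was driven mainly by: {reasons}."

# Toy loan-scoring example (weights and inputs are made up):
names = ["income", "debt_ratio", "age"]
weights = [0.8, -1.5, 0.1]
x = [1.2, 0.9, 0.4]
print(explain_decision(weights, x, names))
# → The decision was driven mainly by: debt_ratio (contribution -1.35), income (contribution +0.96).
```

The point is only that the explanation is assembled from a fixed template plus model internals; the toolkit generalizes this with multiple explanation algorithms suited to different audiences.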

http://cognitive-science.info/wp-content/uploads/2019/09/AIX360-CSIG-V1-2019-09-05.pdf  (Slides)

http://cognitive-science.info/community/weekly-update/  Update: Recording: https://www.youtube.com/watch?v=Yn4yduyoQh4

http://aix360.mybluemix.net/   (Technical link, demos)

What does it take to trust AI decisions?
AI is now used in many high-stakes decision-making applications.

Addressing:
Is it fair?  Is it easy to understand?  Did anyone tamper with it?  Is it accountable?  

A very good talk, with lots of great progress shown, but still much more to do. Everyone doing serious work with AI systems should examine this work, see how their system could link to this capability, and extend it. More to follow.

IBM Research AI Explainability 360 Toolkit

By Vijay Arya, Rachel Bellamy, Pin-Yu Chen, Payel Das, Amit Dhurandhar, MaryJo Fitzgerald, Michael Hind, Samuel Hoffman, Stephanie Houde, Vera Liao, Ronny Luss, Sameep Mehta, Saska Mojsilovic, Sami Mourad, Pablo Pedemonte, John Richards, Prasanna Sattigeri, Moninder Singh, Karthikeyan Shanmugam, Kush Varshney, Dennis Wei, Yunfeng Zhang, Ramya Raghavendra ....
