A thoughtful piece on the problem of how to reasonably provide explanations for AI decisions. Have now looked at several means of addressing the problem. Consider carefully the decisions being driven and the risks involved.
Grilling the answers: How businesses need to show how AI decides
As artificial intelligence becomes more widespread, so does the need to render it explainable. How can companies navigate the technical and ethical challenges?
By Lindsay Clark in Computer Weekly
Show your working: generations of mathematics students have grown up with this mantra. Getting the right answer is not enough. To get top marks, students must demonstrate how they got there. Now, machines need to do the same.
As artificial intelligence (AI) is used to make decisions affecting employment, finance or justice, as opposed to which film a consumer might want to watch next, the public will insist it explains its working.
Sheffield University professor of AI and robotics Noel Sharkey drove home the point when he told The Guardian that decisions based on machine learning could not be trusted because they were so “infected with biases”.
Sharkey called for an end to the application of machine learning to life-changing decisions until such systems could be proven safe, in the same way that new drugs are tested before they are introduced into healthcare. ...
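To make "show your working" concrete, here is a minimal sketch, not drawn from the article: for a linear model such as logistic regression, each feature's contribution to a single decision's log-odds is simply its coefficient times its value, which is about the simplest explanation a model can give. The loan-style feature names and data below are invented purely for illustration; genuinely opaque models (deep networks, large ensembles) would instead need post-hoc explanation techniques such as LIME or SHAP.

```python
# Minimal sketch of "showing the working" behind one model decision.
# Illustrative only: the model, data and feature names are hypothetical,
# not taken from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan-style data: three made-up features.
feature_names = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain a single decision: for a linear model, each feature's
# contribution to the log-odds is coefficient * value.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {contrib:+.3f}")
print(f"{'intercept':>15}: {model.intercept_[0]:+.3f}")
print("decision:",
      "approve" if model.predict(applicant.reshape(1, -1))[0] else "decline")
```

Printing the sorted contributions gives a per-applicant answer to "why was this declined?", which is the kind of working the article argues the public will demand for decisions in employment, finance or justice.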