I like the aspect of evaluating explanations. Quantifiable? Human-like? Trying to follow the details.
Does This AI Think Like a Human?
MIT News, Adam Zewe, April 6, 2022
A new technique compares the reasoning of a machine-learning model to that of a human, so the user can see patterns in the model’s behavior. ...
Massachusetts Institute of Technology (MIT) and IBM Research scientists have developed the Shared Interest method for rapidly analyzing a machine learning model's behavior by evaluating its individual explanations. The technique uses saliency methods to highlight how the model made specific decisions and compares those highlights to ground-truth data. Shared Interest then applies quantifiable metrics that compare the model's reasoning to a human's by measuring the alignment between its decisions and the ground truth, and classifies those decisions into eight categories. The method can be used for image and text classification. MIT's Angie Boggust warned that the technique is only as good as the saliency methods on which it is based; if those methods are biased or inaccurate, Shared Interest will inherit those limitations. ...
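The article does not spell out how the alignment metrics are computed, but the core idea of quantifying overlap between a model's saliency region and a human-annotated ground-truth region can be sketched roughly as below. This is a minimal illustration assuming binary masks over image pixels; the function name alignment_metrics and the specific coverage scores are my own assumptions, not the paper's exact definitions.

```python
import numpy as np

def alignment_metrics(saliency_mask: np.ndarray, ground_truth_mask: np.ndarray) -> dict:
    """Compare a model's salient region to a human-annotated ground-truth region.

    Both inputs are boolean arrays of the same shape (e.g., H x W for an image),
    where True marks pixels the saliency method or the annotator considers relevant.
    """
    saliency = saliency_mask.astype(bool)
    truth = ground_truth_mask.astype(bool)

    intersection = np.logical_and(saliency, truth).sum()
    union = np.logical_or(saliency, truth).sum()

    return {
        # Overlap relative to the combined region (IoU-style score).
        "iou": intersection / union if union else 0.0,
        # Fraction of the human-annotated region the model also attends to.
        "ground_truth_coverage": intersection / truth.sum() if truth.sum() else 0.0,
        # Fraction of the model's salient region that falls inside the annotation.
        "saliency_coverage": intersection / saliency.sum() if saliency.sum() else 0.0,
    }


# Toy example: the saliency map covers half the annotated object plus some background.
if __name__ == "__main__":
    truth = np.zeros((8, 8), dtype=bool)
    truth[2:6, 2:6] = True          # human-annotated object region
    saliency = np.zeros((8, 8), dtype=bool)
    saliency[2:6, 2:4] = True       # model attends to the left half of the object
    saliency[0:2, 6:8] = True       # plus some unrelated background
    print(alignment_metrics(saliency, truth))
```

Scores like these could then be bucketed into the kinds of categories the article mentions, for instance "model and human fully agree" versus "model relies on context the human ignored," though the eight categories themselves are defined in the original work, not here.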