
Wednesday, July 06, 2022

Building Explainability into ML Models

Key to making them effectively usable

Building Explainability into Components of ML Models

MIT News, Adam Zewe, June 30, 2022

Researchers at the Massachusetts Institute of Technology (MIT) and cybersecurity startup Corelight have developed a taxonomy to help developers create components of machine learning (ML) models that incorporate explainability. The researchers defined properties that make features interpretable for five varieties of users, and provided instructions for engineering features into formats that laypersons will find easier to understand. Key to the taxonomy is the precept that there is no universal model for interpretability: the researchers define properties that can make components approximately explainable for different decision-makers, and outline which properties are likely most valuable to which users. MIT's Alexandra Zytek said, "The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with."  ... 
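The idea of engineering a raw feature into a format a layperson can understand can be illustrated with a minimal, hypothetical sketch. The transformation and feature names below are not from the MIT/Corelight work, which is described only at a conceptual level in the article; this is just one plausible example of mapping a raw model input to a human-readable category.

```python
def to_interpretable(raw_seconds_since_midnight: int) -> str:
    """Map a raw model feature (seconds since midnight) to a
    coarse, human-readable time-of-day category that a
    non-expert decision-maker can reason about directly."""
    hour = raw_seconds_since_midnight // 3600
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 22:
        return "evening"
    return "night"

# A raw value like 47100 (13:05) becomes "afternoon"
print(to_interpretable(47100))
```

A domain expert might prefer the raw numeric feature for precision, while a lay user benefits from the categorical form, which is the kind of audience-dependent trade-off the taxonomy is meant to make explicit.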
