Berkeley Artificial Intelligence Research
Making Decision Trees Accurate Again: Explaining What Explainable AI Did Not
By Alvin Wan
The interpretability of neural networks is becoming increasingly necessary as deep learning is adopted in settings where accurate and justifiable predictions are required, in applications ranging from finance to medical imaging. However, deep neural networks are notoriously difficult to justify. Explainable AI (XAI) attempts to bridge this divide between accuracy and interpretability, but as we explain below, XAI justifies decisions without interpreting the model directly.
What is “Interpretable”?
Defining explainability or interpretability for computer vision is challenging: What does it even mean to explain a classification for high-dimensional inputs like images? As we discuss below, two popular definitions involve saliency maps and decision trees, but both approaches have their weaknesses.
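To make the saliency-map notion concrete, below is a minimal sketch of a vanilla gradient saliency map in PyTorch. It is not the method discussed in this post; the ResNet-18 model and the random placeholder image are assumptions chosen purely for illustration.

```python
import torch
from torchvision import models

# Sketch of a vanilla gradient saliency map (assumed setup, not this post's
# method). A ResNet-18 with default weights and a random "image" stand in for
# whatever trained classifier and input you actually want to explain.
model = models.resnet18()   # assumption: any image classifier would do
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder input

scores = model(image)               # forward pass: one score per class
top_class = scores.argmax(dim=1)    # the class the model predicts
scores[0, top_class].backward()     # gradient of that score w.r.t. the pixels

# Saliency = per-pixel gradient magnitude, maxed over the color channels.
saliency = image.grad.abs().max(dim=1).values   # shape: (1, 224, 224)
print(saliency.shape)
```

The resulting heatmap highlights which pixels most influenced the predicted score; a decision-tree-based explanation, by contrast, exposes a sequence of intermediate decisions rather than per-pixel attributions.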