Course notes on Human-Centered Machine Learning -
Explainability Part
— Lecture 1: XAI Intro —
What is Explainable AI/ML
• No consensus on a universal definition: definitions are domain-specific
• Interpretability: ability to explain or to present in understandable terms to a
human
⁃ The degree to which a human can understand the cause of a decision
⁃ The degree to which a human can consistently predict the result of a
model
• Explanation: answer to a why question
⁃ Usually relates the feature values of an instance to its model prediction
in a humanly understandable way
• Molnar: model interpretability (global) vs explanation of an individual
prediction (local)
• Ribeiro: a model is interpretable if it uses only a small set of features;
"an explanation is a local linear approximation of the model's behavior"
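Ribeiro's idea of a local linear approximation can be sketched in a few lines. The following is a minimal, LIME-style illustration (not the actual LIME library): `black_box` is a hypothetical model, and the explanation is the weight vector of a linear model fitted to perturbations around one instance, weighted by proximity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: any callable mapping features to a score.
def black_box(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Instance whose prediction we want to explain.
x0 = np.array([0.5, -0.2])

# 1. Sample perturbations around the instance.
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to the instance (Gaussian kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.1 ** 2))

# 3. Fit a weighted linear model; its coefficients are the explanation.
A = np.hstack([Z, np.ones((len(Z), 1))])  # add intercept column
W = np.diag(w)
coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)

print("local linear weights per feature:", coef[:2])
```

The two fitted weights approximate the model's local slopes at x0, which is exactly what makes the explanation "local": it is only valid in a neighborhood of the instance.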
Motivation: why do we need XAI?
• Scientific understanding
• Bias/fairness issues: does my model discriminate?
• Model debugging and auditing: why did my model make this mistake?
• Human-AI cooperation/acceptance: how can I understand/interfere with the
model?
• Regulatory compliance: does my model satisfy legal requirements (e.g.
GDPR)? Key domains: healthcare, finance/banking, insurance
• Applications: affect recognition in video games, intelligent tutoring systems,
bank loan decision, bail/parole decisions, critical healthcare predictions (e.g.
cancer, major depression), film/music recommendation, job interview
recommendation/job offer, personality impression prediction for job interview
recommendation, tax exemption
Taxonomy
• Feature statistics: feature importance and interaction strengths
• Feature visualizations: partial dependence and feature importance plots
• Model internals: linear model weights, DT structure, CNN filters, etc.
• Data points: exemplars in counterfactual explanations
• Global or local surrogates via intrinsically interpretable models
• Example: play tennis decision tree:
⁃ Intrinsic, model specific, global & local, model internals
• Example: CNN decision areas in images:
⁃ Post-hoc, model specific, local, model internals
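The "feature visualizations" entry above can be made concrete: a partial dependence curve for feature j is computed by fixing that feature at each grid value for every instance in a background dataset and averaging the predictions. A minimal sketch, with a hypothetical model whose true form we know:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained model: depends on x0 quadratically, on x1 linearly.
def model(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

X = rng.normal(size=(1000, 2))  # background dataset

# Partial dependence of feature 0: for each grid value v, set x0 = v for
# every instance, keep the other features as observed, average predictions.
grid = np.linspace(-2, 2, 9)
pd_vals = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v
    pd_vals.append(model(Xv).mean())

for v, p in zip(grid, pd_vals):
    print(f"x0={v:+.1f}  PD={p:+.3f}")
```

Because the model here is known, the curve recovers the quadratic shape in x0 (up to a constant offset from averaging over x1), which is what a partial dependence plot would display.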
Scope of Interpretability
• Algorithmic transparency: how does the algorithm generate the model?
• Global, holistic model interpretability:
⁃ How does the trained model make predictions?
⁃ Can we comprehend the entire model at once?
• Global model interpretability on a modular level: how do parts of the model
affect predictions?
• Local interpretability for a single prediction: why did the model make a certain
prediction for an instance?
• Local interpretability for a group of predictions:
⁃ Why did the model make specific predictions for a group of instances?
⁃ May be used for analyzing group-wise bias
Evaluation of interpretability
• Application-level evaluation (real task):
⁃ Deploy the interpretation method on the application
⁃ Let the experts experiment and provide feedback
• Human-level evaluation (simple task): during development, by lay people
• Function-level evaluation (proxy task):
⁃ Does not use humans directly
⁃ Uses measures from a previous human evaluation
• All of the above can be used for evaluating model interpretability as well as
individual explanations
Properties of explanation methods
• Expressive power: the "language" or structure of the explanations
⁃ E.g. IF-THEN rules, tree itself, natural language etc.
• Translucency: describes how much the explanation method relies on looking
into the machine learning model
• Portability: describes the range of machine learning models with which the
explanation method can be used
• Algorithmic complexity: computational complexity of the explanation method
Properties of individual explanations
• Accuracy: how well does an explanation predict unseen data?
• Fidelity: how well does the explanation approximate the prediction of the black
box model?
• Certainty/confidence: does the explanation reflect the certainty of the machine
learning model?
• Comprehensibility/plausibility:
⁃ How well do humans understand the explanations?
⁃ How convincing (trust building) are they?
⁃ Difficult to define and measure, but extremely important to get right
• Consistency: how much does an explanation differ between models that are
trained on the same task and produce similar predictions?
• Stability: how similar are the explanations for similar instances?
⁃ Stability within a model vs consistency across models
• Degree of importance: how well does the explanation reflect the importance of
features or parts of the explanation?
• Novelty: does the instance to be explained lie far from the training data,
where the explanation may be unreliable?
• Representativeness: how many instances does an explanation cover (a single
prediction vs. the whole model)?
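Two of the properties above, fidelity and stability, are directly measurable. A minimal sketch under assumed definitions: fidelity as the R² of a global linear surrogate against a hypothetical black box, and stability as the cosine similarity between gradient-based explanations of two nearby instances.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical black box and a global linear surrogate fitted to mimic it.
def black_box(X):
    return np.tanh(X[:, 0]) + 0.3 * X[:, 1]

X = rng.normal(size=(2000, 2))
y = black_box(X)

A = np.hstack([X, np.ones((len(X), 1))])  # features plus intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Fidelity: how well the surrogate reproduces the black-box outputs (R^2).
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
fidelity = 1 - ss_res / ss_tot
print(f"surrogate fidelity (R^2 vs black box): {fidelity:.3f}")

# Stability: compare explanations (here, finite-difference gradients of the
# black box) for an instance and a slightly perturbed copy.
def local_gradient(x, eps=1e-4):
    g = np.empty_like(x)
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = eps
        g[j] = (black_box((x + e)[None]) - black_box((x - e)[None]))[0] / (2 * eps)
    return g

x = np.array([0.4, -1.0])
g1 = local_gradient(x)
g2 = local_gradient(x + 0.01)
stability = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
print(f"explanation stability (cosine similarity): {stability:.3f}")
```

High fidelity means the surrogate can stand in for the black box; high stability means near-identical inputs receive near-identical explanations, which is the within-model counterpart of cross-model consistency.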