Explainable Artificial Intelligence (XAI) – Study Notes
1. Why XAI? Impact and Motivation
1.1 Impact of Artificial Intelligence
AI systems increasingly influence high-stakes domains:
● Entertainment (recommendations, content creation)
● Education (student monitoring, grading)
● Medicine (diagnosis, treatment support)
● Human Resources (recruitment, evaluation)
➡ Because AI affects people’s lives, understanding and justifying decisions is critical.
1.2 Algorithmic Bias – Key Examples
Amazon AI Recruiting Tool
● Trained on historical CVs (mostly from men)
● Learned gender bias implicitly (e.g., penalized CVs containing the word "women's")
● Example of bias caused by biased training data
COMPAS Case (Criminal Risk Prediction)
● Algorithm predicted recidivism risk
● Found to falsely label Black defendants as high-risk nearly twice as often as white defendants (ProPublica, 2016)
● Example of societal bias amplified by AI
➡ These cases show the need for transparency and accountability.
2. Legal and Regulatory Context
2.1 EU AI Act
AI systems are classified by risk level:
● Unacceptable risk → banned
● High risk → strict requirements (education, grading, hiring, medical devices)
● Limited risk → transparency obligations (chatbots)
● Minimal risk → largely unregulated (spam filters, recommendation systems)
➡ Explainability is especially required for high-risk AI systems.
2.2 GDPR – Right to Explanation
● Individuals have a right to meaningful information about the logic of automated decisions that significantly affect them (GDPR Art. 22, often called the "right to explanation")
● Example: loan rejection must be explainable
3. Explainability as a Technical Challenge
3.1 White-box vs Black-box Models
White-box AI
● Transparent internal logic
● Example: decision trees, rule-based systems
● Easy to trace decisions step by step
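As a minimal sketch of what "transparent internal logic" means in practice (assuming scikit-learn and the classic Iris dataset, both illustrative choices not specified in these notes), a shallow decision tree can print its entire decision logic as readable rules:

# Hypothetical white-box example: a shallow decision tree whose rules
# can be inspected and traced step by step.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# Keep the tree shallow so the printed rules stay human-readable
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the full decision logic as nested if/else rules,
# so every individual prediction can be traced from root to leaf
print(export_text(tree, feature_names=iris.feature_names))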
Black-box AI
● Complex internal structure
● Example: deep neural networks
● High performance, but hard to interpret
➡ XAI mainly focuses on making black-box models understandable.
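One common family of XAI techniques explains a black box post hoc, without opening up its internals. The sketch below (again assuming scikit-learn and Iris as illustrative choices) trains a small neural network as an opaque model, then applies model-agnostic permutation importance to estimate which input features its predictions depend on. Permutation importance is one example technique among several; these notes do not prescribe a specific method.

# Hypothetical black-box example: an opaque neural network explained
# post hoc with model-agnostic permutation importance.
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# The trained weights of this network are not directly interpretable
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0)
mlp.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much the model's score drops; a large drop means the
# black box relied heavily on that feature.
result = permutation_importance(mlp, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean in zip(iris.feature_names, result.importances_mean):
    print(f"{name}: {mean:.3f}")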