Summary: Explainable AI

Pages: 11
Uploaded on: 22 November 2022
Written in: 2022/2023
This document contains notes and summaries covering the content of the course Human-Centered Machine Learning within the Artificial Intelligence Master at Utrecht University. It covers the following topics: - intro to XAI - interpretable models - model agnostic interpretability methods - neural network interpretability


Preview of the content

Course notes on Human-Centered Machine Learning - Fairness Part

— Lecture 5: FairML intro —

Dual use
• Refers to the possible beneficial but also harmful consequences of AI-powered
solutions
• Types of harms:
⁃ Allocative harms: “when a system withholds an opportunity or a
resource from certain groups”
⁃ Immediate, easier to measure
⁃ Examples: hiring processes, visa applications
⁃ Representational harms: “when systems reinforce the subordination of
some groups along the lines of identity - race, class, gender, etc.”
⁃ Long term, more difficult to measure
⁃ Examples: Google Translate (nurse/doctor), CEO image search

Terminology
• In the FairAI field, “bias” is used differently than the bias term in a
linear model, such as linear regression
• Fair machine learning is just getting started, so there is no single definition
of “bias” or “fairness”
• Research articles often don’t define what they mean by these terms
• Different studies have different conceptualizations of bias

Data
• As usual: garbage in -> garbage out = bias in -> bias out
• Biased data:
⁃ Start from the world as it should and could be
⁃ Retrospective injustice then introduces societal bias
⁃ This yields the world as it actually is
⁃ Non-representative sampling and measurement errors then introduce
statistical bias
⁃ The result is a representation of the world according to the data
• If we had a perfect representation of the world, we would only need to address
the statistical bias problem; but there are no real-world datasets free of
societal biases
• Statistical bias:
⁃ Because of non-representative sampling (e.g., commonly used image
datasets are often US/European centered)
⁃ Because of measurement errors
⁃ There’s often a disconnect between the target variable and the overall
goal
⁃ Example: being re-arrested vs. re-offending vs. risk to society
⁃ Example: repayment of loans vs. better lending policies
⁃ Often different stakeholders have different overarching goals
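The statistical-bias point above can be made concrete with a small simulation. A minimal sketch, with entirely hypothetical numbers (the groups, population shares, and repayment rates below are my own illustration, not from the lecture): when one group is undersampled, the estimated overall rate drifts away from the true population rate.

```python
import random

random.seed(0)

# Hypothetical population: group A (80% of people) repays loans at
# rate 0.70, group B (20%) repays at rate 0.60. Illustrative only.
TRUE_RATE = 0.8 * 0.70 + 0.2 * 0.60  # true overall repayment rate = 0.68

def estimate_rate(n, share_b):
    """Estimate the overall repayment rate from n sampled people,
    where group B makes up `share_b` of the sample."""
    repaid = 0
    for _ in range(n):
        if random.random() < share_b:    # person drawn from group B
            repaid += random.random() < 0.60
        else:                            # person drawn from group A
            repaid += random.random() < 0.70
    return repaid / n

# Non-representative sample: group B is only 5% of the sample,
# instead of its true 20% population share.
biased = estimate_rate(100_000, 0.05)
print(TRUE_RATE, round(biased, 3))
```

Because the better-repaying group A is overrepresented, the estimate lands near 0.695 rather than the true 0.68 — a statistical bias that exists even though every individual measurement is correct.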

Datasheets for Datasets (Gebru et al., 2021)
• Motivation: e.g., for what purpose was the dataset created?
• Composition: e.g., does the dataset contain data that might be considered
sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual
orientations, religious beliefs, etc.)?
• Collection process: e.g., what mechanisms or procedures were used to collect
the data (e.g., hardware apparatus or sensor, manual human curation,
software program, software API)?
• Uses: e.g., are there tasks for which the dataset should not be used?
• Distribution: e.g., how will the dataset be distributed (e.g., tarball on website,
API, GitHub)?
• Maintenance: will the dataset be updated (e.g., to correct labeling errors, add
new instances, delete instances)?
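The six datasheet sections above can be sketched as a simple checklist. A minimal sketch: the keys and example answers below are my own illustration, not an official schema from Gebru et al.

```python
# Illustrative datasheet skeleton following the six section headings
# above; keys and answers are hypothetical, not an official schema.
datasheet = {
    "motivation": "Created to study loan-repayment prediction.",
    "composition": "Contains sensitive fields: gender, ethnicity.",
    "collection_process": "Manual human curation plus a software program.",
    "uses": "Out of scope: individual credit decisions.",
    "distribution": "Tarball on website; not redistributed elsewhere.",
    "maintenance": "Labels re-audited yearly; instances deletable on request.",
}

REQUIRED_SECTIONS = ("motivation", "composition", "collection_process",
                     "uses", "distribution", "maintenance")

def missing_sections(sheet):
    """Return the required datasheet sections a draft still lacks."""
    return [s for s in REQUIRED_SECTIONS if not sheet.get(s)]

print(missing_sections(datasheet))        # complete draft
print(missing_sections({"uses": "..."}))  # incomplete draft
```

A checklist like this makes it easy to flag an incomplete draft before a dataset is released.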

Fairness in development of ML models
• Sample size: performance tends to be lower for minority groups; this
happens even when the data is fully representative of the world
• ML models can amplify biases in the data
⁃ Example from Zhao et al.: 33% of the cooking images have a man in the
agent role, but at test time the model fills only 16% of the agent roles
with a man
• Features:
⁃ Instances are represented by features
⁃ Which features are informative for a prediction may differ between
different groups
⁃ A particular feature set may lead to high accuracy for the majority
group, but not for a minority group
⁃ The quality of the features may differ between different groups
⁃ What about the inclusion of sensitive attributes as features (e.g., gender,
race)? Would it:
⁃ Improve overall accuracy but lower accuracy for specific groups
⁃ Improve overall accuracy, for all groups
⁃ What if we need such information to evaluate the fairness of systems?
• Evaluation:
⁃ In ML, the evaluation often makes strong assumptions
⁃ Outcomes are assumed not to be affected by decisions made about others
⁃ Example: denying someone’s loan can impact the ability of a
family member to repay their loan
⁃ We don’t look at the type and distribution of errors
⁃ Decisions are evaluated simultaneously
⁃ Feedback loops & long-term effects
• Model cards for model reporting (Mitchell et al., 2019)
⁃ Aim: transparent model reporting, such as:
⁃ Model details (e.g., version, type, license, features)
⁃ Intended use (e.g., primary intended uses and users, out-of-
scope use cases)
⁃ Training data
⁃ Evaluation data
⁃ Ethical considerations
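The bias-amplification example from Zhao et al. earlier in this section can be made concrete with a toy calculation. This is a deliberate simplification, not their actual bias-amplification metric: it just compares how often “man” fills the agent role in the training labels versus in the model’s predictions.

```python
# Toy illustration of bias amplification (a simplification, not the
# actual metric from Zhao et al.): the training set has 33% "man" in
# the agent role, but the model predicts "man" only 16% of the time.
train_agents = ["man"] * 33 + ["woman"] * 67
pred_agents = ["man"] * 16 + ["woman"] * 84

def man_share(agents):
    """Fraction of agent roles filled by 'man'."""
    return agents.count("man") / len(agents)

train_share = man_share(train_agents)  # 0.33 in the training labels
pred_share = man_share(pred_agents)    # 0.16 in the predictions

# The predictions are further from 50/50 than the training labels:
# the model amplifies the dataset's existing skew toward "woman".
amplified = abs(pred_share - 0.5) > abs(train_share - 0.5)
print(train_share, pred_share, amplified)
```

The same comparison per group is a cheap first check when auditing a trained model for amplified correlations.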
