AI Regulation — Lecture 15/11
The two key regulatory frameworks for AI are the
Council of Europe Framework Convention on Artificial Intelligence
and the Artificial Intelligence Act (AI Act), each with unique characteristics and
approaches:
Similarities and Differences between the AI Act and the Council of Europe Framework Convention
-> Scope and Reach
The Council of Europe Convention has a global reach, as countries outside the
EU have also signed it, including the United States, Japan, and Canada.
The AI Act is primarily focused on the EU Member States but has some
extraterritorial application, meaning it also affects companies outside the EU that
operate within or target the EU market.
-> Exemptions
Both instruments exclude AI systems used for national security or defense and
offer exemptions for research and development.
However, the AI Act applies more extensive obligations to private sector companies,
whereas the Convention allows individual member states the discretion to apply or
create their own rules for the private sector.
-> Risk-Based Approach
Both frameworks adopt a risk-based approach to regulation, where higher risks
require stronger obligations, and lower risks face fewer requirements.
The AI Act is more detailed and comprehensive in its provisions, particularly for AI
developers and deployers, while the Convention offers more flexibility to
accommodate various member states and their capabilities.
1. The Council of Europe Framework Convention on AI:
-> Origins and Development
Initially, soft law instruments like guidelines on AI (e.g., facial recognition, AI and data
protection) were developed. However, the need for legally binding international
regulation became clear, leading to the establishment of the ad hoc committee on AI.
This committee focused on principles like fundamental rights, a risk-based system,
and horizontal application across sectors.
The Convention is legally binding once ratified, and it is based on consensus among
various stakeholders, including the 46 Council of Europe member states, non-member
states, the private sector, and the EU.
It aims to ensure that AI activities throughout the entire lifecycle (from planning and
design through data collection, training, testing, deployment, and monitoring to
retirement) comply with human rights.
Unlike the AI Act, the Convention offers flexibility, particularly for private sector
regulation, allowing countries to either adopt the treaty’s provisions or create alternative
measures.
-> Core Principles in the Convention:
The Council of Europe Convention emphasizes the following seven principles that
should be reflected in domestic AI regulations:
- Human Dignity and Individual Autonomy: Ensuring AI respects fundamental
human values.
- Transparency: AI systems should be transparent and understandable.
- Oversight: AI requires human oversight to ensure accountability.
- Accountability: Accountability must be maintained throughout the AI lifecycle.
- Non-Discrimination: Preventing bias and ensuring fairness, especially in
decision-making (e.g., combating AI bias).
- Privacy and Personal Data Protection: Safeguarding personal data and
respecting privacy.
- Reliability and Safety: Ensuring AI systems are safe and reliable, while fostering
innovation.
-> Flexibility and Criticism:
• The Convention’s flexibility has been criticized for diluting its provisions,
particularly after private sector lobbying, including significant influence from the
United States. Critics argue that stronger requirements were needed, especially for
the private sector.
• Despite this, the flexibility was viewed as necessary to achieve wider adoption,
as it allows different countries to tailor their AI regulatory frameworks according to their
specific needs and capacities.
-> Key Obligations for Signatory States:
1. Align Domestic Legal Frameworks with Human Rights:
• Member states must update their national laws and ensure they align with the
principles outlined in the Convention, specifically those that protect fundamental
human rights in the context of AI use.
2. Protecting Democratic Processes:
• States are required to safeguard the democratic process, especially with respect
to potential AI risks in areas such as elections, voting behavior, or public
opinion manipulation through AI systems.
3. Right to Complain:
• Individuals who are affected by AI systems or whose human rights are violated due
to AI operations must have access to a clear right to complain. This ensures
accountability and redress for any harms caused by AI systems.
4. Risk Management:
• States need to ensure that AI systems undergo risk management procedures.
This involves actively identifying risks, assessing these risks, and implementing
measures to mitigate them. This concept aligns with similar provisions in the AI Act.
5. Human Rights Impact Assessments:
• A key tool for implementing the Convention’s principles is the Human Rights
Impact Assessment. These assessments allow member states to evaluate the
potential impact of AI systems on human rights and to take corrective actions.