Knowledge Vault 4/60 - AI For Good 2021
How will the EU's AI Act impact you?
Kilian Gross
Link to the AI4Good video (YouTube)

Concept Graph & Summary using Claude 3 Opus | ChatGPT-4o | Llama 3:

graph LR
  classDef main fill:#f9f9f9, font-weight:bold, font-size:14px
  classDef regulation fill:#ffcc99, font-weight:bold, font-size:14px
  classDef risk fill:#ccff99, font-weight:bold, font-size:14px
  classDef transparency fill:#99ccff, font-weight:bold, font-size:14px
  classDef practices fill:#ff99cc, font-weight:bold, font-size:14px
  classDef biometric fill:#ccccff, font-weight:bold, font-size:14px

  A[How will the EU's AI Act impact you?]
  A --> B[AI Act: first attempt at horizontal regulation. 1]
  B --> C[AI system: broadly defined, uses regulated. 2]
  A --> D[Risk Categories]
  D --> E[Most AI: minimal/no risk, not regulated. 3]
  D --> F[High-risk AI: embedded in products, standalone. 5]
  D --> G[High-risk systems: must meet EU requirements. 6]
  A --> H[Transparency and Practices]
  H --> I[Special transparency: notify humans of bots. 4]
  H --> J[Prohibited AI practices: manipulation, vulnerabilities, social scoring. 7]
  H --> K[Private social scoring: prohibited for public use. 8]
  A --> L[Biometric Identification]
  L --> M[Biometric identification: restricted in public spaces. 9]
  L --> N[Regulation applies: affects people in EU. 10]

  class A,B,C regulation
  class D,E,F,G risk
  class H,I,J,K transparency
  class L,M,N biometric

Summary:

1.- The EU has proposed the Artificial Intelligence Act to regulate AI. It is the first attempt at horizontal AI regulation.

2.- The regulation defines an AI system broadly and lists AI techniques in an annex, regulating uses rather than the technology itself.

3.- Most AI systems are expected to fall under the minimal/no risk category and will not be regulated.

4.- Some AI systems have special transparency obligations, such as notifying humans when interacting with a bot or using emotion recognition.

5.- High-risk AI systems include those embedded in regulated products like toys and machinery, and standalone systems in 8 defined areas.

6.- High-risk systems must undergo a conformity assessment by the provider to verify requirements are met before being put on the EU market.

7.- Four types of AI practices are prohibited as posing unacceptable risk: manipulation, exploitation of vulnerabilities, social scoring, and real-time biometric identification (an illustrative sketch of these risk tiers follows the list).

8.- Social scoring systems compiled by private companies would not be prohibited; only their use by public authorities is banned.

9.- Real-time biometric identification in publicly accessible spaces is prohibited with some exceptions. Post-event identification is high risk but allowed.

10.- The regulation applies whenever the AI system affects people in the EU, even if the provider and user are outside the EU.
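To make the risk-tier structure described in points 3-9 easier to scan, here is a minimal Python sketch. The tier names, obligations, and example uses are illustrative assumptions drawn from the summary above, not the legal text of the proposed AI Act.

from enum import Enum

# Hypothetical model of the risk tiers summarized in points 3-9 above.
# Tier labels and example uses are assumptions for illustration only.
class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: banned practice"
    HIGH_RISK = "high risk: conformity assessment before entering the EU market"
    TRANSPARENCY = "special transparency obligations, e.g. disclose that a bot is a bot"
    MINIMAL = "minimal or no risk: not regulated"

# Illustrative mapping of uses to tiers, following the summary points.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "real-time biometric identification in public spaces": RiskTier.PROHIBITED,
    "AI safety component embedded in machinery or toys": RiskTier.HIGH_RISK,
    "post-event biometric identification": RiskTier.HIGH_RISK,
    "chatbot interacting with humans": RiskTier.TRANSPARENCY,
    "spam filtering": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    # Print each example use with its assumed tier, mirroring the concept graph.
    for use, tier in EXAMPLE_USES.items():
        print(f"{use:55s} -> {tier.value}")

Running the script simply lists each example use next to its assumed tier, echoing the risk, transparency, and biometric branches of the concept graph.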

Knowledge Vault built by David Vivancos 2024