How to address the AI Act and stay in control of your AI without budget overruns, while avoiding the risks of non-compliance, legal disputes, and reputational damage.
Discrimination, misinformation, privacy breaches, algorithmic bias, behavioral manipulation, mass surveillance, cyberattacks, information system security breaches, socio-economic and environmental risks: these are the major risks associated with artificial intelligence (AI) identified by MIT in its AI Risk Repository.
The European AI Act, a framework of responsibility
In response to these challenges, European lawmakers have led the way in establishing guarantees for ethical AI that respects fundamental rights. The AI Act, which came into force on August 1, 2024, establishes a framework of responsibility to strengthen trust in the European market.
At every stage of the design, development, implementation or use of an Artificial Intelligence System, it is essential to anticipate the new regulatory constraints introduced by the AI Act and incorporate the legal and contractual requirements applicable to your project.
This approach will allow you to best navigate the global legal environment in which your AI project operates, helping you stay in control without budget overruns, avoiding risks of non-compliance, legal disputes, or reputational damage.
The AI Act’s approach and key issues
Introducing a risk-based approach, the AI Act aims to prohibit manipulative AI Systems or those posing significant risks to individuals, ensuring that human-machine interactions remain transparent.
The AI Act also introduces product regulation, ranging from labeling requirements to barring non-compliant Systems from the market.
Failure to comply with the AI Act can lead to financial penalties:
- Violations of the prohibitions on certain AI practices can lead to fines of up to €35 million or 7% of the company’s annual global turnover, whichever is higher (the way this cap is computed is illustrated below).
- Non-compliance with other obligations under the AI Act, including those applicable to high-risk Systems, may result in fines of up to €15 million or 3% of global annual turnover, whichever is higher.
- Providing inaccurate or misleading information to the competent authorities can result in fines of up to €7.5 million or 1% of the company’s annual global turnover, whichever is higher. In some cases, decisions regarding sanctions may be made public.
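To see how the “whichever is higher” rule plays out in practice, here is a minimal sketch in Python, using a purely hypothetical turnover figure, that computes each ceiling as the greater of the fixed amount and the percentage of annual global turnover.

```python
def fine_ceiling(annual_global_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_rate: float) -> float:
    """Maximum possible fine: the higher of the fixed cap and the
    percentage of annual global turnover."""
    return max(fixed_cap_eur, turnover_rate * annual_global_turnover_eur)


# Purely hypothetical company with €1 billion in annual global turnover.
turnover = 1_000_000_000

print(fine_ceiling(turnover, 35_000_000, 0.07))  # prohibited practices   -> 70,000,000.0
print(fine_ceiling(turnover, 15_000_000, 0.03))  # other obligations      -> 30,000,000.0
print(fine_ceiling(turnover, 7_500_000, 0.01))   # misleading information -> 10,000,000.0
```

For a company of that size, the turnover-based percentage exceeds the fixed amount in every tier, so the percentage becomes the applicable ceiling.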
Authorities may also temporarily or permanently ban non-compliant AI Systems from the market. In the event of damage caused by a non-compliant AI System, companies may also be ordered to compensate the victims.
Which companies are affected by the AI Act?
An AI System is defined as a “machine-based system that is designed to operate with varying levels of autonomy (…) and that (…) infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions” (Article 3(1) of the AI Act).
The AI Act applies to all companies wishing to develop and/or commercialize such an intelligent System:
- companies targeting the European market (whether the company’s headquarters are within or outside the EU);
- providers located in a third country, if the output produced by the System is used in the EU.
The AI Act also applies to AI Systems already in use within the company as of its effective date.
Timeline for implementation
The AI Act, adopted on May 21, 2024, and in force since August 1, 2024, is being implemented gradually:
- 6 months for AI Systems posing unacceptable risks (prohibitions apply from February 2, 2025)
- 12 months for general-purpose AI models (from August 2, 2025)
- 24 or 36 months for high-risk AI Systems, depending on their category (from August 2, 2026 or August 2, 2027)
A classification based on the level of risk
The AI Act introduces a classification of AI Systems based on the level of risk they pose, from levels 1 to 4, with different compliance obligations, or even their outright prohibition from the market.
Level 4: Unacceptable Risk
Level 4 targets Systems that implement manipulative techniques, exploit human vulnerabilities, or use social scoring systems. The following practices or use cases are prohibited, with rare exceptions:
- Use of subliminal or deliberately manipulative or deceptive techniques.
- Exploiting a person’s vulnerabilities (age, disability) to substantially alter their behavior.
- Use of biometric categorization systems based on sensitive or protected characteristics.
- Social scoring systems leading to unfair, unfavorable, or disproportionate treatment.
- Use of real-time remote biometric identification systems in publicly accessible spaces.
- Creation of facial recognition databases through non-targeted data harvesting.
- Inferring emotions, particularly in the workplace and in educational settings.
Level 3: High Risk
Level 3 includes systems used in critical contexts such as public infrastructure, financial services, and healthcare. These systems require strict compliance measures, including regular security and transparency assessments.
AI Systems are automatically classified as high-risk when used in certain designated areas, such as:
- Systems presenting significant risks to the health, safety, or fundamental rights of individuals.
- Biometric and biometric-based systems used to categorize persons.
- Management and operation of critical infrastructure (road, rail, air traffic, water, gas, electricity, internet).
- Education and professional training.
- Employment, workforce management, and access to independent employment.
- Access to and right to essential private services, public services, and social benefits.
AI Systems embedded in certain designated products, such as medical diagnostic devices or transport vehicles, are also automatically classified as high-risk.
Level 2: Low to Moderate Risk
Level 2 includes Systems interacting with individuals that are neither unacceptable nor high-risk. Examples include:
- AI integrated into chatbots or customer service hotlines.
- AI-generated artistic caricatures.
Level 1: Minimal or No Risk
Level 1 includes other systems that do not fall into levels 2, 3, or 4. These systems do not significantly impact fundamental rights, health, or safety. Examples include:
- Connected devices (home appliances) using AI systems.
- Anti-spam software.
- Most video games.
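Purely as an illustration, the sketch below (in Python, with hypothetical system names and simplified labels that are no substitute for a case-by-case legal analysis) shows how an internal AI inventory might record the four levels described above.

```python
from enum import Enum

class RiskLevel(Enum):
    """Internal labels for the four levels described above (not legal terms)."""
    MINIMAL = 1       # e.g. anti-spam filters, most video games
    LIMITED = 2       # e.g. chatbots: transparency obligations
    HIGH = 3          # e.g. recruitment or critical-infrastructure Systems
    UNACCEPTABLE = 4  # prohibited practices, e.g. social scoring

# Hypothetical inventory assigning a provisional level to each AI System;
# the final classification always requires a legal assessment.
ai_inventory = {
    "customer-service-chatbot": RiskLevel.LIMITED,
    "cv-screening-tool": RiskLevel.HIGH,
    "spam-filter": RiskLevel.MINIMAL,
}

for system, level in ai_inventory.items():
    print(f"{system}: level {level.value} ({level.name})")
```

Keeping such an inventory up to date makes it easier to spot which Systems trigger the compliance requirements described in the next section.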
Compliance Requirements for AI Systems
High-Risk AI (Level 3)
The AI Act requires:
- Risk assessment: Providers must carry out rigorous risk assessments to identify potential dangers associated with AI use.
- Transparency and documentation: Providers must provide detailed documentation on the functioning of the AI, the data used for its training, and the measures taken to mitigate risks.
- Human supervision: Systems must be designed to allow for appropriate human oversight, ensuring that AI does not make autonomous decisions without control.
- Cybersecurity: Providers must ensure the robustness and security of AI systems against cyberattacks and other threats.
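For teams tracking these obligations internally, the following is a minimal sketch of a compliance record for a single high-risk System (hypothetical field names, not an official template defined by the AI Act).

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskComplianceRecord:
    """Hypothetical internal record mirroring the four obligations listed above."""
    system_name: str
    risk_assessment_done: bool = False                                  # risk assessment carried out
    technical_documentation: list[str] = field(default_factory=list)    # functioning, training data, mitigations
    human_oversight_measures: list[str] = field(default_factory=list)   # how humans can monitor and intervene
    cybersecurity_measures: list[str] = field(default_factory=list)     # robustness against attacks and threats

    def is_ready_for_review(self) -> bool:
        """True once every obligation has at least one documented element."""
        return (self.risk_assessment_done
                and bool(self.technical_documentation)
                and bool(self.human_oversight_measures)
                and bool(self.cybersecurity_measures))

record = HighRiskComplianceRecord(system_name="cv-screening-tool")
record.risk_assessment_done = True
record.technical_documentation.append("Description of training data, v1")
print(record.is_ready_for_review())  # False: oversight and cybersecurity measures still missing
```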
AI of Levels 1, 2, and 4
Systems of level 2 are subject to transparency, information, and documentation obligations; systems of level 1 face no specific requirements beyond voluntary codes of conduct. Systems of level 4 are prohibited outright.
Vigilance over AI data
Depending on whether your company uses its own proprietary data or third-party data, the legal considerations differ.
Proprietary data, even when internal, is rarely neutral or public: it is often subject to confidentiality agreements (Non-Disclosure Agreements, contracts with confidentiality clauses), to privacy regulations (including the GDPR), or to intellectual property rights.
When using third-party data, the legal constraints increase. In addition to the AI Act, AI Systems must also comply with the General Data Protection Regulation (GDPR), which sets strict standards for processing personal data in the European Union.
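One practical way to keep this vigilance concrete is a simple register recording where each training dataset comes from and which legal constraints attach to it. The sketch below is a minimal, purely illustrative example with hypothetical fields; it does not replace a GDPR or intellectual property review.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Hypothetical entry in a training-data register."""
    name: str
    source: str                    # "proprietary" or "third-party"
    contains_personal_data: bool   # if True, GDPR obligations apply
    under_nda: bool                # covered by a confidentiality agreement
    ip_cleared: bool               # intellectual property rights reviewed

register = [
    DatasetRecord("internal-support-tickets", "proprietary", True, True, True),
    DatasetRecord("licensed-image-corpus", "third-party", False, False, True),
]

# Datasets that still call for a legal review before being used for training:
needs_review = [d.name for d in register
                if d.contains_personal_data or not d.ip_cleared]
print(needs_review)  # ['internal-support-tickets']
```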
This topic will be discussed in more detail soon.
Getting ahead of regulations
It is crucial to develop a strong governance strategy for your AI projects to anticipate legal and reputational risks. AURELE IT has built specialized expertise that combines in-depth knowledge of the AI Act’s requirements, substantial experience in digital transformation projects, and specific expertise in GDPR compliance and data governance.
Contact Partner Attorney Florence Ivanier for any inquiries.