AI Act Adopted: introduction to the 458-page EU legislation
- Posted 14.06.2024
What Happened?
On 21 May 2024, the Council of the European Union gave its final approval to the regulation laying down harmonised rules on artificial intelligence (the “AI Act”), a comprehensive legal framework designed to address the development, placing on the market, putting into service and use of artificial intelligence (“AI”) systems in the European Union (the “EU”). The AI Act marks the EU’s first attempt at comprehensive regulation of AI technologies.
After less than three years of legislative negotiations, the AI Act has nearly doubled in volume from its initial proposal – notably adding “general-purpose AI models” as a new category of regulated technologies – highlighting its importance on the EU’s agenda.
Key Takeaways of the AI Act
The AI Act classifies AI systems into several categories based on the risks they pose to health, safety, and fundamental rights. The level of regulation increases with the level of risk, covering prohibited AI, high-risk AI, other AI systems, and general-purpose AI with and without systemic risks.
- Prohibition of certain AI practices: the AI Act imposes strict prohibitions on specific AI practices that involve invasive or manipulative techniques leading to potential harm or discrimination. These particularly dangerous AI systems are banned in the EU, subject only to narrowly framed exceptions (notably for certain law enforcement uses of real-time remote biometric identification).
- Heavy obligations for high-risk AI systems: the AI Act sets stringent requirements for high-risk AI systems. These AI systems are defined by reference to Annex I and Annex III of the AI Act: Annex I covers products in sectors such as machinery, toys, recreational craft, personal watercraft, lifts, explosives, radio equipment, pressure equipment, cableways, personal protective equipment, gas appliances, medical devices, civil aviation, vehicles, marine equipment, and rail systems, while Annex III covers stand-alone use cases such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and the administration of justice. This category entails obligations for the AI systems themselves, as well as for their providers, importers, distributors, and deployers. Providers bear the most numerous obligations, but similar responsibilities extend across the entire value chain (e.g. all of these actors must halt the deployment of a high-risk AI system if they have reason to believe that it is not compliant with the AI Act).
- Transparency rules for “other AI systems”: the AI Act specifies transparency requirements for a category of AI systems referred to as “other AI systems”, i.e. AI systems that interact directly with natural persons (e.g. chatbots) or that generate multimedia content for various purposes. Providers and deployers of these AI systems must ensure transparency, for instance by disclosing that users are interacting with an AI system or that content has been artificially generated or manipulated. This appears particularly useful in the age of deepfakes, where AI is used to create images of existing persons or places.
- Limited obligations for general-purpose AI models and general-purpose AI models with systemic risks: the AI Act defines “general-purpose AI models” broadly as AI models capable of performing a wide range of tasks and of being integrated into various applications or systems. Providers of these AI models have minimal obligations. However, an AI system built on a general-purpose AI model may also fall within a higher-risk category, in which case the more extensive requirements of that category apply. Additionally, the AI Act introduces “general-purpose AI models with systemic risks”, which are general-purpose AI models with high-impact capabilities (presumed where the cumulative amount of computing power used for training exceeds 10^25 floating-point operations). These models are subject not only to the basic obligations applicable to general-purpose AI models, but also to additional requirements focused on assessing and mitigating systemic risks and performing model evaluations.
- Establishment of new governance bodies at the European level: the AI Act establishes several new European bodies, including (i) an AI Office within the European Commission to enforce the AI Act, (ii) the European Artificial Intelligence Board, comprising one representative from each Member State, to advise and assist the European Commission and the Member States in implementing the AI Act through recommendations and opinions on relevant matters, (iii) an Advisory Forum to provide technical expertise and advice to the Board and the European Commission, and (iv) a scientific panel of independent experts to support enforcement activities upon the request of Member States.
- Establishment of new national competent authorities: in addition to the above, each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority.
What’s Next?
Now that the AI Act has been approved, it will be published in the Official Journal of the European Union later this month and will come into force 20 days after its publication. The AI Act will generally become applicable 24 months after its date of entry into force, but certain provisions will apply on a staggered timeline:
- Six months after entry into force: rules regarding prohibited AI practices will apply.
- Twelve months after entry into force: rules for general-purpose AI models (with and without systemic risks) will apply, and the new EU governance bodies and national competent authorities must be in place.
- Thirty-six months after entry into force: rules for certain high-risk AI systems (those covered by the EU harmonisation legislation listed in Annex I) will start to apply.
Additional European texts are intended to supplement the AI Act in the future, notably the AI Liability Directive and the new Product Liability Directive.
At the national level, the CNPD (Commission Nationale pour la Protection des Données) announced on 14 June 2024 the opening of applications for its newly launched “Regulatory Sandbox”. This collaborative environment allows companies registered in Luxembourg to test their AI projects’ compliance with GDPR requirements.