AI Act — Overview

What it is

The Artificial Intelligence Act is Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence. It is the first comprehensive legislative framework worldwide governing the development, placing on the market, and use of artificial intelligence systems.

Identifier Value
Act Regulation (EU) 2024/1689
Adoption 13 June 2024
Publication in OJEU 12 July 2024
Entry into force 1 August 2024
General application 2 August 2026
ELI http://data.europa.eu/eli/reg/2024/1689/oj

Why

The Regulation pursues multiple complementary objectives:

  • ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law, and environmental protection;
  • promoting the uptake of human-centric and trustworthy artificial intelligence, in line with the Union's values;
  • preventing and mitigating harms arising from AI systems;
  • ensuring the proper functioning of the internal market, preventing regulatory fragmentation across Member States;
  • supporting innovation, particularly for the benefit of SMEs and start-ups.

Scope of application

Territorial and extraterritorial scope

The Regulation applies to:

  • providers placing on the market or putting into service AI systems or general-purpose AI models in the Union, regardless of their location;
  • deployers of AI systems established or located within the Union;
  • providers and deployers of AI systems established in third countries, where the output produced by the system is used in the Union;
  • importers and distributors of AI systems;
  • product manufacturers placing on the market or putting into service AI systems together with their product;
  • authorised representatives of providers not established in the Union;
  • affected persons located in the Union.

Exclusions

The following areas are expressly excluded from the scope of the Regulation:

  • AI systems used exclusively for military, defence, or national security purposes;
  • AI systems developed and put into service solely for the purpose of scientific research and development;
  • research, testing, and development activities prior to placing on the market;
  • AI systems used by natural persons in the course of purely personal non-professional activities;
  • obligations of providers and deployers of AI systems released under free and open-source licences, except where those systems are placed on the market or put into service as high-risk systems, or fall under Article 5 (prohibited practices) or Article 50 (transparency obligations).

Risk-based approach

The Regulation adopts a pyramidal approach, calibrating rules according to the intensity and scope of risks generated by AI systems. Four levels are identified:

Risk level Applicable regime Examples
Unacceptable risk Prohibited practices (Art. 5) Subliminal manipulative techniques, social scoring, predictive policing based on profiling, emotion recognition in workplaces or schools, scraping of facial images
High risk Strict requirements, conformity assessment, documentation and transparency obligations (Chapter III) AI systems used in critical infrastructure, education, employment, access to essential services, law enforcement, migration, administration of justice
Limited risk Specific transparency obligations (Art. 50) Chatbots, emotion recognition systems, deepfakes
Minimal or no risk No specific obligations Spam filters, video games with AI, simple recommendation systems

A parallel regime complements this classification for general-purpose AI models (GPAI), with specific obligations for models presenting systemic risk.

Structure of the Regulation

Component Quantity
Recitals 180
Chapters 13
Articles 113
Annexes 13

The Regulation is organised into the following chapters:

Chapter Subject Articles
I General provisions 1-4
II Prohibited AI practices 5
III High-risk AI systems 6-49
IV Transparency obligations for providers and deployers of certain AI systems 50
V General-purpose AI models (GPAI) 51-56
VI Measures in support of innovation 57-63
VII Governance 64-70
VIII EU database for high-risk AI systems 71
IX Post-market monitoring, information sharing, and market surveillance 72-94
X Codes of conduct and guidelines 95-96
XI Delegation of power and committee procedure 97-98
XII Penalties 99-101
XIII Final provisions 102-113

Application timeline

The Regulation provides for staggered application, set out in Article 113. The schedule is as follows:

Date What enters into application
1 August 2024 Entry into force of the Regulation
2 February 2025 Chapters I and II — General provisions and prohibitions of unacceptable AI practices (Art. 5)
2 August 2025 Chapter III, Section 4 (notified bodies), Chapter V (GPAI), Chapter VII (Governance), Chapter XII (Penalties, except Art. 101 on GPAI), Art. 78 (confidentiality)
2 August 2026 General application — Entire Regulation, except as otherwise indicated
2 August 2027 Art. 6(1) — High-risk AI systems falling under harmonisation legislation (Annex I) and related obligations

Penalties

Article 99 of the Regulation establishes a three-tier system of administrative fines, calibrated to the seriousness of the infringement:

Type of infringement Maximum penalty
Prohibited AI practices (Art. 5) Up to EUR 35,000,000 or, for undertakings, up to 7% of total worldwide annual turnover of the preceding financial year (whichever is higher)
Non-compliance with other obligations of the Regulation (Arts. 16, 22, 23, 24, 26, 31, 33, 34, 50) Up to EUR 15,000,000 or, for undertakings, up to 3% of total worldwide annual turnover (whichever is higher)
Supply of incorrect, incomplete, or misleading information to authorities Up to EUR 7,500,000 or, for undertakings, up to 1% of total worldwide annual turnover (whichever is higher)

For providers of general-purpose AI models, Article 101 provides for penalties up to EUR 15,000,000 or, for undertakings, up to 3% of total worldwide annual turnover.

For SMEs and start-ups, the lower of the two thresholds applies (Art. 99(6)).
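The fine ceilings above can be read as a simple rule: take whichever of the fixed amount and the turnover percentage is higher, unless the undertaking is an SME or start-up, in which case Article 99(6) caps the fine at the lower of the two. A minimal sketch of that arithmetic, with tier amounts and percentages taken from the table above (the `max_fine` helper itself is purely illustrative, not part of the Regulation):

```python
def max_fine(fixed_eur: int, pct: int, turnover_eur: int, is_sme: bool = False) -> int:
    """Maximum administrative fine under the Article 99 tier structure.

    fixed_eur    -- the fixed ceiling of the tier (e.g. EUR 35 000 000)
    pct          -- the turnover percentage of the tier (e.g. 7 for 7%)
    turnover_eur -- total worldwide annual turnover of the preceding financial year
    is_sme       -- SMEs and start-ups get the LOWER of the two (Art. 99(6))
    """
    pct_based = turnover_eur * pct // 100  # percentage-of-turnover ceiling
    return min(fixed_eur, pct_based) if is_sme else max(fixed_eur, pct_based)

# Prohibited-practice tier (Art. 5): EUR 35m or 7% of turnover, whichever is higher.
# For a EUR 1bn-turnover undertaking, 7% (EUR 70m) exceeds the fixed amount.
ceiling = max_fine(35_000_000, 7, 1_000_000_000)          # 70_000_000
sme_ceiling = max_fine(35_000_000, 7, 1_000_000_000, is_sme=True)  # 35_000_000
```

The same helper covers the other tiers by substituting EUR 15 000 000 / 3% or EUR 7 500 000 / 1%.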

Stakeholders involved

The Regulation identifies a range of operators with differentiated obligations:

Operator Definition (Art. 3)
Provider Natural or legal person who develops or has developed an AI system or GPAI model and places it on the market or puts it into service under their own name or trademark
Deployer Natural or legal person, public authority, agency, or other body using an AI system under their authority (excluding personal non-professional use)
Importer Natural or legal person located in the Union who places on the market an AI system bearing the name or trademark of a person established in a third country
Distributor Natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the Union market
Authorised representative Natural or legal person located in the Union who has received written mandate from a third-country provider to fulfil obligations under the Regulation
Product manufacturer Person placing on the market an AI system together with their product and under their own name or trademark

Governance

The governance architecture of the AI Act envisages a multi-level system:

  • AI Office of the European Commission, with supervisory and enforcement tasks, particularly for GPAI models;
  • European Artificial Intelligence Board, a body of Member States representatives with advisory and coordination functions;
  • Advisory Forum, comprising representatives of stakeholders, academia, and civil society;
  • Scientific Panel of independent experts, supporting the AI Office on technical matters relating to GPAI models;
  • National competent authorities, designated by each Member State, with market surveillance and AI systems oversight functions.

Implementing instruments

Beyond the text of the Regulation and its delegated and implementing acts, the implementation of the AI Act relies on a set of soft-law instruments, formally non-binding but relevant for demonstrating compliance and guiding supervisory activity:

  • GPAI Code of Practice — code of practice for providers of general-purpose AI models, pursuant to Article 56 AI Act, structured in three chapters (Transparency, Copyright, Safety and Security) and confirmed on 1 August 2025 as adequate to demonstrate compliance with Articles 53 and 55 AI Act;
  • Commission Guidelines on the scope of GPAI obligations (18 July 2025), providing the Commission's interpretation of key concepts of the Regulation;
  • European harmonised standards, under development at CEN-CENELEC under a Commission mandate; once published, they will confer a presumption of conformity under Article 40 AI Act;
  • Sectoral codes of conduct under Article 95 AI Act — voluntary instruments adopted on a sectoral basis.

For the comprehensive collection of published instruments, see the Soft law section of the site.

In case of any discrepancy between this English version and the Italian version of this overview, the Italian version shall prevail as the original document.

Last updated: April 2026