
Glossary

AI-centric glossary of the European Union digital regulatory ecosystem. It collects the definitions from Regulation (EU) 2024/1689 — AI Act, Article 3, from Italian Law of 23 September 2025, No. 132, and a curated selection of adjacent key terms (FRIA, DPIA, GPAI, AI literacy, etc.) referenced in the other EU acts published on this site.

For each term the glossary provides: full legal definition quoted verbatim, pinpoint legal source, references in other relevant EU acts (inter-act cross-references), pointers to related terms, and — where useful — a brief curatorial note.

For the full picture of the relationships between legal acts, see also the inter-act cross-reference page.

Methodological note

Definitions are quoted verbatim from the official version of the primary source act, in single quotation marks (' ') as in the EUR-Lex English text. References to acts link to the full version published on this site. Any unofficial translations are marked as such.


Alphabetical index

A
AI literacy · AI Office · AI regulatory sandbox · AI system · Anonymisation · Authorised representative

B
Biometric categorisation system · Biometric data · Biometric identification · Biometric verification

C
CE marking · Code of conduct / code of practice · Common specification · Conformity assessment · Conformity assessment body · Controller · Critical infrastructure

D
Data Protection Impact Assessment (DPIA) · Data subject · Deep fake · Deployer · Distributor · Downstream provider

E
Emotion recognition system

F
Floating-point operation · Fundamental Rights Impact Assessment (FRIA)

G
General-purpose AI model · General-purpose AI system

H
Harmonised standard · High-impact capabilities · High-risk AI system

I
Importer · Informed consent · Input data · Instructions for use · Intended purpose

L
Law enforcement · Law enforcement authority

M
Making available on the market · Market surveillance authority

N
National competent authority · Non-personal data · Notified body · Notifying authority

O
Operator

P
Performance of an AI system · Personal data · Placing on the market · Post-market monitoring system · Post-remote biometric identification system · Processor · Profiling · Prohibited AI practices · Provider · Pseudonymisation · Publicly accessible space · Putting into service

R
Real-time remote biometric identification system · Real-world testing plan · Reasonably foreseeable misuse · Recall of an AI system · Remote biometric identification system · Risk

S
Safety component · Sandbox plan · Sensitive operational data · Serious incident · Special categories of personal data · Subject · Substantial modification · Systemic risk

T
Testing data · Testing in real-world conditions · Training data

V
Validation data · Validation data set · Very Large Online Platform (VLOP) · Very Large Online Search Engine (VLOSE)

W
Widespread infringement · Withdrawal of an AI system

This page is updated periodically. Its contents may be subject to revisions, expansions and corrections in light of regulatory developments, case law and feedback received. For feedback or suggestions, open an issue on Codeberg.


A

AI literacy

Italian term: alfabetizzazione in materia di IA

'AI literacy' means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(56)

See also

Provider · Deployer · AI system

Note

AI literacy is an operational obligation on providers and deployers under Article 4 of the AI Act, applicable since 2 February 2025 (earlier than most obligations of the Regulation). It is not limited to technical skills: it includes understanding of risks, of the rights of affected persons, and of the regulatory framework, and it concerns all staff dealing with the operation and use of AI systems.


AI Office

Italian term: ufficio per l'IA

'AI Office' means the Commission's function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(47)

See also

General-purpose AI model · General-purpose AI system · Systemic risk · National competent authority

Note

The AI Office is the European Commission structure established by Decision C(2024) 390 of 24 January 2024, within DG CNECT. It has exclusive competence over the supervision of general-purpose AI models, in particular those with systemic risk (Articles 88-94 AI Act): it may request information, conduct evaluations, require corrective measures and impose fines. It also coordinates cooperation with national competent authorities through the European Artificial Intelligence Board (Article 65 AI Act) and hosts the secretariat of the Advisory Forum (Article 67) and of the scientific panel of independent experts (Article 68).


AI regulatory sandbox

Italian term: spazio di sperimentazione normativa per l'IA

'AI regulatory sandbox' means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(55)

See also

Sandbox plan · Testing in real-world conditions · Real-world testing plan · Provider · National competent authority

Note

The regulatory sandbox is a policy instrument introduced by the AI Act to foster innovation and regulatory learning. It is mandatory for Member States (Article 57): each Member State must establish at least one operational sandbox by 2 August 2026. It functions as a controlled and supervised framework allowing providers (including SMEs and startups) to develop, train, validate and test innovative AI systems under the guidance of the competent authority, with possible temporary exemptions from specific obligations of the Regulation. The sandbox differs from testing in real-world conditions outside the sandbox (Article 60), which has an autonomous and more restrictive regime.


AI system

Italian term: sistema di IA

'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(1)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
PLD | Recital 13; Article 4(1) | AI systems are expressly "products" for PLD purposes; operational definitional cross-reference
GDPR | (implicit cross-reference) | When an AI system processes personal data, the GDPR applies cumulatively (AI Act Article 2(7))
DSA | Article 3(s), (t) | Recommender systems and automated content moderation systems may qualify as AI systems within the meaning of the AI Act
Italian Law 132/2025 | (national implementation) | The Italian act operates on the AI Act definition, with provisions adapting the domestic legal order

See also

Provider · Deployer · General-purpose AI system · High-risk AI system

Note

The AI Act definition is aligned with the 2023-updated OECD definition and is functionally technology-neutral (machine learning, symbolic systems, hybrid systems). The three qualifying elements are varying autonomy, possible post-deployment adaptiveness, and inference of outputs from inputs. Mere deterministic automation without inferential capacity falls outside the scope.
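The boundary drawn in the note above can be made concrete with a toy sketch: a hand-written rule applies a fixed condition and infers nothing, whereas even a minimal "learning" step derives its decision rule from input data. All names, figures and the midpoint heuristic below are invented for illustration; this is not a legal test.

```python
# Toy contrast between deterministic automation (outside the AI Act
# definition of an AI system) and a system that infers its decision
# rule from the input it receives (a hallmark of the definition).

def deterministic_rule(income: float) -> str:
    # Fixed, hand-written rule: no inference from data, no adaptiveness.
    return "approve" if income >= 30_000 else "reject"

def fit_threshold(examples: list[tuple[float, str]]) -> float:
    # Minimal "inference": derive a decision threshold from labelled
    # examples (midpoint between highest reject and lowest approve).
    approved = [x for x, label in examples if label == "approve"]
    rejected = [x for x, label in examples if label == "reject"]
    return (max(rejected) + min(approved)) / 2

data = [(20_000, "reject"), (25_000, "reject"),
        (35_000, "approve"), (50_000, "approve")]
threshold = fit_threshold(data)  # rule inferred from input data
print(threshold)                 # 30000.0
print("approve" if 40_000 >= threshold else "reject")  # approve
```

The legal qualification of real systems is of course far more nuanced (see Commission guidelines on the AI system definition); the sketch only illustrates why a fixed lookup rule lacks the inferential element.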


Anonymisation

Italian term: anonimizzazione

Term not formally defined in Article 4 of the GDPR but addressed in Recital 26: 'The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. This Regulation does not therefore concern the processing of such anonymous information, including for statistical or research purposes'.

Source: Reg. (EU) 2016/679 — GDPR, Recital 26

See also

Personal data · Non-personal data · Pseudonymisation · Data subject

Note

Anonymisation is the irreversible process that transforms personal data into non-personal data, definitively excluding the application of the GDPR to the resulting information. It differs from pseudonymisation (reversible, pseudonymised data remains personal data). The assessment of effective anonymity requires a reasonably likely identifiability test: all means (technical, economic, temporal) that the controller or a third party may reasonably employ for re-identification must be considered. Reference guidelines: WP29 Opinion 05/2014 on anonymisation techniques; EDPB Guidelines 04/2024 (final adoption 2025) on anonymisation and privacy-enhancing techniques. CJEU case law (Breyer C-582/14, IAB Europe C-604/22, EDPS-SRB T-557/20) has further refined the notion of identifiability.


Authorised representative

Italian term: rappresentante autorizzato

'authorised representative' means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(5)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
GDPR | Article 27 | Analogous figure: the GDPR EU representative is the counterpart for non-EU controllers/processors; the AI Act authorised representative is a mirror but autonomous figure
PLD | Article 8(4) | The PLD also provides for an authorised representative for non-EU manufacturers; the roles may cumulate in the same person

See also

Provider · Importer · Operator · Deployer

Note

The authorised representative is the institutional bridging figure for non-EU providers: having received a written mandate, it acts in the Union on behalf of the provider as regards AI Act obligations. For high-risk systems and general-purpose AI models, Article 22 of the AI Act makes the appointment of an authorised representative established in the Union mandatory before placing on the market. It keeps the technical documentation at the disposal of the competent authorities for 10 years after the placing on the market or putting into service, cooperates with authorities, and must terminate the mandate in case of provider non-compliance.


B

Biometric categorisation system

Italian term: sistema di categorizzazione biometrica

'biometric categorisation system' means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(40)

See also

Biometric data · Biometric identification · Biometric verification · Emotion recognition system

Note

Biometric categorisation systems are subject to differentiated regimes: categorisation to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation is PROHIBITED (Article 5(1)(g) AI Act); other forms of biometric categorisation fall within high-risk systems (Annex III, point 1(b)). Scope exclusion: systems ancillary to another commercial service and strictly necessary for objective technical reasons (e.g. facial filters in videoconferencing apps) do not fall within the definition.


Biometric data

Italian term: dati biometrici

'biometric data' means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(34)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
GDPR | Article 4(14) | GDPR definition substantially identical; the AI Act adopts an aligned notion. Biometric data for unique identification are a special category under GDPR Article 9(1)

See also

Personal data · Special categories of personal data · Biometric identification · Biometric verification · Biometric categorisation system

Note

Biometric data in the AI Act are the core of systems regulated as high-risk (Annex III, point 1) or prohibited (Article 5: real-time remote biometric identification, untargeted scraping of facial images, biometric categorisation to infer race, political opinions, sexual orientation, etc.). The notion includes both physical/physiological characteristics (face, fingerprints, iris) and behavioural ones (signature, gait, keystroke dynamics).


Biometric identification

Italian term: identificazione biometrica

'biometric identification' means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(35)

See also

Biometric data · Biometric verification · Remote biometric identification system · Real-time remote biometric identification system · Post-remote biometric identification system · Biometric categorisation system

Note

Biometric identification is one-to-many: comparison of a person's biometric data with a database of individuals to determine identity. It differs from biometric verification (one-to-one, authentication). When identification occurs at a distance without active involvement of the person (remote biometric identification system), a restrictive AI Act regime applies, up to a prohibition for real-time use in publicly accessible spaces by law enforcement (Article 5(1)(h), subject to exhaustive exceptions).
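The one-to-many vs one-to-one distinction drawn here (and in the biometric verification entry) can be sketched in a few lines of code. The templates, threshold and plain Euclidean distance below are invented toy values; real biometric matching relies on specialised models and calibrated thresholds.

```python
# Toy sketch: identification searches a database (1:N), verification
# compares against one stored template (1:1). Illustrative only.
import math

THRESHOLD = 0.5  # hypothetical match threshold

def identify(probe, database):
    """One-to-many: find the closest enrolled identity, if close enough."""
    name, template = min(database.items(),
                         key=lambda item: math.dist(probe, item[1]))
    return name if math.dist(probe, template) <= THRESHOLD else None

def verify(probe, enrolled_template):
    """One-to-one: confirm a claimed identity against one template."""
    return math.dist(probe, enrolled_template) <= THRESHOLD

db = {"alice": (0.1, 0.9), "bob": (0.8, 0.2)}
probe = (0.15, 0.85)
print(identify(probe, db))       # alice  (searched the whole database)
print(verify(probe, db["bob"]))  # False  (claimed identity not confirmed)
```

The regulatory point tracks the code shape: only the database-search pattern can support remote identification of unknown persons, which is why it attracts the stricter regime.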


Biometric verification

Italian term: verifica biometrica

'biometric verification' means the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(36)

See also

Biometric data · Biometric identification · Biometric categorisation system · Remote biometric identification system

Note

Biometric verification is one-to-one: comparison of a person's biometric data with the biometric data of the same person previously provided, to authenticate identity (e.g. smartphone unlock with face or fingerprint recognition, corporate access controls). It differs from biometric identification (one-to-many). Biometric verification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be are expressly carved out of the high-risk category for remote biometric identification (Annex III, point 1(a)). However, where biometric verification processes personal data, the GDPR applies in full (special categories under Article 9(1)).


C

CE marking

Italian term: marcatura CE

'CE marking' means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(24)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
Reg. (EC) No 765/2008 | (general CE marking regime) | Horizontal regime on accreditation and CE marking in the single market, applicable in a complementary manner to the AI Act

See also

Provider · Conformity assessment · Harmonised standard · Common specification

Note

The CE marking is the visible symbol of compliance of a high-risk AI system with the AI Act requirements. It is affixed by the provider under its own responsibility, after completing the conformity assessment (Articles 43, 48 AI Act). For AI systems, the CE marking may be physical (on the product or packaging) or digital, depending on the nature of the system. It is a prerequisite for placing on the EU market and putting into service.


Code of conduct / code of practice

Italian term: codice di condotta / codice di buone pratiche

Term not formally defined in Article 3 of the AI Act. Governed as a soft-law instrument in two distinct variants:

Code of practice (codice di buone pratiche): Article 56 AI Act — instrument for providers of general-purpose AI models, including those with systemic risk, to demonstrate compliance with the obligations of Articles 53-55 pending or as an alternative to harmonised standards.

Code of conduct (codice di condotta): Article 95 AI Act — voluntary instrument for providers and deployers of non-high-risk AI systems, aimed at the voluntary application of some or all the requirements for high-risk systems (Chapter III, Section 2) or further objectives (environmental sustainability, AI literacy, accessibility, stakeholder participation, diversity in development).

Source: Reg. (EU) 2024/1689 — AI Act, Article 56 (code of practice) and Article 95 (code of conduct)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
GDPR | Article 40 | The GDPR also provides for codes of conduct as a soft-law tool to demonstrate compliance; analogous but autonomous figure
DSA | Articles 45-47 | The DSA provides for codes of conduct for systemic risk mitigation and crisis codes; methodological convergence with the AI Act

See also

General-purpose AI model · Systemic risk · Harmonised standard · Common specification

Note

The General-Purpose AI Code of Practice was published by the Commission on 10 July 2025 and is the result of the work of four expert working groups convened by the AI Office with the participation of providers, experts, civil society and Member States. The code covers three areas: transparency (including a template for the public summary of training data), copyright policy, and safety and security (mitigation of systemic risks). Adherence is voluntary but allows signatories to demonstrate compliance with the obligations of Articles 53-55 pending harmonised standards; a provider may adhere fully or partially, while non-adherents must demonstrate compliance by alternative adequate means.


Common specification

Italian term: specifiche comuni

'common specification' means a set of technical specifications as defined in Article 2, point (4) of Regulation (EU) No 1025/2012, providing means to comply with certain requirements established under this Regulation.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(28)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
Reg. (EU) No 1025/2012 | Article 2(4) | Canonical definition: 'technical specification' means a document that prescribes technical requirements to be fulfilled, with all the related framework on European standardisation

See also

Harmonised standard · Conformity assessment · CE marking

Note

Common specifications are a residual instrument: the Commission may adopt them by implementing acts (Article 41 AI Act) where harmonised standards do not exist or are insufficient, or where the relevant standardisation requests have not been accepted. Compliance with common specifications confers a presumption of conformity with AI Act requirements, on the same footing as harmonised standards. They thus provide an alternative route to conformity assessment that does not depend on the output of the European standardisation organisations (CEN-CENELEC).


Conformity assessment

Italian term: valutazione della conformità

'conformity assessment' means the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(20)

See also

CE marking · Conformity assessment body · Notified body · Harmonised standard · Common specification · Substantial modification

Note

Conformity assessment is a prerequisite for placing on the market and putting into service of high-risk AI systems. Article 43 of the AI Act provides for two routes: (a) provider self-assessment based on internal control (Annex VI) — available for most high-risk systems under Annex III; (b) assessment with involvement of a notified body (Annex VII) — required for certain biometric systems (RBI, emotion recognition, biometric categorisation) and for high-risk systems covered by Annex I product legislation. Compliance with harmonised standards or common specifications confers a presumption of conformity with the requirements of Chapter III, Section 2.


Conformity assessment body

Italian term: organismo di valutazione della conformità

'conformity assessment body' means a body that performs third-party conformity assessment activities, including testing, certification and inspection.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(21)

See also

Notified body · Notifying authority · Conformity assessment · Harmonised standard

Note

The conformity assessment body is the independent third party that carries out testing, certification and inspection of AI systems on behalf of the provider. When notified by the national notifying authority under Chapter III, Section 4 of the AI Act, it becomes a notified body and is authorised to carry out the conformity assessment procedure for high-risk AI systems requiring third-party involvement (remote biometric identification, biometric categorisation and emotion recognition systems: Annex VII procedure).


Controller

Italian term: titolare del trattamento

'controller' means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law.

Source: Reg. (EU) 2016/679 — GDPR, Article 4(7)

See also

Processor · Data subject · Personal data · Deployer · Provider · Data Protection Impact Assessment (DPIA)

Note

The controller is the central figure of the GDPR: virtually all the substantive obligations of the Regulation rest on it (legal bases, processing principles, security, impact assessment, processing register, accountability). The qualification is functional: one is a controller if one determines purposes and means of processing, regardless of formal qualification. Joint controllership is possible (Article 26 GDPR) where multiple subjects jointly determine purposes and means. In the AI value chain, the AI Act deployer is frequently the controller of the personal data processed by the AI system. The AI Act provider is normally controller of the training data, but is not controller of processing carried out by the deployer.


Critical infrastructure

Italian term: infrastruttura critica

'critical infrastructure' means critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(62)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
Dir. (EU) 2022/2557 (CER) | Article 2(4) | Canonical definition: 'critical infrastructure' means an asset, a facility, equipment, network or system, or a part thereof, which is necessary for the provision of an essential service
NIS2 | (sister act of CER) | NIS2 and CER are twin acts: NIS2 governs the cybersecurity of essential and important entities, CER the physical resilience of critical infrastructures

See also

Serious incident · AI system

Note

The AI Act cross-refers to the CER Directive 2022/2557 (Critical Entities Resilience) for the definition of critical infrastructure. The categories of critical infrastructure cover eleven sectors: energy, transport, banking, financial market infrastructures, health, drinking water, waste water, digital infrastructure, public administration, space, production/processing/distribution of food. AI systems used in these infrastructures typically fall within Annex III, point 2 (high-risk).


D

Data Protection Impact Assessment (DPIA)

Italian term: valutazione d'impatto sulla protezione dei dati (DPIA)

Term not formally defined in Article 4 of the GDPR but governed by Article 35(1): 'Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data'.

Source: Reg. (EU) 2016/679 — GDPR, Article 35

Inter-act cross-references

Act | Reference | Nature of the cross-reference
AI Act | Article 27 | FRIA: analogous figure but with a different focus (risks to fundamental rights). FRIA and DPIA may integrate when the AI system processes personal data

See also

Fundamental Rights Impact Assessment (FRIA) · Controller · Personal data · Profiling · Special categories of personal data

Note

The DPIA is an obligation on the controller when processing is likely to result in a high risk to the rights and freedoms of data subjects. Article 35(3) lists three cases in which a DPIA is in particular required (the list is not exhaustive): (a) systematic and extensive evaluation of personal aspects through automated processing (including profiling) producing legal or similarly significant effects; (b) large-scale processing of special categories (Article 9) or criminal offence data (Article 10); (c) systematic monitoring of a publicly accessible area on a large scale. National supervisory authorities publish lists of processing operations requiring a DPIA. Where a high-risk AI system processes personal data, DPIA and FRIA operate in tandem: the FRIA may complement the existing DPIA without duplication (Article 27(4) AI Act).


Data subject

Italian term: interessato

Term derived by mirror definition from Article 4(1) of the GDPR: ''personal data' means any information relating to an identified or identifiable natural person ('data subject')'.

Source: Reg. (EU) 2016/679 — GDPR, Article 4(1)

See also

Personal data · Controller · Processor · Profiling · Special categories of personal data

Note

The data subject is the natural person to whom the personal data relate. It must not be confused with: the subject of AI Act Article 3(58) (the person participating in real-world testing); the affected person under the AI Act (e.g. Article 26(11): the person who is the object of a decision taken with the AI system). The GDPR confers on the data subject a set of rights enforceable vis-à-vis the controller (Articles 12-22): information, access, rectification, erasure ('right to be forgotten'), restriction, portability, objection, and the right not to be subject to solely automated decisions.


Deep fake

Italian term: deep fake

'deep fake' means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(60)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
DSA | Article 35(1)(k) | VLOPs must adopt mitigation measures for systemic risks linked to AI-generated or manipulated content that may appear authentic (deepfakes), including labelling

See also

AI system · General-purpose AI model · Emotion recognition system

Note

Deep fakes are subject to a transparency obligation under Article 50(4) of the AI Act: deployers disseminating a deep fake must disclose that the content has been artificially generated or manipulated. Lighter disclosure obligations apply to evidently artistic, creative, satirical or fictional works, and exceptions apply to uses authorised by law for law enforcement purposes. The definition is technically neutral: it does not require a specific generative technique (GAN, diffusion model, multimodal transformer), but looks to the perceptual effect on the observer.


Deployer

Italian term: deployer

'deployer' means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(4)

Inter-act cross-references

Act | Reference | Nature of the cross-reference
GDPR | Article 4(7)-(8) | The AI Act deployer does not automatically coincide with the GDPR controller or processor: the qualification must be assessed case by case. Frequently the deployer of an AI system processing personal data is also the controller
PLD | Recital 13 | The deployer is not as such a "manufacturer" under the PLD, but may fall within it if it substantially modifies the AI system after placing on the market

See also

Provider · Downstream provider · Operator · Importer · Distributor

Note

"Deployer" is one of the most relevant terms of the AI Act: it identifies who actually uses the AI system in their professional or institutional activity. In the initial proposals of the Regulation the term was user, later replaced to avoid confusion with the end user of the product. Deployers of high-risk AI systems are subject to specific obligations (Articles 26-27 AI Act): human oversight, monitoring, log retention, fundamental rights impact assessment (FRIA for public bodies and private entities providing public services).


Distributor

Italian term: distributore

'distributor' means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(7)

See also

Provider · Importer · Operator · Making available on the market

Note

The distributor operates downstream of the provider or the importer in the supply chain. Before making a high-risk AI system available, distributors must verify that it bears the required CE marking and is accompanied by a copy of the EU declaration of conformity and by instructions for use (Article 24 AI Act). Where non-conformity is suspected, they must refrain from making the system available and cooperate with market surveillance authorities.


Downstream provider

Italian term: fornitore a valle

'downstream provider' means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(68)

See also

Provider · General-purpose AI model · General-purpose AI system

Note

The notion of downstream provider is functional to govern the AI value chain: those who take an AI model (their own or someone else's) and integrate it into an AI system intended for the market are downstream providers. The distinction is central to the cooperation obligations along the chain: providers of general-purpose AI models (including those with systemic risk) must provide downstream providers with sufficient technical and contractual information to enable their compliance (Articles 53-55 AI Act).


E

Emotion recognition system

Italian term: sistema di riconoscimento delle emozioni

'emotion recognition system' means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(39)

See also

Biometric data · Biometric categorisation system · Biometric identification

Note

Emotion recognition systems are subject to differentiated regimes: (a) PROHIBITED when used in workplaces or in education (Article 5(1)(f) AI Act), except for medical or safety reasons; (b) high-risk in the other cases in which their use is permitted (Annex III, point 1(c)); (c) subject in any event to the transparency obligation of Article 50(3): the deployer must inform the natural persons exposed to the system. The notion is broad and contested: the science of emotion recognition from biometric signals is epistemologically fragile and subject to methodological criticism (Barrett et al., Psychological Science in the Public Interest, 2019).


F

Floating-point operation

Italian term: operazione in virgola mobile

'floating-point operation' means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(67)

See also

General-purpose AI model · High-impact capabilities · Systemic risk

Note

The floating-point operation (FLOP) is the unit of measurement of cumulative computation for training an AI model. It is a technical notion with direct legal effect: Article 51(2) of the AI Act establishes the presumption that a general-purpose AI model has high-impact capabilities — and therefore systemic risk — when the cumulative computation for training, measured in FLOPs, is greater than 10²⁵. The threshold is adjustable by Commission delegated act.
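The Article 51(2) threshold can be illustrated with a back-of-the-envelope calculation. The sketch below uses the common approximation from the scaling-laws literature that training compute is roughly 6 × parameters × training tokens; this counting method and the model figures are illustrative assumptions, not prescribed by the AI Act.

```python
# Back-of-the-envelope check of the Article 51(2) FLOP threshold.
# Assumes the common approximation FLOPs ≈ 6 * N * D (N = parameters,
# D = training tokens); the AI Act itself does not prescribe a counting
# method. All model figures below are hypothetical.

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute in FLOPs."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the model is presumed to have high-impact capabilities."""
    return estimated_training_flops(n_params, n_tokens) > THRESHOLD_FLOPS

# A hypothetical 70-billion-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(70e9, 15e12)  # 6.3e24 FLOPs
print(presumed_systemic_risk(70e9, 15e12))     # → False (below 10^25)
```

Under this approximation the hypothetical model stays below the presumption threshold; a larger model or longer training run would cross it, triggering the enhanced obligations for systemic-risk models.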


Fundamental Rights Impact Assessment (FRIA)

Italian term: valutazione d'impatto sui diritti fondamentali (FRIA)

Term not formally defined in Article 3 of the AI Act but governed by Article 27. The FRIA is the assessment that certain deployers of high-risk AI systems must conduct prior to the first use of the system, describing the deployer's processes in which it will be used, the period and frequency of use, the categories of natural persons concerned, the specific risks to fundamental rights, and the human oversight and risk mitigation measures.

Source: Reg. (EU) 2024/1689 — AI Act, Article 27

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 35 DPIA: analogous figure but with a different focus (risks to data subjects). FRIA and DPIA may integrate when the AI system processes personal data (Article 27(4) AI Act: the FRIA may complement an existing DPIA)

See also

High-risk AI system · Deployer · Risk · Post-market monitoring system

Note

The FRIA is an obligation on the deployer (not the provider) for two exhaustive categories: (a) bodies governed by public law or private entities providing public services (Annex III, with certain exclusions); (b) deployers of high-risk systems for creditworthiness assessment and credit scoring (Annex III, point 5(b)) and for risk assessment and pricing in life and health insurance (point 5(c)). The FRIA must be notified to the market surveillance authority before the first use. The template is provided by the AI Office. The FRIA operates in continuity with the GDPR DPIA: where a DPIA already exists, the FRIA complements it without duplication.


G

General-purpose AI model

Italian term: modello di IA per finalità generali

'general-purpose AI model' means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(63)

Inter-act cross-references

Act Reference Nature of the cross-reference
Data Act (cumulative) Training data of general-purpose AI models may be subject to access/sharing obligations under the Data Act when sourced from connected products
GDPR (cumulative) Training general-purpose AI models on personal data is subject to the GDPR; open question on legal basis and data subject rights
DSA Article 35(1) VLOPs integrating general-purpose AI models in their systems must assess their systemic risks on the platform

See also

General-purpose AI system · AI system · High-impact capabilities · Systemic risk · Downstream provider · Floating-point operation

Note

A cardinal category introduced in the final AI Act text after the explosion of generative models (GPT, Claude, Gemini, Mistral, etc.). Three qualifying elements: significant generality, competence over a wide range of distinct tasks, integrability into downstream systems. Important exclusion: models used for research, development or prototyping do not fall within the scope until placing on the market. The specific regime is in Articles 51-55 AI Act: baseline obligations for all providers (technical documentation, copyright compliance policy, public summary of training data), enhanced obligations for models with systemic risk (rebuttable presumption triggered above 10²⁵ cumulative training FLOPs). Open-source models benefit from partial exemptions, except those with systemic risk.


General-purpose AI system

Italian term: sistema di IA per finalità generali

'general-purpose AI system' means an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(66)

See also

General-purpose AI model · AI system · High-impact capabilities · Systemic risk · Downstream provider

Note

The general-purpose AI system (GPAI system) is distinct from the general-purpose AI model (GPAI model): the model is the "engine" (trained weights), the system is the application that integrates and makes it usable (chatbots, APIs, consumer products). When a general-purpose AI system is used directly in cases falling under Annex III, it can fall within the high-risk regime. The model/system distinction is crucial for the AI Act chain of responsibility: those who develop the model have obligations under Articles 51-55; those who integrate the model into a system are providers (or downstream providers) and answer for the system's compliance.


H

Harmonised standard

Italian term: norma armonizzata

'harmonised standard' means a harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) No 1025/2012.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(27)

Inter-act cross-references

Act Reference Nature of the cross-reference
Reg. (EU) No 1025/2012 Article 2(1)(c) Canonical definition: 'harmonised standard' means a European standard adopted on the basis of a request made by the Commission for the application of Union harmonisation legislation

See also

Common specification · Conformity assessment · CE marking · Notified body

Note

The AI Act cross-refers to Reg. 1025/2012 on European standardisation for the definition of harmonised standard. Compliance with a harmonised standard published in the OJEU confers on the provider the presumption of conformity with the requirements of AI Act Chapter III, Section 2 (Article 40 AI Act). Harmonised standards for the AI Act are under development by CEN-CENELEC JTC 21 on the basis of Commission standardisation request M/593 (request of 22 May 2023, expanded in 2024).


High-impact capabilities

Italian term: capacità di impatto elevato

'high-impact capabilities' means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(64)

See also

General-purpose AI model · Systemic risk · General-purpose AI system

Note

The notion is comparative and dynamic: it does not set a static technical threshold but updates automatically with the evolution of the state of the art. The AI Act introduces in Article 51 an operational presumption: a general-purpose AI model is presumed to have high-impact capabilities when the cumulative computation used for its training, measured in floating-point operations, is greater than 10²⁵ FLOPs. The Commission may adjust the threshold by delegated act.


High-risk AI system

Italian term: sistema di IA ad alto rischio

Term not formally defined in Article 3 of the AI Act. Qualification is governed by Article 6 and Annexes I and III. An AI system is qualified as high-risk in two scenarios:

a) Safety component of a product covered by EU harmonisation legislation (Article 6(1)): the AI system is a safety component, or itself a product, covered by the EU harmonisation legislation listed in Annex I (e.g. machinery, medical devices, personal protective equipment, lifts, radio equipment, toys, equipment for use in potentially explosive atmospheres, means of transport, etc.) and the applicable legislation requires a third-party conformity assessment.

b) Annex III sectors (Article 6(2)): AI system used in one of the eight areas listed in Annex III: biometric identification and categorisation, critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services, law enforcement, migration management and border control, administration of justice and democratic processes. Exception (Article 6(3)): an Annex III system is not considered high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, for instance because it performs only a narrow procedural task.

Source: Reg. (EU) 2024/1689 — AI Act, Article 6 and Annexes I and III

See also

AI system · Safety component · Conformity assessment · CE marking · Fundamental Rights Impact Assessment (FRIA) · Substantial modification · Post-market monitoring system · Prohibited AI practices

Note

High-risk AI systems are subject to a dense regime of substantive obligations (Chapter III AI Act): risk management system (Article 9), data governance (Article 10), technical documentation (Article 11), log retention (Article 12), transparency to deployer (Article 13), human oversight (Article 14), accuracy/robustness/cybersecurity (Article 15), conformity assessment with CE marking (Articles 43-48), registration in the EU database (Article 49), post-market monitoring (Article 72), serious incident notification (Article 73). Most obligations enter into force on 2 August 2026. For AI systems that are safety components in products under Annex I, the deadline is extended to 2 August 2027.


I

Importer

Italian term: importatore

'importer' means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(6)

See also

Provider · Distributor · Operator · Authorised representative · Placing on the market

Note

The importer is the bridging figure between third countries and the single market: it is the operator that actually places on the EU market an AI system manufactured in a third country and bearing the name or trademark of the original manufacturer. For high-risk systems, the importer must verify, before placing the system on the market, that the non-EU provider has carried out the conformity assessment, drawn up the technical documentation and affixed the CE marking; it must indicate its own identification details and cooperate with the authorities (Article 23 AI Act).


Informed consent

Italian term: consenso informato

'informed consent' means a subject's freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(59)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 4(11); Article 7 Distinction: AI Act consent concerns participation in testing, not data processing. When the testing entails personal data processing, GDPR consent applies cumulatively

See also

Testing in real-world conditions · Subject · AI regulatory sandbox

Note

Informed consent under the AI Act is specific to the testing of AI systems in real-world conditions outside sandboxes (Article 60). The four characteristics (freely given, specific, unambiguous, voluntary) echo the GDPR formulation, but the regime is autonomous: the subject has the right to withdraw consent without justification and with immediate effect, and to request the deletion of their personal data.


Input data

Italian term: dati di input

'input data' means data provided to or directly acquired by an AI system on the basis of which the system produces an output.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(33)

See also

AI system · Training data · Performance of an AI system


Instructions for use

Italian term: istruzioni per l'uso

'instructions for use' means the information provided by the provider to inform the deployer of, in particular, an AI system's intended purpose and proper use.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(15)

See also

Provider · Deployer · Intended purpose · Reasonably foreseeable misuse

Note

Instructions for use are the main information vehicle between provider and deployer of high-risk AI systems. Article 13 of the AI Act prescribes their minimum content: identity of the provider, characteristics/capabilities/limitations of the system, modifications made, performance with respect to persons or groups, training data (general description), required computational and hardware resources, expected lifetime and maintenance measures. They are mandatory and must be clear, complete, understandable and relevant.


Intended purpose

Italian term: finalità prevista

'intended purpose' means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(12)

See also

Provider · Instructions for use · Reasonably foreseeable misuse · Substantial modification

Note

The intended purpose is the assessment perimeter of the AI system: it is against this that compliance with AI Act requirements, expected performance, risks and mitigation measures are measured. Modification of the intended purpose after placing on the market constitutes a substantial modification (Article 3(23)) and triggers a new conformity assessment.


L

Law enforcement

Italian term: attività di contrasto

'law enforcement' means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(46)

See also

Law enforcement authority · Sensitive operational data · Real-time remote biometric identification system


Law enforcement authority

Italian term: autorità di contrasto

'law enforcement authority' means:

(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(45)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR (scope exclusion) The activities of law enforcement authorities are excluded from the GDPR scope and governed by Directive (EU) 2016/680 (LED)

See also

Law enforcement · Sensitive operational data

Note

The notion of law enforcement authority is functional: it covers both traditional public authorities (police forces, prosecutors) and private bodies to which Member State law confers public powers for the prevention, investigation or prosecution of criminal offences. The distinction matters because AI systems used by law enforcement authorities fall within Annex III, point 6 (high-risk systems) and are subject to specific transparency and supervision regimes.


M

Making available on the market

Italian term: messa a disposizione sul mercato

'making available on the market' means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(10)

See also

Placing on the market · Putting into service · Provider · Distributor · Importer

Note

Making available on the market is a continuous act: every subsequent supply of the AI system after the first placing on the market. The distinction between placing on the market (point-in-time event, first time) and making available on the market (repeatable act) matters to identify the specific obligations of the distributor. Free of charge supply also falls within the definition, provided it occurs in the course of a commercial activity.


Market surveillance authority

Italian term: autorità di vigilanza del mercato

'market surveillance authority' means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(26)

See also

National competent authority · Notifying authority · Post-market monitoring system

Note

The cross-reference is to Regulation (EU) 2019/1020 on market surveillance and conformity of products. Market surveillance authorities for AI systems may coincide with the national competent authorities for the AI Act (Article 70) or be designated separately by Member States.


N

National competent authority

Italian term: autorità nazionale competente

'national competent authority' means a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(48)

See also

Notifying authority · Market surveillance authority · AI Office

Note

In Italy, under Law 132/2025, the national competent authorities for the AI Act are AgID (Agency for Digital Italy) and ACN (National Cybersecurity Agency), with the Italian Data Protection Authority retaining competence over personal data processing aspects of AI systems.


Non-personal data

Italian term: dati non personali

'non-personal data' means data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(51)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 4(1) Mirror definition: non-personal data = any data not falling within the GDPR notion of personal data
Reg. (EU) 2018/1807 (regulation on the free flow of non-personal data) Autonomous regime for non-personal data in the single market
Data Act (main object) The Data Act predominantly governs access to and sharing of non-personal data generated by connected products
DGA Article 2(1) The DGA definition of "data" covers both personal and non-personal data

See also

Personal data · Training data

Note

The notion is residual: anything that is not personal data under the GDPR is non-personal data. The distinction is practical but not always sharp: in many real-world datasets (e.g. industrial logs, IoT sensors) the classification requires case-by-case assessment. The transformation of personal data into non-personal data through anonymisation is governed by the GDPR (Recital 26) and produces data that permanently fall outside the GDPR scope.


Notified body

Italian term: organismo notificato

'notified body' means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(22)

See also

Conformity assessment body · Notifying authority · Conformity assessment · CE marking

Note

The notified body is the conformity assessment body accredited and notified to the Commission by the national notifying authority. It receives an identification number which appears alongside the CE marking when it intervenes in the conformity assessment (e.g. CE 0123). The notification, designation, monitoring and suspension/revocation regime is governed by Articles 28-39 AI Act. This figure is borrowed from the general system of the EU New Legislative Framework (NLF — Reg. 765/2008 + Decision 768/2008/EC).


Notifying authority

Italian term: autorità di notifica

'notifying authority' means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(19)

See also

National competent authority · Conformity assessment body · Notified body · Conformity assessment


O

Operator

Italian term: operatore

'operator' means a provider, product manufacturer, deployer, authorised representative, importer or distributor.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(8)

See also

Provider · Deployer · Importer · Distributor · Authorised representative · Downstream provider

Note

"Operator" is the umbrella category that gathers all entities in the AI Act value chain subject to substantive obligations. It is used in the Regulation when a provision applies indistinctly to providers, deployers, importers, distributors, authorised representatives and product manufacturers — typically in matters of cooperation with authorities, penalties and reporting of non-conformity. It must not be confused with the GDPR "controller" or "processor", nor with the PLD "manufacturers": the AI Act operator qualification is autonomous and specific.


P

Performance of an AI system

Italian term: prestazioni di un sistema di IA

'performance of an AI system' means the ability of an AI system to achieve its intended purpose.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(18)

See also

AI system · Intended purpose · Testing data · Post-market monitoring system

Note

The AI Act definition of performance is purpose-oriented: it measures the system's ability to achieve the provider's intended purpose, not an abstract notion of technical "accuracy". For high-risk systems, Article 15 of the AI Act requires appropriate levels of accuracy, robustness and cybersecurity, declared in the instructions for use. Actual performance is subject to post-market monitoring (Article 72 AI Act).


Personal data

Italian term: dati personali

'personal data' means personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(50)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 4(1) Canonical definition: 'personal data' means any information relating to an identified or identifiable natural person, directly or indirectly, by reference to an identifier (name, identification number, location data, online identifier, factors specific to physical/physiological/genetic/mental/economic/cultural/social identity)
DGA Article 2(3) Direct definitional cross-reference to GDPR
Data Act Article 2(3) Direct definitional cross-reference to GDPR
PLD Article 4(6) PLD definition of "data": cross-refers to Reg. (EU) 2022/868 (DGA), which itself cross-refers to the GDPR
DSA (implicit cross-reference to GDPR) The DSA operates without prejudice to the GDPR (Article 2(4)(g)); cumulative application

See also

Non-personal data · Special categories of personal data · Biometric data · Profiling

Note

"Personal data" is the pivot definition of the entire EU digital regulatory ecosystem. All EU acts touching data processing reference it by cross-reference. The notion is functional, not static: the same datum may be personal in one context and non-personal in another, depending on the reasonably likely identifiability of the natural person. CJEU case law (Breyer C-582/14, IAB Europe C-604/22, EDPS-SRB T-557/20) has consolidated a broad but contextual interpretation.


Placing on the market

Italian term: immissione sul mercato

'placing on the market' means the first making available of an AI system or a general-purpose AI model on the Union market.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(9)

See also

Making available on the market · Putting into service · Provider · Importer · Distributor

Note

Placing on the market is a single, point-in-time event for each system or model: the first time it is made available on the Union market. It triggers many provider obligations (post-market monitoring, registration, serious incident reporting). It differs from making available on the market (subsequent continuous act) and from putting into service (direct supply to the deployer for first use).


Post-market monitoring system

Italian term: sistema di monitoraggio successivo all'immissione sul mercato

'post-market monitoring system' means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(25)

See also

Provider · Placing on the market · Putting into service · Serious incident · Recall of an AI system · Withdrawal of an AI system · Performance of an AI system

Note

The post-market monitoring system (PMM) is a continuous obligation of the provider governed by Article 72 of the AI Act. For high-risk AI systems, the provider must document the monitoring plan and systematically collect data on performance over the lifecycle. The PMM feeds three streams: (a) reporting of serious incidents (Article 73), (b) activation of corrective measures such as recall or withdrawal (Article 20), (c) review of the risk management system (Article 9). The Commission adopts implementing acts to establish a detailed plan template (Article 72(3)).


Post-remote biometric identification system

Italian term: sistema di identificazione biometrica remota a posteriori

'post-remote biometric identification system' means a remote biometric identification system other than a real-time remote biometric identification system.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(43)

See also

Remote biometric identification system · Real-time remote biometric identification system · Biometric identification · Law enforcement

Note

Post-remote biometric identification operates on pre-existing datasets (e.g. recorded surveillance footage analysed afterwards) rather than in real time. It is classified as high-risk (Annex III, point 1(a)) and not prohibited. When used by law enforcement authorities, it is subject to authorisation by a judicial or independent administrative authority and to additional requirements (Article 26 AI Act). The negative definition (by exclusion of real-time) means that any RBI with delays not "short and limited" falls within this category.


Processor

Italian term: responsabile del trattamento

'processor' means a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.

Source: Reg. (EU) 2016/679 — GDPR, Article 4(8)

See also

Controller · Data subject · Personal data · Deployer · Provider

Note

The processor is the entity that carries out processing operations on behalf of the controller. The relationship must be governed by a contract or other binding legal act (GDPR Article 28). The controller/processor distinction is functional: whoever decides the purposes and means of processing is a controller; whoever operates according to the controller's instructions is a processor. In the AI value chain: the AI Act provider is not, as such, a processor (it does not process personal data on someone else's behalf in developing the system); the AI Act deployer is frequently a controller but may also be a processor, depending on the underlying relationship. The qualification requires a concrete, case-by-case assessment (EDPB Guidelines 07/2020).


Profiling

Italian term: profilazione

'profiling' means profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(52)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 4(4) Canonical definition: 'profiling' means any form of automated processing of personal data consisting of the use of personal data to evaluate personal aspects (work performance, economic situation, health, personal preferences, interests, reliability, behaviour, location, movements)
GDPR Article 22 Right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or significant effects
DSA Article 26(3); Article 28 DSA-specific prohibitions: advertising based on profiling using GDPR Article 9 special categories (Article 26(3)); prohibition of profiling of minors for advertising (Article 28(2))

See also

Personal data · Special categories of personal data

Note

"Profiling" is the pivot definition for automated processing of personal data. The GDPR definition is broad: it covers any evaluation of personal aspects through automated processing. Profiling that produces decisions with legal or similarly significant effects (GDPR Article 22, LED Article 11) is subject to a restrictive regime: prohibition as the general rule, exhaustive exceptions, and a right to human intervention. When profiling is performed by an AI system, the AI Act and the GDPR apply cumulatively, with possible classification of the system as high-risk (Annex III).


Prohibited AI practices

Italian term: pratiche di IA vietate

Term not formally defined in Article 3 of the AI Act. Article 5 exhaustively lists the AI practices prohibited on the Union market, grouped into eight categories: (a) subliminal, manipulative or deceptive techniques; (b) exploitation of vulnerabilities (age, disability, socioeconomic situation); (c) social scoring for general use by public or private entities; (d) predictive assessment of the individual risk of criminal offence based solely on profiling or personality traits; (e) creation or expansion of facial recognition databases through untargeted scraping of the internet or CCTV footage; (f) inference of emotions in the workplace or in education (with medical/safety exceptions); (g) biometric categorisation to deduce race, political opinions, trade union membership, religious/philosophical beliefs, sex life or sexual orientation; (h) real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to exhaustive exceptions).

Source: Reg. (EU) 2024/1689 — AI Act, Article 5

See also

AI system · Real-time remote biometric identification system · Biometric categorisation system · Emotion recognition system · Publicly accessible space · Law enforcement · Special categories of personal data

Note

Prohibited AI practices represent the highest level of the AI Act risk-based approach: unacceptable risk, absolute prohibition. In force since 2 February 2025, ahead of all other substantive provisions. Violation of the prohibition is sanctioned with the highest penalty: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher (Article 99(3) AI Act). Relevant exceptions and derogations for biometric public-security practices are concentrated in Article 5(2)-(7). The Commission Guidelines of 4 February 2025 (C(2025) 884 final) clarify the application scope of each case.


Provider

Italian term: fornitore

'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(3)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 4(7)-(8) The AI Act provider does not automatically coincide with the GDPR controller or processor: the qualification must be assessed case by case
PLD Recital 13; Article 4(10) AI system providers are expressly treated as manufacturers for PLD purposes
Data Act Article 2(13)-(14) Operational distinction: the AI Act provider is not as such a "data holder" or "data recipient" within the meaning of the Data Act

See also

Deployer · Downstream provider · Operator · Importer · Distributor · Authorised representative

Note

"Provider" is the central figure of the AI Act: virtually all the substantive obligations of the Regulation rest on it (risk management system, data governance, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessment, CE marking, registration in the EU database, post-market monitoring). The definition covers both those who develop the system directly and those who have it developed and place it on the market under their own name — a broad notion that also captures white-label arrangements. Importantly, provider status arises even where the system or model is supplied free of charge (e.g. open-source models distributed without consideration).


Pseudonymisation

Italian term: pseudonimizzazione

'pseudonymisation' means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.

Source: Reg. (EU) 2016/679 — GDPR, Article 4(5)

See also

Anonymisation · Personal data · Special categories of personal data · Data subject

Note

Pseudonymisation is a technical and organisational security measure (GDPR Article 32) and an operational principle of data protection by design (Article 25). Pseudonymised data remain personal data: re-identification is possible through separately kept additional information. It differs from anonymisation (irreversible, anonymous data exits the GDPR scope). Pseudonymisation reduces risk but does not eliminate the applicability of the GDPR. In data governance for high-risk AI systems (AI Act Article 10), pseudonymisation is expressly provided for as a mitigation measure, in particular for training datasets and for bias monitoring purposes (Article 10(5)).


Publicly accessible space

Italian term: spazio accessibile al pubblico

'publicly accessible space' means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(44)

See also

Real-time remote biometric identification system · Remote biometric identification system · Law enforcement

Note

The notion of publicly accessible space is broad and functional: it includes both public places (streets, squares, stations) and private places with undetermined access (shopping centres, stadiums, airport waiting areas). Conditions for access (ticket, booking, entry control) and capacity restrictions (maximum number of persons) do not exclude the qualification as publicly accessible space. The notion is crucial for the prohibition of real-time RBI (Article 5(1)(h)): the prohibition applies only in publicly accessible spaces, not in closed spaces (offices, dwellings, restricted areas) nor to systems mounted on private vehicles.


Putting into service

Italian term: messa in servizio

'putting into service' means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(11)

See also

Placing on the market · Making available on the market · Deployer · Provider · Intended purpose

Note

Putting into service is the alternative route to placing on the market for triggering the substantive obligations of the AI Act. It is particularly relevant when the provider develops the system for internal use (its own or for a specific client) without distributing it on the market: the high-risk regime applies in full in this case too. This is typical of public-sector AI systems developed bespoke for administrations.


R

Real-time remote biometric identification system

Italian term: sistema di identificazione biometrica remota in tempo reale

'real-time remote biometric identification system' means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(42)

See also

Remote biometric identification system · Post-remote biometric identification system · Publicly accessible space · Law enforcement

Note

Real-time remote biometric identification in publicly accessible spaces for law enforcement is in principle PROHIBITED by Article 5(1)(h) of the AI Act. There are three exhaustive exceptions: (a) targeted search for specific victims of abduction, trafficking or sexual exploitation, and for missing persons; (b) prevention of a specific, substantial and imminent threat to the life or physical safety of persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack; (c) localisation or identification of persons suspected of serious offences listed in Annex II, punishable by a custodial sentence or detention order for a maximum period of at least four years. Use requires prior authorisation by a judicial or independent administrative authority (in urgent cases, subsequent authorisation within 24 hours) and a fundamental rights impact assessment. The notion of "real-time" includes identifications with limited short delays to avoid circumvention (e.g. a technical delay for territorial coverage).


Real-world testing plan

Italian term: piano di prova in condizioni reali

'real-world testing plan' means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(53)

See also

Testing in real-world conditions · Subject · Informed consent · AI regulatory sandbox


Reasonably foreseeable misuse

Italian term: uso improprio ragionevolmente prevedibile

'reasonably foreseeable misuse' means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(13)

See also

Intended purpose · AI system · Instructions for use · Risk · Substantial modification

Note

Reasonably foreseeable misuse extends the assessment perimeter of the AI system beyond the strict intended purpose. For high-risk systems, the provider must factor it into risk assessment and mitigation (Article 9 AI Act): it is not enough to design for the "clean" use case; plausible human deviations (errors, operational shortcuts, creative uses of the system) and interactions with other systems must be anticipated. The notion introduces a threshold of reasonable foreseeability: wholly unexpected uses or extraordinary malicious uses (e.g. sophisticated adversarial machine learning attacks beyond the state of the art known at the time of placing on the market) remain outside. Reasonably foreseeable misuse does not in itself amount to a "substantial modification" (Article 3(23)).


Recall of an AI system

Italian term: richiamo di un sistema di IA

'recall of an AI system' means any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(16)

See also

Withdrawal of an AI system · Provider · Deployer · Serious incident · Post-market monitoring system

Note

Recall is a downstream corrective measure: it acts on AI systems already made available to deployers and aims to retrieve them (return to the provider), shut them down (taking out of service) or disable them. It differs from withdrawal (Article 3(17)), which operates upstream on the supply chain to prevent further making available. Both measures are activated by the provider in case of non-compliance (Article 20 AI Act) or imposed by market surveillance authorities.


Remote biometric identification system

Italian term: sistema di identificazione biometrica remota

'remote biometric identification system' means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person's biometric data with the biometric data contained in a reference database.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(41)

See also

Biometric identification · Biometric verification · Real-time remote biometric identification system · Post-remote biometric identification system · Publicly accessible space · Law enforcement

Note

Remote biometric identification systems (RBI) are the cardinal category of the AI Act biometric regime, subject to differentiated rules depending on latency (real-time vs post), context (publicly accessible space vs closed space) and purpose (law enforcement vs other uses). Three qualifying elements distinguish RBI from mere biometric identification: no active involvement of the person, identification typically at a distance, and comparison with a reference database. Biometric verification (one-to-one authentication with active involvement) is explicitly excluded.


Risk

Italian term: rischio

'risk' means the combination of the probability of an occurrence of harm and the severity of that harm.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(2)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Recitals 75-77, 84; Article 35 GDPR risk (to rights and freedoms) uses an analogous notion: probability × severity of harm to data subjects. Methodological convergence in DPIA
PLD Recital 21 Risk is a criterion to assess product defectiveness (Article 7); convergence with the AI Act in the overall assessment
DSA Article 34 VLOPs assess "systemic risks" using analogous methodology (probability × severity)

See also

Systemic risk · AI system · Serious incident

Note

The AI Act definition is aligned with ISO 31000 and the international risk management literature. The notion is central to the risk-based approach that characterises the entire AI Act, which distributes AI uses along a continuum (unacceptable risk → prohibited under Article 5; high risk → subject to enhanced requirements under Chapter III; limited risk → transparency obligations under Article 50; minimal risk → no specific obligations). The risk management system is a continuous obligation of high-risk system providers (Article 9 AI Act).
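The Article 3(2) definition can be read as the standard risk formula of the ISO 31000 tradition. A minimal illustrative sketch (the multiplicative combination and the numeric scales are ours, not the Act's, which does not prescribe a quantification method):

```python
def risk_score(probability: float, severity: float) -> float:
    """Risk as the combination of the probability of an occurrence of harm
    and the severity of that harm (Art. 3(2) AI Act); here combined
    multiplicatively, as is common in risk management practice."""
    return probability * severity

# Two hypothetical hazards: rare but severe vs. frequent but mild.
print(risk_score(0.25, 8.0))  # 2.0
print(risk_score(0.5, 0.2))   # 0.1
```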


S

Safety component

Italian term: componente di sicurezza

'safety component' means a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(14)

See also

AI system · Substantial modification · Serious incident

Note

The qualification of an AI system as a safety component is one of the two cardinal criteria for high-risk classification under Article 6(1) of the AI Act: if the AI system is a safety component of a product covered by the Union harmonisation legislation listed in Annex I, and that product is subject to third-party conformity assessment under that legislation, it falls within the high-risk regime.


Sandbox plan

Italian term: piano dello spazio di sperimentazione

'sandbox plan' means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(54)

See also

AI regulatory sandbox · Real-world testing plan · Provider · National competent authority


Sensitive operational data

Italian term: dati operativi sensibili

'sensitive operational data' means operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(38)

See also

Law enforcement · Law enforcement authority · Personal data

Note

Sensitive operational data are an autonomous category of the AI Act, distinct both from personal data and from special categories. Their protection rests not on the nature of the subject (data subject) but on the integrity of criminal proceedings: disclosure could prejudice ongoing or subsequent investigations. The notion is relevant for the transparency and logging regimes of AI systems used by law enforcement authorities.


Serious incident

Italian term: incidente grave

'serious incident' means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a) the death of a person, or serious harm to a person's health;

(b) a serious and irreversible disruption of the management or operation of critical infrastructure;

(c) the infringement of obligations under Union law intended to protect fundamental rights;

(d) serious harm to property or the environment.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(49)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Articles 33-34 When a serious incident involves personal data, AI Act notification obligations accumulate with the GDPR breach notification (single entry-point envisaged by the 2025 Digital Omnibus)
NIS2 (incident notification) For AI systems in entities subject to NIS2, the AI Act serious incident notification may accumulate with the NIS2 significant incident notification
PLD Recital 32; Article 7(2) An AI Act serious incident may meet the requirements to trigger the PLD presumption of defectiveness or causal link

See also

AI system · Critical infrastructure · Post-market monitoring system · Safety component

Note

A serious incident triggers a notification obligation on the provider towards market surveillance authorities (Article 73 AI Act): ordinarily within 15 days, reduced to 10 days in the event of death and to 2 days for a serious and irreversible disruption of critical infrastructure. The definition is functional (based on the consequence, not on the technical cause of the malfunction) and captures both physical damage and infringements of fundamental rights — broader than the notion of incident in other product regimes.
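The Article 73 timelines above can be sketched as a simple lookup (the deadlines are those summarised in the note; the category labels and function name are ours):

```python
# Maximum notification deadlines, in days, from the moment the serious
# incident becomes known to the provider (Art. 73 AI Act, as summarised above).
NOTIFICATION_DEADLINES_DAYS = {
    "ordinary": 15,
    "death": 10,
    "critical_infrastructure_disruption": 2,
}

def deadline_days(incident_category: str) -> int:
    """Return the applicable maximum notification deadline in days."""
    return NOTIFICATION_DEADLINES_DAYS[incident_category]

print(deadline_days("death"))  # 10
```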


Special categories of personal data

Italian term: categorie particolari di dati personali

'special categories of personal data' means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(37)

Inter-act cross-references

Act Reference Nature of the cross-reference
GDPR Article 9(1) Primary source definition: data revealing racial/ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data for unique identification, data concerning health, sex life or sexual orientation
DSA Article 26(3) Autonomous DSA prohibition of advertising based on profiling that uses GDPR Article 9 special categories

See also

Personal data · Biometric data · Profiling

Note

The AI Act references three EU sources for special categories: in addition to the GDPR for the private and general public sector, Directive (EU) 2016/680 (LED) for law enforcement and Regulation (EU) 2018/1725 for EU institutions and bodies. The processing of these categories in high-risk AI systems is governed by Article 10(5) of the AI Act, which authorises their use for bias monitoring under stringent conditions.


Subject

Italian term: soggetto

'subject', for the purpose of real-world testing, means a natural person who participates in testing in real-world conditions.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(58)

See also

Testing in real-world conditions · Informed consent · Real-world testing plan · AI regulatory sandbox

Note

"Subject" is a specific and contextual notion: it applies only for the purposes of testing in real-world conditions under Article 60 of the AI Act. It is not to be confused with the GDPR "data subject" (Article 4(1)) nor with the AI Act "affected person" (e.g. Article 26(11)). The testing subject has specific rights: prior informed consent, possibility to withdraw consent without justification and with immediate effect, deletion of personal data involved, compensation in case of damage.


Substantial modification

Italian term: modifica sostanziale

'substantial modification' means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(23)

See also

AI system · Intended purpose · Conformity assessment · Provider · Deployer

Note

A substantial modification is the trigger for a new conformity assessment of an AI system already placed on the market. Two alternative scenarios: (a) the modification affects compliance with Chapter III, Section 2 requirements (data governance, robustness, accuracy, etc.); (b) the modification changes the intended purpose. Article 25(1) AI Act also provides that anyone (deployer, distributor, importer or third party) making a substantial modification to a high-risk system becomes a provider in its own right, with all related obligations. The notion is central to the chain of responsibility along the system lifecycle.


Systemic risk

Italian term: rischio sistemico

'systemic risk' means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(65)

Inter-act cross-references

Act Reference Nature of the cross-reference
DSA Article 34 Related but autonomous notion: DSA systemic risks refer to VLOPs/VLOSE; AI Act systemic risks refer to general-purpose AI models. Possible cumulation for platforms integrating generative models

See also

General-purpose AI model · High-impact capabilities · Floating-point operation · Risk

Note

Systemic risk is a specific category introduced for general-purpose AI models with high-impact capabilities. It is governed by Articles 51-55 of the AI Act: presumption for models whose cumulative training computation exceeds 10²⁵ FLOPs; discretionary designation by the Commission on the basis of other criteria (number of parameters, data quality, autonomy, market relevance). Providers of models with systemic risk have enhanced obligations: model evaluation with standardised methodologies (including adversarial testing, i.e. red-teaming), assessment and mitigation of systemic risks at EU level, reporting of serious incidents to the AI Office, and adequate levels of cybersecurity.
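The compute-based presumption is a bare threshold comparison. A minimal sketch (the 10²⁵ figure is from the Act; the function name is ours, and the Commission may in any case designate models on other criteria):

```python
# Presumption threshold for high-impact capabilities of general-purpose AI
# models (Art. 51(2) AI Act): cumulative training compute above 1e25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True when the compute presumption applies (strictly greater than the
    threshold). Designation on other criteria is not modelled here."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(8e24))  # False
```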


T

Testing data

Italian term: dati di prova

'testing data' means data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(32)

See also

Training data · Validation data · Placing on the market · Putting into service

Note

Testing data are distinct from training and validation data and must ensure an independent evaluation of the system. For high-risk systems, Article 10(3) of the AI Act requires them to be relevant, sufficiently representative and complete with respect to the testing purposes.
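The independence the Act requires between testing data on the one hand and training and validation data on the other is what a standard disjoint three-way split is meant to guarantee in machine learning practice. A minimal standard-library sketch (the split ratios and function name are ours):

```python
import random

def three_way_split(records, train=0.7, validation=0.15, seed=42):
    """Disjoint training/validation/testing split, so that the testing
    data provide an evaluation independent of the data used to fit the
    learnable parameters and to tune the non-learnable ones."""
    items = list(records)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = round(len(items) * train)
    n_val = round(len(items) * validation)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = three_way_split(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```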


Testing in real-world conditions

Italian term: prova in condizioni reali

'testing in real-world conditions' means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(57)

See also

Real-world testing plan · Subject · Informed consent · AI regulatory sandbox · Placing on the market · Putting into service

Note

Testing in real-world conditions is a scenario expressly regulated by the AI Act, allowing an AI system to be tested outside laboratories and simulated environments without this constituting placing on the market or putting into service — provided strict conditions are met (Articles 57-60 AI Act): authorisation, a deposited testing plan, informed consent of subjects, limited duration, monitoring. It differs from the regulatory sandbox: the sandbox is an authorised and supervised framework, whereas testing in real-world conditions is an operational instrument that can also take place outside a sandbox.


Training data

Italian term: dati di addestramento

'training data' means data used for training an AI system through fitting its learnable parameters.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(29)

See also

Validation data · Testing data · Input data · Validation data set

Note

Training data are governed by Article 10 of the AI Act ("Data and data governance") for high-risk systems: they must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. Where they include personal data, the GDPR applies cumulatively, with possible activation of the exceptional regime under Article 10(5) for bias monitoring.


V

Validation data

Italian term: dati di convalida

'validation data' means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(30)

See also

Training data · Testing data · Validation data set


Validation data set

Italian term: set di dati di convalida

'validation data set' means a separate data set or part of the training data set, either as a fixed or variable split.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(31)

See also

Validation data · Training data · Testing data


Very Large Online Platform (VLOP)

Italian term: piattaforma online di dimensioni molto grandi (VLOP)

Term not autonomously defined in the DSA. The qualification of VLOP is governed by Article 33 DSA: an online platform with an average monthly number of active recipients of the service in the Union equal to or greater than 45 million is designated by the Commission as a 'very large online platform'.

Source: Reg. (EU) 2022/2065 — DSA, Article 33

Inter-act cross-references

Act Reference Nature of the cross-reference
DSA Articles 33-43 Enhanced obligations regime: systemic risk assessment and mitigation, independent audits, transparent recommender systems, data access for researchers, compliance officer
DSA Article 35(1)(k) VLOPs assess and mitigate systemic risks linked to AI-generated or manipulated content (deepfakes) and to the integration of general-purpose AI models

See also

Very Large Online Search Engine (VLOSE) · Systemic risk · Deep fake · General-purpose AI model · Profiling

Note

VLOPs are designated by Commission decision following confirmation of the threshold of 45 million monthly active recipients. Current designations include — among others — Facebook, Instagram, LinkedIn, X, YouTube, TikTok, Snapchat, Pinterest, Amazon Store, Google Play, Google Maps, Google Shopping, Booking.com, AliExpress, Wikipedia, Zalando, Temu, Shein. VLOPs are subject to the enhanced regime of the DSA (Chapter III, Section 5): annual systemic risk assessment (Article 34), mitigation measures (Article 35), annual independent audit (Article 37), compliance officer (Article 41), data access for researchers (Article 40), direct supervision by the Commission (Article 56).
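The Article 33 quantitative criterion is an average over monthly figures compared against the 45 million threshold. A minimal sketch (the threshold is from the DSA; the function name and sample figures are ours, and designation itself remains a Commission decision, not an automatic effect):

```python
# Designation threshold under Art. 33 DSA: average monthly number of
# active recipients of the service in the Union.
VLOP_THRESHOLD = 45_000_000

def meets_vlop_threshold(monthly_active_recipients: list) -> bool:
    """True when the average of the reported monthly figures reaches
    the Art. 33 threshold (same test applies to VLOSE designation)."""
    return sum(monthly_active_recipients) / len(monthly_active_recipients) >= VLOP_THRESHOLD

# Hypothetical platform hovering around the threshold.
print(meets_vlop_threshold([44_000_000, 46_000_000, 47_000_000]))  # True
print(meets_vlop_threshold([40_000_000, 44_000_000]))              # False
```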


Very Large Online Search Engine (VLOSE)

Italian term: motore di ricerca online di dimensioni molto grandi (VLOSE)

Term not autonomously defined in the DSA. The qualification of VLOSE is governed by Article 33 DSA: an online search engine within the meaning of Article 3(j) DSA, with an average monthly number of active recipients of the service in the Union equal to or greater than 45 million, is designated by the Commission as a 'very large online search engine'.

Source: Reg. (EU) 2022/2065 — DSA, Article 33

Inter-act cross-references

Act Reference Nature of the cross-reference
DSA Articles 33-43 Enhanced obligations regime: systemic risk assessment and mitigation, independent audits, transparent recommender systems, data access for researchers, compliance officer
DSA Articles 34-35 When a VLOSE integrates general-purpose AI models, it assesses systemic risks under the DSA and the AI Act jointly

See also

Very Large Online Platform (VLOP) · Systemic risk · General-purpose AI model · Profiling

Note

VLOSEs are designated by Commission decision following confirmation of the threshold. Currently designated VLOSEs are Bing and Google Search. They are subject to the same enhanced obligations as VLOPs (DSA, Chapter III, Section 5): annual systemic risk assessment under Article 34, mitigation measures under Article 35, annual independent audit, compliance officer, data access for researchers, and Commission supervision.


W

Widespread infringement

Italian term: infrazione diffusa

'widespread infringement' means any act or omission contrary to Union law protecting the interest of individuals, which:

(a) has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State in which:

(i) the act or omission originated or took place;

(ii) the provider concerned, or, where applicable, its authorised representative is located or established; or

(iii) the deployer is established, when the infringement is committed by the deployer;

(b) has caused, causes or is likely to cause harm to the collective interests of individuals and has common features, including the same unlawful practice or the same interest being infringed, and is occurring concurrently, committed by the same operator, in at least three Member States.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(61)

See also

Operator · Provider · Deployer · Authorised representative

Note

The notion is borrowed from Regulation (EU) 2017/2394 on cooperation between national authorities for consumer protection. It activates cross-border cooperation mechanisms among market surveillance authorities of different Member States when an AI Act infringement involves multiple jurisdictions — a typical situation for AI systems distributed via the internet.


Withdrawal of an AI system

Italian term: ritiro di un sistema di IA

'withdrawal of an AI system' means any measure aiming to prevent an AI system in the supply chain being made available on the market.

Source: Reg. (EU) 2024/1689 — AI Act, Article 3(17)

See also

Recall of an AI system · Provider · Distributor · Importer · Making available on the market

Note

Withdrawal operates upstream in the supply chain: it blocks further making available of an AI system that has already been placed on the market but has not yet reached the final deployers. It differs from recall (Article 3(16)), which acts downstream on systems already delivered to deployers. The two measures are often used in combination: withdrawal to halt further distribution, recall to retrieve what has already been supplied.