Standards
This page is dedicated to the technical standards that apply to artificial intelligence and intersect with the scope of the AI Act. It collects the main international (ISO/IEC) and national (NIST) publications used as references for governance, risk management, life cycle, and management systems of AI systems.
Standards are not binding legal sources, but they represent the common ground of good practice referred to in codes of conduct, conformity assessments, and industry at large. The AI Act explicitly encourages reliance on harmonised standards (Articles 40-41) as a means of conferring a presumption of conformity on high-risk systems.
Access to documents
NIST documents are usually freely available, as publications of the US government. ISO/IEC standards are paid publications, available from the official site, iso.org.
NIST AI RMF 1.0
Full title: Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1)
Issued by: National Institute of Standards and Technology (NIST), United States
Publication: 26 January 2023
Status: in force
Type: voluntary framework for AI risk management
Summary
Structured framework to identify, assess, and manage the risks of artificial intelligence systems throughout their life cycle. It is organised around four core functions — Govern, Map, Measure, Manage — applied iteratively during design, development, deployment, and monitoring. The framework promotes trustworthy AI through seven characteristics: validity and reliability; safety; security and resilience; accountability and transparency; explainability and interpretability; privacy enhancement; and fairness with harmful bias managed.
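As an illustration, one iteration of the four functions could be modelled as a small risk register. This is a hedged sketch only: the class and method names (`Risk`, `RiskRegister`, the severity threshold) are hypothetical and are not defined by the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int = 0          # set during Measure
    mitigated: bool = False    # set during Manage

@dataclass
class RiskRegister:
    # Govern is cross-cutting: policies that constrain the other functions.
    policies: list[str] = field(default_factory=list)
    risks: list[Risk] = field(default_factory=list)

    def govern(self, policy: str) -> None:
        self.policies.append(policy)

    def map(self, description: str) -> Risk:
        # Map: identify a risk in context and record it.
        risk = Risk(description)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: int) -> None:
        # Measure: quantify the identified risk.
        risk.severity = severity

    def manage(self, risk: Risk, threshold: int = 3) -> None:
        # Manage: treat risks above the tolerance set by governance.
        risk.mitigated = risk.severity >= threshold

register = RiskRegister()
register.govern("model releases require sign-off")
r = register.map("training data may encode demographic bias")
register.measure(r, severity=4)
register.manage(r)
```

In practice the cycle repeats: monitoring feeds new findings back into Map, which is why the framework describes the functions as iterative rather than sequential.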
In July 2024 NIST published the Generative AI Profile (NIST AI 600-1), a cross-sectoral profile of the framework specifically addressing the peculiarities of generative AI.
Cross-reference with the AI Act
- Article 9 (Risk management system) — the AI RMF is structurally aligned with the risk management system required for high-risk systems
- Article 15 (Accuracy, robustness, cybersecurity) — the seven trustworthy attributes of the AI RMF cover many of the properties required
- Article 17 (Quality management system) — the framework supports operators in structuring quality management
- The AI RMF is frequently cited in AI Act compliance literature as a key international reference
Official document
- NIST AI RMF 1.0 (PDF) — open access
- NIST AI 600-1 — Generative AI Profile (PDF) — open access
- Official NIST AI RMF page
ISO/IEC 42001:2023
Full title: Information technology — Artificial intelligence — Management system
Issued by: ISO/IEC (International Organization for Standardization / International Electrotechnical Commission)
Publication: 18 December 2023
Status: in force, first edition
Type: certifiable standard for AI management systems (AIMS)
Summary
The first certifiable international standard for AI management systems (AIMS). It specifies requirements for establishing, implementing, maintaining, and continually improving a management system dedicated to the responsible development, provision, or use of AI systems.
It adopts the High-Level Structure shared with other management system standards (ISO/IEC 27001, ISO 9001, ISO 14001), facilitating integration with existing management systems. It systematically addresses governance, roles and responsibilities, risk assessment, operational controls, performance indicators, and continuous improvement.
Cross-reference with the AI Act
- Article 17 (Quality management system) — the AIMS of ISO/IEC 42001 provides a concrete implementation of the quality management system required for high-risk system providers
- Articles 8-15 (Requirements for high-risk systems) — many controls in Annex A of ISO/IEC 42001 are aligned with AI Act requirements
- Article 26 (Obligations of deployers) — AIMS certification can support deployers in demonstrating responsible use
Official document
- ISO/IEC 42001:2023 — official ISO page — paid access
ISO/IEC 23894:2023
Full title: Information technology — Artificial intelligence — Guidance on risk management
Issued by: ISO/IEC
Publication: February 2023
Status: in force, first edition
Type: guidelines on risk management for AI systems
Summary
Specific guidelines for risk management in artificial intelligence systems. They extend and adapt the principles of ISO 31000 (general risk management) to the peculiarities of AI: opacity of certain models, dependence on training data, learning dynamics, bias, model drift, and other AI-specific risks.
It provides operational guidance on how to identify, analyse, evaluate, and treat risks associated with the development, use, and maintenance of AI systems, in a sector-agnostic way.
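The identify, analyse, evaluate, and treat sequence inherited from ISO 31000 can be sketched as a simple pipeline. All function names, the example risks, and the scoring scheme below are illustrative assumptions, not taken from the standard.

```python
# Hedged sketch of the generic risk process that ISO/IEC 23894 adapts
# from ISO 31000: identify -> analyse -> evaluate -> treat.

def identify(system_description: str) -> list[str]:
    # In practice: structured elicitation of AI-specific risks;
    # this fixed list is a placeholder.
    ai_specific = ["bias in training data", "model drift", "opacity"]
    return [f"{system_description}: {risk}" for risk in ai_specific]

def analyse(risk: str) -> dict:
    # Assign likelihood and impact (fixed placeholder values here).
    return {"risk": risk, "likelihood": 2, "impact": 3}

def evaluate(analysed: dict, tolerance: int = 4) -> bool:
    # Compare against the organisation's own risk criteria.
    return analysed["likelihood"] * analysed["impact"] > tolerance

def treat(analysed: dict) -> str:
    # Produce a treatment decision for risks above tolerance.
    return f"mitigation plan for: {analysed['risk']}"

plans = [treat(a) for a in map(analyse, identify("credit-scoring model"))
         if evaluate(a)]
```

The sector-agnostic character of the guidance shows up here as the free choice of risk criteria (`tolerance`) and scoring, which each organisation defines for itself.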
Cross-reference with the AI Act
- Article 9 (Risk management system) — ISO/IEC 23894 is the leading international reference for implementing the risk management system required by Article 9
- Complementary to ISO/IEC 42001, for which it supplies the operational risk-management component
Official document
- ISO/IEC 23894:2023 — official ISO page — paid access
ISO/IEC 5338:2023
Full title: Information technology — Artificial intelligence — AI system life cycle processes
Issued by: ISO/IEC
Publication: September 2023
Status: in force, first edition
Type: standard on life cycle processes
Summary
Defines the life cycle processes specific to artificial intelligence systems, integrating the general structure of ISO/IEC/IEEE 12207 (Software life cycle processes) with elements typical of AI: data management, training, validation, post-deployment monitoring, retraining.
It covers the entire life span of the system, from conception to retirement, defining activities, objectives, and outcomes for each phase. It serves as an operational reference for organisations that want to formalise their AI development cycle.
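A minimal sketch of such a formalised cycle, assuming simplified stage names: they paraphrase the AI-specific activities mentioned above (data management, training, validation, monitoring, retraining) and are not the standard's formal process names.

```python
from enum import Enum

class Stage(Enum):
    CONCEPTION = "conception"
    DATA_MANAGEMENT = "data management"
    TRAINING = "training"
    VALIDATION = "validation"
    DEPLOYMENT = "deployment"
    MONITORING = "post-deployment monitoring"
    RETIREMENT = "retirement"

# Allowed transitions, including the loop back from monitoring to
# data management that models retraining.
TRANSITIONS = {
    Stage.CONCEPTION: {Stage.DATA_MANAGEMENT},
    Stage.DATA_MANAGEMENT: {Stage.TRAINING},
    Stage.TRAINING: {Stage.VALIDATION},
    Stage.VALIDATION: {Stage.DEPLOYMENT, Stage.DATA_MANAGEMENT},  # rework
    Stage.DEPLOYMENT: {Stage.MONITORING},
    Stage.MONITORING: {Stage.DATA_MANAGEMENT, Stage.RETIREMENT},  # retrain
    Stage.RETIREMENT: set(),
}

def can_move(current: Stage, target: Stage) -> bool:
    """Check whether a transition is permitted in this sketch."""
    return target in TRANSITIONS[current]
```

Encoding the transitions explicitly makes the AI-specific difference from a classic linear software life cycle visible: monitoring can feed back into data management instead of terminating the cycle.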
Cross-reference with the AI Act
- Article 11 (Technical documentation) — the life cycle processes of ISO/IEC 5338 support the production of the technical documentation required
- Article 12 (Record-keeping) — the life cycle monitoring envisaged by the standard feeds the logs required for high-risk systems
- Article 72 (Post-market monitoring) — the standard's post-deployment monitoring phase supports the post-market monitoring required by the regulation
Official document
- ISO/IEC 5338:2023 — official ISO page — paid access
ISO/IEC 38507:2022
Full title: Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations
Issued by: ISO/IEC
Publication: April 2022
Status: in force, first edition
Type: governance guidance on AI for top management
Summary
Guidelines addressed to the governance bodies of organisations (board of directors, senior management) on the implications of using AI. It provides a framework for informed decision-making on the adoption of AI systems, including aspects of responsibility, ethics, transparency, risk management, and regulatory compliance.
It differs from other standards in the field because it addresses neither developers nor risk managers but top management: it provides the conceptual tools to frame strategic decisions about introducing AI into the organisation.
Cross-reference with the AI Act
- Recital 27 (AI governance culture) and Article 4 (AI literacy) — ISO/IEC 38507 provides the high-level framing useful for structuring corporate policies consistent with these principles
- Complementary to ISO/IEC 42001 on the governance side: where 42001 establishes the management system, 38507 addresses top-level strategic decisions
Official document
- ISO/IEC 38507:2022 — official ISO page — paid access
AI Act harmonised standards
The AI Act provides for the adoption of specific harmonised standards (Articles 40-41), compliance with which confers a presumption of conformity with the regulation's requirements for high-risk systems. The European Commission has tasked CEN-CENELEC (through Joint Technical Committee JTC 21) with developing the family of harmonised standards, largely based on the adoption or adaptation of the ISO/IEC standards listed above.
The development of harmonised standards is ongoing: this page will be updated with definitive references as documents are published.