EU AI Act – Title I

HAVE ADOPTED THIS REGULATION:

TITLE I
GENERAL PROVISIONS

Article 1
Subject matter

The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law, and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation.

 

This Regulation lays down:

(a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

(b) prohibitions of certain artificial intelligence practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for certain AI systems;

(e) rules on market monitoring, market surveillance governance and enforcement;

(ea) measures to support innovation, with a particular focus on SMEs and start-ups, including on the setting up of regulatory sandboxes and targeted measures to reduce the regulatory burden on SMEs and start-ups;

(eb) rules for the establishment and functioning of the Union’s Artificial Intelligence Office (AI Office).

 

Article 2
Scope

1. This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union;

(ca) providers placing on the market or putting into service AI systems referred to in Article 5 outside the Union where the provider or distributor of such systems is located within the Union;

(cb) importers and distributors of AI systems as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union;

(cc) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union.

2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, and that fall within the scope of harmonisation legislation listed in Annex II, Section B, only Article 84 of this Regulation shall apply;

(a) <deleted>

(b) <deleted>

(c) <deleted>

(d) <deleted>

(e) <deleted>

(f) <deleted>

(g) <deleted>

(h) <deleted>

3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes.

4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States and are the subject of a decision of the Commission adopted in accordance with Article 36 of Directive (EU) 2016/680 or Article 45 of Regulation (EU) 2016/679 (adequacy decision), or are part of an international agreement concluded between the Union and that third country or international organisation pursuant to Article 218 TFEU providing adequate safeguards with respect to the protection of privacy and fundamental rights and freedoms of individuals.

5. This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV of Directive 2000/31/EC of the European Parliament and of the Council [as to be replaced by the corresponding provisions of the Digital Services Act].

5a. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, without prejudice to the arrangements provided for in Article 10(5) and Article 54 of this Regulation.

5b. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

5c. This Regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.

5d. This Regulation shall not apply to research, testing and development activities regarding an AI system prior to that system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and applicable Union law. Testing in real world conditions shall not be covered by this exemption. The Commission is empowered to adopt delegated acts in accordance with Article 73 clarifying the application of this paragraph in order to specify this exemption and to prevent its existing and potential abuse. The AI Office shall provide guidance on the governance of research and development pursuant to Article 56, also aiming to coordinate its application by the national supervisory authorities.

5e. This Regulation shall not apply to AI components provided under free and open-source licences except to the extent that they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV. This exemption shall not apply to foundation models as defined in Article 3.

 

Article 3
Definitions

For the purpose of this Regulation, the following definitions apply:

(1) ‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments;

(1a) ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;

(1b) ‘significant risk’ means a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence and duration of its effects, and its ability to affect an individual, a plurality of persons or a particular group of persons;

(1c) ‘foundation model’ means an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks;

(1d) ‘general purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed;

(1e) ‘large training runs’ means the production process of a powerful AI model that requires computing resources above a very high threshold;

(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;

(3) <deleted>

(4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;

(5) ‘authorised representative’ means any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;

(6) ‘importer’ means any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union;

(7) ‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties;

(8) ‘operator’ means the provider, the deployer, the authorised representative, the importer and the distributor;

(8a) ‘affected person’ means any natural person or group of persons who are subject to or otherwise affected by an AI system;

(9) ‘placing on the market’ means the first making available of an AI system on the Union market;

(10) ‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;

(11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use on the Union market for its intended purpose;

(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;

(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose as indicated in instructions for use established by the provider, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;

(14) ‘safety component of a product or system’ means, in line with Union harmonisation law listed in Annex II, a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons;

(15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use, as well as information on any precautions to be taken, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used;

(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system that has been made available to deployers;

(17) ‘withdrawal of an AI system’ means any measure aimed at preventing the distribution, display and offer of an AI system;

(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;

(19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;

(20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;

(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;

(22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation;

(23) ‘substantial modification’ means a modification or a series of modifications of the AI system after its placing on the market or putting into service which is not foreseen or planned in the initial risk assessment by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected or results in a modification to the intended purpose for which the AI system has been assessed;

(24) ‘CE marking of conformity’ (CE marking) means a physical or digital marking by which a provider indicates that an AI system or a product with an embedded AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;

(25) ‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;

(26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;

(27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012;

(28) ‘common specifications’ means a document, other than a standard, containing technical solutions providing a means to comply with certain requirements and obligations established under this Regulation;

(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters;

(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset is a separate dataset or part of the training dataset, either as a fixed or variable split;

(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;

(32) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;

(33) ‘biometric data’ means biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679;

(33a) ‘biometric-based data’ means data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person;

(33b) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, and psychological human features for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification);

(33c) ‘biometric verification’ means the automated verification of the identity of natural persons by comparing biometric data of an individual to previously provided biometric data (one-to-one verification, including authentication);

(33d) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679;

(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind or intentions of individuals or groups on the basis of their biometric and biometric-based data;

(35) ‘biometric categorisation’ means assigning natural persons to specific categories, or inferring their characteristics and attributes, on the basis of their biometric or biometric-based data, or on the basis of data which can be inferred from such data;

(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the deployer of the AI system whether the person will be present and can be identified, excluding verification systems;

(37) ‘real-time remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited delays in order to avoid circumvention;

(38) ‘post remote biometric identification system’ means a remote biometric identification system other than a real-time remote biometric identification system;

(39) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to the public, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions;

(40) ‘law enforcement authority’ means:

(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

(41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

(42) ‘national supervisory authority’ means a public authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State in the management board of the AI Office;

(43) ‘national competent authority’ means any of the national authorities which are responsible for the enforcement of this Regulation;

(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following:

(a) the death of a person or serious damage to a person’s health,

(b) a serious disruption of the management and operation of critical infrastructure,

(ba) a breach of fundamental rights protected under Union law,

(bb) serious damage to property or the environment;

(44a) ‘personal data’ means personal data as defined in Article 4, point (1) of Regulation (EU) 2016/679;

(44b) ‘non-personal data’ means data other than personal data;

(44c) ‘profiling’ means any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679 or, in the case of law enforcement authorities, in point (4) of Article 3 of Directive (EU) 2016/680 or, in the case of Union institutions, bodies, offices or agencies, in point (5) of Article 3 of Regulation (EU) 2018/1725;

(44d) "deep fake" means manipulated or synthetic audio, image or video content that would falsely appear to be authentic or truthful, and which features depictions of persons appearing to say or do things they did not say or do, produced using AI techniques, including machine learning and deep learning;

(44e) ‘widespread infringement’ means any act or omission contrary to Union law that protects the interests of individuals:

(a) which has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State, in which:

(i) the act or omission originated or took place;

(ii) the provider concerned, or, where applicable, its authorised representative is established; or,

(iii) the deployer is established, when the infringement is committed by the deployer;

(b) acts or omissions contrary to Union law that protects the interests of individuals, which have caused, cause or are likely to cause harm to the collective interests of individuals and which have common features, including the same unlawful practice and the same interest being infringed, and that are occurring concurrently, committed by the same operator, in at least three Member States;

(44f) ‘widespread infringement with a Union dimension’ means a widespread infringement that has harmed or is likely to harm the collective interests of individuals in at least two-thirds of the Member States, accounting, together, for at least two-thirds of the population of the Union;

(44g) ‘regulatory sandbox’ means a controlled environment established by a public authority that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan under regulatory supervision;

(44h) ‘critical infrastructure’ means an asset, a facility, equipment, a network or a system, or a part of an asset, a facility, equipment, a network or a system, which is necessary for the provision of an essential service within the meaning of Article 2(4) of Directive (EU) 2022/2557;

(44k) ‘social scoring’ means evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics;

(44l) ‘social behaviour’ means the way a natural person interacts with and influences other natural persons or society;

(44m) ‘state of the art’ means the developed stage of technical capability at a given time as regards products, processes and services, based on the relevant consolidated findings of science, technology and experience;

(44n) ‘testing in real world conditions’ means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment.
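
Illustrative note (non-normative, added by the editor): the Regulation defines ‘risk’ in point (1a) qualitatively, as a combination of probability and severity, and prescribes no formula. A common quantitative reading, shown here in LaTeX purely for illustration, treats risk as a function of the two quantities, for instance their product:

    % Illustrative reading of point (1a); not part of the Regulation.
    \text{risk} = f\bigl(P(\text{harm}),\, S(\text{harm})\bigr),
    \qquad \text{e.g.}\quad \text{risk} = P(\text{harm}) \times S(\text{harm})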
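
Illustrative note (non-normative, added by the editor): the dataset roles defined in points (29) to (32) correspond to a standard machine-learning workflow. The following Python sketch, which assumes numpy and scikit-learn are available and uses hypothetical variable names, shows one common arrangement with a fixed validation split:

    # Illustrative sketch; not part of the Regulation.
    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))     # input data, point (32)
    y = (X[:, 0] > 0).astype(int)

    # Hold out testing data first: an independent evaluation set used to
    # confirm expected performance before placing on the market, point (31).
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Split the remainder into training data, point (29), used to fit the
    # learnable parameters, and a fixed validation split, point (30), used
    # to tune non-learnable parameters and detect under- or overfitting.
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=0.25, random_state=0)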
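
Illustrative note (non-normative, added by the editor): the distinction between one-to-many identification, point (33b), and one-to-one verification, point (33c), can be sketched in Python as follows; the templates, names and threshold are hypothetical toy values, not real biometric data:

    # Illustrative sketch; not part of the Regulation.
    import numpy as np

    enrolled = {                          # stored templates, one per person
        "alice": np.array([0.1, 0.9]),
        "bob":   np.array([0.8, 0.2]),
    }

    def identify(probe):
        # Biometric identification, point (33b): compare the probe against
        # every template in the database (one-to-many) and return the
        # closest match.
        return min(enrolled, key=lambda name: np.linalg.norm(enrolled[name] - probe))

    def verify(probe, claimed_identity, threshold=0.5):
        # Biometric verification, point (33c): compare the probe against the
        # single template previously provided for the claimed identity
        # (one-to-one, including authentication).
        return np.linalg.norm(enrolled[claimed_identity] - probe) < threshold

    probe = np.array([0.15, 0.85])
    print(identify(probe))                # -> alice
    print(verify(probe, "alice"))         # -> True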

 

Article 4
Amendments to Annex I

<deleted>

 

Article 4a
General principles applicable to all AI systems

1. All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles, establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded:

a) ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that functions in a way that can be appropriately controlled and overseen by humans;

b) ‘technical robustness and safety’ means that AI systems shall be developed and used in a way that minimises unintended and unexpected harm, is robust in the case of unintended problems, and is resilient against attempts to alter the use or performance of the AI system so as to allow unlawful use by malicious third parties;

c) ‘privacy and data governance’ means that AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity;

d) ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights;

e) ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law;

f) ‘social and environmental well-being’ means that AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way that benefits all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy.

2. Paragraph 1 is without prejudice to obligations set up by existing Union and national law. For high-risk AI systems, the general principles are translated into and complied with by providers or deployers by means of the requirements set out in Articles 8 to 15, and the relevant obligations laid down in Chapter 3 of Title III of this Regulation. For foundation models, the general principles are translated into and complied with by providers by means of the requirements set out in Articles 28 to 28b. For all AI systems, the application of the principles referred to in paragraph 1 can be achieved, as applicable, through the provisions of Article 28, Article 52, or the application of harmonised standards, technical specifications, and codes of conduct as referred to in Article 69, without creating new obligations under this Regulation.

3. The Commission and the AI Office shall incorporate these guiding principles in standardisation requests as well as in recommendations consisting of technical guidance to assist providers and deployers on how to develop and use AI systems. European Standardisation Organisations shall take the general principles referred to in paragraph 1 of this Article into account as outcome-based objectives when developing the appropriate harmonised standards for high-risk AI systems as referred to in Article 40(2b).

 

Article 4b
AI literacy

1. When implementing this Regulation, the Union and the Member States shall promote measures for the development of a sufficient level of AI literacy, across sectors and taking into account the different needs of the groups of providers, deployers and affected persons concerned, including through education and training, skilling and reskilling programmes, and while ensuring a proper gender and age balance, in view of allowing democratic control of AI systems.

2. Providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.

3. Such literacy measures shall consist, in particular, of the teaching of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits.

4. A sufficient level of AI literacy is one that contributes, as necessary, to the ability of providers and deployers to ensure compliance with and enforcement of this Regulation.