EU AI Act: RECITALS 31-40

(31) The classification of an AI system as high-risk pursuant to this Regulation should not mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation law that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48, where a third-party conformity assessment is provided for medium-risk and high-risk products.

(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, and that are listed in one of the areas and use cases in Annex III, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons and, where the AI system is used as a safety component of a critical infrastructure, to the environment. Such significant risk of harm should be identified by assessing, on the one hand, the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration taken together and, on the other hand, whether the risk can affect an individual, a plurality of persons or a particular group of persons. Such a combination could for instance result in a high severity but low probability of affecting a natural person, or a high probability of affecting a group of persons with a low intensity over a long period of time, depending on the context. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.

(32a) Providers whose AI systems fall under one of the areas and use cases listed in Annex III and who consider that their system does not pose a significant risk of harm to the health, safety, fundamental rights or the environment should inform the national supervisory authorities by submitting a reasoned notification. This could take the form of a one-page summary of the relevant information on the AI system in question, including its intended purpose and why it would not pose a significant risk of harm to the health, safety, fundamental rights or the environment. The Commission should specify criteria to enable companies to assess whether their system would pose such risks, as well as develop an easy-to-use and standardised template for the notification. Providers should submit the notification as early as possible and in any case prior to the placing of the AI system on the market or its putting into service, ideally at the development stage, and they should be free to place it on the market at any given time after the notification. However, if the authority considers that the AI system in question has been misclassified, it should object to the notification within a period of three months. The objection should be substantiated and duly explain why the AI system has been misclassified. The provider should retain the right to appeal by providing further arguments. If after the three months there has been no objection to the notification, national supervisory authorities could still intervene if the AI system presents a risk at national level, as for any other AI system on the market. National supervisory authorities should submit annual reports to the AI Office detailing the notifications received and the decisions taken.

(33)

(33a) As biometric data constitute a special category of sensitive personal data in accordance with Regulation (EU) 2016/679, it is appropriate to classify as high-risk several critical use cases of biometric and biometrics-based systems. AI systems intended to be used for biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those which are prohibited under this Regulation, should therefore be classified as high-risk. This should not include AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises (one-to-one verification). Biometric and biometrics-based systems which are provided for under Union law to enable cybersecurity and personal data protection measures should not be considered as posing a significant risk of harm to the health, safety and fundamental rights.

(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of the supply of water, gas, heating, electricity and critical digital infrastructure, since their failure or malfunctioning may infringe the security and integrity of such critical infrastructure or put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the health and safety of persons and property. Failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to the health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of such safety components may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres.

(35) Deployment of AI systems in education is important in order to help modernise entire education systems, to increase educational quality, both offline and online, and to accelerate digital education, thus also making it available to a broader audience. AI systems used in education or vocational training, notably for determining access to, or materially influencing decisions on, admission or the assignment of persons to educational and vocational training institutions, for evaluating persons on tests as part of or as a precondition for their education, for assessing the appropriate level of education for an individual and materially influencing the level of education and training that individuals will receive or be able to access, or for monitoring and detecting prohibited behaviour of students during tests, should be classified as high-risk AI systems, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against, and may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.

(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions or materially influencing decisions on initiation, promotion and termination, for personalised task allocation based on individual behaviour, personal traits or biometric data, or for the monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the future career prospects and livelihoods of those persons and workers’ rights. Relevant work-related contractual relationships should meaningfully involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine the essence of their fundamental rights to data protection and privacy. This Regulation applies without prejudice to Union and Member State competences to provide for more specific rules for the use of AI systems in the employment context.

(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services, including healthcare services, and essential services, including but not limited to housing, electricity, heating/cooling and internet, and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity and telecommunication services. AI systems used for this purpose may lead to discrimination against persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, gender, disabilities, age or sexual orientation, or create new forms of discriminatory impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under this Regulation. Natural persons applying for or receiving public assistance benefits and services from public authorities, including healthcare services and essential services, including but not limited to housing, electricity, heating/cooling and internet, are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Similarly, AI systems intended to be used to make decisions or materially influence decisions on the eligibility of natural persons for health and life insurance may also have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as by limiting access to healthcare or by perpetuating discrimination based on personal characteristics. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.

(37a) Given the role and responsibility of police and judicial authorities, and the impact of the decisions they take for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, some specific use cases of AI applications in law enforcement have to be classified as high-risk, in particular in instances where there is the potential to significantly affect the lives or the fundamental rights of individuals.

(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high-quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities, or by Union agencies, offices or bodies in support of law enforcement authorities, as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, for the evaluation of the reliability of evidence in criminal proceedings, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be classified as high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and judicial authorities should not become a factor of inequality, social fracture or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, notably the difficulty in obtaining meaningful information on the functioning of those tools and the consequent difficulty in challenging their results in court, in particular by individuals under investigation.

(39) AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee respect for the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by or on behalf of competent public authorities, or by Union agencies, offices or bodies charged with tasks in the fields of migration, asylum and border control management, as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities in the examination and assessment of the veracity of evidence in relation to applications for asylum, visa and residence permits and associated complaints with regard to the objective of establishing the eligibility of the natural persons applying for a status; for monitoring, surveilling or processing personal data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons; and for the forecasting or prediction of trends related to migration movements and border crossings. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by Directive 2013/32/EU of the European Parliament and of the Council49, Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, agencies or bodies as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used to infringe in any way on the principle of non-refoulement or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.

(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law and individual freedoms, as well as on the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or administrative body, or on their behalf, to assist judicial authorities or administrative bodies in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or used in a similar way in alternative dispute resolution. The use of artificial intelligence tools can support, but should not replace, the decision-making power of judges or judicial independence, as the final decision-making must remain a human-driven activity and decision. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as the anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.

(40a) In order to address the risks of undue external interference with the right to vote enshrined in Article 39 of the Charter, and of disproportionate effects on democratic processes, democracy and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum, or the voting behaviour of natural persons in the exercise of their vote in elections or referenda, should be classified as high-risk AI systems, with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view.

(40b) Considering the scale of natural persons using the services provided by social media platforms designated as very large online platforms, such online platforms can be used in a way that strongly influences safety online, the shaping of public opinion and discourse, elections and democratic processes, and societal concerns. It is therefore appropriate that AI systems used by those online platforms in their recommender systems are subject to this Regulation, so as to ensure that the AI systems comply with the requirements laid down under this Regulation, including the technical requirements on data governance, technical documentation and traceability, transparency, human oversight, accuracy and robustness. Compliance with this Regulation should enable such very large online platforms to comply with their broader risk assessment and risk-mitigation obligations in Articles 34 and 35 of Regulation (EU) 2022/2065. The obligations in this Regulation are without prejudice to Regulation (EU) 2022/2065 and should complement the obligations required under Regulation (EU) 2022/2065 when the social media platform has been designated as a very large online platform. Given the European-wide impact of social media platforms designated as very large online platforms, the authorities designated under Regulation (EU) 2022/2065 should act as enforcement authorities for the purposes of enforcing this provision.