EU AI Act: RECITALS 41-50

(41) The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful or unlawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law.

(41a) A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today, including but not limited to EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and national law. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as, for instance, the Medical Device Regulation in the healthcare sector).

(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for deployers and affected persons, certain mandatory requirements should apply, taking into account the intended purpose, the reasonably foreseeable misuse of the system and according to the risk management system to be established by the provider. These requirements should be objective-driven, fit for purpose, reasonable and effective, without adding undue regulatory burdens or costs on operators.

(43) Requirements should apply to high-risk AI systems as regards the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as well as the environment, democracy and the rule of law, as applicable in the light of the intended purpose or reasonably foreseeable misuse of the system, and no other less trade-restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.

(44) Access to data of high quality plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become a source of discrimination prohibited by Union law. High-quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, and where applicable, validation and testing data sets, including the labels, should be sufficiently relevant, representative, appropriately vetted for errors and as complete as possible in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the datasets that might lead to risks to fundamental rights or discriminatory outcomes for the persons affected by the high-risk AI system. Biases can for example be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real-world settings. Results provided by AI systems are influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable or ethnic groups, or racialised communities. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting or context within which the AI system is intended to be used.
In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally and following the application of all applicable conditions laid down under this Regulation and in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure negative bias detection and correction in relation to high-risk AI systems. Negative bias should be understood as bias that creates a direct or indirect discriminatory effect against a natural person. The requirements related to data governance can be complied with by having recourse to third parties that offer certified compliance services, including verification of data governance, data set integrity, and data training, validation and testing practices.

(45) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high-quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high-quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.

(45a) The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are essential when the processing of data involves significant risks to the fundamental rights of individuals. Providers and users of AI systems should implement state-of-the-art technical and organisational measures in order to protect those rights. Such measures should include not only anonymisation and encryption, but also the use of increasingly available technology that permits algorithms to be brought to the data and allows valuable insights to be derived without the transmission between parties or unnecessary copying of the raw or structured data themselves.

(46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date appropriately throughout the lifecycle of the AI system. AI systems can have an important environmental impact and high energy consumption during their lifecycle. In order to better apprehend the impact of AI systems on the environment, the technical documentation drafted by providers should include information on the energy consumption of the AI system, including the consumption during development and expected consumption during use. Such information should take into account the relevant Union and national legislation. This reported information should be comprehensible, comparable and verifiable, and to that end the Commission should develop guidelines on a harmonised methodology for calculation and reporting of this information. To ensure that a single documentation is possible, terms and definitions related to the required documentation and any required documentation in the relevant Union legislation should be aligned as much as possible.

(46a) AI systems should take into account state-of-the-art methods and relevant applicable standards to reduce the energy use, resource use and waste, as well as to increase their energy efficiency and the overall efficiency of the system. The environmental aspects of AI systems that are significant for the purposes of this Regulation are the energy consumption of the AI system in the development, training and deployment phase, as well as the recording, reporting and storing of this data. The design of AI systems should enable the measurement and logging of the consumption of energy and resources at each stage of development, training and deployment. The monitoring and reporting of the emissions of AI systems must be robust, transparent, consistent and accurate. In order to ensure the uniform application of this Regulation and a stable legal ecosystem for providers and deployers in the Single Market, the Commission should develop a common specification for the methodology to fulfil the reporting and documentation requirement on the consumption of energy and resources during development, training and deployment. Such common specifications on measurement methodology can develop a baseline upon which the Commission can better decide if future regulatory interventions are needed, upon conducting an impact assessment that takes into account existing law.

(46b) In order to achieve the objectives of this Regulation, and contribute to the Union’s environmental objectives while ensuring the smooth functioning of the internal market, it may be necessary to establish recommendations and guidelines and, eventually, targets for sustainability. For that purpose, the Commission is entitled to develop a methodology to contribute towards having Key Performance Indicators (KPIs) and a reference for the Sustainable Development Goals (SDGs). The goal should be, in the first instance, to enable fair comparison between AI implementation choices, providing incentives to promote the use of more efficient AI technologies that address energy and resource concerns. To meet this objective, this Regulation should provide the means to establish a baseline collection of data reported on the emissions from development and training and for deployment.

(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.

(47a) Such requirements on transparency and on the explicability of AI decision-making should also help to counter the deterrent effects of digital asymmetry and so-called ‘dark patterns’ targeting individuals and their informed consent.

(48) High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.

(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. Performance metrics and their expected level should be defined with the primary objective of mitigating risks and the negative impact of the AI system. The expected level of performance metrics should be communicated in a clear, transparent, easily understandable and intelligible way to the deployers. The declaration of performance metrics cannot be considered proof of future levels, but relevant methods need to be applied to ensure consistent levels during use. While standardisation organisations exist to establish standards, coordination on benchmarking is needed to establish how these standardised requirements and characteristics of AI systems should be measured. The European Artificial Intelligence Office should bring together national and international metrology and benchmarking authorities and provide non-binding guidance to address the technical aspects of how to measure the appropriate levels of performance and robustness.

(50) Technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. Users of the AI system should take steps to ensure that the possible trade-off between robustness and accuracy does not lead to discriminatory or negative outcomes for minority subgroups.