EU AI Act: RECITALS 51-60
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour or performance, or to compromise their security properties, by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or confidentiality attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure. High-risk AI systems should be accompanied by security solutions and patches for the lifetime of the product or, in the absence of dependence on a specific product, for a period to be stated by the manufacturer.
(52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’).
(53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.
(53a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the Union and the Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full, equal and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system.
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Providers that already have quality management systems in place based on standards such as ISO 9001 or other relevant standards should not be expected to set up a duplicative quality management system in full, but rather to adapt their existing systems to certain aspects linked to compliance with specific requirements of this Regulation. This should also be reflected in future standardisation activities or guidance adopted by the Commission in this respect. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
(55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation.
(56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.
(57) In line with New Legislative Framework principles, specific obligations for relevant economic operators, such as importers and distributors, should be set to ensure legal certainty and facilitate regulatory compliance by those relevant operators.
(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular use high-risk AI systems in accordance with the instructions for use, and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.
(58a) Whilst risks related to AI systems can result from the way such systems are designed, risks can also stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, owing to their more precise knowledge of the context of use and of the people or groups of people likely to be affected, including marginalised and vulnerable groups. Deployers should identify appropriate governance structures in that specific context of use, such as arrangements for human oversight, complaint-handling procedures and redress procedures, because choices in the governance structures can be instrumental in mitigating risks to fundamental rights in concrete use cases. In order to efficiently ensure that fundamental rights are protected, the deployer of high-risk AI systems should therefore carry out a fundamental rights impact assessment prior to putting the system into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help to mitigate the risks to fundamental rights identified, at the latest from the time of putting the system into use. If such a plan cannot be identified, the deployer should refrain from putting the system into use. When performing this impact assessment, the deployer should notify the national supervisory authority and, to the best extent possible, relevant stakeholders as well as representatives of groups of persons likely to be affected by the AI system, in order to collect relevant information which is deemed necessary to perform the impact assessment. Deployers are encouraged to make a summary of their fundamental rights impact assessment publicly available on their website. This obligation should not apply to SMEs which, given the lack of resources, might find it difficult to perform such consultation; nevertheless, they should also strive to involve such representatives when carrying out their fundamental rights impact assessment. In addition, given the potential impact and the need for democratic oversight and scrutiny, deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies, as well as deployers who are undertakings designated as gatekeepers under Regulation (EU) 2022/1925, should be required to register the use of any high-risk AI system in a public database. Other deployers may register voluntarily.
(59) It is appropriate to envisage that the deployer of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity.
(60) Within the AI value chain, multiple entities often supply tools and services but also components or processes that are then incorporated by the provider into the AI system, including in relation to data collection and pre-processing, model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. The entities involved may make their offering available commercially, directly or indirectly, through interfaces such as Application Programming Interfaces (APIs), and distribute it under free and open-source licences, but increasingly also via AI workforce platforms, the resale of trained parameters, DIY kits to build models or the offering of paid access to a model-serving architecture to develop and train models. In the light of this complexity of the AI value chain, all relevant third parties, in particular those that are involved in the development, sale and commercial supply of software tools, components, pre-trained models or data incorporated into the AI system, or providers of network services, should, without compromising their own intellectual property rights or trade secrets, make available the required information, training or expertise and cooperate, as appropriate, with providers to enable their control over all compliance-relevant aspects of the AI system that falls under this Regulation. To allow a cost-effective AI value chain governance, the level of control shall be explicitly disclosed by each third party that supplies the provider with a tool, service, component or process that is later incorporated by the provider into the AI system.
(60a) Where one party is in a stronger bargaining position, there is a risk that that party could leverage such a position to the detriment of the other contracting party when negotiating the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or the remedies for the breach or the termination of related obligations. Such contractual imbalances particularly harm micro, small and medium-sized enterprises as well as start-ups, unless they are owned or sub-contracted by an enterprise which is able to compensate the sub-contractor appropriately, as they are without a meaningful ability to negotiate the conditions of the contractual agreement and may have no other choice than to accept ‘take-it-or-leave-it’ contractual terms. Therefore, unfair contractual terms regulating the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or the remedies for the breach or the termination of related obligations, should not be binding on such micro, small or medium-sized enterprises and start-ups when they have been unilaterally imposed on them.
(60b) Rules on contractual terms should take into account the principle of contractual freedom as an essential concept in business-to-business relationships. Therefore, not all contractual terms should be subject to an unfairness test, but only those terms that are unilaterally imposed on micro, small and medium-sized enterprises and start-ups. This concerns ‘take-it-or-leave-it’ situations where one party supplies a certain contractual term and the micro, small or medium-sized enterprise or start-up cannot influence the content of that term despite an attempt to negotiate it. A contractual term that is simply provided by one party and accepted by the micro, small or medium-sized enterprise or start-up, or a term that is negotiated and subsequently agreed in an amended way between contracting parties, should not be considered as unilaterally imposed.
(60c) Furthermore, the rules on unfair contractual terms should only apply to those elements of a contract that are related to the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or to the remedies for the breach or the termination of related obligations. Other parts of the same contract, unrelated to those elements, should not be subject to the unfairness test laid down in this Regulation.
(60d) Criteria to identify unfair contractual terms should be applied only to excessive contractual terms, where a stronger bargaining position is abused. The vast majority of contractual terms that are commercially more favourable to one party than to the other, including those that are normal in business-to-business contracts, are a normal expression of the principle of contractual freedom and continue to apply. If a contractual term is not included in the list of terms that are always considered unfair, the general unfairness provision applies. In this regard, the terms listed as unfair terms should serve as a yardstick to interpret the general unfairness provision.
(60e) Foundation models are a recent development, in which AI models are developed from algorithms designed to optimise for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. A foundation model can be unimodal or multimodal, and trained through various methods such as supervised learning or reinforcement learning. AI systems with a specific intended purpose or general purpose AI systems can be an implementation of a foundation model, which means that each foundation model can be reused in countless downstream AI or general purpose AI systems. These models hold growing importance for many downstream applications and systems.
(60f) In the case of foundation models provided as a service, such as through API access, the cooperation with downstream providers should extend throughout the time during which that service is provided and supported, in order to enable appropriate risk mitigation, unless the provider of the foundation model transfers the trained model as well as extensive and appropriate information on the datasets and the development process of the system, or restricts the service, such as the API access, in such a way that the downstream provider is able to fully comply with this Regulation without further support from the original provider of the foundation model.
(60g) In light of the nature and complexity of the value chain for AI systems, it is essential to clarify the role of actors contributing to the development of AI systems. There is significant uncertainty as to the way foundation models will evolve, both in terms of typology of models and in terms of self-governance. It is therefore essential to clarify the legal situation of providers of foundation models. Given their complexity and unexpected impact, the downstream AI provider’s lack of control over the foundation model’s development and the consequent power imbalance, and in order to ensure a fair sharing of responsibilities along the AI value chain, such models should be subject to proportionate and more specific requirements and obligations under this Regulation, namely foundation models should assess and mitigate possible risks and harms through appropriate design, testing and analysis, should implement data governance measures, including assessment of biases, should comply with technical design requirements to ensure appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity, and should comply with environmental standards. These obligations should be accompanied by standards. Also, foundation models should be subject to information obligations, and all necessary technical documentation should be prepared for potential downstream providers to be able to comply with their obligations under this Regulation. Generative foundation models should ensure transparency about the fact that the content is generated by an AI system, not by humans. These specific requirements and obligations do not amount to considering foundation models as high-risk AI systems, but should guarantee that the objectives of this Regulation to ensure a high level of protection of fundamental rights, health and safety, the environment, democracy and the rule of law are achieved. Pre-trained models developed for a narrower, less general, more limited set of applications that cannot be adapted for a wide range of tasks, such as simple multi-purpose AI systems, should not be considered foundation models for the purposes of this Regulation, because of their greater interpretability, which makes their behaviour less unpredictable.
(60h) Given the nature of foundation models, expertise in conformity assessment is lacking and third-party auditing methods are still under development. The sector itself is therefore developing new ways to assess foundation models that fulfil in part the objective of auditing (such as model evaluation, red-teaming or machine learning verification and validation techniques). Those internal assessments for foundation models should be broadly applicable (e.g. independent of distribution channels, modality, development methods), should address risks specific to such models taking into account industry state-of-the-art practices, and should focus on developing sufficient technical understanding and control over the model, the management of reasonably foreseeable risks, and extensive analysis and testing of the model through appropriate measures, such as the involvement of independent evaluators. As foundation models are a new and fast-evolving development in the field of artificial intelligence, it is appropriate for the Commission and the AI Office to monitor and periodically assess the legislative and governance framework of such models, and in particular of generative AI systems based on such models, which raise significant questions related to the generation of content in breach of Union law, copyright rules, and potential misuse. It should be clarified that this Regulation should be without prejudice to Union law on copyright and related rights, including Directives 2001/29/EC, 2004/48/EC and (EU) 2019/790 of the European Parliament and of the Council.