
EU AI Act: RECITALS 71-80

(71) Artificial intelligence is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation, is future-proof, and resilient to disruption, Member States should establish at least one artificial intelligence regulatory sandbox to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. It is indeed desirable that the establishment of regulatory sandboxes, which is currently left to the discretion of Member States, be made mandatory as a next step, with established criteria. That mandatory sandbox could also be established jointly with one or several other Member States, as long as that sandbox would cover the respective national level of the involved Member States. Additional sandboxes may also be established at different levels, including across Member States, in order to facilitate cross-border cooperation and synergies. With the exception of the mandatory sandbox at national level, Member States should also be able to establish virtual or hybrid sandboxes. All regulatory sandboxes should be able to accommodate both physical and virtual products. Establishing authorities should also ensure that the regulatory sandboxes have adequate financial and human resources for their functioning.

(72) The objectives of the regulatory sandboxes should be: for the establishing authorities, to increase their understanding of technical developments, improve supervisory methods and provide guidance to AI systems developers and providers to achieve regulatory compliance with this Regulation or, where relevant, other applicable Union and Member State legislation, as well as with the Charter of Fundamental Rights; for the prospective providers, to allow and facilitate the testing and development of innovative solutions related to AI systems in the pre-marketing phase to enhance legal certainty, to allow for more regulatory learning by establishing authorities in a controlled environment to develop better guidance and to identify possible future improvements of the legal framework through the ordinary legislative procedure. Any significant risks identified during the development and testing of such AI systems should result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. Member States should ensure that regulatory sandboxes are widely available throughout the Union, while participation should remain voluntary. It is especially important to ensure that SMEs and start-ups can easily access these sandboxes, are actively involved and participate in the development and testing of innovative AI systems, in order to be able to contribute with their know-how and experience.

(72a) This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, only under specified conditions, in line with Article 6(4) of Regulation (EU) 2016/679 and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Prospective providers in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety, health, the environment and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the prospective providers in the sandbox should be taken into account when competent authorities decide whether to temporarily or permanently suspend their participation in the sandbox or whether to impose an administrative fine under Article 83(2) of Regulation (EU) 2016/679 and Article 57 of Directive (EU) 2016/680.

(72b) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support and promote research and development of AI in support of socially and environmentally beneficial outcomes by allocating sufficient resources, including public and Union funding, and giving priority access to regulatory sandboxes to projects led by civil society. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as academics.

(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives targeted at those operators, including on AI literacy, awareness raising and information communication. Member States shall utilise existing channels and, where appropriate, establish new dedicated channels for communication with SMEs, start-ups, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation. Such existing channels could include but are not limited to ENISA’s Computer Security Incident Response Teams, National Data Protection Agencies, the AI-on-demand platform, the European Digital Innovation Hubs and other relevant instruments funded by EU programmes, as well as the Testing and Experimentation Facilities established by the Commission and the Member States at national or Union level. Where appropriate, these channels shall work together to create synergies and ensure homogeneity in their guidance to start-ups, SMEs and users. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. The Commission shall regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users, and shall work with Member States to lower such costs. For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Medium-sized enterprises which recently changed from the small to medium-size category within the meaning of the Annex to Recommendation 2003/361/EC (Article 16) shall have access to these initiatives and guidance for a period of time deemed appropriate by the Member States, as these new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding and compliance with provisions.

(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market, as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.

(75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746.

(76) In order to avoid fragmentation, to ensure the optimal functioning of the Single Market, to ensure effective and harmonised implementation of this Regulation, to achieve a high level of trustworthiness and of protection of health and safety, fundamental rights, the environment, democracy and the rule of law across the Union with regard to AI systems, to actively support national supervisory authorities, Union institutions, bodies, offices and agencies in matters pertaining to this Regulation, and to increase the uptake of artificial intelligence throughout the Union, a European Union Artificial Intelligence Office should be established. The AI Office should have legal personality, should act in full independence, should be responsible for a number of advisory and coordination tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, and should be adequately funded and staffed. Member States should provide the strategic direction and control of the AI Office through the management board of the AI Office, alongside the Commission, the EDPS, the FRA, and ENISA. An executive director should be responsible for managing the activities of the secretariat of the AI Office and for representing the AI Office. Stakeholders should formally participate in the work of the AI Office through an advisory forum that should ensure varied and balanced stakeholder representation and should advise the AI Office on matters pertaining to this Regulation. In case the establishment of the AI Office proves not to be sufficient to ensure a fully consistent application of this Regulation at Union level as well as efficient cross-border enforcement measures, the creation of an AI agency should be considered.

(77) Each Member State should designate a national supervisory authority for the purpose of supervising the application and implementation of this Regulation. That authority should also represent its Member State at the management board of the AI Office, in order to increase organisational efficiency on the side of Member States and to establish an official point of contact vis-à-vis the public and other counterparts at Member State and Union level. Each national supervisory authority should act with complete independence in performing its tasks and exercising its powers in accordance with this Regulation.

(77a) The national supervisory authorities should monitor the application of the provisions pursuant to this Regulation and contribute to its consistent application throughout the Union. For that purpose, the national supervisory authorities should cooperate with each other, with the relevant national competent authorities, the Commission, and with the AI Office.

(77b) The members and the staff of each national supervisory authority should, in accordance with Union or national law, be subject to a duty of professional secrecy both during and after their term of office, with regard to any confidential information which has come to their knowledge in the course of the performance of their tasks or exercise of their powers. During their term of office, that duty of professional secrecy should in particular apply to trade secrets and to reporting by natural persons of infringements of this Regulation.

(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process, or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ or evolve after being placed on the market or put into service can be addressed more efficiently and in a timely manner. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches of national and Union law, including those protecting fundamental rights and consumer rights, resulting from the use of their AI systems, and to take appropriate corrective actions. Deployers should also report to the relevant authorities any serious incidents or breaches of national and Union law resulting from the use of their AI system when they become aware of such serious incidents or breaches.

(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. For the purpose of this Regulation, national supervisory authorities should act as market surveillance authorities for AI systems covered by this Regulation, except for AI systems covered by Annex II of this Regulation. For AI systems covered by legal acts listed in Annex II, the competent authorities under those legal acts should remain the lead authority. National supervisory authorities and competent authorities in the legal acts listed in Annex II should work together whenever necessary. When appropriate, the competent authorities in the legal acts listed in Annex II should send competent staff to the national supervisory authority in order to assist in the performance of its tasks. For the purpose of this Regulation, national supervisory authorities should have the same powers and obligations as market surveillance authorities under Regulation (EU) 2019/1020. Where necessary for their mandate, national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. After having exhausted all other reasonable ways to assess or verify the conformity and upon a reasoned request, the national supervisory authority should be granted access to the training, validation and testing datasets, and to the trained and training model of the high-risk AI system, including its relevant model parameters and their execution/run environment.
In cases of simpler software systems falling under this Regulation that are not based on trained models, and where all other ways to verify conformity have been exhausted, the national supervisory authority may exceptionally have access to the source code, upon a reasoned request. Where the national supervisory authority has been granted access to the training, validation and testing datasets in accordance with this Regulation, such access should be achieved through appropriate technical means and tools, including on-site access and, in exceptional circumstances, remote access. The national supervisory authority should treat any information obtained, including source code, software, and data as applicable, as confidential information and respect relevant Union law on the protection of intellectual property and trade secrets. The national supervisory authority should delete any information obtained upon the completion of the investigation.

(80) Union law on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services law, the competent authorities responsible for the supervision and enforcement of the financial services law, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council, it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems, to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.

(80a) Given the objectives of this Regulation, namely to ensure an equivalent level of protection of health, safety and fundamental rights of natural persons, to ensure the protection of the rule of law and democracy, and taking into account that the mitigation of the risks of AI system against such rights may not be sufficiently achieved at national level or may be subject to diverging interpretation which could ultimately lead to an uneven level of protection of natural persons and create market fragmentation, the national supervisory authorities should be empowered to conduct joint investigations or rely on the union safeguard procedure provided for in this Regulation for effective enforcement. Joint investigations should be initiated where the national supervisory authority have sufficient reasons to believe that an infringement of this Regulation amount to a widespread infringement or a widespread infringement with a Union dimension, or where the AI system or foundation model presents a risk which affects or is likely to affect at least 45 million individuals in more than one Member State.