EU AI Act – Recitals Page 09

EU AI Act: RECITALS 81-89

(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility for persons with disabilities, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering the cross-border exchange of data for AI development, including on data access infrastructure and the semantic and technical interoperability of different types of data.

(82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation, and thus are not required to comply with the requirements set out for high-risk AI systems, are nevertheless safe when placed on the market or put into service. To contribute to this objective, Directive 2001/95/EC of the European Parliament and of the Council would apply as a safety net.

(83) In order to ensure trustful and constructive cooperation of competent authorities at Union and national level, all parties involved in the application of this Regulation should aim for transparency and openness, while respecting the confidentiality of information and data obtained in carrying out their tasks, by putting in place technical and organisational measures to protect the security and confidentiality of the information obtained in carrying out their activities, including for intellectual property rights and public and national security interests. Where the activities of the Commission, national competent authorities and notified bodies pursuant to this Regulation result in a breach of intellectual property rights, Member States should provide for adequate measures and remedies to ensure the enforcement of intellectual property rights in application of Directive 2004/48/EC.

(84) Compliance with this Regulation should be enforceable by means of the imposition of fines by the national supervisory authority when carrying out proceedings under the procedure laid down in this Regulation. Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down. When assessing the amount of the fines, national competent authorities should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the provider’s size, in particular if the provider is an SME or a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. The penalties and litigation costs under this Regulation should not be subject to contractual clauses or any other arrangements.

(84a) As the rights and freedoms of natural and legal persons and groups of natural persons can be seriously undermined by AI systems, it is essential that natural and legal persons or groups of natural persons have meaningful access to reporting and redress mechanisms and be entitled to proportionate and effective remedies. They should be able to report infringements of this Regulation to their national supervisory authority and have the right to lodge a complaint against the providers or deployers of AI systems. Where applicable, deployers should provide internal complaints mechanisms to be used by natural and legal persons or groups of natural persons. Without prejudice to any other administrative or non-judicial remedy, natural and legal persons and groups of natural persons should also have the right to an effective judicial remedy with regard to a legally binding decision of a national supervisory authority concerning them or, where the national supervisory authority does not handle a complaint, does not inform the complainant of the progress or preliminary outcome of the complaint lodged or does not comply with its obligation to reach a final decision, with regard to the complaint.

(84b) Affected persons should always be informed that they are subject to the use of a high-risk AI system when deployers use a high-risk AI system to assist in decision-making or make decisions related to natural persons. This information can provide a basis for affected persons to exercise their right to an explanation under this Regulation. When deployers provide an explanation to affected persons under this Regulation, they should take into account the level of expertise and knowledge of the average consumer or individual.

(84c) Union law on the protection of whistleblowers (Directive (EU) 2019/1937) applies in full to academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators acquiring information on breaches of Union law by a provider of an AI system or by its AI system.

(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annexes VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. These consultations should involve the participation of a balanced selection of stakeholders, including consumer organisations, civil society, associations representing affected persons, business representatives from different sectors and of different sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.

(85a) Given the rapid technological developments and the technical expertise required in conducting the assessment of high-risk AI systems, the Commission should regularly review the implementation of this Regulation, in particular the prohibited AI systems, the transparency obligations and the list of high-risk areas and use cases, at least every year, while consulting the AI Office and the relevant stakeholders.

(86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council.

(87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.

(87a) As reliable information on the resource and energy use, waste production and other environmental impact of AI systems and related ICT technology, including software, hardware and in particular data centres, is limited, the Commission should introduce an adequate methodology to measure the environmental impact and effectiveness of this Regulation in light of the Union environmental and climate objectives.

(88) This Regulation should apply from … [OP – please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date; therefore, the provisions on notified bodies and governance structure should apply from … [OP – please insert the date – three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore, the provisions on penalties should apply from … [OP – please insert the date – twelve months following the entry into force of this Regulation].

(89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on 18 June 2021.