
TITLE VIII
POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE

 

CHAPTER 1
POST-MARKET MONITORING

 

Article 61
Post-market monitoring by providers and post-market
monitoring plan for high-risk AI systems

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by deployers or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems and their environment, including other devices and software, taking into account the rules applicable in areas such as data protection, intellectual property rights and competition law.
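
Purely by way of illustration of what "actively and systematically collect, document and analyse" might look like in practice, a provider's monitoring log entry could be modelled as in the following Python sketch. The Regulation prescribes no data format; every field name and the review threshold here are hypothetical choices, not requirements of the legal text.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MonitoringRecord:
    """One entry in a provider's post-market monitoring log.
    Illustrative only; the Regulation prescribes no data format."""
    system_id: str          # provider's own identifier for the high-risk AI system
    collected_at: datetime  # when the data point was collected
    source: str             # e.g. "deployer_report", "system_logs", "user_complaint"
    metric: str             # performance indicator being tracked
    value: float            # observed value of that indicator
    notes: str = ""         # free-text context, e.g. interaction with other systems

def flag_for_review(record: MonitoringRecord, threshold: float) -> bool:
    """Flag a data point whose metric falls below an internally chosen
    acceptance threshold, prompting re-evaluation of continuous compliance
    with Title III, Chapter 2. The threshold is the provider's own choice."""
    return record.value < threshold
```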

3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by [twelve months after the date of entry into force of this Regulation].

4. For high-risk AI systems covered by the legal acts referred to in Annex II, where a post-market monitoring system and plan are already established under that legislation, the elements described in paragraphs 1, 2 and 3 shall be integrated into that system and plan as appropriate.

The first subparagraph shall also apply to high-risk AI systems referred to in point 5(b) of Annex III placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU.

 

CHAPTER 2
SHARING OF INFORMATION ON INCIDENTS AND MALFUNCTIONING

 

Article 62
Reporting of serious incidents

1. Providers and, where deployers have identified a serious incident, deployers of high-risk AI systems placed on the Union market shall report any serious incident of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the national supervisory authority of the Member States where that incident or breach occurred. Such notification shall be made without undue delay after the provider or, where applicable, the deployer has established a causal link between the AI system and the incident or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider or, where applicable, the deployer becomes aware of the serious incident.
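
The timing rule in paragraph 1 combines a relative trigger ("without undue delay" once a causal link or its reasonable likelihood is established) with an absolute backstop (72 hours from awareness). A minimal sketch of the backstop arithmetic, for illustration only:

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # hard ceiling set by Article 62(1)

def latest_reporting_time(aware_at: datetime) -> datetime:
    """Latest permissible notification time once the provider (or, where
    applicable, the deployer) becomes aware of a serious incident.
    Notification must still be made 'without undue delay'; 72 hours is
    only the outer limit, not a grace period."""
    return aware_at + REPORTING_WINDOW

# Example: awareness at 09:00 UTC on 1 March gives a deadline of
# 09:00 UTC on 4 March.
print(latest_reporting_time(datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc)))
```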

1 a. Upon establishing a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, providers shall take appropriate corrective actions pursuant to Article 21.

2. Upon receiving a notification related to a breach of obligations under Union law intended to protect fundamental rights, the national supervisory authority shall inform the national public authorities or bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. That guidance shall be issued by [the entry into force of this Regulation] and shall be assessed regularly.

2 a. The national supervisory authority shall take appropriate measures within 7 days from the date on which it received the notification referred to in paragraph 1. Where the infringement takes place or is likely to take place in other Member States, the national supervisory authority shall notify the AI Office and the relevant national supervisory authorities of those Member States.

3. For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation, the notification of serious incidents constituting a breach of fundamental rights under Union law shall be transferred to the national supervisory authority.

3 a. National supervisory authorities shall on an annual basis notify the AI Office of the serious incidents reported to them in accordance with this Article.

 

CHAPTER 3
ENFORCEMENT

 

Article 63
Market surveillance and control of AI systems in the Union market

1. Regulation (EU) 2019/1020 shall apply to AI systems and foundation models covered by this Regulation. However, for the purpose of the effective enforcement of this Regulation:

(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Title III, Chapter 3 of this Regulation;

(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation;

(b a) the national supervisory authorities shall act as market surveillance authorities under this Regulation and have the same powers and obligations as market surveillance authorities under Regulation (EU) 2019/1020.

2. The national supervisory authority shall report annually to the Commission and the AI Office on the outcomes of relevant market surveillance activities. The national supervisory authority shall report, without delay, to the Commission and to relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules.

3. For high-risk AI systems related to products to which the legal acts listed in Annex II, section A, apply, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts.

3 a. For the purpose of ensuring the effective enforcement of this Regulation, national supervisory authorities may:

(a) carry out unannounced on-site and remote inspections of high-risk AI systems;

(b) acquire samples related to high-risk AI systems, including through remote inspections, to reverse-engineer the AI systems and to acquire evidence to identify non-compliance.

4. For AI systems placed on the market, put into service or used by financial institutions regulated by Union legislation on financial services, the market surveillance authority for the purposes of this Regulation shall be the relevant authority responsible for the financial supervision of those institutions under that legislation.

5. For AI systems that are used for law enforcement purposes, Member States shall designate as market surveillance authorities for the purposes of this Regulation the competent data protection supervisory authorities under Directive (EU) 2016/680.

6. Where Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority.

7. National supervisory authorities designated under this Regulation shall coordinate with other relevant national authorities or bodies which supervise the application of Union harmonisation law listed in Annex II or other Union law that might be relevant for the high-risk AI systems referred to in Annex III.

 

Article 64
Access to data and documentation

1. In the context of its activities, and upon its reasoned request, the national supervisory authority shall be granted full access to the training, validation and testing datasets used by the provider or, where relevant, the deployer, that are relevant and strictly necessary for the purpose of its request, through appropriate technical means and tools.

2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2, after all other reasonable ways to verify conformity, including paragraph 1, have been exhausted and have proven to be insufficient, and upon a reasoned request, the national supervisory authority shall be granted access to the training and trained models of the AI system, including its relevant model parameters. All information obtained shall, in line with Article 70, be treated as confidential, shall be subject to existing Union law on the protection of intellectual property and trade secrets, and shall be deleted upon the completion of the investigation for which the information was requested.

2 a. Paragraphs 1 and 2 are without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020.

3. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation when access to that documentation is necessary for the fulfilment of the competences under their mandate within the limits of their jurisdiction. The relevant public authority or body shall inform the national supervisory authority of the Member State concerned of any such request.

4. By three months after the entry into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. National supervisory authorities shall notify the list to the Commission, the AI Office and all other national supervisory authorities and shall keep the list up to date.

The Commission shall publish on a dedicated website the list of all the competent authorities designated by the Member States in accordance with this Article.

5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the national supervisory authority to organise testing of the high-risk AI system through technical means. The national supervisory authority shall organise the testing with the close involvement of the requesting public authority or body within a reasonable time following the request.

6. Any information and documentation obtained by the national public authorities or bodies referred to in paragraph 3 pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.

 

Article 65
Procedure for dealing with AI systems presenting a risk at national level

1. An AI system presenting a risk shall be understood as an AI system having the potential to adversely affect the health and safety or the fundamental rights of persons in general, including in the workplace, the protection of consumers, the environment, public security, democracy, the rule of law or other public interests protected by the applicable Union harmonisation law, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the system, including the duration of use and, where applicable, its putting into service, installation and maintenance requirements.

2. Where the national supervisory authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to fundamental rights are present, the national supervisory authority shall also immediately inform and fully cooperate with the relevant national public authorities or bodies referred to in Article 64(3). Where there is sufficient reason to consider that an AI system exploits the vulnerabilities of vulnerable groups or violates their rights, intentionally or unintentionally, the national supervisory authority shall have the duty to investigate the design goals, data inputs, model selection, implementation and outcomes of the AI system. The relevant operators shall cooperate as necessary with the national supervisory authority and with the other national public authorities or bodies referred to in Article 64(3).

Where, in the course of that evaluation, the national supervisory authority or, where relevant, the national public authority referred to in Article 64(3) finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe, and in any event no later than fifteen working days or as provided for in the relevant Union harmonisation law, as applicable.

The national supervisory authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph.

3. Where the national supervisory authority considers that non-compliance is not restricted to its national territory, it shall inform the Commission, the AI Office and the national supervisory authorities of the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take.

4. The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the market throughout the Union.

5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the national supervisory authority shall take all appropriate provisional measures to prohibit or restrict the AI system being made available on its national market or put into service, to withdraw the AI system from that market or to recall it. That authority shall immediately inform the Commission, the AI Office and the national supervisory authorities of the other Member States of those measures.

6. The information referred to in paragraph 5 shall include all available details, in particular the data necessary for the identification of the non-compliant AI system, the origin of the AI system and the supply chain, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the national supervisory authority shall indicate whether the non-compliance is due to one or more of the following:

(a) a failure of the high-risk AI system to meet requirements set out in this Regulation;

(b) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity;

(b a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;

(b b) non-compliance with provisions set out in Article 52.

7. The national supervisory authorities of the Member States other than the national supervisory authority of the Member State initiating the procedure shall without delay inform the Commission, the AI Office and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.

8. Where, within three months of receipt of the information referred to in paragraph 5, no objection has been raised by either a national supervisory authority of a Member State or the Commission in respect of a provisional measure taken by a national supervisory authority of another Member State, that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020. The period referred to in the first sentence of this paragraph shall be reduced to thirty days in the event of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5.
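
Paragraph 8 thus sets two different objection windows depending on the nature of the non-compliance. A hypothetical sketch of that rule, for illustration only (the Regulation counts the general period in calendar months; it is approximated as 90 days here purely for simplicity):

```python
from datetime import date, timedelta

def objection_deadline(notified_on: date, article_5_breach: bool) -> date:
    """Last day on which an objection to a provisional national measure can
    be raised under Article 65(8): three months in the general case, thirty
    days where the non-compliance concerns a prohibited practice under
    Article 5. 'Three months' is approximated as 90 days for illustration."""
    window = timedelta(days=30) if article_5_breach else timedelta(days=90)
    return notified_on + window

# If no objection is raised by the deadline, the measure is deemed justified.
print(objection_deadline(date(2024, 3, 1), article_5_breach=True))   # 2024-03-31
print(objection_deadline(date(2024, 3, 1), article_5_breach=False))  # 2024-05-30
```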

9. The national supervisory authorities of all Member States shall ensure that appropriate restrictive measures are taken in respect of the AI system concerned, such as withdrawal of the AI system from their market, without delay.

9 a. National supervisory authorities shall annually report to the AI Office about the use of prohibited practices that occurred during that year and about the measures taken to eliminate or mitigate the risks in accordance with this Article.

 

Article 66
Union safeguard procedure

1. Where, within three months of receipt of the notification referred to in Article 65(5), or 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, objections are raised by the national supervisory authority of a Member State against a measure taken by another national supervisory authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the national supervisory authority of the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within three months, or 60 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, starting from the notification referred to in Article 65(5) and notify such decision to the national supervisory authority of the Member State concerned. The Commission shall also inform all other national supervisory authorities of such decision.

2. If the national measure is considered justified, all national supervisory authorities designated under this Regulation shall take the measures necessary to ensure that the non-compliant AI system is withdrawn from their market without delay, and shall inform the Commission and the AI Office accordingly. If the national measure is considered unjustified, the national supervisory authority of the Member State concerned shall withdraw the measure.

3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.

 

Article 66 a
Joint investigations

Where a national supervisory authority has reasons to suspect that an infringement of this Regulation by a provider or a deployer of a high-risk AI system or foundation model amounts to a widespread infringement with a Union dimension, or affects or is likely to affect at least 45 million individuals, in more than one Member State, that national supervisory authority shall inform the AI Office and may request the national supervisory authorities of the Member States where such infringement took place to start a joint investigation. The AI Office shall provide central coordination to the joint investigation. Investigation powers shall remain within the competence of the national supervisory authorities.

 

Article 67
Compliant AI systems which present a risk

1. Where, having performed an evaluation under Article 65, in full cooperation with the relevant national public authority referred to in Article 64(3), the national supervisory authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a serious risk to the health or safety of persons, to compliance with obligations under Union or national law intended to protect fundamental rights, to the environment, to democracy and the rule of law, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk.

2. The provider or other relevant operators shall ensure that corrective action is taken in respect of all the AI systems concerned that they have made available on the market throughout the Union within the timeline prescribed by the national supervisory authority of the Member State referred to in paragraph 1.

2 a. Where the provider or other relevant operators fail to take corrective action as referred to in paragraph 2 and the AI system continues to present a risk as referred to in paragraph 1, the national supervisory authority may require the relevant operator to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk.

3. The national supervisory authority shall immediately inform the Commission, the AI Office and the other national supervisory authorities. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.

4. The Commission, in consultation with the AI Office, shall without delay enter into consultation with the national supervisory authorities concerned and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the AI Office shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.

5. The Commission, in consultation with the AI Office, shall immediately communicate its decision to the national supervisory authorities of the Member States concerned and to the relevant operators. It shall also inform all other national supervisory authorities of the decision.

5 a. The Commission shall adopt guidelines to help national competent authorities to identify and rectify, where necessary, similar problems arising in other AI systems.

 

Article 68
Formal non-compliance

1. Where the national supervisory authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned:

(a) the CE marking has been affixed in violation of Article 49;

(b) the CE marking has not been affixed;

(c) the EU declaration of conformity has not been drawn up;

(d) the EU declaration of conformity has not been drawn up correctly;

(e) the identification number of the notified body, which is involved in the conformity assessment procedure, where applicable, has not been affixed;

(e a) the technical documentation is not available;

(e b) the registration in the EU database has not been carried out;

(e c) where applicable, the authorised representative has not been appointed.

2. Where the non-compliance referred to in paragraph 1 persists, the national supervisory authority of the Member State concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market without delay. The national supervisory authority of the Member State concerned shall immediately inform the AI Office of the non-compliance and the measures taken.

 

CHAPTER 3 a
REMEDIES

 

Article 68 a
Right to lodge a complaint with a national supervisory authority

1. Without prejudice to any other administrative or judicial remedy, every natural person or group of natural persons shall have the right to lodge a complaint with a national supervisory authority, in particular in the Member State of their habitual residence, place of work or place of the alleged infringement, if they consider that an AI system relating to them infringes this Regulation.

2. The national supervisory authority with which the complaint has been lodged shall inform the complainant on the progress and the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 78.

 

Article 68 b
Right to an effective judicial remedy against a national supervisory authority

1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.

2. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy where the national supervisory authority which is competent pursuant to Article 59 does not handle a complaint or does not inform the data subject within three months on the progress or outcome of the complaint lodged pursuant to Article 68a.

3. Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.

4. Where proceedings are brought against a decision of a national supervisory authority which was preceded by an opinion or a decision of the Commission in the Union safeguard procedure, the supervisory authority shall forward that opinion or decision to the court.

 

Article 68 c
A right to explanation of individual decision-making

1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety, fundamental rights, socio-economic well-being or any other of the rights deriving from the obligations laid down in this Regulation shall have the right to request from the deployer a clear and meaningful explanation, pursuant to Article 13(1), of the role of the AI system in the decision-making procedure, the main parameters of the decision taken and the related input data.

2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under paragraph 1 are provided for under Union or national law, in so far as such exceptions or restrictions respect the essence of fundamental rights and freedoms and are a necessary and proportionate measure in a democratic society.

3. This Article shall apply without prejudice to Articles 13, 14, 15 and 22 of Regulation (EU) 2016/679.

 

Article 68 d
Amendment to Directive (EU) 2020/1828

In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council, the following point is added: “(67a) Regulation xxxx/xxxx of the European Parliament and of the Council [laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (OJ L ...)]”.

 

Article 68 e
Reporting of breaches and protection of reporting persons

Directive (EU) 2019/1937 of the European Parliament and of the Council shall apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches.