The Artificial Intelligence Act (AIA) – a brief overview

By Behrang Raji. The European Commission published its draft regulation on AI on 21 April 2021. It is an extensive work, comprising 89 recitals, 85 articles and several annexes. Throughout, it becomes clear that the draft attempts a very difficult balancing act: to capture a rapidly developing technology in normative terms while at the same time not hindering innovation.

This article aims to highlight the challenges that the draft attempts to address by presenting the structure of the regulation. Whether the chosen approaches are actually capable of meeting those challenges will require further study.

I. Part of the European Data Strategy

On 19 February 2020, the Commission published its white paper “AI – A European Approach to Excellence and Trust”, which formulated legal policy requirements for a regulatory framework. Its goal is to address risks adequately without preventing innovation: “The present proposal aims to implement the second objective for the development of an ecosystem of trust by proposing a legal framework for trustworthy AI. The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to adopt AI-based solutions, while encouraging companies to develop them.”

The draft claims to meet the European Council’s demand to promote AI on the condition that a high level of data protection, digital rights and ethical standards is ensured.

Against this political context, the Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

  • ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

  • ensure legal certainty to facilitate investment and innovation in AI;

  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

The draft follows a “horizontal risk-based” regulatory approach. This means that more extensive legal requirements apply primarily to AI systems with a higher potential for harm, while general minimum requirements apply otherwise. The regulatory approach interlinks general and sector-specific regulation. The draft regulation must therefore be seen as one piece of a larger mosaic that will interweave with other planned regulatory projects (e.g. the Machinery Directive or the General Product Safety Directive). The AIA is thus part of the Commission’s overall digital strategy (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0066) to create legal clarity and develop an “ecosystem of trust in AI in Europe” (other regulatory projects in this context include the Data Governance Act and the Open Data Directive).

Overall, the Commission aims to create a harmonized legal framework on the basis of Art. 114 TFEU in order to strengthen the internal market. In contrast to the USA and China, the EU is attempting to create a regulatory framework at a relatively early stage. This is very welcome and reflects the political will to shape technological progress and the use of smart technology in our society in a value-oriented way.

II. Binding regulation as a choice of the legislative proposal

As the use of AI technologies can be diverse and is not limited to specific products and services, the Commission recognizes that the above objectives cannot be achieved by Member States alone: “Moreover, an emerging patchwork of potentially divergent national rules will hinder the seamless movement of products and services related to AI systems across the EU and will not be effective in ensuring the security and protection of fundamental rights and Union values in the different Member States. National approaches to address the problems will only create additional legal uncertainty and barriers and slow down the market uptake of AI.”

Although the Commission opts for a regulation that is directly applicable in all Member States (Art. 288 TFEU), it also emphasizes that the provisions are not “overly prescriptive” and leave room for Member States’ own rules, e.g. in the area of supervision.

III. Structure of the AIA

Overall, the draft regulation can be divided into a total of twelve “Titles”.

1. Scope of the regulation and definitions (Title I)

Title I defines the broad scope of the regulation, which concerns the placing on the market, putting into service and use of AI systems. Of particular interest is how the definitions deal with a rapidly developing field. According to Art. 3(1) AIA, an ‘artificial intelligence system’ (AI system) is software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. Annex I merely lists some technical approaches without further explanation, e.g. under lit. (a) machine learning, including the various learning approaches. The annexes, which the Commission can amend relatively unbureaucratically, are intended to give the regulation a certain flexibility.

2. Prohibited Artificial Intelligence Practices (Title II)

Title II – consisting of a single provision, Art. 5 AIA – establishes a list of prohibited AI practices. As in the GDPR, the Commission takes a risk-based approach. Unlike the GDPR, however, the Commission has actually created risk classes: a distinction is made between AI systems with an unacceptable risk, a high risk and a low or minimal risk. Title II covers those AI systems whose use is considered unacceptable.

What is striking is that the draft attempts to protect individual and, at the same time, overall societal interests. At the same time, the formulations in this title are so vague that circumventions are to be feared. To make matters worse, the title sounds harsher than the prohibitions actually are. The regulation provides for a critical opening clause in Art. 5(4) AIA, which allows Member States to permit real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes within the specified framework after all. According to Art. 63(5) AIA, supervisory responsibility for the AI systems listed in Annex III (1)(a), insofar as they are used for law enforcement purposes, may be transferred to the competent data protection supervisory authorities.

Overall, the core task is to evaluate certain purposes normatively, classifying them according to how critical they are for individual and overall societal interests.

3. High-Risk AI Systems (Title III)

Title III contains special provisions for AI systems that pose a high risk to the health and safety or fundamental rights of natural persons. Such AI systems will be subject to extensive ex ante obligations (Art. 6 to 51 AIA).

The classification of an AI system as high-risk is based not only on its functions but primarily on its intended purpose. Annex III specifically identifies some high-risk AI systems. Under Art. 7 AIA, the Commission may expand the list of high-risk AI systems used in certain predefined areas by applying a set of criteria and a risk assessment methodology. Interestingly, the examples include applicant management tools, AI systems that check, for example, entitlement to employment agency benefits, and AI systems used by credit bureaus such as Schufa to assess the creditworthiness of natural persons or determine their credit score.

Chapter 3 of Title III establishes a set of horizontal obligations for providers of high-risk AI systems and also imposes proportionate obligations on users and other participants in the AI value chain (e.g. importers, distributors, authorised representatives).

4. Transparency Obligations (Title IV)

Art. 52 AIA (Title IV) concerns certain AI systems in order to take into account the specific manipulation risks they pose. Transparency obligations will apply to systems that interact with humans, are used to recognize emotions or determine membership in (social) categories based on biometric data, or generate or manipulate content (“deep fakes”).

In this respect, individuals must be informed in advance that they are communicating with a chatbot. This is to enable them to make informed decisions.

5. Measures in Support of Innovation (Title V)

Title V (Art. 53 – 55 AIA) is intended to promote innovation. “The goal is to create a regulatory framework that is innovation-friendly, future-proof, and resilient to disruption. To this end, it encourages national competent authorities to establish regulatory sandboxes and sets out a basic framework in terms of governance, oversight and liability. AI regulatory sandboxes create a controlled environment in which innovative technologies can be tested for a limited time based on a test plan agreed with the relevant authorities. Title V also includes measures to reduce the regulatory burden on SMEs and startups.”

6. Governance and Implementation (Titles VI – VIII)

Titles VI, VII and VIII contain provisions on regulatory supervision and regulatory powers. It would be important to ensure that these powers are not weakened in the further legislative process but rather strengthened overall.

7. Codes of Conduct (Title IX)

Title IX (Art. 69 AIA) seeks to establish a framework for the creation of codes of conduct aimed at encouraging providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems (as outlined in Title III). Voluntary self-regulation has not really proven workable in the past. Whether this modified approach will prove sustainable, or whether it will merely become a kind of lobbying opening for industry associations, remains to be seen. The question is also why the possibility of voluntarily complying with stricter requirements needs to be codified at all.

8. Final Provisions (Titles X – XII)

Of the final provisions (Titles X, XI, XII), Title X should be noted here.

For companies, Title X (Art. 70 – 72 AIA) is particularly important. Among other things, it deals with ensuring the effective implementation of the regulation. Many formulations here resemble the GDPR: penalties, for example, are to be effective, proportionate and dissuasive. The range of fines extends further than under the GDPR but is likewise linked to turnover: depending on the violation, the ceiling runs from 2% up to 6% of total worldwide annual turnover in the preceding financial year, or from 10 million up to a maximum of 30 million euros, whichever is higher. Unfortunately, the Commission has not taken the opportunity here to clarify whether the antitrust-law concept of an undertaking applies.
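To illustrate the “whichever is higher” mechanism, the following is a minimal sketch in Python. The 2%/EUR 10 million and 6%/EUR 30 million endpoints follow the description above; the intermediate tier of 4%/EUR 20 million comes from Art. 71(4) of the draft; all function, variable and tier names are illustrative, not part of the regulation.

    def max_fine_eur(worldwide_annual_turnover_eur: float, tier: str) -> float:
        """Upper fine limit under the draft AIA (Art. 71): the higher of a
        fixed amount and a share of total worldwide annual turnover in the
        preceding financial year."""
        tiers = {
            # tier name              (fixed amount EUR, share of turnover)
            "prohibited_practice":   (30_000_000, 0.06),  # e.g. violations of Art. 5
            "other_noncompliance":   (20_000_000, 0.04),  # other requirements
            "incorrect_information": (10_000_000, 0.02),  # misleading information
        }
        fixed_amount, share = tiers[tier]
        # "whichever is higher": for large companies, the turnover-based cap
        # exceeds the fixed amount
        return max(fixed_amount, share * worldwide_annual_turnover_eur)

    # Example: a company with EUR 1 bn worldwide turnover faces a ceiling of
    # EUR 60 million for a prohibited practice (6% of 1 bn > EUR 30 million).
    print(max_fine_eur(1_000_000_000, "prohibited_practice"))  # 60000000.0

The turnover link means that, for large companies, the effective ceiling grows with worldwide turnover rather than stopping at the fixed amounts.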

IV. Outlook

Still, the process from the submitted draft to a regulation in force will take several years, and its final content is as yet uncertain. The legislative process provides for up to three readings involving the European Parliament and the Council. Irrespective of the fact that this process could take an estimated two years – judging by the pace of the ePrivacy Regulation – Art. 85(2) AIA provides for a further transitional period of 24 months until the regulation becomes applicable. However, the draft is clear about one thing: a European regulatory framework is being created that will even ban certain AI systems, and penalties for violations of the regulation can be even higher than those for violations of the GDPR. Finally, it should be noted that the draft is another building block in the Commission’s data strategy, and it is well worth following the legislative process closely and participating in the public discourse as far as possible.

Behrang Raji is a lawyer and currently works for the supervisory authority of Hamburg (Germany).