Challenges of AI regulation in the EU trilogue: Searching for a balanced solution

//by Behrang Raji // With the new AI regulation, the European Union has decided to create a globally pioneering legal framework for AI technologies that aims to safeguard fundamental values. On June 14th, the European Parliament adopted its position, clearing the way for the final trilogue negotiations, in which the Parliament and the Council will agree on the final text. The first round of negotiation talks took place on July 18th. The following discussion focuses exclusively on two particularly noteworthy aspects of the completed and ongoing negotiations: the proposed amendments regarding the high-risk category in the AI regulation (1.) and the handling of generative foundation models such as ChatGPT (2.).

Changes in the categorization of high-risk systems

During the negotiations on July 18th, the key point of discussion was the categorization of certain AI systems as high-risk systems. According to the Parliament’s position, the decisive question should be whether a system listed in Annex III of the regulation actually poses a significant risk to the health, safety, or fundamental rights of natural persons, with this assessment carried out by the provider itself.

This approach is problematic because providers would determine the risk classification themselves, potentially turning the entire category into a form of veiled self-regulation. The Council, on the other hand, proposes that the provider must demonstrate that an AI system listed in Annex III is not high-risk because its output is purely accessory to the specific decision or action in question and therefore does not pose a significant risk.

Both approaches rely on vague additional criteria that are difficult to apply and could lead to evasion and a dilution of this central category as a whole. In particular, they overlook the anchoring effect of such systems when used as assistants in decision-making processes: machine outputs can heavily influence human decisions and make it difficult for the people involved to override them. In sensitive areas such as public administration, a provider could then argue that the system itself does not create high risks and that any discrimination results from inadequate organizational measures in how the system is used.

The negotiations are heading in the wrong direction if the legislator leaves the categorization of high-risk systems to the addressees of the regulation.

New rules for generative AI systems and foundation models

These legislative developments demonstrate the difficulty of regulating rapidly evolving technologies. With the hype around generative AI systems like ChatGPT, the legislator realized that such systems had not been adequately covered by the original proposal.

Foundation models are defined in Art. 3(1)(c) of the AIA as

“an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”.

Generative AI systems like ChatGPT are now addressed in Art. 28b(4) AIA, which provides that

“Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (‘generative AI’) and providers who specialise a foundation model into a generative AI system, shall in addition” (…).

Foundation models represent the latest stage of development: AI models trained on a wide range of data to fulfill a broad spectrum of tasks, including tasks they were not specifically designed for. For instance, a large language model can be embedded in a graphic design tool to provide alternative suggestions for compelling slogans, as sketched below. Moreover, foundation models can be reused in countless downstream AI systems, making them a crucial building block for many subsequent applications.
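To make this downstream-reuse pattern concrete, the following is a minimal, purely illustrative sketch in Python. It assumes the open-source Hugging Face transformers library and a small publicly available pretrained model ("gpt2"); the model choice, the function name suggest_slogans, and the prompt are hypothetical examples and not a reference to any specific product discussed above.

```python
# Illustrative sketch: reusing a pretrained (foundation-style) language model
# in a downstream application, here a toy "slogan suggestion" helper.
# Assumes the open-source Hugging Face transformers library and the small
# publicly available "gpt2" model; both are illustrative choices only.
from transformers import pipeline

# Load a pretrained text-generation model that was not built for this task.
generator = pipeline("text-generation", model="gpt2")

def suggest_slogans(product: str, n: int = 3) -> list[str]:
    """Generate n alternative slogan suggestions for a product description."""
    prompt = f"A catchy slogan for {product}:"
    outputs = generator(
        prompt,
        max_new_tokens=12,
        num_return_sequences=n,
        do_sample=True,
    )
    # Strip the prompt from each generated continuation.
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

if __name__ == "__main__":
    for slogan in suggest_slogans("a reusable coffee cup"):
        print(slogan)
```

The point of the sketch is that the downstream developer does not train anything: a general-purpose pretrained model is simply repurposed for a narrow task, which is exactly the reuse scenario the new obligations for foundation-model providers are meant to capture.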

The new legal requirements for providers of foundation models and generative AI systems, such as ChatGPT, are found in the newly introduced Art. 28b AIA. The list of obligations goes beyond transparency requirements and includes additional measures related to data and data governance. In addition, such systems must be registered in an EU database.

Although it is consistent to bring these new developments within the scope of the regulation, the question arises whether the rules need to be accompanied by specific provisions on regulatory sandboxes. If the legislator does not improve and elaborate the sandbox provisions, the new requirements for foundation models and generative AI systems could effectively make open-source development impossible. Open-source development thrives on adaptability, which stands in stark contrast to the risk-management measures set out in the new Art. 28b AIA.

Therefore, the legislator must create or expand regulations that allow providers to test and develop new AI technologies in a supervised environment. Deciding on the modalities and conditions for sandboxes much later through implementing acts (see Art. 53(6) AIA) misses the opportunity to establish a more balanced regulation.

Behrang Raji, Legal Counsel Data Protection, Eppendorf SE (Germany).