What is AI? According to the EU AI Act

by Tea Mustac

With the 2nd of February set as the date from which the first provisions of the AI Act apply, it might be useful to take a look at one of the Act’s most essential provisions: namely, the definition of what constitutes an AI system and, consequently, what the AI Act generally applies to. This is essential because:

  1. The definition is much broader than what most people assume.

  2. Another provision becoming applicable on the 2nd of February is the requirement for AI literacy, which applies to all providers and all deployers of all AI systems. Not just the (high-)risky ones.

Therefore, ripping the band-aid off and finding out how many AI systems you already have in place might be a good idea. To do that, however, you’ll first need to understand what qualifies as an AI system and how to distinguish it from software that’s fancy, but not fancy enough to earn the AI label.

What Does the AI Act Say?

According to Article 3(1) of the AI Act, an AI system is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness, and that infers from its inputs how to generate outputs – such as predictions, content, recommendations, or decisions – that can influence its environment. Supposedly, these AI systems do this for either explicit or implicit objectives.

The problem? This definition suffers from multiple conceptual deficiencies, making its usefulness in practical settings rather limited. For starters, the definition was heavily inspired by the one proposed by the OECD, a political intergovernmental organisation whose focus is on fostering economic development without stepping on anyone’s toes. In other words, not exactly a bastion of clear legislative language.

As a result, the legal definition it inspired ended up being ridiculously vague, maybe even vaguer than its already vague role model. Why? Because while the OECD definition mandates that AI systems exhibit adaptiveness, the AI Act’s version merely states that they may exhibit adaptiveness. Meaning they don’t necessarily have to.1 This omission turns out to be one of the key missed opportunities for differentiating AI systems from traditional software systems.

The Role of Adaptiveness in Machine Learning

Adaptiveness is a fundamental characteristic of machine learning (ML) technologies, currently the main and most commonly used technologies for developing AI systems, precisely because of their adaptiveness to the data they are fed. This adaptiveness enables systems developed using ML approaches to recognize complex statistical patterns in the data and to capture relationships between data points that would otherwise be extremely difficult to notice.2
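To make that concrete, here is a minimal, hypothetical sketch (invented data, using scikit-learn) of what adaptiveness looks like in code: the decision logic is not written out by hand but learned from whatever examples the system is fed, so different data produces different behaviour from the very same program.

```python
# A minimal sketch of ML adaptiveness (hypothetical data, scikit-learn).
# The decision rule is not written by hand; it is inferred from examples,
# so retraining on different data changes the system's behaviour.
from sklearn.tree import DecisionTreeClassifier

# Invented features: [hours_of_use_per_day, error_rate] -> needs_service?
X = [[1, 0.01], [2, 0.02], [8, 0.20], [9, 0.25]]
y = [0, 0, 1, 1]  # 0 = fine, 1 = needs service

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[7, 0.18]]))  # decided by the learned pattern, not a hand-written rule

model.fit(X, [1, 1, 0, 0])         # same code, opposite labels
print(model.predict([[7, 0.18]]))  # now answers the opposite way
```

A hard-coded rule, by contrast, behaves identically no matter what data it has seen.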

The downside? These patterns are usually unrecognizable to our human eyes, which is why we tend to put blind trust in ML algorithms that meet certain quality standards – usually by demonstrating accuracy that surpasses human performance.3

Precisely because their complexity demands a certain level of blind trust, paired with their adaptiveness to changes in the environment and the increasing influence they have on our lives, such systems have been gaining ever more attention across society. Legislative bodies included. The failure to make adaptiveness one of the distinguishing legal criteria therefore seems a rather unfortunate omission, as simple, hard-coded software systems without the ability to adapt to their input data are both interpretable and explainable. As such, they don’t warrant more legal scrutiny than what is already covered by Article 22 of the GDPR, which governs automated decision-making affecting individuals.

An attempt at rectifying this oversight was made in Recital 12 of the AI Act, which suggests that AI systems should be distinguished from simpler, traditional software systems. However, there is a catch. In EU law, recitals cannot be used to contradict or narrow the text of the norms.4 So, even though the recital reads as if it narrowed down the (presumably) purposefully broad definition, it cannot actually be relied upon to do so.

(The Failed Attempt at) Drawing the Line

This brings us to the article’s main question: where do we draw the line between fancy albeit still traditional software and software that’s so fancy it deserves to be called AI?

Recital 12 tries (and fails) to establish a clear boundary: even if we ignore the legal limits on recitals narrowing the scope of the law, the text still provides no concrete examples or criteria for recognizing traditional software systems.

Setting aside the previously analysed adaptiveness, which was denied the role of a distinguishing criterion, none of the other characteristics of AI systems listed in the AI Act does much to clarify the distinction either. For instance, autonomy, here effectively a misnomer for the engineering term “automation”,5 is to be interpreted as any degree of independence from a human operator. A smart refrigerator automatically adjusting the optimal temperature based on external conditions, the amount of food inside, and so on? Check – it does the adjustments autonomously.
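How low does that bar sit? Consider this hypothetical fridge controller (all names and thresholds invented for illustration): plain if/else rules, no ML anywhere, and yet it adjusts the temperature with no human operator in the loop.

```python
# Hypothetical smart-fridge controller: hand-written rules, no ML anywhere,
# yet it adjusts the setpoint with no human operator involved, which is
# all the Act's "autonomy" criterion appears to require.
def target_temperature_c(ambient_c: float, load_fraction: float) -> float:
    """Pick a fridge setpoint from ambient temperature and how full it is."""
    setpoint = 4.0                # default setpoint in Celsius
    if ambient_c > 30.0:
        setpoint -= 1.0           # hot kitchen: cool harder
    if load_fraction > 0.8:
        setpoint -= 0.5           # packed fridge: compensate for thermal mass
    return max(setpoint, 1.0)     # never approach freezing

print(target_temperature_c(ambient_c=32.0, load_fraction=0.9))  # -> 2.5
```

If any degree of independence counts, even this kind of trivial automation ticks the autonomy box, so the criterion alone separates nothing.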

Next criterion: the ability to make inferences on how to generate outputs. In machine learning, the inference phase is when an ML model makes its own predictions or decisions, which also requires the model to have inferred how to do so in the first place. Meaning? Everything going beyond 2 plus 2 equals 4 fulfils this criterion. An email program that classifies messages as spam based on what spam emails have historically looked like? Check – it’s making inferences as to what spam will look like in the future based on historic data.
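Here is a minimal sketch of that spam example, with invented training emails and a standard scikit-learn naive Bayes classifier. The point is that the rule for what counts as spam is inferred from past examples rather than written out by hand.

```python
# Hypothetical spam filter: the rule for "what spam looks like" is inferred
# from historical examples, not hard-coded.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented historical emails and labels (1 = spam, 0 = not spam).
emails = [
    "win a free prize now", "cheap pills limited offer",
    "meeting moved to friday", "please review the attached draft",
]
labels = [1, 1, 0, 0]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

# The model generalizes to mail it has never seen.
print(spam_filter.predict(["free prize offer inside"]))    # likely [1]
print(spam_filter.predict(["agenda for friday meeting"]))  # likely [0]
```

Nothing in the code states what makes an email spam; the model inferred it, which is exactly the kind of behaviour the Act’s inference criterion captures.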

Finally, we come to the objectives and the influence on the environment. As most software systems are developed with specific objectives in mind, such as streamlining processes, improving user experience, or influencing user behaviour, we will not waste much time on these. It is nearly impossible to imagine developing something without at least an implicit objective in mind. And once we implement it in a process, a website, or a product, it is very hard to imagine it not having any influence on that process, website, or product. Unless, of course, we did a very bad job of developing our system in the first place. Therefore, working under the assumption that we are all humans who develop systems with certain objectives, these two criteria will again almost always be fulfilled.

So, pro tip: if your software goes beyond hard-coded, explicit instructions (à la a calculator), it’s probably an AI system under the AI Act. Save yourself the time, money, and headache of trying to argue otherwise.6

So, You Have an AI System—Now What?

Having an AI system in place is no reason to despair. Yes, the AI Act imposes some stringent requirements on AI systems, that is true. But, for the most part, it imposes these requirements on high-risk AI systems. Concluding that something is an AI system is therefore nothing more than a ticket into further assessment and classification rounds for that system. Well…slightly more than that. Enter Article 4 of the AI Act, mandating that providers and deployers of all AI systems – regardless of their type or risk category – work on ensuring sufficient levels of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.

Put bluntly, unless you’re still working with pen and paper, you have AI systems lurking somewhere. At least some, if not most, of these will be low risk, but you should nonetheless do your best to train your employees (or anyone with access to the systems, for that matter) on the capabilities and limitations of each particular system, while taking into account their prior knowledge and expertise.7 If you want to do this in line with the law, you should be ready to go – well…should have been ready to go – yesterday.

Tea Mustac, AI Governance and Privacy Specialist at Spirit Legal (Leipzig, Germany), Author of “AI Act Compact”.

1See Point I, OECD, Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449, 22 May 2019) https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 accessed 24 January 2025.

2For simplified explanations of ML technologies see R2D3, A Visual Introduction to Machine Learning, Part 1 (2015) http://www.r2d3.us/visual-intro-to-machine-learning-part-1/ accessed 24 January 2025.

3For simple explanations of the challenges with ML interpretability, see Christoph Molnar, Interpretable Machine Learning – A Guide for Making Black Box Models Explainable (2019) https://christophm.github.io/interpretable-ml-book/what-is-machine-learning.html accessed 24 January 2025.

4DW v FT (Case C-307/22, Judgment of 26 October 2023) para 44.

5Section 5.13, ISO/IEC 22989:2022 (Information technology – Artificial intelligence – Artificial intelligence concepts and terminology).

6See also, Tea Mustac and Peter Hense, AI Act Compact: Compliance, Management & Use Cases in Corporate Practice (1st edn, Fachmedien Recht und Wirtschaft 2024).

7For more on AI literacy in line with the EU AI Act, see Tea Mustać, ‘The AI Act Series: A Case for AI Inventories’ (AI Advances, 24 January 2025) https://medium.com/p/42f7221f7690 accessed 28 January 2025.