The liability regime for AI systems

by Behrang Raji

The European Commission presented a proposal for a “Directive on AI Liability” (COM(2022) 496 final) (hereinafter “AI Liability Directive”) in September 2022. The proposed directive clarifies liability issues in connection with AI systems. It complements the Product Liability Directive, which is also being revised, and the sanctions regime of the draft Artificial Intelligence Act (AIA).1

I. Sanction rules for AI systems

The sanctions provisions of the AIA do not contain any specifications for claims for damages. The AIA primarily aims to ensure compliance and security and to protect fundamental rights. Its provisions seek to prevent damage (e.g. discrimination) from occurring in the first place, for example through specifications for the training phase (quality criteria for training data, validation procedures). They reduce the risks, but can never eliminate them completely. The risks may materialise in individual cases and lead to damage. In such cases, the competent supervisory authorities can impose fines. However, the AIA does not contain liability provisions that enable injured parties to assert claims for damages.

Even the GDPR, which grants data subjects a claim for damages under Art. 82 GDPR, cannot close this gap. The reason is, firstly, that the data protection claim for damages only applies to AI systems that process personal data and, secondly, that the claim is always directed against the controller or processor (Art. 82(1) GDPR). Other relevant actors in the value chain, such as the producer, are not covered by the GDPR.

The revised Product Liability Directive and the AI Liability Directive constitute a complementary package, with each directive covering a different type of liability. The Product Liability Directive regulates the strict liability of the manufacturer for defective products. The AI Liability Directive, on the other hand, covers residual liability claims based on the fault of a natural or legal person.

With regard to the necessity of these new provisions, the explanatory memorandum refers to the so-called “black box” effect of AI systems. Under current law, the opacity of these systems makes it considerably more difficult for injured parties to prove the wrongful act and its causal link to the damage. The new AI Liability Directive aims to remedy these gaps.

II. The key provisions at a glance

  1. Scope of application, Art. 1 AI Liability Directive

According to Art. 1 AI Liability Directive, the Directive applies to non-contractual, fault-based civil law claims for compensation for damage caused by an AI system. It therefore covers damage caused intentionally or by a negligent act or omission. This gives considerable new weight to the provisions of the AIA, which provides for an extensive catalogue of duties for high-risk systems.

  2. Associational right of action, Art. 2 AI Liability Directive

Of particular practical importance is the associations’ power to sue under Art. 2 No. 6 lit. c) AI Liability Directive. On that basis, consumer associations, for example, could play a special role in enforcing the law.

  3. Easing the burden of proof

Due to the opacity of AI systems, the Directive eases the burden of proof for injured parties.

a. Disclosure of evidence, Art. 3 AI Liability Directive

To the extent necessary to bring a claim for damages, Art. 3 AI Liability Directive allows a court to order the disclosure of relevant evidence concerning certain high-risk AI systems. Blanket requests for evidence are not permitted, and disclosure must be limited to what is necessary. This is intended to reconcile the conflicting interests of the parties, as disclosure may also reveal trade secrets. For this reason, the defendant is also to be given a legal remedy against the court’s order (Art. 3(4) AI Liability Directive). If a defendant does not comply with a disclosure order, the presumption rule of Art. 3(5) AI Liability Directive applies: the court may presume that the defendant has breached its relevant duty of care. The defendant can, however, rebut this presumption by disclosing the evidence. In this respect, Art. 3(5) AI Liability Directive promotes compliance with disclosure orders and thus ensures access to evidence. This is a central component of the new rules.

b. Presumption of causality in case of fault, Art. 4 AI Liability Directive

Another very important provision is contained in Art. 4 AI Liability Directive. In many constellations, proof of fault can be comparatively easy: it can be relatively simple to prove, for example, that documentation obligations have not been complied with. By contrast, it can be very difficult to prove that the culpable breach of duty caused the damage. The Directive therefore provides for a rebuttable presumption of causality, which applies where the requirements of Art. 4(1)(a)–(c) AI Liability Directive are met cumulatively.

In principle, the plaintiff must prove fault. However, fault can also be presumed if an order to disclose evidence (see a. above) is not complied with (Art. 4(1)(a) AI Liability Directive).

Art. 4(2) and (3) AI Liability Directive distinguish between claims against providers of a high-risk system and claims against users of such systems. With regard to the provisions concerning training data for AI systems (Title III, Chapter 2, namely Art. 8–15 AIA), the provider must be able to prove compliance with the legal requirements. This will present providers with considerable difficulties, since it is still unclear how the vague requirements regarding the quality criteria for training data (Art. 10(3) AIA) are to be interpreted in concrete terms.

According to Art. 4(5) AI Liability Directive, the presumption of causality applies not only to high-risk AI systems, but to all AI systems. For AI systems that are not high-risk, however, it applies only where the national court considers it excessively difficult for the claimant to prove the causal link.

However, the presumption of causality is rebuttable (Art. 4(7) AI Liability Directive).

  4. Outlook

It remains to be seen whether these additional provisions on civil liability, which make perfect sense from the perspective of injured parties, will leave actors in the AI value chain sufficient incentive to invest in new AI technologies. They certainly create incentives to comply with the many obligations. The new provisions are to be welcomed in principle and strengthen consumer rights. At the same time, however, they exacerbate weaknesses of the AIA: many obligations and requirements remain vague and in need of improvement. In this respect, the Commission’s goal of creating a regulatory framework for a strong internal market in which investment in AI technologies is promoted could be set back by the additional liability provisions.

In view of the new provisions, compliance is increasingly turning into a kind of privilege that only some enterprises can afford. The vast majority of the provisions thus threaten to promote precisely what they seek to prevent: increasing monopolisation by large tech companies.

Behrang Raji, Legal Counsel Data Protection, Eppendorf SE (Germany).

1 In another article (Behrang Raji, The Artificial Intelligence Act (AIA) – a brief overview, DPOblog.eu, May 10, 2021), I pointed out that the draft Artificial Intelligence Act (AIA) is part of the European Data Strategy of 2020.