For some time, the EU has been contemplating a new Directive, commonly referred to as the AI Liability Directive (the “AILD”), that would make it easier for consumers to bring claims for a range of alleged harms caused by AI systems.

The AILD was envisaged as a supplement to the EU AI Act and the revised EU Product Liability Directive (the “PLD”), both of which were adopted earlier this year. The PLD was adopted to ensure that, among other things, “software” is included within the scope of products subject to strict liability for defects causing certain types of harm.

The existence of the PLD has led many EU countries to question whether the AILD is needed at all. To answer this question, the European Parliamentary Research Service conducted and published an impact assessment of the AILD (the “AILD Impact Assessment”) on 19 September 2024.[1] The AILD Impact Assessment concludes that there is nonetheless a need for the AILD, as the PLD does not adequately cover the range of damages that can result from AI-specific risks.

Member states have continued to resist the AILD following the publication of the AILD Impact Assessment, arguing that the proposal could create unnecessary complexity and legal uncertainty. It is therefore unclear whether the AILD will continue to progress through the EU legislative process.

Nonetheless, businesses developing or deploying AI systems need to be aware of the evolving regulatory landscape, including how EU and national legislators are thinking about attributing liability for harm allegedly caused by AI systems. It has been reported that there is support among EU member state governments (following a Dutch government request) for organising technical workshops to better understand how national tort laws currently deal with AI.[2] We have therefore summarised below the background to, and key takeaways from, the AILD Impact Assessment that businesses should bear in mind as they undertake AI initiatives.

Background

The European Commission proposed the PLD and the AILD together in September 2022. The PLD is designed to allow individuals to claim compensation for harm caused by products without having to prove the manufacturer’s fault or negligence (so-called ‘strict liability’). The AILD is designed to supplement the PLD by adopting a mixed approach of strict and fault-based liability (the latter requiring the individual to prove the manufacturer’s fault or negligence) to enable individuals to claim compensation for harm caused by AI systems.

Both the PLD and the AILD complement the EU AI Act, which was adopted in March of this year.[3] Whereas the AI Act regulates AI systems directly, the AILD and the PLD are intended to incentivise compliance indirectly through liability frameworks. This two-pronged approach to AI governance aims to ensure both a robust regulatory environment for AI systems (via the AI Act) and sufficient and efficient redress for individuals who claim to have been harmed by such systems (via the AILD and PLD).

Key Takeaways from the AILD Impact Assessment

1. Encourage compliance with the obligations of the AI Act[4]

To encourage compliance with the obligations of the AI Act, the AILD simplifies the process for individuals who have suffered damage caused by AI systems. In particular, it mandates that where an individual suffers harm as a result of using an AI system that was placed on the EU market in breach of the obligations of the AI Act, the AI system will be presumed to have caused that damage. In the absence of such a presumption, the individual would have to prove that the AI system actually caused the harm.

In cases covered by the AILD, the burden of proving the causal link thus shifts from the individual claiming harm, where it usually rests, to the AI provider, including where the individual has demonstrated that the provider breached certain requirements for high-risk AI systems under the AI Act and that this breach was relevant to the harm caused by the AI system.[5] This procedural approach is intended to make it meaningfully easier to enforce the AI Act and to obtain compensation for affected individuals. Notably, the AILD Impact Assessment recommended that the AILD include ways to rebut this presumption.

2. Expand the scope of liability with a novel category of ‘high-impact’ AI systems[6]

The AILD Impact Assessment recommends introducing a new category of ‘high-impact’ AI systems to which liability under the AILD would attach. This new umbrella category of ‘high-impact’ AI systems would primarily encompass:

  • High-risk AI systems;
  • General-purpose AI systems[7];
  • Old Legislative Framework systems (autonomous vehicles, transportation-related AI applications more generally, and other AI systems falling under Annex I Section B of the AI Act, such as civil aviation security, marine equipment and rail systems); and
  • Insurance applications beyond health and life insurance.

By expanding the scope of the AILD from high-risk AI systems alone to these other use cases, the AILD would establish “a more comprehensive framework that captures a wider range of AI applications with significant risk potentials to individuals”,[8] while still incorporating the AI Act’s foundational concept of high-risk AI systems.

3. Fill the gaps of the PLD[9]

As outlined above, the PLD adopts a strict liability approach, which makes it ill-suited to certain types of harm, namely those where establishing a causal link between the AI system and the harm is more complicated. Because the AILD is not limited to strict liability but also includes fault-based elements, the AILD Impact Assessment proposes that the AILD cover the following areas of liability to complement the PLD’s comparatively narrow range of damages eligible for compensation:

  • Discrimination (instances where AI systems lead to discriminatory outcomes);
  • Personality and other fundamental rights (violations involving personal or fundamental rights);
  • Professionally used property (infringements related to property used in a professional context);
  • Pure economic loss (direct financial loss); and
  • Sustainability effects (environmental and climate impact).

4. Adopt the AILD as a Regulation rather than a Directive[10]

A key recommendation of the AILD Impact Assessment is to adopt the AILD in the form of a Software Liability Regulation. Choosing a regulation instead of a directive entails important procedural differences under EU law: regulations are directly applicable in member states, with no need to be transposed into national legislation, whereas directives can lead to diverging national transpositions. While noting that harmonisation via regulation may break with some member states’ existing legal traditions, the AILD Impact Assessment argues that a regulation would enhance clarity in the market and promote innovation by establishing consistent legal standards. Since the publication of the AILD Impact Assessment, however, the PLD has been adopted as a Directive, which would seem to make it less likely that any standalone AI liability initiative would be adopted as a regulation.

5. Introduce a framework for joint liability[11]

Noting the urgency of fairly sharing liability among all of the actors along the AI value chain, the AILD Impact Assessment recommends introducing into the AILD an “explicit framework for redress that allows for (partial) recovery based on several key principles”.[12] Specifically, it suggests incorporating a system of joint liability into the AILD to reduce reliance on varied national laws and to support a coherent legal environment. Such a system could incorporate one or more of the following policy options:

  • A presumption that each liable entity in the AI value chain contributing to the damage bears an identical share of the liability; and/or
  • A prohibition on contractual clauses that waive or significantly modify the right of recourse for downstream actors in the AI value chain, meaning that any agreement attempting to limit a downstream party’s ability to seek redress from an upstream actor in the AI value chain would be void.

Overall, companies actively using or deploying AI should watch this space. Although the AILD may not continue through the EU legislative process, it is clear either way that EU member states are examining how their existing product and tort liability frameworks handle fault-based claims for alleged harms arising from AI systems.


[1] Available at https://www.europarl.europa.eu/RegData/etudes/STUD/2024/762861/EPRS_STU(2024)762861_EN.pdf.

[2] See more information here.

[3] See our blog post ‘The AI Act has been published in the EU Official Journal’.

[4] See section 3.3.1 of the AILD Impact Assessment.

[5] Articles 9-15 of the AI Act, which set out the following requirements for high-risk AI systems: risk management systems; data and data governance; technical documentation; record-keeping; transparency and provision of information to deployers; human oversight; and accuracy, robustness and cybersecurity. See the full text of these provisions here: https://artificialintelligenceact.eu/section/3-2/.

[6] See section 3.2 of the AILD Impact Assessment.

[7] In addition to adding general-purpose AI systems to the new ‘high-impact’ classification, thereby ensuring that procedural elements such as the evidence disclosure requirement apply to generative AI systems, the AILD Impact Assessment also recommends that a breach of the AI Act’s safety rules for general-purpose AI systems should trigger a causal link between the violation and any concrete harmful output produced by the system. See section 3.4.2 of the AILD Impact Assessment.

[8] See section 3.2.1 of the AILD Impact Assessment.

[9] See section 3.5 of the AILD Impact Assessment.

[10] See section 5 of the AILD Impact Assessment.

[11] See section 4.2 of the AILD Impact Assessment.

[12] See section 4.2.2 of the AILD Impact Assessment.