In September 2022, the European Commission published its proposal for a new product liability directive (“PLD”), and a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (“AILD”).

A directive on the approximation of laws concerning liability for defective products was first adopted in the EU in 1985. Under these rules, consumers in the EU can claim compensation for damage caused by defective products. The proposed revisions to the PLD are intended to clarify that the product liability regime extends to digital technologies, including AI-enabled goods and services, and that providers of digital services and software can be held liable under such regime.

The AILD is part of a package of complementary regulatory measures, including the proposed EU Artificial Intelligence Act, which are intended to regulate the safety of AI systems placed on the EU market and to harmonise the related rules so as to ensure legal certainty, with a view to facilitating investment and innovation in AI. The AILD seeks to introduce certain uniform requirements in respect of non-contractual fault-based civil law claims for damages caused by an AI system, including in relation to the disclosure of evidence relating to high-risk AI systems and the burden of proof in such cases.

The tables below summarise the key measures as proposed in the PLD and AILD. The consultation processes with respect to the PLD and AILD are currently ongoing. The next step is for the draft text of each proposal to be considered by the European Parliament and the Council of the European Union.

Product Liability Directive

Key Point: The PLD applies in respect of non-contractual claims for damage suffered by natural persons caused by defective products (including software and digital manufacturing files).

Additional Details: The PLD is a no-fault (strict) liability regime, which applies in respect of defective products that are made available or used in the course of a commercial activity within the EU. A product will be considered defective when, taking all circumstances into account, it does not provide the safety which the public at large is entitled to expect.

The revisions to the PLD seek to ensure that “manufacturers” of such defective products (or components thereof) can be held liable on a strict-liability, no-fault basis for certain damage (including death or personal injury, property damage and loss or corruption of data) caused by that product.

A “manufacturer” under the PLD will include any person who develops a product, who has a product designed or manufactured, or who markets that product under its name or trademark. The European Commission contemplates that this includes software providers and providers of digital services (where those services affect how a product works).

Even if a manufacturer is based outside the EU, other persons can be held liable for damage caused by the defective product, including authorised representatives; importers; any person offering at least two of certain services (warehousing, packaging, addressing or dispatch); and distributors.

What might this mean in practice? Products comprising software (other than free and open source software), or products integrated in or interconnected with digital services, are likely to fall within the broad definition of “product” under the PLD, irrespective of the mode of supply or usage (e.g. whether stored on a device or accessed via cloud technologies).

Relatedly, the PLD provides that manufacturers will remain liable for defectiveness that comes into being after a product has been placed on the market or put into service as a result of software or related services within their control, be it in the form of software upgrades or updates, or other product modifications. Such software or related services will be considered within the manufacturer’s control when supplied by that manufacturer or where that manufacturer authorises them or otherwise influences their supply by a third party. Manufacturers will also be liable for damage caused by a lack of software updates or upgrades, where those updates or upgrades are within their control and are necessary to maintain product safety.

It will therefore be important to consider the impact of the PLD reforms on the nature of current business operations within Europe and supply chain structures, including the contractual risk allocation underlying arrangements with parties in those supply chains.
Key Point: The PLD introduces claimant-friendly (rebuttable) presumptions in respect of product defectiveness and related damage.

Additional Details: In relation to relevant claims, the revisions to the PLD introduce three claimant-friendly rebuttable presumptions:

defectiveness of a product is presumed where: (i) the defendant has failed to comply with an obligation to disclose evidence relevant to the claim, (ii) the claimant establishes that such product does not comply with mandatory safety requirements set out in law, or (iii) the claimant establishes that the damage was caused by an obvious malfunction during normal use or under ordinary circumstances;

causality is presumed where it is established that a product is defective, and the damage caused is of a kind typically consistent with the defect in question; and

where a national court judges that the claimant faces excessive difficulty, due to technical or scientific complexity, in proving the defectiveness of the product and/or causality, defectiveness and/or causality will be presumed where the claimant has demonstrated that (i) the product contributed to the damage and (ii) the defectiveness of the product and/or causality are likely.

What might this mean in practice? The newly introduced presumptions are rebuttable, so their impact will depend on the relevant claim. In relation to software in particular, the European Commission has suggested that it can be difficult to prove defects due to the complexity of the product itself and, potentially, the number of entities involved in the delivery of an end product (where the software is just one component).
Key Point: Courts may order disclosure of evidence to support relevant claims.

Additional Details: The PLD requires that national courts be empowered, upon the request of a claimant who has presented facts and evidence sufficient to support the plausibility of its claim, to order the defendant against which the claim was brought (whether the manufacturer or another economic operator as described above) to disclose relevant evidence at its disposal. Such disclosure will be limited to what is necessary and proportionate to support the claim.

In determining whether such disclosure of evidence is proportionate, courts shall consider the legitimate interests of all parties, including third parties concerned, in particular in relation to the protection of trade secrets and of confidential information. Where the disclosure of a trade secret or alleged trade secret is ordered, courts may, upon request of a party or on their own initiative, take specific measures necessary to preserve confidentiality when that evidence is used or referred to in legal proceedings.

What might this mean in practice? Manufacturers may therefore be required to disclose information relating to their products under the PLD regime, including trade secrets and confidential information. They can, however, request that the courts put in place safeguards to preserve the confidentiality of such information.

AI Liability Directive

Key Point: The AILD applies in respect of non-contractual fault-based civil law claims for compensation of damage suffered by natural or legal persons caused with the involvement of an AI system.

Additional Details: The AILD applies where such claims are brought under national fault-based liability regimes, and relate to damage arising from or in connection with software developed using machine learning and/or logic- or knowledge-based approaches that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The AILD reforms seek to ensure that “providers” and “users” of AI systems can be held liable in respect of such claims.

A “provider” in this case will include (i) any company that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service within the EU, under its own name or trademark; and (ii) any manufacturer of a product together with which a “high-risk AI system” is placed on the market or put into service under the name of such product manufacturer.

An AI system is considered “high-risk” where: (i) the AI system is intended to be used as a safety component of a product, or is itself a product, which is required to undergo a third-party ex-ante conformity assessment pursuant to the AI Act, or (ii) the AI system is one that relates to an area listed in Annex III of the AI Act (e.g., biometric identification and categorisation of natural persons, or access to and enjoyment of essential private services and public services and benefits).

What might this mean in practice? Any products comprising AI-enabled software are likely to fall within the scope of the AILD as software developed using machine learning and/or logic- or knowledge-based approaches that can, for a given set of human-defined objectives, generate outputs. It will therefore be important to consider the impact of the AILD in any decision regarding the expansion or structure of relevant business operations within Europe.

For example, a company may be classified as a “provider” under the AILD if it intends to supply its AI technology for distribution or use within the EU market directly to users. A company may also be classified as a “provider” if its AI-enabled products are “intended to be used as a safety component” of other end products.
Key Point: The AILD introduces a claimant-friendly (rebuttable) presumption in respect of the causal link between the fault of the defendant and the output or failure of the AI system.

Additional Details: The AILD provides that courts shall presume a causal link between the fault of a defendant and the output produced by the AI system (or the failure of the AI system to produce an output), where:

the claimant has demonstrated the defendant’s fault in failing to comply with a duty of care under applicable law. For claims against a provider or user of a high-risk AI system, this will require the claimant to demonstrate that the provider or user has breached certain requirements of the AI Act;

it is reasonably likely, based on the circumstances of the case, that the defendant’s failure has influenced the output produced by the AI system or the failure of the AI system to produce an output; and

the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage (together, the “Conditions”).

However, the presumption described above is rebuttable and will only be applied by the courts where:

in the case of a claim concerning a high-risk AI system, the defendant has failed to demonstrate that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link;

in the case of a claim concerning a non-high-risk AI system, the court considers that it is excessively difficult for the claimant to prove the causal link; and

in the case of a claim against a defendant who used the AI system in the course of a personal, non-professional activity, the defendant materially interfered with the conditions of the operation of the AI system, or was required and able, but failed, to determine the conditions of operation of the AI system.

What might this mean in practice? Claimants will likely be able to rely on the presumption if the Conditions apply, including if they can demonstrate pursuant to the first Condition that a company has failed to comply with its obligations with respect to high-risk AI systems under the AI Act.

Such obligations include: (1) that the AI system uses techniques involving training of models developed on the basis of training, validation and testing data sets meeting certain quality criteria; and (2) that the AI system was designed and developed in a way that allows for effective oversight by end users during use.
Key Point: The AILD introduces requirements in respect of disclosure of evidence by providers of AI systems.

Additional Details: The proposed reforms provide that courts may order disclosure of evidence by a provider or user of a high-risk AI system, upon the request of a claimant or a potential claimant (where such provider or user has refused to disclose relevant evidence at its disposal about a specific high-risk AI system that is suspected of having caused damage). Courts may only order such disclosure if:

the claimant (or potential claimant) has undertaken all proportionate attempts at gathering the relevant evidence from the defendant;

in support of such request from a potential claimant, the potential claimant has presented facts and evidence sufficient to support the plausibility of a claim for damages; and

disclosure of such evidence is necessary and proportionate to support a potential claim or a claim for damages (together, the “Requirements”).

In determining whether such disclosure of evidence is proportionate, courts shall consider the legitimate interests of all parties, including third parties concerned, in particular in relation to the protection of trade secrets and confidential information.

Where the disclosure of a trade secret or alleged trade secret is ordered, courts may, upon request of a party or on their own initiative, take specific measures necessary to preserve confidentiality when that evidence is used or referred to in legal proceedings.

Where a defendant fails to comply with an order by a court in a claim for damages to disclose or to preserve evidence at its disposal, a court shall presume the defendant’s non-compliance with a relevant duty of care (as described in the first Condition in the key point above).

What might this mean in practice? The Requirements are premised on the assumption that the design, development, deployment and operation of high-risk AI systems typically involves a large number of parties. Disclosure of the kind described in this key point is intended to allow injured persons to ascertain whether a claim for damages is well-founded.

Claimants will likely be able to rely on these new provisions to obtain evidence to support their claims where a company’s products constitute high-risk AI systems, making it easier to establish claims against that company. Such disclosure obligations are potentially extensive, covering trade secrets and confidential information. Companies can, however, request that the courts put in place safeguards to preserve the confidentiality of such information.