The Artificial Intelligence Liability Directive



What is the Artificial Intelligence Liability Directive?

On 28 September 2022, the European Commission published its proposal for an Artificial Intelligence Liability Directive (the "AI Liability Directive"), which deals with claims for harm caused by AI systems, or by the use of AI, by adapting non-contractual civil liability rules to artificial intelligence.

The AI Liability Directive complements the Artificial Intelligence Act by introducing a new liability regime that ensures legal certainty, enhances consumer trust in AI, and facilitates consumers' compensation claims for damage caused by AI-enabled products and services.

It applies to AI systems that are available on, or operating within, the EU market.

Before the Directive, national liability rules, in particular those based on fault, were not suited to handling liability claims for damage caused by AI-enabled products and services. Under such rules, victims must prove a wrongful act or omission by the person who caused the damage. The specific characteristics of AI, including complexity, autonomy and opacity, make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim.

In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings than in cases not involving AI, and could therefore be deterred from claiming compensation altogether.

If a victim brings a claim, national courts, faced with the specific characteristics of AI, may adapt the way in which they apply existing rules on an ad hoc basis to come to a just result for the victim.

This creates legal uncertainty. Businesses would have difficulty predicting how the liability rules would be applied, and thus assessing and insuring against their liability exposure. It would particularly affect businesses trading across borders, and small and medium-sized enterprises (SMEs), which cannot rely on in-house legal expertise or capital reserves.

Several Member States were considering, or even concretely planning, legislative action on civil liability for AI. If the EU did not act, Member States would adapt their national liability rules to the challenges of AI. This would result in further fragmentation and increased costs for businesses active throughout the EU.


Understanding the AI Liability Directive

AI can harm interests and rights that are protected by EU or national law. For instance, the use of AI can adversely affect a number of fundamental rights, including life, physical integrity, non-discrimination, and equal treatment.

The AI Act introduces requirements intended to reduce risks to safety and fundamental rights, and other EU law instruments lay down general and sectoral rules that also apply to AI-enabled products. While these requirements are intended to reduce risks to safety and fundamental rights, and to prevent, monitor and address societal concerns, they do not provide individual relief to those who have suffered damage caused by AI.

Existing requirements provide in particular for authorisations, checks, monitoring and administrative sanctions in relation to AI systems in order to prevent damage. They do not provide for compensation of the injured person for damage caused by an AI system's output, or by its failure to produce an output.

To reap the economic and societal benefits of AI and promote the transition to the digital economy, it is necessary to adapt certain national civil liability rules, in a targeted manner, to the specific characteristics of certain AI systems. Such adaptations should contribute to societal and consumer trust and thereby promote the roll-out of AI. They should also maintain trust in the judicial system by ensuring that victims of damage caused with the involvement of AI enjoy the same effective compensation as victims of damage caused by other technologies.

The Directive follows a minimum harmonisation approach, which allows claimants in cases of damage caused by AI systems to invoke more favourable rules of national law. National laws could, for example, maintain reversals of the burden of proof under national fault-based regimes, or national no-fault liability ('strict liability') regimes, of which a large variety already exists in national law and which may apply to damage caused by AI systems.

Access to information about specific high-risk AI systems suspected of having caused damage is an important factor in ascertaining whether to claim compensation and in substantiating such claims. Moreover, while the AI Act imposes specific documentation, information and logging requirements for high-risk AI systems, it does not give the injured person a right to access that information.

It is therefore appropriate to lay down rules on the disclosure of relevant evidence by those who have it at their disposal, for the purposes of establishing liability. This should also provide an additional incentive to comply with the requirements laid down in the AI Act to document or record relevant information.
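Neither the Directive nor the AI Act prescribes a technical format for such records. Purely as an illustrative sketch, and assuming a hypothetical provider and record structure (none of the identifiers below come from either instrument), per-decision record-keeping of the kind these logging obligations contemplate might look like this in Python:

# Illustrative sketch only: one hypothetical way a provider might keep
# timestamped, per-decision records of a high-risk AI system's outputs,
# so that evidence exists if a court later orders its disclosure.
# All names and fields here are assumptions, not legal requirements.
import json
import hashlib
from datetime import datetime, timezone
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    system_id: str      # identifier of the high-risk AI system
    model_version: str  # version actually serving at decision time
    timestamp: str      # when the output was produced (UTC, ISO 8601)
    input_digest: str   # hash of the input, so the record is verifiable
    output: str         # the output (or a note that none was produced)

def log_decision(system_id: str, model_version: str,
                 raw_input: bytes, output: str,
                 logfile: str = "decision_log.jsonl") -> DecisionRecord:
    """Append a timestamped record of one AI decision to an append-only log."""
    record = DecisionRecord(
        system_id=system_id,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        output=output,
    )
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

Records of this kind, kept over the lifetime of the system, are the sort of evidence a court could later order to be disclosed or preserved.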

The large number of people usually involved in the design, development, deployment and operation of high-risk AI systems makes it difficult for injured persons to identify the person potentially liable for the damage caused and to prove the conditions for a claim for damages.

To allow injured persons to ascertain whether a claim for damages is well-founded, it is appropriate to grant potential claimants a right to request a court to order the disclosure of relevant evidence before submitting a claim for damages.

Such disclosure should be ordered only where the potential claimant presents facts and information sufficient to support the plausibility of a claim for damages, and has previously asked the provider, a person subject to the obligations of a provider, or the user to disclose the evidence at their disposal about the specific high-risk AI system suspected of having caused the damage, and that request has been refused.

Such pre-trial disclosure should reduce unnecessary litigation and spare potential litigants the costs of claims which are unjustified or likely to be unsuccessful.

National courts will be able, in the course of civil proceedings, to order the disclosure or preservation of relevant evidence related to the damage caused by high-risk AI systems from persons who are already under an obligation to document or record information pursuant to the AI Act.

There could be situations where the evidence relevant to the case is held by entities that are not parties to the claim for damages but are under an obligation to document or record such evidence pursuant to the AI Act. It is thus necessary to provide for the conditions under which such third parties can be ordered to disclose the relevant evidence.