The Articles of the AI Liability Directive (Proposal 28.9.2022)



Preamble, Recitals 11 to 20.

(11) The laws of the Member States concerning the liability of producers for damage caused by the defectiveness of their products are already harmonised at Union level by Council Directive 85/374/EEC. Those laws do not, however, affect Member States’ rules of contractual or non-contractual liability, such as warranty, fault or strict liability, based on other grounds than the defect of the product.

While at the same time the revision of Council Directive 85/374/EEC seeks to clarify and ensure that an injured person can claim compensation for damage caused by defective AI-enabled products, it should therefore be clarified that the provisions of this Directive do not affect any rights which an injured person may have under national rules implementing Directive 85/374/EEC. In addition, in the field of transport, Union law regulating the liability of transport operators should remain unaffected by this Directive.


(12) [The Digital Services Act (DSA)] fully harmonises the rules applicable to providers of intermediary services in the internal market, covering the societal risks stemming from the services offered by those providers, including as regards the AI systems they use. This Directive does not affect the provisions of [the Digital Services Act (DSA)] that provide a comprehensive and fully harmonised framework for due diligence obligations for algorithmic decision-making by hosting service providers, including the exemption from liability for the dissemination of illegal content uploaded by recipients of their services where the conditions of that Regulation are met.


(13) Other than in respect of the presumptions it lays down, this Directive does not harmonise national laws regarding which party has the burden of proof or which degree of certainty is required as regards the standard of proof.


(14) This Directive should follow a minimum harmonisation approach. Such an approach allows claimants in cases of damage caused by AI systems to invoke more favourable rules of national law. Thus, national laws could, for example, maintain reversals of the burden of proof under national fault-based regimes, or national no-fault liability (referred to as ‘strict liability’) regimes, of which there is already a large variety in national laws, possibly applying to damage caused by AI systems.


(15) Consistency with [the AI Act] should also be ensured. It is therefore appropriate for this Directive to use the same definitions in respect of AI systems, providers and users. In addition, this Directive should only cover claims for damages when the damage is caused by an output or the failure to produce an output by an AI system through the fault of a person, for example the provider or the user under [the AI Act].

There is no need to cover liability claims when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor. In such cases, it is possible to trace the damage back to a human act or omission, as the AI system output is not interposed between the human act or omission and the damage, so that establishing causality is not more difficult than in situations where an AI system is not involved.


(16) Access to information about specific high-risk AI systems that are suspected of having caused damage is an important factor in ascertaining whether to claim compensation and in substantiating claims for compensation. Moreover, for high-risk AI systems, [the AI Act] provides for specific documentation, information and logging requirements, but does not provide the injured person with a right to access that information. It is therefore appropriate to lay down rules on the disclosure of relevant evidence by those that have it at their disposal, for the purposes of establishing liability. This should also provide an additional incentive to comply with the relevant requirements laid down in [the AI Act] to document or record the relevant information.


(17) The large number of people usually involved in the design, development, deployment and operation of high-risk AI systems makes it difficult for injured persons to identify the person potentially liable for damage caused and to prove the conditions for a claim for damages. To allow injured persons to ascertain whether a claim for damages is well-founded, it is appropriate to grant potential claimants a right to request a court to order the disclosure of relevant evidence before submitting a claim for damages. Such disclosure should only be ordered where the potential claimant presents facts and information sufficient to support the plausibility of a claim for damages, and has made a prior request to the provider, the person subject to the obligations of a provider or the user to disclose such evidence at their disposal about specific high-risk AI systems that are suspected of having caused damage, which has been refused.

Ordering disclosure should lead to a reduction of unnecessary litigation and avoid costs for the possible litigants caused by claims which are unjustified or likely to be unsuccessful. The refusal of the provider, the person subject to the obligations of a provider or the user prior to the request to the court to disclose evidence should not trigger the presumption of non-compliance with relevant duties of care by the person who refuses such disclosure.


(18) The limitation of disclosure of evidence as regards high-risk AI systems is consistent with [the AI Act], which provides certain specific documentation, record keeping and information obligations for operators involved in the design, development and deployment of high-risk AI systems. Such consistency also ensures the necessary proportionality by avoiding that operators of AI systems posing lower or no risk would be expected to document information to a level similar to that required for high-risk AI systems under [the AI Act].


(19) National courts should be able, in the course of civil proceedings, to order the disclosure or preservation of relevant evidence related to the damage caused by high-risk AI systems from persons who are already under an obligation to document or record information pursuant to [the AI Act], be they providers, persons under the same obligations as providers, or users of an AI system, either as defendants or third parties to the claim.

There could be situations where the evidence relevant for the case is held by entities that would not be parties to the claim for damages but which are under an obligation to document or record such evidence pursuant to [the AI Act]. It is thus necessary to provide for the conditions under which such third parties to the claim can be ordered to disclose the relevant evidence.


(20) To maintain the balance between the interests of the parties involved in the claim for damages and of third parties concerned, the courts should order the disclosure of evidence only where this is necessary and proportionate for supporting the claim or potential claim for damages. In this respect, disclosure should only concern evidence that is necessary for a decision on the respective claim for damages, for example only the parts of the relevant records or data sets required to prove non-compliance with a requirement laid down by [the AI Act].

To ensure the proportionality of such disclosure or preservation measures, national courts should have effective means to safeguard the legitimate interests of all parties involved, for instance the protection of trade secrets within the meaning of Directive (EU) 2016/943 of the European Parliament and of the Council and of confidential information, such as information related to public or national security.

In respect of trade secrets or alleged trade secrets which the court has identified as confidential within the meaning of Directive (EU) 2016/943, national courts should be empowered to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings, while achieving a fair and proportionate balance between the trade-secret holder's interest in maintaining secrecy and the interest of the injured person.

This should include measures to restrict access to documents containing trade secrets and access to hearings or documents and transcripts thereof to a limited number of people. When deciding on such measures, national courts should take into account the need to ensure the right to an effective remedy and to a fair trial, the legitimate interests of the parties and, where appropriate, of third parties, and any potential harm to either party or, where appropriate, to third parties, resulting from the granting or rejection of such measures. Moreover, to ensure a proportionate application of a disclosure measure towards third parties in claims for damages, the national courts should order disclosure from third parties only if the evidence cannot be obtained from the defendant.



Contact us

Cyber Risk GmbH
Dammstrasse 16
8810 Horgen
Tel: +41 79 505 89 60
Email: george.lekatis@cyber-risk-gmbh.com
Web: https://www.cyber-risk-gmbh.com

