Preamble, recitals (1) to (10).
(1) Artificial Intelligence (‘AI’) is a set of enabling technologies which can contribute to a wide array of benefits across the entire spectrum of the economy and society. It has a large potential for technological progress and allows new business models in many sectors of the digital economy.
(2) At the same time, depending on the circumstances of its specific application and use, AI can generate risks and harm interests and rights that are protected by Union or national law. For instance, the use of AI can adversely affect a number of fundamental rights, including life, physical integrity, non-discrimination and equal treatment.
Regulation (EU) …/… of the European Parliament and of the Council [the AI Act] provides for requirements intended to reduce risks to safety and fundamental rights, while other Union law instruments regulate general and sectoral product safety rules applicable also to AI-enabled machinery products and radio equipment. While such requirements intended to reduce risks to safety and fundamental rights are meant to prevent, monitor and address risks and thus address societal concerns, they do not provide individual relief to those that have suffered damage caused by AI.
Existing requirements provide in particular for authorisations, checks, monitoring and administrative sanctions in relation to AI systems in order to prevent damage. They do not provide for compensation of the injured person for damage caused by an output or the failure to produce an output by an AI system.
(3) When an injured person seeks compensation for damage suffered, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (‘fault’) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage.
However, when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet this burden of proof. In particular, it may be excessively difficult to prove that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake.
(4) In such cases, the level of redress afforded by national civil liability rules may be lower than in cases where technologies other than AI are involved in causing damage. Such compensation gaps may contribute to a lower level of societal acceptance of AI and trust in AI-enabled products and services.
(5) To reap the economic and societal benefits of AI and promote the transition to the digital economy, it is necessary to adapt in a targeted manner certain national civil liability rules to those specific characteristics of certain AI systems. Such adaptations should contribute to societal and consumer trust and thereby promote the roll-out of AI. Such adaptations should also maintain trust in the judicial system, by ensuring that victims of damage caused with the involvement of AI have the same effective compensation as victims of damage caused by other technologies.
(6) Interested stakeholders – injured persons suffering damage, potentially liable persons, insurers – face legal uncertainty as to how national courts, when confronted with the specific challenges of AI, might apply the existing liability rules in individual cases in order to achieve just results. In the absence of Union action, at least some Member States are likely to adapt their civil liability rules to address compensation gaps and legal uncertainty linked to the specific characteristics of certain AI systems. This would create legal fragmentation and internal market barriers for businesses that develop or provide innovative AI-enabled products or services. Small and medium-sized enterprises would be particularly affected.
(7) The purpose of this Directive is to contribute to the proper functioning of the internal market by harmonising certain national non-contractual fault-based liability rules, so as to ensure that persons claiming compensation for damage caused to them by an AI system enjoy a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of an AI system.
This objective cannot be sufficiently achieved by the Member States because the relevant internal market obstacles are linked to the risk of unilateral and fragmented regulatory measures at national level. Given the digital nature of the products and services falling within the scope of this Directive, this risk is particularly relevant in a cross-border context.
(8) The objective of ensuring legal certainty and preventing compensation gaps in cases where AI systems are involved can thus be better achieved at Union level. Therefore, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Directive does not go beyond what is necessary in order to achieve that objective.
(9) It is therefore necessary to harmonise in a targeted manner specific aspects of fault-based liability rules at Union level. Such harmonisation should increase legal certainty and create a level playing field for AI systems, thereby improving the functioning of the internal market as regards the production and dissemination of AI-enabled products and services.
(10) To ensure proportionality, it is appropriate to harmonise in a targeted manner only those fault-based liability rules that govern the burden of proof for persons claiming compensation for damage caused by AI systems. This Directive should not harmonise general aspects of civil liability which are regulated in different ways by national civil liability rules, such as the definition of fault or causality, the different types of damage that give rise to claims for damages, the distribution of liability over multiple tortfeasors, contributory conduct, the calculation of damages or limitation periods.