Preamble, Recitals 21 to 33.
(21) While national courts have the means of enforcing their orders for disclosure through various measures, any such enforcement measures could delay claims for damages and thus potentially create additional expenses for the litigants. For injured persons, such delays and additional expenses may make their recourse to an effective judicial remedy more difficult. Therefore, where a defendant in a claim for damages fails to comply with a court order to disclose evidence at its disposal, it is appropriate to lay down a presumption of non-compliance with those duties of care which that evidence was intended to prove. This rebuttable presumption will reduce the duration of litigation and facilitate more efficient court proceedings. The defendant should be able to rebut that presumption by submitting evidence to the contrary.
(22) In order to address the difficulties in proving that a specific input, for which the potentially liable person is responsible, caused a specific AI system output that led to the damage at stake, it is appropriate to provide, under certain conditions, for a presumption of causality. While in a fault-based claim the claimant usually has to prove the damage, the human act or omission constituting the fault of the defendant and the causal link between the two, this Directive does not harmonise the conditions under which national courts establish fault.
They remain governed by the applicable national law and, where harmonised, by applicable Union law. Similarly, this Directive does not harmonise the conditions related to the damage, for instance what damages are compensable, which are also regulated by applicable national and Union law. For the presumption of causality under this Directive to apply, the fault of the defendant should be established as a human act or omission which does not meet a duty of care under Union law or national law that is directly intended to protect against the damage that occurred. Thus, this presumption can apply, for example, in a claim for damages for physical injury where the court establishes the fault of the defendant for non-compliance with the instructions of use which are meant to prevent harm to natural persons.
Non-compliance with duties of care that were not directly intended to protect against the damage that occurred does not lead to the application of the presumption; for example, a provider's failure to file required documentation with competent authorities would not lead to the application of the presumption in claims for damages due to physical injury. It should also be necessary to establish that it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output. Finally, the claimant should still be required to prove that the output or failure to produce an output gave rise to the damage.
(23) Such a fault can be established in respect of non-compliance with Union rules which specifically regulate high-risk AI systems, like the requirements introduced for certain high-risk AI systems by [the AI Act], requirements which may be introduced by future sectoral legislation for other high-risk AI systems according to [Article 2(2) of the AI Act], or duties of care which are linked to certain activities and which are applicable irrespective of whether AI is used for that activity. At the same time, this Directive neither creates nor harmonises the requirements or the liability of entities whose activity is regulated under those legal acts, and therefore does not create new liability claims.
Establishing a breach of such a requirement that amounts to fault will be done according to the provisions of those applicable rules of Union law, since this Directive neither introduces new requirements nor affects existing requirements. For example, the exemption from liability for providers of intermediary services and the due diligence obligations to which they are subject pursuant to [the Digital Services Act] are not affected by this Directive. Similarly, compliance with the requirements imposed on online platforms to avoid unauthorised communication to the public of copyright-protected works is to be established under Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market and other relevant Union copyright law.
(24) In areas not harmonised by Union law, national law continues to apply and fault is established under the applicable national law. All national liability regimes have duties of care, taking as a standard of conduct different expressions of the principle of how a reasonable person should act, which also ensure the safe operation of AI systems in order to prevent damage to recognised legal interests.
Such duties of care could, for instance, require users of AI systems to choose for certain tasks a particular AI system with concrete characteristics or to exclude certain segments of a population from being exposed to a particular AI system. National law can also introduce specific obligations meant to prevent risks for certain activities, which are applicable irrespective of whether AI is used for that activity, for example traffic rules, or obligations specifically designed for AI systems, such as additional national requirements for users of high-risk AI systems pursuant to Article 29(2) of [the AI Act]. This Directive neither introduces such requirements nor affects the conditions for establishing fault in case of breach of such requirements.
(25) Even where fault consisting in non-compliance with a duty of care directly intended to protect against the damage that occurred is established, not every fault should lead to the application of the rebuttable presumption linking it to the output of the AI system. Such a presumption should only apply when it can be considered reasonably likely, from the circumstances in which the damage occurred, that such fault has influenced the output produced by the AI system or the failure of the AI system to produce an output that gave rise to the damage.
It can, for example, be considered reasonably likely that the fault has influenced the output or failure to produce an output when that fault consists in breaching a duty of care in respect of limiting the perimeter of operation of the AI system and the damage occurred outside that perimeter. By contrast, a breach of a requirement to file certain documents or to register with a given authority, even though this might be foreseen for that particular activity or even be applicable expressly to the operation of an AI system, could not be considered reasonably likely to have influenced the output produced by the AI system or the failure of the AI system to produce an output.
(26) This Directive covers the fault constituting non-compliance with certain listed requirements laid down in Chapters 2 and 3 of [the AI Act] for providers and users of high-risk AI systems, the non-compliance with which can lead, under certain conditions, to a presumption of causality. The AI Act provides for full harmonisation of requirements for AI systems, unless otherwise explicitly laid down therein.
It harmonises the specific requirements for high-risk AI systems. Hence, for the purposes of claims for damages in which a presumption of causality according to this Directive is applied, the potential fault of providers or persons subject to the obligations of a provider pursuant to [the AI Act] is established only through a non-compliance with such requirements. Given that in practice it may be difficult for the claimant to prove such non-compliance when the defendant is a provider of the AI system, and in full consistency with the logic of [the AI Act], this Directive should also provide that the steps undertaken by the provider within the risk management system and the results of the risk management system, i.e. the decision to adopt or not to adopt certain risk management measures, should be taken into account in the determination of whether the provider has complied with the relevant requirements under the AI Act referred to in this Directive.
The risk management system put in place by the provider pursuant to [the AI Act] is a continuous iterative process run throughout the lifecycle of the high-risk AI system, whereby the provider ensures compliance with mandatory requirements meant to mitigate risks, and it can therefore be a useful element for the purpose of assessing that compliance. This Directive also covers cases of fault on the part of users, where that fault consists in non-compliance with certain specific requirements set by [the AI Act]. In addition, the fault of users of high-risk AI systems may be established following non-compliance with other duties of care laid down in Union or national law, in light of Article 29(2) of [the AI Act].
(27) While the specific characteristics of certain AI systems, like autonomy and opacity, could make it excessively difficult for the claimant to meet the burden of proof, there could be situations where such difficulties do not exist because sufficient evidence and expertise could be available to the claimant to prove the causal link. This could be the case, for example, in respect of high-risk AI systems where the claimant could reasonably access sufficient evidence and expertise through the documentation and logging requirements pursuant to [the AI Act]. In such situations, the court should not apply the presumption.
(28) The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant. For example, such difficulties could be assessed in light of the characteristics of certain AI systems, such as autonomy and opacity, which render the explanation of the inner functioning of the AI system very difficult in practice, negatively affecting the ability of the claimant to prove the causal link between the fault of the defendant and the AI output.
A national court should apply the presumption where the claimant is in an excessively difficult position to prove causation, since the claimant would otherwise be required to explain how the human act or omission that constitutes fault led the AI system to produce the output, or to fail to produce an output, which gave rise to the damage. However, the claimant should neither be required to explain the characteristics of the AI system concerned nor how those characteristics make it harder to establish the causal link.
(29) The application of the presumption of causality is meant to ensure for the injured person a similar level of protection as in situations where AI is not involved and where causality may therefore be easier to prove. Nevertheless, alleviating the burden of proving causation is not always appropriate under this Directive where the defendant is not a professional user but rather a person using the AI system for private activities.
In such circumstances, in order to balance the interests of the injured person and the non-professional user, it needs to be taken into account whether such non-professional users can add to the risk of an AI system causing damage through their behaviour. If the provider of an AI system has complied with all its obligations and, in consequence, that system was deemed sufficiently safe to be put on the market for a given use by non-professional users, and it is then used for that task, a presumption of causality should not apply to the simple launch of the operation of such a system by such non-professional users.
A non-professional user who buys an AI system and simply launches it according to its purpose, without materially interfering with the conditions of operation, should not be covered by the causality presumption laid down by this Directive. However, if a national court determines that a non-professional user materially interfered with the conditions of operation of an AI system, or was required and able to determine the conditions of operation of the AI system and failed to do so, then the presumption of causality should apply, where all the other conditions are fulfilled.
This could be the case, for example, when the non-professional user does not comply with the instructions of use or with other applicable duties of care when choosing the area of operation or when setting performance conditions of the AI system. This is without prejudice to the fact that the provider should determine the intended purpose of an AI system, including the specific context and conditions of use, and eliminate or minimise the risks of that system as appropriate at the time of the design and development, taking into account the knowledge and expertise of the intended user.
(30) Since this Directive introduces a rebuttable presumption, the defendant should be able to rebut it, in particular by showing that its fault could not have caused the damage.
(31) It is necessary to provide for a review of this Directive [five years] after the end of the transposition period. In particular, that review should examine whether there is a need to create no-fault liability rules for claims against the operator, as long as those claims are not already covered by other Union liability rules, in particular Directive 85/374/EEC, combined with mandatory insurance for the operation of certain AI systems, as suggested by the European Parliament.
In accordance with the principle of proportionality, it is appropriate to assess such a need in the light of relevant technological and regulatory developments in the coming years, taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs. Such a review should consider, among other things, risks involving damage to important legal values like life, health and property of unwitting third parties through the operation of AI-enabled products or services.
That review should also analyse the effectiveness of the measures provided for in this Directive in dealing with such risks, as well as the development of appropriate solutions by the insurance market. To ensure the availability of the information necessary to conduct such a review, it is necessary to collect data and other necessary evidence covering the relevant matters.
(32) Given the need to make adaptations to national civil liability and procedural rules to foster the roll-out of AI-enabled products and services under beneficial internal market conditions, societal acceptance and consumer trust in AI technology and the justice system, it is appropriate to set a deadline of not later than [two years after the entry into force] of this Directive for Member States to adopt the necessary transposition measures.
(33) In accordance with the Joint Political Declaration of 28 September 2011 of Member States and the Commission on explanatory documents, Member States have undertaken to accompany, in justified cases, the notification of their transposition measures with one or more documents explaining the relationship between the components of a directive and the corresponding parts of national transposition instruments. With regard to this Directive, the legislator considers the transmission of such documents to be justified.