Liability for AI – the next legislative challenge for the EU
The current national legal frameworks for determining AI liability – fault, causal connection and the burden of proof in technology-related incidents – are not aligned with the rapid pace of innovation.
Following the adoption of the AI Act by the Council of the European Union last week (the act is expected to be published in the Official Journal at the end of June 2024 and will enter into force 20 days later), the EU is now turning to proposals addressing the complex issue of AI liability. This is no less important than the regulation of AI itself, as potential liability risks attached to innovations have a significant impact on businesses – particularly those in the medical device sector – and may be a barrier to market entry.
What is on the horizon?
The European legislator's approach to dealing with conflicting interests and liability issues is through a regulatory framework on liability that is "tailor-made" to AI. The future framework conditions should be set out in:
- The Directive of the European Parliament and of the Council on liability for defective products (Product Liability Directive), which is intended to regulate the strict (no-fault) liability of economic operators for defective products.
- The Directive of the European Parliament and of the Council on the adaptation of the rules on non-contractual civil liability to artificial intelligence (AI Liability Directive), which is to cover liability claims based on the fault of a natural or legal person.
The Product Liability Directive
The objectives of the Product Liability Directive are to be achieved through the following measures:
- The inclusion of AI-based products in the scope of the Product Liability Directive.
- The designation of economic operators liable for defective products.
- The granting of a right to disclosure of evidence.
- The establishment of facilitations of proof for the injured party in the form of rebuttable presumptions of the defectiveness of the product and of causality.
If a product is deemed defective, the economic operators identified in the directive bear liability. This concept is already known from European medical device legislation. Under the Product Liability Directive, an economic operator may be:
- the manufacturer of an (end) product or component;
- the provider of a related service;
- the authorised representative;
- the importer;
- the fulfilment service provider; or
- the distributor.
With regard to AI, the directive explicitly clarifies that, in addition to hardware manufacturers, software providers and providers of digital services affecting the operation of a product may also be held liable. The directive also stipulates that the liability of a responsible economic operator for injury to a consumer cannot be limited or excluded, for example by a contractual provision or by national legislation.
At the procedural level, the Product Liability Directive introduces a novel claim allowing an injured plaintiff to request disclosure of relevant evidence from the defendant. This claim is contingent on certain conditions and is designed to counter the procedural challenges faced by injured persons. Finally, to ease the burden of proof, the directive also provides for rebuttable presumptions as to the defectiveness of the product concerned and the causal relationship.
The AI Liability Directive
In addition to the Product Liability Directive, the AI Liability Directive has the objective of facilitating the claimant's access to information and reducing the burden of proof for claims for damages asserted under national fault-based liability regimes involving AI systems. Consequently, the directive only aims to harmonise those aspects which help to reduce the difficulties of proof for injured parties in view of specific AI characteristics, but not to harmonise rules on liability in general.
The AI Liability Directive establishes harmonised rules on the disclosure and preservation of evidence concerning high-risk AI systems, and on the burden of proof when non-contractual fault-based civil claims are asserted before national courts in relation to damage caused by an AI system.
Plaintiffs who present facts and evidence sufficient to support the plausibility of their claim for damages are entitled to disclosure of relevant evidence concerning the specific high-risk AI system that is alleged to have caused the damage. The plaintiff may also apply for specific measures to preserve such evidence. Should the defendant fail to comply with a court order to disclose or preserve evidence, a rebuttable presumption applies that the defendant has breached a relevant duty of care.
In addition, a rebuttable presumption is proposed in relation to the causal link between the fault of the defendant and the output produced by the AI system (or its failure to produce an output).
Possible implications of the legislative proposals
These two initiatives of the European legislator are of considerable practical importance, particularly in the medical device sector. Companies in this sector are already attuned to product liability issues. Given that the directives may significantly shift the balance of risk between economic operators and consumers, and may considerably expand the circle of liable economic operators, stakeholders are advised to monitor the legislative process and to take operational and strategic measures at an early stage. As is often the case in such matters, documentation is of the utmost importance.