Navigating the AI Liability Rules: European Institutions Focus on Consumer Protection

On 28th September 2022, the European Commission adopted two proposals aimed at harmonizing national rules on liability in the field of artificial intelligence (“AI”). These proposals address the challenges that AI systems pose for liability claims concerning damage caused by AI-enabled products and services. The two proposals are:

  • the directive on liability for defective products, which focuses on the liability of manufacturers for defective products. It seeks to modernize the current legal framework by including software and digital manufacturing files within the definition of a product. It also clarifies when a related service should be considered a component of a product;
  • the directive on adapting non-contractual civil liability rules to artificial intelligence (also known as the AI Liability Directive), which is designed to adapt non-contractual civil liability rules to the context of AI systems. It considers the unique characteristics of AI systems, such as complexity, autonomy, and opacity, which can make it challenging for those who suffer harm to identify responsible parties and meet the necessary conditions for a successful liability claim.

These proposals are part of the EU's digital strategy, which aims to regulate AI in order to promote responsible and effective development and dissemination of AI technologies within Europe. This digital strategy also includes the legislative proposal for a regulation that lays down harmonized rules on artificial intelligence (the “AI Act”).

EDPS Opinion
Pursuant to Article 57 of Regulation (EU) 2018/1725 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, the European Data Protection Supervisor (“EDPS”) is tasked with advising, on request, all Union institutions and bodies on legislative and administrative measures relating to the protection of natural persons’ rights and freedoms with regard to the processing of personal data.

In light of this, the EDPS has published Opinion 42/2023 welcoming the directive proposals. The EDPS believes that these proposals will ensure that individuals harmed by AI systems receive a level of protection equivalent to that afforded to individuals harmed in other circumstances.

Specifically, the product liability directive aims to ensure that the EU-level system for compensating individuals who suffer physical injury or damage to property due to defective products is updated to reflect the current digital landscape, particularly the use of AI and smart products.

Furthermore, the AI Liability Directive proposal aims to adapt national liability rules to the handling of liability claims for damage caused by AI-enabled products and services, or in situations where AI systems were otherwise involved. The EDPS has recognized that the specific characteristics of AI, including complexity, autonomy, and opacity, can create significant obstacles for individuals seeking remedies for harm resulting from the use of such systems.

According to the Opinion, published on 11th October 2023, the EDPS recommends that:

  • since the AI Act applies to EU institutions, offices, bodies and agencies when they act as providers or users of AI systems, individuals who suffer damage caused by AI systems produced and/or used by EU institutions, bodies and agencies should be guaranteed a level of protection equivalent to that afforded to individuals harmed by AI systems produced and/or used by private actors or national authorities. This is important, as the two proposals do not appear to currently apply to EU institutions;
  • the proposals should not compromise European data protection rules, and they should explicitly state that they do not supersede EU data protection law;
  • the procedural guarantees outlined in Articles 3 and 4 of the AI Liability Directive, which include the disclosure of evidence and the presumption of a causal link in the case of fault, should be extended to all cases of damage caused by an AI system, regardless of whether the system is classified as high-risk or non-high-risk;
  • the information disclosed in accordance with Article 3 of the AI Liability Directive should be accompanied by clear and comprehensible explanations;
  • additional measures should be considered to further ease the burden of proof for individuals who suffer damage caused by AI systems.