The new EU product liability directive and its impact on AI
- aslawforai
- 2 Dec 2024
- Reading time: 2 min
By Moritz Philipp
I. Introduction
On 18 November 2024, the new Product Liability Directive (Directive (EU) 2024/2853) was published, and it will have a major impact on liability for Artificial Intelligence (AI). It replaces the almost 40-year-old previous Product Liability Directive (Directive 85/374/EEC).
The update of the product liability directive was necessary for several reasons.
First and foremost, the fourth industrial revolution: the old Product Liability Directive predates the World Wide Web and was therefore not tailored to legal problems stemming from the Internet of Things or the use of AI.
While these were the most important considerations behind the new Product Liability Directive, other factors, such as the European "Green Deal" and newly digitalized global supply chains, also played a significant role.
II. Changes that affect the liability for AI
A key issue with the old directive lay in its definition of "product," as liability only applied to defective products. Article 2 defined a product as "all movables," explicitly including electricity—a recognition that limiting the scope to movables alone would inadequately protect consumers. However, the legislator failed to anticipate the critical role software would come to play in product functionality, leaving the status of software as a product highly debated.
The new directive resolves this by explicitly defining software as a product in Article 4(1), with an exception for open-source software in Article 2(2).
Furthermore, the old directive posed challenges due to Article 6, which exempted producers from liability if the product was not defective at the time it was put into circulation. This created legal uncertainty for AI systems, which can be trained and significantly altered after their release. Some argued that a producer would not be liable for defects arising after the AI left their control, while others contended that an AI could be considered defective if it had the potential to learn harmful behavior.
The new directive addresses this in Article 7(2)(c), requiring that defectiveness assessments consider the impact of an AI's ability to learn or acquire new features after being placed on the market. This provision generally holds AI producers liable for behavior developed post-deployment.
Finally, the new directive introduces a discovery procedure to address the inherent opacity of AI, a significant challenge for claimants in court. Article 9 provides that if a claimant presents sufficient facts and evidence to make the claim for compensation plausible, the defendant must disclose relevant evidence in their possession. Failure to do so results in the product being presumed defective under Article 10.
III. Evaluation
The new Product Liability Directive provides a much-needed update to its predecessor. While it primarily focuses on substantive law to address challenges arising from the proliferation of AI, perhaps the most notable innovation is the introduction of a new discovery procedure.
However, a key drawback is the directive’s strong reliance on technology-specific provisions, making it likely that, in 40 years, someone will once again critique the outdated nature of the then-current directive. This limitation, however, seems inherent to the nature of such regulation.