
Why do we need new liability rules for Artificial Intelligence?

Our draft INL report in the JURI committee argues that there are a few specific cases in which our otherwise well-functioning liability regimes can no longer guarantee that the harmed person is adequately compensated.



Who is liable when a delivery drone crashes and harms me? The autonomous AI system that caused the harm is, of course, unable to compensate the victim on its own. Even this rather simple case raises numerous legal questions and confronts our liability regimes with a complicated challenge. Although the existing rules have proven their worth over the past decades and have so far been able to deal with every new technology, AI systems are a different case. Their autonomy and opacity, combined with their connectivity, vulnerability to cybersecurity threats and dependence on external data, make them very difficult to handle.


Potential liability claims


At first glance, everything seems to be business as usual. If the harm occurred between contractual partners, contract law settles the compensation claims of the harmed party. If the AI delivery drone had a defect that led to the harm, the Product Liability Directive applies. If a hacker caused the harmful crash, he or she is liable under national tort law. But what happens if it is unclear who or what caused the crash? Or if no one is at fault at all? The five characteristics of AI systems mentioned above make such cases possible. An innocent bystander crossing a public square can be severely harmed by an autonomous AI system that has no defect and with no person clearly at fault. Although Europe should not hamper innovation in AI with excessive new obligations for programmers, producers or users, the protection of innocent victims must not take a back seat.


Liability of the deployer


To strike the right balance between protection and innovation, our INL proposes creating a strict liability regime for the deployers (operators) of high-risk AI systems. When such a system causes harm, the deployer must compensate the victim regardless of whether he or she was at fault. Comparable to the owner of a car or a dangerous pet, the deployer of an AI system cannot exonerate himself or herself, except in cases of force majeure. If the victim suffered physical harm, the deployer's liability is capped at EUR 10 million; if property was damaged, compensation is capped at EUR 2 million. To guarantee the payment of compensation, the deployer must hold insurance cover for the high-risk AI system.


However, our approach only works with the right selection of high-risk systems. While industry will try to keep their number as low as possible, consumer organisations will push for maximal inclusion. We therefore suggest handing this highly political classification process over to the European Commission in the form of delegated acts. Together with a standing stakeholder committee (involving experts from civil society, research, business and national administrations), the Commission will decide from a purely technical standpoint which technologies fulfil the high-risk criteria set out in our Regulation. The European Parliament and the Council then have two months to object to the decision before a six-month transitional period begins. Only then would the new classifications apply across the European Union.


All AI systems that are not classified as high-risk will remain under the existing fault-based liability regimes of the Member States. The deployer is consequently liable for harm caused by the AI system only if he or she is at fault for the harmful act. Bearing the unique characteristics of AI in mind, however, we want the victim to benefit from a presumption of fault on the part of the deployer. This means that the deployer must prove that due diligence was observed or that the AI system was activated without his or her knowledge.

What are the next steps?


MEP Voss will present the draft INL in the JURI committee on 12 May 2020, after which the other committee members have until 28 May to table their amendments. The consideration of amendments in committee will be followed by technical meetings as well as shadow meetings, in which we will try to reach a compromise text with the other political groups. The final vote in the JURI committee is scheduled for September and the plenary debate for October 2020. You can download our draft legislative own-initiative report as well as some fact sheets below.


Download: 2020/2014 (INL) - AI civil Liability_dra (628 KB)
Download: 2020/2014 (INL) - AI civil Liability_fac (555 KB)
