Our draft INL report in the JURI Committee argues that there are a few specific cases in which our otherwise well-functioning liability regimes can no longer guarantee that the harmed person is adequately compensated.

Who is actually liable when a delivery drone crashes and injures me in the process? The autonomous AI system that caused the harm is, of course, unable to pay compensation itself. This rather simple case already raises numerous legal questions and confronts our liability regimes with a complicated challenge. Although our established liability rules have proven their worth over the past decades and have so far been able to deal with every new technology, AI systems are a different case. Their autonomy and opacity, combined with their connectivity, their vulnerability to cybersecurity threats and their dependency on external data, make them very difficult to handle under the existing rules.
Potential liability claims
At first glance, everything seems to be business as usual. If the harm occurred between contractual partners, contract law settles the compensation claims of the harmed party. If the AI delivery drone had a defect that led to the harm, the Product Liability Directive applies. If a hacker caused the harmful crash, he or she is liable under national tort law. But what happens if it is unclear who or what caused the crash? Or if no one is at fault at all? The five characteristics of AI systems mentioned above make such cases possible. An innocent bystander crossing a public square can be severely harmed by an autonomous AI system that has no defect and with no person clearly at fault. Although Europe should not hamper innovation in AI with excessive new obligations for programmers, producers or users, the protection of this innocent victim must not take a back seat.
Liability of the deployer
To strike the right balance between protection and innovation, our INL proposes the creation of a strict liability regime for the deployers (operators) of high-risk AI systems. This means that when those systems cause harm, the deployer must compensate the victims regardless of whether he or she was at fault. Comparable to the owner of a car or a dangerous pet, the deployer of an AI system cannot exonerate himself or herself, except in cases of force majeure. If the victim suffered physical harm, the deployer's liability is capped at EUR 10 million; for damage to property, compensation is capped at EUR 2 million. To guarantee the payment of compensation, the deployer must hold insurance cover for the high-risk AI system.
Our approach only works if high-risk systems are selected correctly. While industry will try to keep the number of high-risk systems as low as possible, consumer organisations will push for maximum inclusion. We therefore suggest entrusting this highly political classification process to the European Commission in the form of delegated acts. Together with a standing committee of stakeholders (involving experts from civil society, business, the Member States, etc.), the Commission will decide, from a purely technical standpoint, which technologies fulfil the high-risk criteria set out in our Regulation. Afterwards, the European Parliament and the Council have two months to object to the decision before a six-month transitional period begins. Only then do the new classifications apply across the European Union.
All AI systems that are not classified as high-risk remain under the Member States' existing fault-based liability regimes. The deployer is consequently liable for harm caused by the AI system only if he or she is at fault for the harmful act. Given the unique characteristics of AI, however, we want the victim to benefit from a presumption of fault against the deployer. This means that it is the deployer who must prove that due diligence was observed or that the AI system was activated without his or her knowledge.
What are the next steps?
MEP Voss will present the draft INL in the JURI Committee on 12 May 2020, after which the other committee members have until 28 May to table their amendments. The consideration of amendments in committee will be followed by technical and shadow meetings, in which we will try to agree on a compromise text with the other political groups. The final vote in the JURI Committee is scheduled for September and the plenary debate for October 2020. You can download our draft as well as the factsheets below.