The first of two AI Act draft reports of the European Parliament is available! On 15 March 2022 at 11:00 - two months before the joint IMCO/LIBE report - Axel Voss will present our draft report in the JURI committee. Read today how we want to improve the Commission's draft regulation and promote the use of trustworthy AI within the EU.
But wait, why are there two draft reports of the European Parliament for the AI Act (AIA)? This unique situation is the result of a long clash of competences between the JURI and IMCO committees. You can read more about it on Twitter (here). In the end, the Conference of Presidents gave the JURI committee a Rule 57+ mandate. Check out this screenshot for more details:
With his double role as EPP Shadow Rapporteur in LIBE and JURI Rapporteur of this Rule 57+ report, which can amend the whole AI Act, Axel Voss became the third leading MEP for the AI Act. So what are we saying in our draft?
Let's begin by stating that we find the Commission's proposal a good starting point. However, the final regulation still requires a lot of work in order to cope with the complex and highly dynamic field of AI. The Council did a great job in this regard and made a lot of progress!
What is still needed? We tabled a total of 309 amendments (AMs) in 7 key areas:
Clear scope & AI definition
'Trustworthy AI' concept
High-risk à la White Paper
More data for AI training
Fair responsibilities in supply chain
Strong AI board + reestablished High Level Expert Group
Efficient consistency mechanism
(1) Given the connectivity, autonomy, data dependency, complexity, openness and opacity of new #AI systems, the #AIA should consequently focus only on such new systems. This means only Machine Learning should be in its scope, as other systems are already covered by existing laws.
(2) European Standardization Organizations are waiting for a political signal at the "meta level" that they can then translate into technical standards for the various AI applications. The excellent work of the High Level Expert Group on AI offers a balanced concept for such a signal.
(3) The current ANNEX III puts whole sectors under general suspicion. The AI White Paper indicated a better solution: an AI system is considered high-risk when its intended purpose falls within a critical area (new ANNEX III) and it is used in a manner that makes significant harm likely (new risk assessment).
(4) Today's ML systems need lots of high-quality data, which is why we proposed several changes to Art 10. To ensure compliance with the #AIA, we also suggested considering the training, validation and testing of AI systems a 'legitimate interest' and a 'compatible purpose' under Art 6(1)(f) and Art 6(4) GDPR.
(5) The supply chain of AI systems is highly complex. A fair distribution of #AIA responsibilities among the actors is thus very difficult. General purpose AI systems make it even harder. We took the great Art 23a from the Council and added a few things to make it more balanced.
(6) The AI Board must have a strong mandate + enough resources. It will be key to coordinating between Member States, assessing implementation and constantly adjusting the #AIA to new technological developments. A reestablished HLEG guarantees that the AI community is adequately involved.
(7) Each national competent authority should act as a single point of contact for this Regulation. While we introduce a one-stop shop, we do not want to repeat the GDPR's shortcomings. Therefore: a strengthened consistency mechanism + allowing the AI Board to make binding decisions where necessary.
Download the full JURI draft report here:
During his presentation in the JURI committee on 15 March, Axel Voss will present - on top of the draft report - four additional points that are necessary to make the #EU a global leader in #AI. Stay tuned!
Last but not least, this is only the first draft with our ideas on how to improve the #AIA. Further adjustments are definitely needed. So, what is (not) working and what is still missing? Our post-presentation consultation starts today. Send your feedback to firstname.lastname@example.org!