Update on the GPAI Codes of Practice
- kaizenner
- Mar 27
- 3 min read
Yesterday, parliamentarians involved in the AI Act negotiations sent a strong message to the European Commission, underlining that the final GPAI Code of Practice should better reflect the trilogue agreement.

Since Axel Voss could not fully attend the European Parliament's AI Working Group in the afternoon, we want to share his detailed views on this important matter below.
📣 At the same time, we call upon all stakeholders - from EU industry and academia to civil society - to speak up now! Without a sound Code, there is no burden sharing along the AI value chain (bad for EU companies) and no AI ecosystem of trust (bad for all of us). Such an outcome would be diametrically opposed to our economic interests as well as to EU values. Help us prevent it!
What has happened? The latest draft reveals a troubling capitulation to the wishes of a handful of powerful upstream companies. It risks undermining many of the key concepts of Chapters V and IX of the #AIAct that were carefully put in place by the European Parliament and the Council of the European Union.
Why is this a problem? Impressed by early breakthroughs, millions of end-users across the EU are already heavily using foreign GPAI models, while EU businesses are rapidly integrating them into their AI products and services across all sectors. But Europe bears the cost of this strong dependency, as many of those GPAI models remain insufficiently reliable or safe. The AI Act tried to address that issue by shifting some responsibilities back upstream to the providers of GPAI models.
Not anymore. Rather than helping to distribute responsibilities fairly along the AI value chain - as envisioned by the AI Act - the Chairs were successfully pressured into watering down those legal obligations on upstream companies in the latest CoP draft.
In this regard, Axel Voss and I call upon the European Commission to stop any form of interference and instead respect the Chairs' independence as well as ruling C-355/10 of the European Court of Justice. Political choices cannot be made or changed by technical committees working on secondary legislation or similar instruments - all the more so since the EU co-legislators only just agreed on a common line with the adoption of the AI Act in early 2024.
Looking at the proposed text, what are the most critical issues? In their letter, the MEPs pointed out that systemic risks - including those to democratic processes, fundamental rights, and even critical infrastructure - have been sidelined, even though the MEPs spent months identifying the areas most affected by advanced AI technologies.
But there is more. Pre-deployment transparency has been alarmingly reduced, while third-party evaluations are likely to be removed, allowing GPAI providers to launch risky models first and ask questions later. Documentation requirements as well as whistleblower protection, once robust, have been gutted precisely when accountability is needed most.
Lastly, the part on copyright seems very unbalanced and has become so vague that it could eventually even undermine the existing protection of rights holders.
Instead of continuing this list, let's remember for a moment Mario Draghi's warning about Europe's economic decline. If approved, the Code's dismantling of the AI Act's GPAI chapter will grant foreign upstream companies an easy opt-out from any accountability, at the expense of almost every EU company. The Code would in fact further weaken our already struggling EU economy and make us even more dependent.
Driven by the desire to collect as many high-profile signatures of foreign Tech CEOs as possible, the European Commission's interference in the drafting process not only contradicts the political consensus underpinning the AI Act but also ignores the new geopolitical circumstances. Backed by their governments, most foreign GPAI providers will likely ignore EU enforcement anyway. Failing to make the Code as effective and sound as possible therefore directly endangers Europe's #digital #sovereignty and leaves European downstream businesses even more vulnerable.
The #EU cannot afford this regulatory race to the bottom. In February, the European Commission already followed the wishes of some tech corporations and killed the AI Liability Directive (#AILD), creating a legal limbo for our AI ecosystem as well as existential liability risks for all EU companies. Europe must decide whether it finally prioritises its own citizens and companies - or bows a second time to a handful of foreign Tech CEOs.
You can also download our points as a one-pager here: