Consultation on GPAI guidelines
- kaizenner
- Apr 22
- 3 min read
The AI Office has finally begun clarifying key questions around the AI Act's GPAI rules - especially who is in scope, what constitutes "placing on the market," and how one becomes a GPAI provider. A new working paper suggests sensible FLOP-based thresholds, limits burdens on downstream users, and incentivizes adherence to the Code of Practice, offering long-awaited legal certainty, though some open issues and consultation flaws remain.

🚀 The AI Office – finally – started working on providing more clarity on the key questions of which actors fall within the scope of the AI Act's GPAI rules, what qualifies as 'placing on the market', and how one becomes a provider of a GPAI model! The legal text of the law is quite vague in this regard, which has so far unsettled many AI firms.
This will hopefully change soon! From today until 22 May, all stakeholders can make their perspectives heard by providing input on a Working Paper for the upcoming GPAI guidelines. [Remark: four weeks won't give many EU companies enough time to come up with meaningful feedback. If we want their voices to be heard, the way we run consultations must finally be revamped.]
But let's go back to the content of the planned guidelines. After a first read of the uploaded document, I want to draw your attention to three points:
⚠️ GPAI model definition: AI developers need to know whether their AI technology could qualify as a GPAI model under the AI Act. The AI Office suggests another FLOP threshold: a model that can generate text and/or images would be presumed to be a GPAI model if its training compute exceeds 10^22 FLOP. While such thresholds are never perfect, they are probably the best proxy for giving our AI firms some kind of legal certainty (see the back-of-the-envelope sketch after these three points).
⚠️ Downstream industry largely out of scope: thousands of European businesses already modify GPAI models trained by a few non-EU upstream tech corporations, and they were worried about high compliance costs. Following the new AI Office approach, these actors would now have legal certainty that they are out of scope in most cases. Only if a certain FLOP threshold for retraining is met would they have to comply with the new GPAI rules. This approach fully aligns with the will of the EU co-legislators, who wanted to regulate the baseline (upstream) models but not the successive (downstream) models.
⚠️ Signing the Code of Practice is a benefit: if a GPAI provider signs the CoP, it can benefit from a reduced administrative burden and greater trust from the European Commission. In contrast, non-signatories have to report their own compliance measures and explain how these ensure compliance. As a result, they may face more requests for information or even model evaluations.
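For developers wondering how they would even check themselves against the 10^22 FLOP presumption: a minimal back-of-the-envelope sketch, assuming the widely used community heuristic that training compute for a dense transformer is roughly 6 × parameters × training tokens. This heuristic, the function names, and the example figures are my own illustration, not a method prescribed by the working paper.

```python
# Rough self-check against the proposed 10^22 FLOP presumption.
# Assumes the common heuristic C ~= 6 * N * D (parameters * training tokens),
# a community rule of thumb, NOT an AI Office-prescribed calculation.

GPAI_PRESUMPTION_FLOP = 1e22  # threshold suggested in the working paper

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Back-of-the-envelope training-compute estimate for a dense transformer."""
    return 6 * n_params * n_tokens

def presumed_gpai(n_params: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the proposed 10^22 FLOP presumption."""
    return estimate_training_flop(n_params, n_tokens) > GPAI_PRESUMPTION_FLOP

# A 7B-parameter model trained on 2T tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOP -> above the presumption.
print(presumed_gpai(7e9, 2e12))   # True

# A 1B-parameter model trained on 100B tokens:
# 6 * 1e9 * 1e11 = 6e20 FLOP -> below the presumption.
print(presumed_gpai(1e9, 1e11))   # False
```

As the examples show, the presumption would capture today's mid-sized frontier-style models while leaving many smaller specialised models out, which is presumably the intent behind picking 10^22 rather than a lower figure.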
My first impression is that many proposals are steps in the right direction. Of course, some questions on the overall GPAI approach remain: will the Commission raise the other FLOP threshold (10^25)? When will it finally establish the Scientific Panel and invite experts? Etc.
Anyway, whether you agree or disagree with my first assessment … please make sure to send your views to the European Commission. This crucial part of the AI Act must be backed by sufficient evidence from companies of different sizes and from different sectors. More info on the consultation via the link here.