
GPAI Guidelines Published - Two Weeks Before the GPAI Rules Enter Into Force!

  • kaizenner
  • Jul 18
  • 2 min read

The AI Office has finally clarified which actors fall under the AI Act’s rules for GPAI models – way too late (!) – but these explanations bring much-needed clarity for industry. Importantly, the GPAI rules are now officially confirmed to be in force from 2 August 2025 (🙅‍♂️ no stop-the-clock!).



✅ The good news: the guidelines are rather targeted. Large parts of the European downstream AI industry are out of scope, as long as they do not significantly modify a GPAI model (i.e. retrain with more than one third of the original training compute). This means the regulatory burden lies upstream, where the risks largely originate. Providers of powerful GPAI models (i.e. those trained with more than 10^25 FLOPs) must now take responsibility for systemic risks, transparency, and public safety.
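To make the compute thresholds concrete, here is a minimal illustrative sketch (not legal advice, and the function and field names are my own invention): it encodes the 10^25 FLOPs systemic-risk threshold and the one-third-of-original-compute modification rule described above, plus the 10^23 FLOPs GPAI presumption discussed below.

```python
# Illustrative sketch of the guidelines' compute thresholds.
# Names and structure are hypothetical; the FLOP values come from the post.
GPAI_THRESHOLD = 1e23           # presumption that a model counts as GPAI
SYSTEMIC_RISK_THRESHOLD = 1e25  # "powerful" models with systemic-risk duties
MODIFICATION_FRACTION = 1 / 3   # retraining share that makes a modifier a provider

def classify(training_flops: float, modification_flops: float = 0.0) -> dict:
    """Rough status of a model (or a downstream modification of it)
    under the guidelines' compute thresholds."""
    return {
        "gpai": training_flops >= GPAI_THRESHOLD,
        "systemic_risk": training_flops > SYSTEMIC_RISK_THRESHOLD,
        # A downstream actor only falls in scope if their retraining
        # exceeds one third of the original training compute.
        "downstream_modifier_in_scope":
            modification_flops > MODIFICATION_FRACTION * training_flops,
    }
```

On this sketch, a typical downstream fine-tune (say 10^23 FLOPs on top of a 10^24-FLOP model) stays out of scope, while a frontier-scale training run above 10^25 FLOPs triggers the systemic-risk obligations.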


My three key takeaways:

▪︎ Clarity: A clearer GPAI threshold (10^23 FLOPs) gives industry a benchmark – an increase from the 10^22 FLOPs originally proposed by the AI Office, showing responsiveness to the public consultation.


▪︎ Responsibility: GPAI providers must notify the AI Office early and report proactively, since systemic risks can emerge at any point in the AI lifecycle – including during the development phase.


▪︎ Expertise: The document shows nicely that the AI Office has more and more technical AI experts who really help improve the quality of the Commission's output. The delays, however, also show that more (technical) staff are urgently needed – as frequently pointed out by the European Parliament.


⚠️ Risk #1: There is a lack of clarity around the enforcement gap. The GPAI rules apply from August 2025, but the AI Office’s formal enforcement powers only kick in from August 2026. As Axel Voss rightly asked the European Commission in this week’s AIA Working Group: “How will the AI Office ensure compliance during this transition year?”


⚠️ Risk #2: Not only advanced AI models are black boxes – the AI Office itself must avoid becoming one. The Code of Practice is split into three chapters, but each must be followed in full. We will be watching closely to ensure there are no backroom deals or selective compliance when it comes to enforcing the GPAI chapter & Code. Transparency, third-party evaluations, and adherence to the full Safety & Security chapter are non-negotiable.


The AI Act can only work if powerful GPAI models are safe and if European companies are not left in the dark when building their AI products and services on black-box GPAI models. We need accountability, not ambiguity – from GPAI providers, but also from the AI Office itself.


More information here.
