The EU's AI Code of Practice Focuses on Transparency and Risk Management

The European Commission's third iteration of the General-Purpose AI Code of Practice establishes a two-part regulatory framework for general-purpose AI models, mandating comprehensive documentation, risk assessments, and compliance protocols for AI technology providers.

By Matthaios Tsimitakis
March 17, 2025

The European Commission released the third draft of the General-Purpose AI Code of Practice, marking a significant step in regulating artificial intelligence technologies. This version represents a more refined approach compared to previous drafts, featuring a streamlined structure and more focused commitments designed to help AI model providers demonstrate compliance with the EU AI Act.

The two-part framework differentiates between obligations for all general-purpose AI models and those posing systemic risks. Providers must now disclose training data sources and model capabilities, and implement robust copyright compliance and risk assessment mechanisms.

Key obligations include comprehensive transparency requirements, such as detailed documentation of training and testing processes, mandatory disclosure of model limitations, and implementation of opt-out mechanisms for intellectual property rights holders.

Part 1: Applies to all GPAI models, covering documentation of training/testing processes and transparency obligations (Article 53 AI Act).

Part 2: Targets GPAI models with systemic risks (e.g., those exceeding 10^25 FLOPs in training), requiring rigorous risk assessments and mitigations (Article 55 AI Act).
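The 10^25 FLOP threshold above is a bright-line presumption of systemic risk. A minimal sketch of how a provider might self-check against it, using the widely cited approximation that training compute is roughly 6 × parameters × training tokens (an assumption for illustration, not a method prescribed by the Code):

```python
# Sketch: estimating whether a model crosses the AI Act's
# 10^25 FLOP systemic-risk presumption threshold.
# Assumes the common heuristic: training compute ~= 6 * params * tokens.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act's presumption

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute in FLOPs (6ND heuristic)."""
    return 6 * num_parameters * num_tokens

def is_presumed_systemic_risk(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(num_parameters, num_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Example: a hypothetical 70B-parameter model trained on 15T tokens
# yields roughly 6.3e24 FLOPs, below the threshold.
print(is_presumed_systemic_risk(70e9, 15e12))   # False
print(is_presumed_systemic_risk(200e9, 15e12))  # True (~1.8e25 FLOPs)
```

The real figures and parameter counts here are hypothetical; under the Act, crossing the threshold triggers the Part 2 obligations regardless of how the compute was estimated.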

AI providers must conduct rigorous stress tests, perform adversarial testing for high-risk models, and establish robust monitoring and incident reporting protocols to identify and mitigate potential systemic risks.

Developed through an extensive collaborative process, the Code involved nearly 1,000 stakeholders, including industry representatives, civil society organisations, and EU member states. Four working groups chaired by independent experts iteratively crafted the framework, with the final version expected by May 1, 2025.

While the Code represents a proactive approach to AI governance, industry stakeholders have expressed concerns about ambiguous risk thresholds and potential compliance costs, particularly for smaller developers.

Key enforcement mechanisms include potential fines of up to 3% of global turnover or €15 million for violations, underscoring the regulatory body's commitment to responsible AI development.

The process is ongoing, but stakeholders, particularly in the technology and creative sectors, have already reacted:

Boniface de Champris from the Computer & Communications Industry Association (CCIA) commented that the draft makes “limited progress” and continues to fall “short of providing companies with the legal certainty that's needed to drive AI innovation in Europe.”

“Unfortunately, the latest draft raises serious questions about whether no code is better than this code,” said Iacob Gammeltoft from News Media Europe, capturing the industry's sentiment. The draft's approach to copyright and content usage remains problematic for creators.

The core issue persists: insufficient protections for creative works. Laura Lazaro Cabrera from the Centre for Democracy & Technology Europe highlighted a critical weakness: "The third draft confirms what many of us had feared—that consideration and mitigation of the most serious fundamental rights risks would remain optional for general-purpose AI model providers."

With feedback accepted until March 30 and a final version expected in May, the creative industry remains cautiously engaged. A previous warning from 15 European rightsholders that the draft contradicts copyright law underscores the ongoing tensions.

The Code will be enforced by the EU AI Office starting August 2025. Its guidelines address systemic risks and compliance requirements for AI providers, targeting large generative models like GPT-4 and mandating stringent transparency, copyright protection, and risk mitigation protocols.

The European Commission views this Code as an interim compliance tool, with harmonized standards anticipated by August 2027, reflecting a balanced approach to fostering innovation while implementing critical safeguards.



Want to know more? Check out the European Commission's Q&A page.

Image Credit: European Parliament