Brussels Tightens AI Labelling Rules as Deadline Looms for Tech Industry

The European Commission has published a streamlined second draft Code of Practice on AI-generated content labelling, imposing clearer obligations on tech providers and deployers to mark AI-created or manipulated media ahead of binding EU transparency rules taking effect on 2 August 2026.

By Creatives Unite Newsroom
May 12, 2026

Article 50 of the EU Artificial Intelligence Act sets out transparency obligations for providers and deployers of certain AI systems, including generative and interactive AI systems and deepfakes. These obligations are designed to reduce the risks of deception, impersonation and misinformation and to foster trust and integrity in the information ecosystem.

The Act requires that outputs of generative AI systems be identifiable as AI-generated or manipulated and that users be informed when content constitutes a deepfake or when AI-generated text is published to inform the public on matters of public interest. Article 50 also covers emotion recognition systems, requiring disclosure when AI analyses a person's emotional state. 

The transparency obligations under Article 50 complement the transparency rules applicable to general-purpose AI models under Articles 53 and 55, which focus on documentation and information provided to the AI Office and to downstream providers, as well as the transparency of training data inputs. 

The Code of Practice

To assist with compliance, the EU AI Office has initiated the development of a voluntary code of practice on transparency of AI-generated content, to be drafted by independent experts in an inclusive process. If approved by the Commission, the final code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations.

The first draft of the Code of Practice on marking and labelling of AI-generated content was published on 17 December 2025, ahead of these rules entering into application. It was developed through a collaborative process involving hundreds of participants from industry, academia, civil society and member states, including two working groups established in November 2025, and incorporated 187 written submissions from a public consultation, alongside three workshops and a review of expert studies.

The second draft of the Code, published on 5 March 2026 and open for stakeholder feedback until 30 March 2026, represents a decisive attempt to bridge the gap between legal obligation and technical reality. It has been streamlined and simplified, providing more flexibility for signatories, reducing the compliance burden, and incorporating further technical considerations to improve legal clarity and practicality. It promotes the use of open standards for AI content marking and an EU icon for labelling to simplify compliance and reduce costs.

The code is expected to be finalised by the beginning of June 2026. The rules covering the transparency of AI-generated content will become applicable on 2 August 2026. (Source: European Commission)

What Has Changed

The second draft makes material revisions across both its sections.

For AI system providers, the second draft mandates digitally signed, timestamped metadata — indicating whether content is AI-generated or manipulated and containing an interoperable identifier cross-referenceable by other layers. Fingerprinting or logging mechanisms are now framed explicitly as an optional additional measure to be implemented at the discretion of the signatory. 
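The draft does not prescribe a wire format for this metadata. As a purely illustrative sketch, under the assumption of a JSON record (all field names here are invented, and an HMAC stands in for the asymmetric signature a real deployment would use), digitally signed, timestamped marking might look like:

```python
import hashlib
import hmac
import json
import uuid
from datetime import datetime, timezone

def make_marking(content: bytes, signing_key: bytes) -> dict:
    """Build a hypothetical signed provenance record for a piece of content."""
    record = {
        "ai_generated": True,  # content is AI-generated or AI-manipulated
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "identifier": str(uuid.uuid4()),  # interoperable cross-reference ID
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC keeps the sketch dependency-free; a production system would use
    # an asymmetric scheme (e.g. Ed25519) so third parties can verify.
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_marking(record: dict, signing_key: bytes) -> bool:
    """Check the signature; any tampering with the fields invalidates it."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The signature binds the AI-generated flag, the content hash, the identifier and the timestamp together, so a downstream layer that cross-references the identifier can also detect whether the marking has been altered.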

General-purpose AI model providers are now "encouraged" rather than required to implement relevant marking techniques at the model level — a reclassification from mandatory to voluntary that increases the burden on system providers integrating such models, who cannot assume upstream models will arrive with built-in watermarking capabilities. 

For deployers, Section 2 of the draft focuses on labelling deepfakes and text publications on matters of public interest. Relative to the first draft, this section adopts a more flexible and practice-oriented approach, and the taxonomy distinguishing AI-generated content from AI-assisted content has been completely removed.

For creative works, meaning content that is clearly artistic, satirical or fictional, the rules apply more flexibly: labels must provide transparency without disrupting the display or the viewer's enjoyment of the work itself.

Enforcement and Significance

Though formally voluntary, the Code carries considerable regulatory weight in practice. The Code of Practice is likely to become a key reference point for regulators and courts when assessing compliance with the AI Act's transparency obligations. With respect to providers and deployers that adhere to the Code, the Commission has indicated it will focus its enforcement activities on monitoring adherence to the Code and that such signatories will benefit from increased trust from the Commission and other stakeholders. 

The second draft also introduces a cooperative obligation to develop an interoperable, provider-agnostic detection interface, combined with the development of a shared repository of public watermarks, metadata repository addresses, and detector addresses. The detection interface is required to be executable locally on a computer and must provide a common entry point to all detection mechanisms employed by providers of generative AI systems. A third and final version of the Code of Practice is expected by June 2026. 
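The draft does not specify what such a common entry point would look like in practice. As a minimal sketch (the class, registry structure and detector signature are all assumptions, not taken from the Code), a locally executable, provider-agnostic interface could fan a single query out to every registered provider detector:

```python
from dataclasses import dataclass
from typing import Callable

# Assumed detector contract for this sketch: raw content bytes in,
# "my provider's mark found" boolean out.
Detector = Callable[[bytes], bool]

@dataclass
class DetectionResult:
    provider: str
    marked: bool

class DetectionInterface:
    """Hypothetical common entry point: one local call consults every
    detector listed in a shared registry of provider detectors."""

    def __init__(self) -> None:
        self._registry: dict[str, Detector] = {}

    def register(self, provider: str, detector: Detector) -> None:
        self._registry[provider] = detector

    def detect(self, content: bytes) -> list[DetectionResult]:
        return [DetectionResult(p, d(content)) for p, d in self._registry.items()]

# Toy stand-ins for real watermark detectors.
iface = DetectionInterface()
iface.register("provider_a", lambda c: c.startswith(b"WM_A"))
iface.register("provider_b", lambda c: b"wm-b" in c)
results = iface.detect(b"WM_A example content")
```

The point of the shared repository described in the draft is exactly this registration step: a deployer or regulator runs one query locally and learns which, if any, provider's mark is present, without needing to know each provider's detection tooling in advance.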

The Commission will simultaneously issue separate non-binding guidelines covering the full scope of Article 50, addressing elements beyond what the Code covers. These guidelines on transparent AI systems, intended to clarify the scope of application, relevant legal definitions, transparency obligations, exceptions and related horizontal issues, are under preparation and expected to be published in the second quarter of 2026.


Image: Infinum, Attribution 4.0 International (CC BY 4.0)
LLMs were used to source and fact-check this story. CU wrote, edited and curated it.