The European Commission has released the first draft of the Code of Practice on Transparency of AI-Generated Content to guide compliance with Article 50 of the AI Act. This voluntary code targets providers and deployers of generative AI systems, focusing on marking, detecting, and disclosing AI outputs like text, images, audio, and video to combat misinformation and deepfakes.
The European AI Office coordinates the drafting through two working groups: one for providers (ensuring machine-readable markings that are effective, interoperable, and robust) and one for deployers (requiring disclosures for deepfakes and for AI-generated public-interest text, unless a human takes editorial responsibility). Independent experts lead an inclusive process with stakeholders from industry, academia, civil society, and Member States, incorporating over 180 submissions; the process runs from November 2025 to May 2026, ahead of enforcement in August 2026.
Providers must mark AI outputs in detectable formats, accounting for content types, costs, and state-of-the-art standards. Deployers must disclose manipulated content resembling real entities (deepfakes) and AI-generated public-interest texts, with exceptions for editorially accountable publications. The first draft was published on December 17, 2025, following a kick-off in November. The Commission will collect feedback on the first draft from participants and observers to the Code of Practice until 23 January 2026; a second draft will be drawn up by mid-March 2026, and further workshops and revisions are expected to produce a final Code by June 2026. The Commission will issue parallel guidelines on scope, making this code a practical tool for demonstrating conformity.
The rules covering the transparency of AI-generated content will become applicable on 2 August 2026. Here is a quick guide:
The draft Code separates responsibilities between two groups:
> Providers: entities that place generative AI systems on the EU market or put them into service.
> Deployers: entities that use those systems to make content available to the public and are responsible for disclosure to users.
What the draft expects from AI providers
Providers are expected to ensure that AI-generated or AI-manipulated content is marked in machine-readable form. The draft explicitly rejects reliance on a single technique. Instead, it calls for a multi-layered approach to active marking, which may include:
- Embedded metadata where formats allow,
- Imperceptible watermarking,
- Fingerprinting or logging mechanisms,
- Alternative provenance approaches for text or formats that do not support metadata.
The draft stresses that marking techniques should be robust, including the ability to withstand common transformations such as compression or re-encoding, and should function across different platforms and tools.
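To make the first layer concrete, here is a minimal sketch of embedded-metadata marking using PNG text chunks via Pillow. The field names and values are illustrative assumptions, not a schema prescribed by the draft; real deployments would more likely follow an open provenance standard such as C2PA.

```python
# Minimal sketch: embedding machine-readable provenance metadata in a PNG.
# The keys ("ai_generated", "generator", "model") are illustrative only;
# the draft does not prescribe a schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_png(src_path: str, dst_path: str, generator: str, model: str) -> None:
    """Write an AI-provenance marker into a PNG's text chunks."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    meta.add_text("model", model)
    image.save(dst_path, pnginfo=meta)

def read_marking(path: str) -> dict:
    """Return any text-chunk metadata found in a PNG file."""
    return dict(Image.open(path).text)  # PNG text chunks, if present
```

As the draft's robustness requirement anticipates, plain metadata of this kind rarely survives re-encoding or screenshotting, which is precisely why it is treated as one layer among several rather than a sufficient technique on its own.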

Marking alone is not considered sufficient. Providers are also expected to ensure that AI-generated content can be detected. The draft expects providers to:
- Make available a free interface or API to verify whether content was generated by their systems,
- Return confidence scores and, where available, provenance information,
- Ensure detection mechanisms remain available even if a provider exits the market, including by making tools accessible to authorities.
In addition, the draft encourages the development of forensic detection mechanisms that do not rely solely on the presence of active markings, acknowledging that marks may be degraded or removed.
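By way of illustration, a provider's free verification interface might return a result shaped roughly like the following. The type, field names, and stub logic are assumptions; the draft specifies the capability (verification plus confidence scores and, where available, provenance), not a concrete API contract.

```python
# Hypothetical shape of a provider's free verification interface.
from dataclasses import dataclass, field

@dataclass
class VerificationResult:
    generated_by_us: bool   # was the content produced by this provider's systems?
    confidence: float       # confidence score in [0.0, 1.0]
    provenance: dict = field(default_factory=dict)  # e.g. model, timestamp, if available

def verify_content(content: bytes) -> VerificationResult:
    """Stub detector: look for an active marking, fall back to low confidence."""
    if b"ai_generated" in content:  # placeholder for real mark extraction
        return VerificationResult(True, 0.98, {"method": "active-marking"})
    return VerificationResult(False, 0.30, {"method": "forensic-fallback"})
```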
The draft specifies that marking and detection systems should meet four core criteria:
> Effectiveness,
> Reliability (including low false-positive and false-negative rates),
> Robustness against tampering or attack,
> Interoperability across systems and platforms.
Open standards and collaboration across the AI value chain are encouraged as a way to meet these goals.
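The reliability criterion, at least, can be quantified directly. A minimal sketch of the false-positive and false-negative rates a provider might track when evaluating a detector on labelled samples:

```python
# Reliability in the draft's sense includes low false-positive and
# false-negative rates. Labels and predictions are booleans:
# True = AI-generated, False = human-made.

def error_rates(labels: list[bool], predictions: list[bool]) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate) for a detector."""
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    negatives = sum(1 for y in labels if not y) or 1  # avoid division by zero
    positives = sum(1 for y in labels if y) or 1
    return fp / negatives, fn / positives

# Example: 2 human-made items, 2 AI-generated items
fpr, fnr = error_rates([False, False, True, True], [False, True, True, True])
print(f"FPR={fpr:.2f}, FNR={fnr:.2f}")  # FPR=0.50, FNR=0.00
```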
Providers are expected to maintain internal compliance frameworks. These include:
* Testing marking and detection systems before and after deployment,
* Monitoring failures or attacks,
* Training relevant staff,
* Cooperating with market surveillance authorities.
Use of third-party tools is permitted, but responsibility for compliance remains with the provider.
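As a rough illustration of pre-deployment testing, a provider might run round-trip checks that a marking survives common transformations. The functions below are hypothetical stand-ins for a real marking and detection pipeline.

```python
# Sketch of a pre-deployment check: a marking should survive a common
# transformation such as re-encoding. mark(), reencode() and detect()
# are placeholders for the provider's actual tooling.

def mark(content: bytes) -> bytes:
    return content + b"<ai-mark>"   # placeholder marking

def reencode(content: bytes) -> bytes:
    return bytes(content)           # placeholder "common transformation"

def detect(content: bytes) -> bool:
    return b"<ai-mark>" in content

def test_marking_survives_reencoding():
    marked = mark(b"some generated content")
    assert detect(reencode(marked)), "marking lost after re-encoding"

test_marking_survives_reencoding()
```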
What the draft expects from deployers
While providers handle technical marking and detection, deployers are responsible for public-facing disclosure. Deployers are expected to disclose AI involvement in a way that is:
- Clear,
- Accessible,
- Appropriate to context,
- Visible at the user’s first exposure to the content.
The draft introduces a common taxonomy distinguishing between:
- Fully AI-generated content, and
- AI-assisted content, with examples provided.
To support consistency, the draft includes an appendix proposing interim disclosure indicators, such as two-letter acronyms (“AI”, “KI”, “IA”), pending the development of a more standardised EU-wide symbol.
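Putting the taxonomy and the interim indicators together, a deployer's labelling logic might look something like the sketch below. The language-to-acronym mapping is an assumption for illustration, built on the acronyms the appendix proposes.

```python
# Sketch combining the draft's taxonomy with its interim indicators.
# The mapping below is illustrative; the appendix proposes acronyms such
# as "AI", "KI" and "IA" only pending an EU-wide standardised symbol.
from enum import Enum

class ContentOrigin(Enum):
    FULLY_AI_GENERATED = "fully AI-generated"
    AI_ASSISTED = "AI-assisted"

INTERIM_INDICATORS = {
    "en": "AI",  # English
    "de": "KI",  # German (Künstliche Intelligenz)
    "fr": "IA",  # French (intelligence artificielle)
    "es": "IA",  # Spanish (inteligencia artificial)
}

def disclosure_label(origin: ContentOrigin, lang: str) -> str:
    """Compose a user-facing label from origin and interim indicator."""
    acronym = INTERIM_INDICATORS.get(lang, "AI")
    return f"[{acronym}] {origin.value}"

print(disclosure_label(ContentOrigin.AI_ASSISTED, "de"))  # [KI] AI-assisted
```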
Deepfakes: stricter disclosure rules

The draft distinguishes between general AI-generated content and deepfakes, imposing stricter and more explicit disclosure obligations on the latter. It requires disclosure at first exposure and sets out modality-specific approaches, including:
* for live video, a continuous visual indicator alongside an opening disclaimer,
* for recorded video, a visible indicator or disclaimer,
* for images, a fixed visible label,
* for audio, an audible disclaimer, repeated for longer content.
The draft allows proportionate treatment for artistic, fictional or satirical works, while still requiring that AI involvement be disclosed.
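These modality-specific expectations lend themselves to a simple lookup. The sketch below restates the four approaches above as data; the field names are illustrative, not taken from the draft.

```python
# Sketch of the draft's modality-specific deepfake disclosure measures
# as a lookup table; field names are illustrative.

DEEPFAKE_DISCLOSURE = {
    "live_video":     {"indicator": "continuous visual indicator",
                       "extra": "opening disclaimer"},
    "recorded_video": {"indicator": "visible indicator or disclaimer",
                       "extra": None},
    "image":          {"indicator": "fixed visible label",
                       "extra": None},
    "audio":          {"indicator": "audible disclaimer",
                       "extra": "repeated for longer content"},
}

def required_disclosure(modality: str) -> dict:
    """Look up the disclosure measures for a given content modality."""
    try:
        return DEEPFAKE_DISCLOSURE[modality]
    except KeyError:
        raise ValueError(f"unknown modality: {modality}") from None
```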
For text on matters of public interest, the draft establishes a clear rule: AI-generated text must be disclosed, unless a human takes editorial responsibility. Where a deployer claims that exemption, the draft requires documentation showing:
* The identity of the person with editorial responsibility,
* The review measures applied,
* The date of approval,
* A reference to the final approved version.
Without this documentation, disclosure remains mandatory.
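A deployer claiming the exemption could keep a structured record along these lines; the structure is an assumption modelled on the four items the draft requires.

```python
# Sketch of the documentation backing an editorial-responsibility claim
# for AI-generated public-interest text; the record shape is assumed.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EditorialResponsibilityRecord:
    responsible_person: str  # identity of the person with editorial responsibility
    review_measures: str     # review measures applied to the text
    approval_date: date      # date of approval
    approved_version: str    # reference to the final approved version

def exemption_applies(record: EditorialResponsibilityRecord | None) -> bool:
    """Without complete documentation, disclosure remains mandatory."""
    return record is not None and all(
        (record.responsible_person, record.review_measures,
         record.approval_date, record.approved_version)
    )
```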
What the draft does — and does not — do
The draft Code does not replace the AI Act, and it does not claim to provide definitive proof of legal compliance. Instead, it offers a structured implementation blueprint for Article 50 transparency obligations. Many definitions and enforcement details are left to forthcoming guidance and future revisions. The draft itself is explicitly framed as iterative and subject to change.
Disclaimer: This is a summary of the draft Code of Practice on marking and labelling of AI-generated content. Please read the full document here.
Images: CC BY 4.0 (Attribution 4.0 International Deed)