EU Orders X to Preserve Data Amid Sexualised Deepfakes Furore

The European Commission has ordered X to preserve data relating to its Grok AI chatbot after a report revealed the widespread generation of sexually manipulated images, the large majority depicting women and, in some cases, apparent minors. The order supports the Commission's ongoing assessment of X's compliance with the Digital Services Act.

By Matthaios Tsimitakis
January 12, 2026

The European Commission has ordered X (formerly Twitter) to retain internal documents and data related to its AI chatbot Grok until the end of 2026, amid concerns that the tool has been used to generate and circulate non-consensual sexualised images, including content involving women and minors. 

The instruction, confirmed by Commission spokesperson Thomas Regnier, is intended to ensure that regulators can access relevant evidence while the Commission continues assessing X’s compliance with the EU’s Digital Services Act (DSA). Regnier stressed that no new formal investigation into Grok has been opened at this stage.

"This is saying to a platform, keep your internal documents, don't get rid of them, because we have doubts about your compliance ... and we need to be able to have access to them if we request it explicitly," Regnier said.

The Commission’s move extends an earlier evidence-retention request linked to its ongoing scrutiny of X’s systems, including the platform’s recommender algorithms and the handling of illegal content. That earlier request formed part of the Commission’s stepped-up supervision of X as a Very Large Online Platform (VLOP) under the DSA.

Under the DSA’s supervisory framework, the Commission can take monitoring and investigatory actions, including ordering platforms to preserve documents and data necessary to assess compliance. These powers sit within the DSA’s enforcement provisions (commonly discussed in relation to Articles 67–72).


Regulatory pressure intensified following a 5 January 2026 report by the Paris-based NGO AI Forensics, titled Grok Unleashed. The organisation analysed approximately 50,000 user requests mentioning @Grok and more than 20,000 images generated by the chatbot between 25 December 2025 and 1 January 2026.

Key findings included: 53% of analysed images depicted individuals in minimal attire such as underwear or bikinis, and 81% of those individuals were classified as women. Around 2% of images were assessed, using automated age-estimation tools, as depicting persons appearing to be 18 or younger. AI Forensics reported that manual review of this subset revealed sexually suggestive content involving very young women and girls.

The report also documented instances in which Grok generated Nazi and ISIS propaganda material. The findings were subsequently reported by international media outlets, which highlighted the presence of sexualised images involving children.


X’s response and criticism

On 9 January 2026, X restricted Grok’s image generation and editing features on the platform to paying subscribers, with the chatbot responding to users that “image generation and editing are currently limited to paying subscribers.”

However, multiple outlets reported that similar functionality could still be accessed through other routes, including the standalone Grok website, prompting criticism that the measures were insufficient.

The financial and regulatory stakes for X remain high. In December 2025, the company was fined €120 million by the European Commission for transparency violations under the DSA. While failures to comply with specific investigatory or monitoring measures attract lower penalty ceilings, a formal DSA non-compliance decision, particularly one involving systemic risk failures, can carry fines of up to 6% of a company's global annual turnover.

For now, the Commission says the Grok-related data-retention order is about preserving evidence, not prejudging the outcome of its ongoing assessment.