The European Union has reached a political agreement to streamline the implementation of its landmark Artificial Intelligence Act while introducing an outright ban on applications designed to generate non-consensual sexually explicit imagery – a measure that negotiators have described as the first such prohibition written directly into EU law.
The deal, reached on 7 May 2026 between the European Parliament and the Council of the EU, forms part of what Brussels has termed its "Digital Omnibus on AI" package, first proposed by the European Commission on 19 November 2025. The package is intended to reduce compliance costs and cut regulatory duplication without dismantling the risk-based framework at the core of the original Act. European institutions insisted the changes would help smaller companies compete with large technology groups while preserving existing protections for safety and fundamental rights.
Among the principal simplifications, the provisional agreement introduces fixed new deadlines for high-risk AI rules: companies operating stand-alone high-risk AI systems — covering areas including biometrics, critical infrastructure, education, employment, and border control — will be required to comply from 2 December 2027, while high-risk AI systems embedded within physical products must comply from 2 August 2028. The Commission's original proposal had framed the postponement as a flexible deferral, conditional on its confirmation that the necessary standards and support tools were available; the final agreement instead sets fixed statutory deadlines, amounting to roughly sixteen months of additional time for stand-alone systems and twenty-four months for embedded ones.
Small and medium-sized enterprises have until now benefited from lighter documentation requirements under the Act. Under the revised framework, those concessions will be extended to small mid-cap companies — defined as those with up to 500 employees. Regulators have also moved to clarify the relationship between the AI Act and parallel sectoral legislation, notably the Machinery Regulation, which will receive a full exemption from direct AI Act applicability; the Commission will instead gain delegated powers to add health and safety requirements for high-risk AI systems in that sector. The changes are designed to eliminate the fragmentation that has hampered compliance planning across industries.
The EU AI Office, the body charged with overseeing general-purpose AI models and very large online platforms and search engines, will see its supervisory powers strengthened. Critically, the agreement clarifies that the AI Office holds jurisdiction over AI systems where the same provider develops both the underlying general-purpose model and the deployed system. However, national authorities retain control in specific domains — law enforcement, border management, judicial authorities, and financial institutions — following disagreement during the trilogue over the boundaries of central oversight. Broader access to regulatory sandboxes, including a new EU-level sandbox, will also be extended to smaller innovators.
The second significant element of the package concerns what has become known as 'nudification applications' — tools that use artificial intelligence to generate sexually explicit or intimate images of identifiable individuals without their consent, including those based on photographs of real people. Crucially, the ban was not part of the European Commission's original proposal; it was introduced as an amendment by the European Parliament, principally at the instigation of the Renew Europe group, and survived into the final agreed text.
Michael McNamara, the Irish independent MEP who sits with the Renew Europe parliamentary group and served as co-rapporteur on the file in the Parliament's civil liberties committee, said the agreement delivered "real protections for EU citizens". He added that non-consensual intimate imagery constituted "a systemic harm being industrialised by AI", the overwhelming burden of which falls on women and girls. Arba Kokalari, the EPP rapporteur for the internal market committee, said: "With this agreement, we show that politics can move just as quickly as technology. We now make the AI rules more workable in practice, remove overlaps and pause the high-risk requirements."
"We are stepping up the protection of children by targeting risks linked to AI systems. This agreement is clear evidence of our institutions' ability to act swiftly and deliver on our commitments. It marks the first deliverable under the 'One Europe, One Market' roadmap agreed by the three institutions, well within the set deadline," said Marilena Raouna, deputy minister for European affairs of the Republic of Cyprus, in a written statement.
The revised rules explicitly prohibit AI systems on the EU market that are designed to create non-consensual sexually explicit or intimate content, as well as those that fail to implement reasonable safeguards against its production. The ban also covers AI-generated child sexual abuse material. Companies will be required to bring systems into compliance with the new prohibition by 2 December 2026, giving providers roughly six months to make necessary changes.
The provisional agreement must now be formally endorsed by both the Council and the European Parliament before entering into force. The co-legislators have stated their intention to complete formal adoption before 2 August 2026 — the date on which the existing AI Act's high-risk provisions would otherwise have taken effect under the original timetable, making the passage of the omnibus a matter of some legislative urgency.