The draft General-Purpose AI Code of Practice, released on Thursday, sets out guidelines for companies developing advanced AI models. It forms part of the EU's AI Act, which became law in August 2024.
The code targets major AI developers, including OpenAI, Google, and Meta, providing detailed guidance on how they should manage risks and ensure compliance with EU regulations.
Key measures include requirements for companies to be transparent about how their AI models are trained, including disclosure of data sources and web crawling methods. The draft also establishes frameworks for assessing potential risks, from cybersecurity threats to discrimination concerns.
Companies developing the most powerful AI systems, those trained using computing power above a specified threshold, will face additional obligations to mitigate what the EU terms "systemic risks."
The draft is open for stakeholder feedback until November 28, 2024, with the final version expected by May 2025. Companies failing to comply with the eventual regulations could face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Small and medium-sized enterprises (SMEs) will receive some leniency under the rules: for them, the applicable fine is the lower, rather than the higher, of the fixed sum and the percentage of turnover.
The legislation creates a three-tier penalty system: the steepest fines, up to €35 million or 7% of turnover, apply to violations of the Act's prohibited practices; breaches of other obligations can draw up to €15 million or 3%; and supplying incorrect information to regulators can draw up to €7.5 million or 1%.
Working groups will meet later this month to refine the proposals based on input from industry experts and civil society organisations. Nearly 1,000 stakeholders will participate in these discussions, including representatives from EU Member States and international observers.
The EU is also seeking, among other things, simplified compliance options for SMEs and startups, specific exemptions for open-source model providers, and flexibility to adapt the rules to evolving technology.
The EU aims to have the first set of compliance measures in force by August 2025, with additional requirements for the most powerful AI systems coming into effect in 2027.