European Lawmakers Approve Landmark AI Regulation

The European Parliament approved sweeping new regulations on artificial intelligence that would set guidelines for how companies can develop and deploy AI technologies across the continent. The rules, known as the AI Act, aim to ensure AI systems used in the EU are safe, transparent and grounded in human rights and European values.


June 15, 2023

The AI Act establishes a risk-based framework that places obligations on companies proportional to the level of risk an AI system poses. Systems deemed "high-risk", such as those used in employment, healthcare, transport, and the judiciary, would face the strictest rules, including requirements for human oversight and review. "Limited-risk" systems, such as chatbots and virtual assistants, would be subject to lighter transparency obligations around their development and training.

The law also establishes the world's first rules targeting "foundation models" such as the large language models behind OpenAI's ChatGPT. Developers of these models would need to assess risks to rights and safety before deployment and ensure their training data does not violate copyright law. Some lawmakers had pushed for an outright ban on systems like ChatGPT, citing concerns about their advanced capabilities, but the approved text takes a more balanced approach.

Copyright protection

The AI Act aims to tackle issues around copyright and intellectual property in AI systems, especially large language models trained on huge datasets. According to Thierry Breton, European Commissioner for the Internal Market, "We need to make sure AI systems are grounded in European values like fairness, transparency and accountability. That includes ensuring training data and models themselves do not violate intellectual property rights."

To address this, the law requires developers of "foundation models" like those behind ChatGPT to assess copyright risks in their training data and models before deployment. To counter the risk of copyright infringement, the legislation will require AI chatbot developers to disclose the copyrighted works by scientists, singers, artists, photographers, and journalists that were used to train their systems. They will also have to demonstrate that the data used to train their models was obtained and used lawfully.

Companies that fail to comply may be required to withdraw their applications immediately or face a fine of up to 7% of annual turnover, which could amount to hundreds of millions of euros for the largest technology firms.

Axel Voss, the European Parliament's lead negotiator on the AI Act, said, "We cannot have a situation where companies are ignoring copyright law and intellectual property rights in a race to develop bigger and more advanced AI models. The AI Act will bring order and responsibility to the development of AI in Europe."

The rules stop short of an outright ban on generative models but aim to force companies to be more thoughtful about data sourcing and licensing. "We want to enable innovation, not stifle it," Voss said. "But innovation must be grounded and responsible. Companies cannot ignore copyright law in a rush to develop ever-larger AI models."

The law reflects a tension between promoting AI development and protecting intellectual property. Striking a balance has been a key challenge in drafting the regulation. "There were calls for strict rules, even bans on some systems. We took a more balanced approach," Breton said. "The AI Act will spur responsible innovation in Europe, not hold it back."

As AI becomes more sophisticated, it can create content that is hard to distinguish from human-generated content, leading to concerns over misinformation and fake news. The EU has called on tech giants like Facebook and Google to label AI-generated content to enhance transparency and accountability. However, it remains to be seen how effective this measure will be, and whether tech companies will comply without a legal obligation.

"AI brings up a lot of questions about society, ethics, and the economy. But now is not the time to put anything on hold," said Thierry Breton, European Commissioner for the Internal Market. "On the contrary, we need to move quickly and take responsibility."

The law must now be negotiated with EU member states and the European Commission before taking effect, likely in 2024 at the earliest.

The world is watching

Roberta Metsola, president of the European Parliament, praised the Act as "legislation that will undoubtedly set the global standard for years to come." She said the EU now has the potential to set the tone globally, and that "a new age of scrutiny" has begun.

The rules follow China's release of draft principles on AI development last year and growing calls for guardrails to govern increasingly advanced and autonomous systems. The EU aims to set a global standard for AI regulation that protects citizens while allowing companies to continue innovating.

In a parallel discussion in the UK, the Labour Party has called for AI to be regulated in a similar way to medicine and nuclear power, with developers undergoing rigorous safety assessments before launching their AI systems.
