As the European Union forges ahead with enforcing the AI Act to govern the development and deployment of artificial intelligence, the debate over the future of intellectual property is becoming urgent. In this long read, Luxembourg-based legal expert Erwin Sotiri explains why.
By Matthaios Tsimitakis

The European Union's new Artificial Intelligence (AI) Act, which came into force on August 1, 2024, aims to regulate the development and use of AI systems. While it is intended to protect EU citizens from safety and security risks, it has raised concerns in the creative industries about its impact on intellectual property (IP) rights.
One of the Act's primary provisions requires providers of general-purpose AI models to publish a sufficiently detailed summary of the content used to train them. This is intended to promote transparency and accountability. However, creative organizations worry that it could jeopardize their intellectual property rights and commercial secrets. Copyright itself first appeared in Europe in the 16th century, but it was not internationalized until the Berne Convention of 1886, which is still in effect today. The Berne Convention protects (human) expression as an art form, while excluding ideas and styles from protection.
This tension is at the heart of the EU's strong regulatory ambition. Writers, painters, and other artists claim that these AI systems are co-opting their styles and ideas, jeopardising their livelihoods. Proponents of the technology, however, argue that these tools are simply extensions of the creative process, with the potential to open up new avenues of artistic expression.
Finding a solution to this conflict of interest will pose a huge challenge to the EU's regulatory structure. Various approaches are under consideration to settle the ownership of AI-generated work. Some propose automatically assigning it to the "original authors," though this presents logistical problems. Others propose industry-regulator "sandboxes" as controlled frameworks to stimulate innovation while addressing copyright concerns. The decisions made in Brussels will have far-reaching consequences beyond Europe, setting precedents that will shape the global creative scene.
Erwin SOTIRI is a distinguished Luxembourg lawyer with over 20 years of experience specialising in Fintech, Crypto and Digital Assets, Intellectual Property, and Open-Source Software. He has advised numerous high-profile clients on the legal implications of the digital asset space. He is a strong advocate for open-source software and copyright protection, with a deep knowledge of the legal issues surrounding these topics. He regularly comments on and analyses the challenges AI is posing to the creative industries, advocating for a middle ground between innovation and the protection of creativity.
The future of copyright law, and the very idea of creativity, are at stake in an algorithmically driven society. As the EU's AI legislation takes effect, stakeholders are debating how to strike a balance between fair remuneration for human intellect and the revolutionary potential of the new technology. The answers they find could fundamentally alter our view of authorship, inspiration, and the core of artistic expression.
C.U. The new AI Act will influence the creative industries. How do you expect its general rules to be implemented in different sectors? Animation studios, for example, are creating their own ethical codes, but they're just testing the waters. How do you expect the Act to be implemented?
Er.S. The EU AI Act, politically agreed in December 2023 and formally adopted in May 2024 after extensive negotiations, is expected to have a significant impact on various industries, including the creative sector. Its implementation across different sectors is likely to be nuanced and sector-specific. The Act is structured around two main parts:
AI Classification: This section categorises AI systems based on their potential risks, into "unacceptable risk," "high risk," "limited risk," and "minimal risk." This classification system is accepted by most stakeholders; a schematic sketch of the tiered logic follows this list.
Regulatory Framework: This part outlines how different AI systems should be regulated based on their risk classification.
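To make the tiered logic concrete, here is a minimal illustrative sketch in Python. The tier names follow the Act's risk-based approach, but the example systems and obligations are simplified summaries for exposition, not the Act's own text:

```python
# Illustrative only: tiers follow the AI Act's risk-based approach;
# the example systems and obligations are simplified summaries.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "consequence": "prohibited outright",
    },
    "high": {
        "examples": ["AI in recruitment", "credit scoring"],
        "consequence": "conformity assessment, registration, human oversight",
    },
    "limited": {
        "examples": ["chatbots", "generative AI outputs"],
        "consequence": "transparency duties (disclose AI interaction/content)",
    },
    "minimal": {
        "examples": ["spam filters", "AI in video games"],
        "consequence": "no specific obligations; voluntary codes of conduct",
    },
}

def obligations_for(tier: str) -> str:
    """Return the headline regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]["consequence"]

print(obligations_for("high"))
# -> conformity assessment, registration, human oversight
```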
One of the main challenges in drafting the Act was finding a balance between ensuring adequate oversight and not stifling innovation, particularly in less sensitive areas like the creative industries. For sectors which were already developing their ethical codes, the Act is likely to serve as a broader framework. These industries may need to align their existing practices with the Act's requirements, particularly in areas such as transparency, data governance, and human oversight.
The French position during negotiations was particularly complex. France sought strict regulations on police use of AI, partly because of security concerns surrounding events like the 2024 Olympics. At the same time, they advocated for lighter regulation of "foundation models" to encourage innovation.
The final version of the Act aims to strike a balance between these competing interests. It maintains the risk-based approach while providing a regulatory framework that allows for innovation in less sensitive areas, including the creative industries.
As the Act is implemented, we can expect to see more detailed guidelines emerge for specific sectors. Creative industries like animation studios will likely need to ensure their AI use aligns with the Act's principles, particularly in areas such as transparency and ethical considerations.
C.U. I understand the distinction at the government level, but what does that mean at the industry level? Will some developers create AI for the government and others for industry, with a clear separation? What are the key copyright challenges and potential solutions surrounding the use of copyrighted data in training AI models?
Er.S. I recently wrote an article about the copyright issues in the AI Act, published two weeks ago in a review called Pin Code Luxembourg. I explained that copyright is a critical issue because we need to create an economic model that fairly compensates rights holders while allowing the use of data in foundation models. It's a contentious issue, and the AI Act doesn't provide a definitive answer, although there are some hints.
One such approach is a prior-authorisation process, which would require rights holders to approve each use of their intellectual property for training AI models rather than issuing a blanket licence. This is challenging, however: many rights holders may be unwilling to allow any usage, and collective rights organisations struggle to coordinate authorisations at this scale, a coordination problem that already proved impracticable in the first Internet era.
Another approach we may need to consider is a financial compensation system, similar to the private-copy levies employed in other industries. This might involve charging a global fee for training AI models, or requiring AI systems to train only on synthetic data rather than copyrighted material. The copyright question has yet to be resolved, and finding an acceptable solution will most likely require further negotiation and a comprehensive, industry-wide plan.
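To make the levy idea concrete, here is a deliberately toy sketch. The fee, the catalogue counts, and the pro-rata apportionment rule are all hypothetical assumptions for illustration, not anything the Act or any legislator has proposed:

```python
# Toy model of a private-copy-style levy applied to AI training.
# All figures and the pro-rata rule are hypothetical assumptions.

def apportion_levy(global_fee: float, works_per_holder: dict[str, int]) -> dict[str, float]:
    """Split a global training levy among rights holders,
    proportionally to the number of their works in the corpus."""
    total_works = sum(works_per_holder.values())
    return {
        holder: global_fee * count / total_works
        for holder, count in works_per_holder.items()
    }

corpus = {"Collecting Society A": 600_000, "Publisher B": 300_000, "Archive C": 100_000}
print(apportion_levy(1_000_000.0, corpus))
# {'Collecting Society A': 600000.0, 'Publisher B': 300000.0, 'Archive C': 100000.0}
```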
C.U. Is the idea of a global benchmark related to what you're referring to? And how is the industry responding to these regulatory developments? Do you have any insights on that? I imagine it's still quite early, but are you getting any feedback from clients or others seeking guidance on how to prepare for the impending changes? They'll certainly be impacted.
Er.S. The rules are designed to phase in over a long period, from one to three years, and most actors are proceeding cautiously. By 2026, they will have worked it out. Even if there appears to be little progress in that direction right now, it is possible that most of the progress is taking place outside the EU.
C.U. That is another key factor that creates an issue, both in terms of the data being transferred from the EU to the US and in terms of how you regulate these companies, which are US companies, right? How do you tax them? Is there an issue there?
Er.S. The issue of data flows between the EU and the United States, particularly of personal data, is complex and multifaceted. It involves not only data protection problems but also regulatory and tax issues. The EU's General Data Protection Regulation (GDPR) already establishes strict requirements for the transfer of personal data outside the EU, including to the United States, and these apply to all companies that process data from EU residents, regardless of where they are headquartered.
The primary challenge is reconciling the EU's strict data protection rules with the US legal framework, which has different standards for data privacy and government access to data. The right to erasure (or "right to be forgotten") under the GDPR is especially difficult to implement with certain technologies, such as blockchain, because of their immutability.
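A minimal sketch of why erasure is hard on a blockchain-style ledger (simplified: a real blockchain adds distribution and consensus, which make tampering harder still). Each record commits to the hash of its predecessor, so deleting or editing one entry invalidates every later block:

```python
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    """Each block's hash commits to its data and to the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

records = ["alice: consent given", "alice: profile data", "bob: order #42"]

# Build the chain: every block depends on all blocks before it.
chain, prev = [], "0" * 64
for data in records:
    prev = block_hash(data, prev)
    chain.append(prev)

# "Erasing" Alice's profile data changes every subsequent hash,
# so the edit is immediately detectable by anyone holding the chain.
tampered, prev = [], "0" * 64
for data in ["alice: consent given", "<erased>", "bob: order #42"]:
    prev = block_hash(data, prev)
    tampered.append(prev)

print(chain[2] == tampered[2])  # False: the rest of the chain no longer verifies
```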
The EU also faces difficulties in effectively regulating and taxing US technology companies that operate in Europe. This is part of a larger global conversation about digital taxation. The OECD-led global tax reform agreement, signed in 2021, seeks to address some of these difficulties by requiring large multinational corporations, particularly digital companies, to pay taxes where they operate and earn profits.
--
C.U. There's a philosophical problem behind this discussion: a contradiction in how the industry perceives data, what they do with it, and how stakeholders are dealing with it. For example, the writers in the US were on strike, and their European counterparts followed. They argued that generative models are violating their copyright, whereas the industry says they're not: they're not using the rights, or even the data as such, just remixing them through statistical models. So how can that conflict be resolved? I'm curious to hear your thoughts.
Er.S. You've raised an important and challenging problem that is at the centre of the current discussion between content providers and the AI sector. The core of this disagreement lies in how we interpret 'copying' in the age of AI.
AI models, particularly large language models and generative AI, work at a very fundamental level: they are not merely copying and pasting text; rather, they are analysing patterns, styles, and structures on an almost atomic scale. In some ways, they attempt to replicate the inspiration or substance of original works rather than copying them verbatim.
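A toy illustration of that distinction, using a bigram Markov chain; this is vastly simpler than a real language model, but it shows the principle that what is stored is transition statistics, not documents:

```python
import random
from collections import defaultdict

# "Training": record which word follows which. The model keeps
# transition statistics, not the documents themselves.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

# "Generation": sample from the learned statistics to produce a
# new sequence that recombines patterns rather than retrieving text.
random.seed(1)
word, output = "the", ["the"]
for _ in range(7):
    followers = transitions.get(word)
    if not followers:  # dead end: no word ever followed this one
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))  # e.g. "the mat and the dog sat on the"
```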
This presents a challenge because traditional copyright laws do not protect ideas, styles, or inspiration; they protect the particular expression of those ideas. After all, human creators have always built on existing concepts and techniques to create new works. That is an essential component of the creative process.
However, the scale and speed with which AI can accomplish this raises new challenges. Content creators claim that, while their particular ideas may not be protected, their distinct style or signature is the result of years of effort and should be safeguarded.
The AI industry, on the other hand, claims that it is not directly exploiting copyrighted material but "remixing" or "altering" data using statistical models. It contends that the output of AI is essentially novel and does not violate existing copyrights.
Resolving this conflict is difficult. To handle the unique issues raised by AI, we will very likely need to adapt our legal systems, and we may need to develop new mechanisms to credit or compensate the original creators whose works contribute to AI training data.
Finally, we need to encourage dialogue among content creators, the AI sector, and policymakers. The goal should be to strike a balance that protects creators' rights while also allowing for technical innovation. It's a tough problem, and I anticipate more disputes and legal challenges as AI technology advances.
C.U. You could argue that an artist like Vincent Van Gogh created a great work in a certain time and place, so you can't just take it out of that context. To understand the work, you have to understand the context. But the industry will say they're just using patterns, not dealing with originality. In the digital world, the distinction between original and copy has disappeared. It's a complex dilemma, an inherent tension the industry is grappling with.
Er.S. The tension between artistic context and the AI industry's approach to data is indeed at the heart of the current debate.
The crux of the problem lies in how we define and protect originality in the digital age, particularly when it comes to AI-generated content. This issue has been brought into sharp focus by recent decisions from the US Copyright Office (USCO).
Currently, the USCO does not recognize machine-generated works as copyrightable, even if they are merely a minor component of a larger, mostly human-created work. This approach has far-reaching consequences and has provoked intense debate.
There is a case to be made that the USCO may be overstepping its authority here. They may be acting outside of their customary competence by determining what is and is not copyrightable in this new technological context. This approach could potentially conflict with international copyright agreements, which often have a broader view of what constitutes protectable work.
What we're witnessing is a philosophical dilemma developing within the limits of our current legal structure. On the one hand, we have the traditional view of art and creativity, which emphasises context, intention, and human expression. On the other hand, we are witnessing a new technological reality in which the distinction between original and derivative work is becoming increasingly blurred.
The AI industry claims that it is simply applying patterns and not dealing with the concept of originality in the conventional sense. However, this viewpoint does not fully address the concerns of artists and creators who believe their work is being used without proper credit or compensation.
Indeed, there are currently no definitive answers in this debate. We're in new territory, seeking to apply legal and ethical frameworks developed for a pre-AI world to rapidly evolving technologies.
Moving forward, we'll need to have extensive discussions with all parties, including artists, AI developers, legal experts, and policymakers. Given these new tools, we may need to re-examine our fundamental understandings of notions such as creativity, originality, and authorship.
Finally, finding a solution will necessitate balancing the need to safeguard and incentivise human creativity against the potential benefits of AI-powered creativity. It's a challenge that will probably occupy us for years as we navigate this new digital terrain.
C.U. That's why it's so hard to regulate in the first place, right?
Er.S. Yes, because it is difficult to regulate this sector without compromising earlier copyright standards established through international treaties over many years. You would essentially have to move away from some of the building blocks of copyright and establish new legal concepts that would have to be universally accepted.
C.U. In the analogue world, there was the concept of "fair use", or the commons: a type of use outside the scope of copyright regulations. Now we're discussing how that translates to the digital world, particularly in terms of production and consumption.
Er.S. The transition from analogue to digital, and now to AI-generated content, has indeed complicated our understanding of concepts like fair use.
First, let's address the fundamental issue: for AI-generated work to be licensed, it needs to be established as a copyrightable entity. This is our primary hurdle right now.
The U.S. Copyright Office has stated that it will not register works produced by an AI system without human involvement. They require human authorship as a prerequisite for copyright protection. This stance has sparked considerable debate in legal and creative circles.
I believe we need to reevaluate this position. While I understand the concerns, it's important to note that machines have long been tools in the creative process. Photography, for instance, relies heavily on technology, yet photographs are copyrightable based on elements like composition and angle.
There's potential for accepting AI-created works as copyrightable, provided certain conditions are met. Human authorship should still play a crucial role. For example, crafting a detailed prompt for an AI system could be considered a creative act. The human provides the creative impetus and direction, while the AI assists in execution.
The objective is to strike a balance between human creativity and technological assistance. We must safeguard the artistic integrity and creative input of human authors while also acknowledging the distinctive contributions of AI systems.
Regarding fair use in the digital age, it's becoming increasingly complex. The doctrine of fair use still exists, but its application to AI-generated content is unclear. Questions arise about whether using copyrighted material to train AI models falls under fair use, and how to apply fair use principles to AI-generated outputs.
Moving forward, we need to establish clear guidelines and criteria for copyright eligibility in AI-assisted works. This would help foster innovation and encourage collaboration between humans and machines while protecting the rights of creators.
It's also worth noting that this isn't just a U.S. issue. The World Intellectual Property Organization (WIPO) is actively discussing these challenges, and different countries are taking various approaches. For instance, the UK has an exception to copyright for text and data mining for non-commercial research, which could have implications for AI training.
Ultimately, we're in uncharted territory, and our legal frameworks are still catching up to the technology. It's a complex issue that will require an ongoing dialogue between creators, technologists, and lawmakers.
--
C.U. So the prompt is essentially becoming a way of directing the machine on what to do—using the machine as a tool for artistic creation or copywriting. The machine is quite versatile; it can be used for a variety of tasks. You could use it to fill out an application, or you could use a prompt to generate a piece of art. And if the person crafting the prompt is skilled enough, this can work quite well. How should we conceptualise the role of the prompt in this environment, where the computer is the primary creative tool?
Er.S. You've brought up an intriguing element of our changing relationship with AI, particularly in artistic industries. The role of the prompt in AI-assisted production is both critical and varied.
In essence, the prompt has become the key link between human creativity and AI capabilities. It serves as a bridge between human intent and machine action, allowing us to harness AI systems' computational power and pattern-recognition abilities for creative purposes.
One could consider the prompt a form of creative direction. A well-crafted prompt guides the AI to produce material consistent with the user's creative goal, just as a film director guides performers and crew to realise a vision. This elevates prompt engineering to a creative practice in its own right.
The versatility you mentioned is crucial. Whether we are using AI for artistic production, copywriting, or more routine tasks like filling out applications, the prompt is how we control the AI's output. It is where human creativity, word choice, and creative direction come into play.
However, it is important to emphasise that developing effective prompts is difficult. It requires a thorough understanding of both the AI system's capabilities and the intended goal. Prompt engineering is emerging as a skill in its own right, combining programming, linguistics, and domain expertise.
In the context of artistic creation, the prompt becomes a type of meta-creation. The artist does not directly create the final artefact, but rather the instructions that will lead to it. This raises important questions about authorship and the creative process.
We should also consider the iterative nature of working with AI. Often, the initial output does not entirely capture the creator's vision, necessitating further refinement of the prompt. This back-and-forth between human and machine can be seen as a new form of artistic collaboration.
Looking ahead, we may witness the development of more advanced prompt interfaces, possibly including visual or aural cues rather than just text. This may further blur the distinction between traditional creative tools and AI assistance.
Finally, while the computer may do the heavy lifting in terms of content generation, much of the creative decision-making happens when the human crafts the prompt. It is a symbiotic partnership in which human creativity directs machine capabilities to produce results that neither could achieve alone.
This shift in the creative process raises important concerns about the nature of creativity, authorship, and the role of technology in the arts. As AI advances, so will our grasp of these principles and our approach to utilising these powerful tools in creative endeavours.
C.U. In the earlier phases of the digital revolution, we were able to take various forms of creative work and reduce them down to digital data. Unrestricted by the original production or usage contexts, this process of "datafication" allowed us to combine and recombine this data in novel ways. Now, we're entering a new phase where this data is being "re-articulated" through the process of prompting AI models. The industry often conceptualises this as an "autopilot". But is it so?
Er.S. I would hesitate to call it an autopilot. As you say, datafication detached creative work from its original contexts of production and use, and prompting is now the mechanism through which that data is re-articulated; but the quality of the result still depends heavily on the human direction behind it.
A simple prompt will deliver a generic result, whereas a more precise, unique prompt can augment and expand on the human vision in novel ways. A generic prompt may work for something like filling out a form, but when it comes to generating unique art, a talented human prompter can use the AI's powers to realise their ideas beyond what a single human could. The machine augments and expands on human cues and vision. The challenge is that this blurs the distinction between traditional concepts of authorship and creativity. Do humans deserve the same acclaim as conventional artists if they provide a highly detailed prompt that greatly influences the final product? Is AI's contribution significant enough to be considered collaborative work? There is little in the way of explicit regulations or precedent to guide this.
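As a concrete illustration of that difference, consider the two prompts below. The `generate` function is a hypothetical placeholder for any text-to-image system, not a real API; the point is that the creative labour sits in the second string, not in the model call:

```python
# Hypothetical placeholder for any text-to-image backend; not a real API.
def generate(prompt: str) -> bytes:
    raise NotImplementedError("stand-in for an AI image-generation model")

# A generic prompt tends to yield a generic result:
generic = "a painting of a city at night"

# A precise, idiosyncratic prompt carries the author's creative choices
# (subject, vantage point, palette, mood, technique) into the output:
detailed = (
    "an oil painting of a rain-soaked tram terminus at 3 a.m., seen from "
    "a high balcony, sodium-orange lamps against teal shadows, thick "
    "impasto strokes, a single figure sheltering under a newspaper"
)

print(len(generic.split()), "vs", len(detailed.split()), "words of creative direction")
```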
The difficulty originates in content atomisation: data is divided into tiny components that are then reassembled to create new works. Some argue that this still amounts to using elements of the original data, which is why the controversy persists. It is not a straightforward situation, as both sides have good points.
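The dispute can be made concrete with a toy check (hypothetical strings, simple word n-gram matching): even output that is assembled statistically can contain verbatim fragments of a source, and it is precisely the legal weight of such fragments that the two sides contest:

```python
def shared_ngrams(source: str, output: str, n: int = 4) -> set[tuple[str, ...]]:
    """Return word n-grams that appear verbatim in both texts."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

# Hypothetical example strings, chosen only to show the mechanism.
source = "the quick brown fox jumps over the lazy dog"
output = "a painter saw the quick brown fox jumps across a field"

print(shared_ngrams(source, output))
# {('the', 'quick', 'brown', 'fox'), ('quick', 'brown', 'fox', 'jumps')}
```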
Finding a solution that adequately compensates the original creators whose work is being used could help close the gap. It would enable rights holders to allow their content to be used in these new generative models while also addressing copyright concerns. If we can reach an agreement that compensates the original creators fairly, they may be more willing to have their work used in this way. Resolving this comes down to finding a mutually acceptable way to make them whole while still allowing them to benefit from their work.
To address these issues, I believe we need a multi-faceted approach. First, we need to foster open dialogue between creators, AI developers, legal experts, and policymakers to better understand the challenges and potential solutions. Second, we should invest in research to study the impact of AI on creativity, authorship, and intellectual property. This will help inform evidence-based policy decisions.
Third, we must adapt our legal structures to address the particular issues raised by AI-generated content. This could include changing copyright laws, adopting new licensing arrangements, or developing AI-specific legislation. Finally, we must prioritise fairness and ethics in the development and deployment of AI systems, to ensure that the benefits are shared fairly among all stakeholders.
Addressing these issues will require collaboration, innovation, and a willingness to adapt. It won't be easy, but it's essential if we want to harness the full potential of AI while respecting the rights and contributions of human creators.
C.U. So how could that be resolved? Are there any proposals on that level?
Er.S. Yes, the French legislator’s proposal, introduced in September 2023, aimed to address the complex issues surrounding AI-generated content and copyright law. One of the key provisions in the proposal was to assign ownership of AI-generated works to the authors or assignees of the original works used to train the AI, in cases where the work was created without direct human intervention.
While this proposal seems to be a step in the right direction, it raises some practical concerns. The main challenge lies in determining the exact authors or assignees whose works contributed to the creation of a single AI-generated work. Current AI systems are often trained on vast datasets from numerous sources, making it difficult to trace the origin of each individual contribution.
As of now, there are no tools that can effectively extract specific content pieces from an AI-generated work and identify all the authors or copyright owners who contributed to its creation. Reverse-engineering the machine-generated work to determine the percentage of inspiration from each human author would be a complex and impractical task.
C.U. I guess you would expect long, high-profile legal battles to solve the problem, right? That's how these issues get resolved.
Er.S. Yes, I believe we will see significant legal battles over the licensing of AI-generated content, with sweeping decisions on one side or the other. This is a common pattern when new technologies disrupt established legal frameworks, as we saw with earlier innovations like file-sharing platforms and streaming services.
However, the focus right now is on figuring out the regulatory landscape for AI in the European Union. Under the AI Act, which was adopted by EU co-legislators in May 2024 and will apply from August 2, 2026, each member state must designate authorities that will report to the AI Board at the European level.
The AI Act provides for coordinated AI regulatory sandboxes at the national level to promote AI innovation across the EU. These sandboxes will offer a controlled environment in which enterprises may test and experiment with novel AI products and services under regulatory supervision.
The sandboxes will need to be developed in partnership with businesses, national governments, and the European AI Board. This collaborative strategy seeks to strike a balance between fostering AI innovation and providing proper monitoring and risk management.
One of the most difficult problems will be ensuring consistency and harmonisation among the member states. While the AI Act establishes a common framework, individual countries may differ in how they interpret and implement its rules, especially in the early phases.
Another difficulty will be developing the requisite competence and capacity within national bodies to supervise and regulate AI technologies effectively. This will necessitate major investment in training and recruiting people who are well-versed in both the technical aspects of AI and its legal and ethical implications.
Tensions may also arise between the desire to promote innovation and the need to defend fundamental rights and enforce accountability. Maintaining the appropriate balance will necessitate continuing dialogue and coordination among regulators, industry, and civil society players.
C.U. What about workers and creators?
Er.S. At this point, I don't believe that intellectual work is in immediate danger from AI. However, once robotics merges with AI, we may see a significant impact, primarily on manual jobs. Robotics has the potential to take over human jobs in areas like care and maintenance, which could lead to significant disruptions in the labor market.
I believe that the revolution in AI-assisted intellectual work will face less resistance compared to the potential disruption caused by AI-powered robotics in manual labor. Many professionals are already using AI tools to augment their work, such as doctors using AI for diagnostic assistance or lawyers using AI for legal research and document review.
However, we need to be proactive in addressing the ethical challenges that arise from the use of AI in these fields. For example, ensuring transparency and accountability in AI decision-making, preventing bias and discrimination, and protecting privacy and data security.
Private companies developing AI systems are largely motivated by commercial goals and may not necessarily prioritise ethical considerations. While many firms have set ethical rules and principles for AI development, these ideals have occasionally been set aside in the pursuit of profit or market dominance.
Moreover, the rapid pace of AI development and the competitive pressure to bring new products and services to market can sometimes lead to insufficient testing and oversight, potentially exacerbating ethical risks.
This is why we need a strong public discourse and regulatory framework to ensure that AI development and deployment are consistent with societal values and interests. Governments, civil society organisations, and academic institutions all play an important role in shaping the ethical landscape for AI and holding commercial actors accountable.