Nikhil Naren*

The emergence of AI-generated creations has spurred debate surrounding the need for special rights tailored to these unique works. Unlike traditional intellectual property concepts, such as copyright or patent law, AI-generated creations challenge conventional notions of authorship and ownership. This article aims to explain why sui generis rights could offer a specialised legal framework to address the complex nature of AI-generated content, acknowledging the role of both human creators and AI systems in the creative process. Such rights would foster innovation, incentivise investment in AI technology, and ensure equitable distribution of benefits. However, implementing sui generis rights for AI creations requires careful consideration of ethical, legal, and technical implications.
Introduction
The rapid development of artificial intelligence (“AI”) has raised questions about the legal status of AI-created works. Since AI is becoming increasingly capable of creating new content with minimal or no human input, scholars, governments, companies, and others are grappling with how to protect AI-generated works within the framework of existing intellectual property (IP) systems. In their papers, Ginsburg, Budiardjo, and Kaminski frame the debate as a struggle between traditional notions of authorship, which argue for the exclusion of AI, on the one hand, and a new, dynamic interpretation of authorship suited to the AI age on the other.
How, then, can this impasse be resolved? Is there a place for sui generis rights to protect AI-generated works? The World Intellectual Property Organisation’s (WIPO) glossary defines sui generis as “of its own kind or class.” Scholars refer to sui generis IP rights as a legal system of protection that shares some characteristics with IP law but differs from it and is unique in how it enables protection of the subject matter in question.
Gaps in the IP system leave AI-generated creative outputs exposed. Traditional IP rights may protect various aspects of AI: copyright can protect AI algorithms and original data compilations, while patent law can protect the cutting-edge technological aspects of AI inventions. However, there are gaps in the existing IP system. The current IP regime not only falls short of adequately differentiating between ‘AI-generated output’ and ‘AI-assisted output’; this lack of differentiation also creates a confusing vacuum in which AI-generated creative outputs are not adequately protected by traditional forms of IP protection.
This article proceeds in three parts. First, it defines AI outputs to highlight the need for their legal protection. Second, it analyses the existing multi-jurisdictional protection of AI-generated works to highlight the disparity and inadequacy of current laws. Finally, it discusses a proposed harmonised legal framework for determining the owner of AI-generated works. In sum, the article seeks to answer one question: could creating sui generis rights for AI be a way to protect AI-generated creative works?
Defining AI outputs
The 2020 WIPO paper entitled “Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence” distinguishes between AI-generated outputs and AI-assisted outputs. Outputs generated with material human intervention or direction are defined as AI-assisted outputs, while those generated without human intervention are defined as AI-generated outputs. This, however, begs the question of what constitutes “assistance.” Does human involvement in processing data inputs and training the system at the early stages of creation amount to “assistance” in its true sense?
To answer this, experts suggest the “Test of Foreseeability.” According to Bonadio and McDonagh, the test of foreseeability determines whether an AI merely assists in creating a creative output. An AI “assists” when the person using it foresees the “end creation,” or when a human guides the AI in a manner deemed sufficient to achieve a foreseeable creative output. Where there is such material human intervention, the output is considered an AI-assisted output. Conversely, if the output is generated without any human direction and the AI can make decisions while responding to unanticipated information or events, it is considered “AI-generated work.”
However, Daria Kim, Senior Research Fellow at the Max Planck Institute for Innovation and Competition, suggests that this contention is misplaced: knowledge of the end solution should never be a prerequisite, as that contradicts the very definition of problem-solving. Given the wealth of scientific experiments that have resulted in revolutionary inventions, it would be absurd to make foreseeability of the outcome a condition for the attribution of inventorship.
Notwithstanding the foreseeability test, there is still no accepted definition of what constitutes an AI-generated invention. Furthermore, current AI systems can generate works that draw on material already protected as ‘creative works’. For example, the AI art generator DreamUp has faced several IP-infringement suits. The plaintiff artists allege that the AI has been trained to create pseudo-original images based on five billion images scraped from the internet, much of which is already protected as the ‘creative work’ of various artists, thereby infringing their copyright. To apply the test of foreseeability properly, it is imperative to differentiate between AI-assisted and AI-generated outputs, and to understand how the respective computational processes were set up in each case.
The test of foreseeability can be combined with the two-fold test of originality to pave the path for adequate sui generis rights for AI creations. Given the insentient nature of AI, it should not be vested with moral rights in the creation, so as to protect the sanctity of a human’s special right to integrity. Further, the standards for determining infringement of AI-generated work should be lower, since AI systems cannot be recognised as authors (for now!) and therefore require less protection. Hence, when deciding whether to grant IP rights, experts suggest locating the role played by human intellect as the appropriate methodology.
Why is legal protection of AI-generated works necessary?
Scholars argue that the legal protection of AI-generated works is necessary to encourage investment and to incentivise AI developers to prioritise the development of new algorithms.
On this point, Sautov and Marcus emphasise that economic interests can be secured only when developers know that the outputs of a complex algorithm they have created to compose new music or draw digital art will not be used by anyone for free. However, under no circumstances should the person who financed the development of the algorithm be regarded as the ‘owner’ of the work created by such AI, for the simple reason that the IP regime rewards the inventor, not the investor.
A patchwork of IP laws
Currently, IP laws around the world differ in certain aspects. While some countries, such as China and Ireland, grant rights over AI outputs, others grant such protection only if the right holder is a natural person (i.e., a human). In India, for example, only a natural person can be considered the author of a work protected by copyright law. In Rupendra Kashyap v. Jiwan Publishing House Pvt. Ltd., 1994 IAD Delhi 1, the Indian court observed that even where copyright in a work is claimed on behalf of an artificial person, authorship must ultimately be attributed to a natural person. The problem is compounded in countries like the United States, where the regulation of AI varies across state lines: Washington has enacted its Facial Recognition Law to regulate the use of AI, while Massachusetts Bill 1619 lays down its own guidelines for AI control. If the protection of AI-generated outputs is to be effective, such protection must be harmoniously developed and acknowledged across borders. Only then can the required levels of legal certainty be achieved and the disparity between IP law regimes be addressed.
Who is liable when things go wrong?
The issue of ‘locating responsibility’ further compounds the problems associated with differing AI laws across jurisdictions. Wherever there is a right, there is a responsibility. Yet even if rights were granted to AI systems, these systems could neither discharge any corresponding responsibility nor enforce those rights. Who will bear the responsibility for any threats, risks, harm, or wrongs done by an AI-generated output? For instance, can an AI be held responsible for creating an artistic work that offends the religious sentiments of a particular community?
The problem of locating responsibility may be solved by identifying the different actors involved in achieving the output, such as those who developed the algorithms, analysed the data, input data, and those human developers who command/direct the AI during the training phase.
Ensuring the integrity of AI training data
In order to protect the integrity of AI training data, due care must be given to the data sets on which algorithms are trained, to ensure they are free from ethical and other biases. If an algorithm is not trained on broad and clean data, it may exhibit biases against certain groups of people. For example, facial recognition technology trained on data sets from the West but deployed in South Asian countries may exhibit racial bias.
In general, programmers, owners, and users of AI machines are unwilling to accept liability for the acts performed by the algorithms they program, own, or use to create literary and artistic outputs. As Christoph Bartneck and others discuss in the chapter “Responsibility and Liability in the Case of AI Systems,” to blame an agent for a wrong, the causal chain of events that led to the wrong must first be established. This is usually difficult for actions taken by an AI on the basis of its training data, which makes it correspondingly difficult to affix liability.
Towards a harmonised legal framework on copyright
The EU Directive 2001/29/EC (Recital 4) supports the view that a harmonised legal framework on copyright is essential to foster substantial investment in creativity and innovation for growth and increased competitiveness. Further harmonising the IP legal framework requires that countries around the world negotiate and agree on a minimum set of widely accepted international standards. WIPO plays a key role in convening such international negotiations. The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) was an essential and much-needed development. However, the agreement has no direct application to AI creations.
It is also to be recalled here that the purpose of IP laws is to advance human progress and creativity. The same can be traced from Article 27 of the Universal Declaration of Human Rights, which grants a right of protection to the moral and material interests of authors resulting from their scientific, literary, or artistic production. Therefore, an international process to ensure existing IP laws are fit for the AI age would establish greater legal certainty around the protection of AI-generated works.
Adopting sui generis rights is a tested solution to the problem of granting special rights to works that cannot be protected by traditional forms of intellectual property law. Take, for example, the sui generis protection granted to databases in the European Union. Directive 96/9/EC of 1996 harmonised database protection across the Member States of the European Union with a view to fostering the growth of the EU database ecosystem.
According to desk research, the Database Directive has been applied in various fields, including sports data, legal databases, lists of poems, automobile lists, air-travel service websites, and maps. While ensuring harmonised rights, the Directive also preserved freedoms for database users, whether makers, commercial entities, or individual consumers. With ongoing technological advancement, however, the scope of the Directive has become limited: it does not cover inventions by machines, which underscores the need for an advanced law to serve a similar purpose.
Who holds the rights?
Some scholars argue that when a scientist develops a machine-learning algorithm, s/he does not hold rights in the outputs so generated; rather, the rights should vest in the system itself. Stephen Thaler’s ‘Creativity Machine’, an AI product that not only imitates original and creative thinking but also produces it, is a prominent example. Even in such cases, however, the performance of these AI systems is restricted to the areas envisaged by their designers, even if the final output is unforeseeable. Established patent laws require a person to be named as inventor on a patent application, yet even if the designers of such systems are recognised as ‘co-inventors’, such a workaround does not fully address the issue at hand. Indeed, it further strengthens the argument for creating sui generis rights for such inventions.
A standard narrative arguing against crediting humans with machine-based inventions focuses on locating the ‘extent of human imagination’: because humans cannot foresee the results, the narrative claims, they cannot be deemed the inventors. I believe that current debates around ownership rights in AI need careful re-thinking, especially given the unpredictable and rapid evolution of AI systems. Key features of such re-thinking could include (a) a reduced term of protection, (b) treatment of AI-generated works as ‘performances’, and (c) a lower standard of protection.
Although the majority of AI systems at present are unable to complete any task without human intervention, it seems inevitable that, as AI systems become more autonomous, the debates surrounding how the IP system must evolve to protect the outputs of these systems will intensify.
The WIPO Technology Trends Report 2019 highlights the increasing number of AI-related patent applications filed globally. This reflects the global nature of the AI sector and the desire of applicants to protect and commercialise their patented AI-related inventions in international markets.
The UKIPO’s consultation report acknowledges that international harmonisation is a prerequisite to the reform of patent law. The report concluded that any change to the law on inventorship must be harmonised at a global level to provide a level playing field for inventors across different jurisdictions and to avoid disadvantaging inventors from jurisdictions where AI-generated works are not currently protectable under IP law.
Therefore, a more limited scope of protection for AI-generated output is necessary. The formulation of a new class of rights should ideally strike a balance between encouraging investments in AI technology development and ensuring public access to the benefits of AI-generated works. At the initial stages, due to relatively lower standards of protection, there may be uncertainties. However, the establishment of such sui generis rights would ultimately require a redesign of the global IP regime, ensuring consistency and harmonisation in the rights and legal protection granted to AI-related works.
*Nikhil Naren is a British Chevening Scholar; Assistant Professor at Jindal Global Law School; and Of Counsel at Scriboard, New Delhi. His research focuses on the intersection of law and technology and its impact on society. He regularly contributes to columns in leading national dailies and is invited by law schools to conduct sessions on varied aspects of technology law. Nikhil has also co-authored two books on the subject.