Auto Robotic Assimilation: An Emerging Issue

Harshwardhan Thakur


Recent advances in Artificial Intelligence (AI), particularly in the field of evolutionary computing, are producing computers with increasingly complex problem-solving capabilities. Borrowing principles from evolution, molecular biology, neurology, and human cognitive science, technologists have evolved computers into “thinking machines” with the potential to perform creative and inventive tasks. We have now reached the point of an artificially ‘invented’ wide-spectrum antibiotic, one addressing a need that the World Health Organisation (WHO) has classified as critical for many years. We are seeing the realisation of a long-standing prediction: as computer capabilities continue to expand and the cost of computing continues to fall, machines will perform the majority of the work in the invention process and originate novel solutions not imagined by their human operators, transforming invention in ways not easily accommodated within current global patent systems.

In 2020, we saw two great applications of this principle: the invention of Halicin and the creation of the artificial intelligence known as DABUS. The latter was denied patent rights by the European Patent Office in February 2020, where the appellate tribunal answered in the negative the question of whether Artificial Intelligence can be granted patent rights of any extent, be it inventorship or ownership.

This paper discusses these two systems: their learning complexities and capacities, their ability to function and process information independently of any external input, and their ability to produce and apply original ideas to broad virtual datasets, yielding better, more accurate results than their human counterparts, more efficiently.

This paper will attempt to discuss and answer the following questions.

  1. Whether Artificial Intelligence can act truly independently of external actors.
  2. Whether it should be brought under the umbrella of the term “legal persons”.
  3. Whether creations by Artificial Intelligence should be protected by patent rights, or by new sui generis rights, if at all.



As we are aware at this point, AI algorithms are an incredible improvement over the brute-force brainstorming that humans are typically capable of, which is vastly slower, less effective and more haphazard than what the algorithms can achieve; this has led to AI being used as a tool in the invention process (Abbott 2016) (Plotkin 2009).

These algorithms are being used to simulate, evaluate and generate large numbers of potential solutions without the human constraints of bias or time (Plotkin 2009). This becomes gravely important in fields that involve extraordinarily complex methodologies and research with many variables to consider, such as biotechnology and nanotechnology (Sacha, Varona 2013). Moreover, these inventions may prove disruptive since, compared with typical research and development techniques, AI can easily draw on data sets across multiple and diverse fields. Even so, human ingenuity still plays a part in the process, in so far as setting targets, parameters and success criteria (Plotkin 2009).

AI has assisted in creating patentable inventions for several decades. With recent developments and the growth in computational ability we are witnessing, computers are being enabled to produce useful inventions and to become primary drivers of innovation in fields like pharmaceuticals, health, technology and electronics (Nosengo 2016). Today, commercial applications of this principle can be seen in companies like IProva, which uses AI algorithms in place of human intelligence for inventions and technology optimisation, and which claims that hundreds of patent applications have been filed by its customers based on the inventions it has delivered, some of which have been granted.

Let us now look at some use cases.

Artificial Neural Networks

Artificial Neural Networks (ANNs) are collections of binary switches that simulate the neurons of a biological brain. An example we can use here is the “Creativity Machine”, a precursor in the evolution of AI as an independent research algorithm, created by Dr. Stephen Thaler (Cohen 2013).

ANNs are organised in multiple, overlapping layers of abstraction; with each layer, the network detects increasingly fine features of the input data and applies a ‘weighting’ function to them (The Economist, “From Not Working to Neural Networking”). The weights are set by ‘training’ the network to recognise patterns and differences in data and to respond to them accordingly. This training takes two forms, labelled and unlabelled: in the former, the input data is labelled from the beginning; in the latter, the network is allowed to interpret the data independently. In operation, seed information is fed to the input layer, which applies weights to it before passing it to the next layer; this process repeats at each layer until the final layer produces the output.
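The layered weighting described above can be sketched as a minimal feed-forward pass. The layer sizes, random weights and tanh activation below are illustrative assumptions, not a description of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Randomly initialised weights and biases for one layer."""
    return rng.normal(0, 0.5, (n_out, n_in)), np.zeros(n_out)

def forward(x, layers):
    """Propagate seed data through every layer to the output."""
    for w, b in layers:
        x = np.tanh(w @ x + b)   # apply the layer's weighting, then squash
    return x

# three layers of abstraction: 4 inputs -> 8 -> 8 -> 2 outputs
net = [layer(4, 8), layer(8, 8), layer(8, 2)]
seed = np.array([0.1, 0.9, 0.3, 0.7])
out = forward(seed, net)
print(out.shape)  # (2,)
```

Training would adjust the weight matrices until the outputs match the desired responses; here the weights stay random, which is enough to show the repeated weight-then-pass-on structure.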

The Creativity Machine generates new inventions by perturbing the connections in one ANN to generate a stream of output, while a second ANN perceives value in that stream according to criteria set by the operator. The second ANN then adjusts the placement and magnitude of the perturbations in the first to maximise potentially useful and meaningful outputs (LeCun, Bengio, Hinton 2015).
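The two-network arrangement can be sketched as follows, under the purely illustrative assumptions of a random-perturbation “generator” and a distance-based “critic”; Thaler’s actual architecture is far richer, and the target criterion here is a hypothetical stand-in for the operator’s value function.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(0, 1, (3, 3))          # generator network's connections
seed = np.array([1.0, 0.5, -0.5])

def critic(y):
    """Second net's stand-in: value = closeness to an operator-set target."""
    target = np.array([0.2, -0.1, 0.4])
    return -np.sum((y - target) ** 2)   # 0 is best, more negative is worse

best = critic(np.tanh(W @ seed))
for _ in range(200):
    noise = rng.normal(0, 0.1, W.shape)   # internal perturbation of connections
    cand = np.tanh((W + noise) @ seed)    # generator's candidate output
    if critic(cand) > best:               # critic perceives value: keep it
        W, best = W + noise, critic(cand)

print(round(best, 3))
```

The loop keeps only those perturbations the critic scores as improvements, which is the essence of the generate-and-evaluate pairing described above.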

The Creativity Machine is the spiritual predecessor of DABUS and is credited with numerous inventions. For instance, it produced the novel cross-bristled configuration of the CrossAction toothbrush, a design with significant performance advantages in plaque removal and gingival health compared to other toothbrush designs (Cugini, Warren 2006).

ANNs have been used in drug creation and discovery for quite some time now (J. Ma et al. 2015). They have been employed for preliminary testing via virtual screening of great numbers of compounds, in the automated design of new classes of drugs, and in finding novel applications for such drugs (Abbott 2010) (Riley, Webster, Ramsundar 2015). By using an ANN with many layers, the algorithms reduce the need for experimental work, which is incredibly beneficial when screening against multi-target profiles, otherwise very difficult and often impossible (Besnard et al. 2012). Thus, it is clear that ANNs can accelerate drug discovery and improve the quality and diversity of outcomes, while greatly reducing costs (Patel 2013) (Pammolli, Magazzini, Riccaboni 2011).

Laboratory Applications

Certain systems integrate AI algorithms with physical laboratory robotics to autonomously conduct scientific experimentation, running with little to no human intervention. These systems can make observations, devise and test hypotheses, employ automated laboratory equipment to experiment and produce independent results, and interpret them (King et al. 2009).

“Adam” and “Eve” are such systems, designed to autonomously run laboratory experiments, and they have been remarkably successful in doing so. Robot Adam can formulate hypotheses and test them in the closed-system lab provided to it. Robot Eve was designed for, and is used in, drug development to fight drug-resistant malaria and schistosomiasis (Williams et al. 2015). From data sets of 5,000 molecules, it determined the characteristics of the most effective molecules and then screened only those candidates it predicted would be most effective; in doing so, it discovered a new anti-malarial use for an existing drug previously used only as a cancer inhibitor.

Genetic Programming

Genetic Programming (GP) is an AI algorithm modelled after the process of biological evolution, one that solves highly complex problems through successive generations of improvement upon a set of solutions of known performance (Poli, Koza 2014).

The algorithm creates new generations of solutions by applying functions corresponding to genetic operations until a subsequent generation produces a suitable ‘offspring’ solution.
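The generational loop can be sketched as follows. For brevity this uses bit strings and a toy all-ones target, whereas genetic programming proper evolves program trees; the representation, population size and mutation rate are all illustrative choices.

```python
import random

random.seed(7)

TARGET_LEN = 12                       # a "suitable offspring" is all ones

def fitness(ind):
    return sum(ind)                   # known performance measure

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]          # genetic recombination

def mutate(ind, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(40)]
gen = 0
while max(fitness(i) for i in pop) < TARGET_LEN and gen < 500:
    pop.sort(key=fitness, reverse=True)
    parents = pop[:8]                                 # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(40)]
    pop = parents[:2] + children[:38]                 # elitism + new generation
    gen += 1

best = max(pop, key=fitness)
print(fitness(best), gen)
```

Each generation applies selection, crossover and mutation to the previous one, terminating once an offspring meets the success criterion, exactly the cycle described above.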

GP does not produce big leaps in the invention process, but it has great significance and application in fields where even incremental development is highly valued, and where the interrelationships between variables are unknown, poorly understood, or wrongly characterised (Koza 2010) (Koza et al. 2003) (Keane, Streeter 2002, US Patent US6847851B1). It has been used to independently recreate and reverse-engineer known patented inventions, generating non-infringing solutions, and at least one known invention has been produced using GP; results produced by GP are therefore often called competitive with human results.

Dr. Koza, the father of genetic programming, states that “The fact that genetic programming can evolve entities that are competitive with human-produced results suggests that genetic programming may possibly be used as an “invention machine” to create new and useful patentable inventions. In this connection, evolutionary methods, such as genetic programming, have the advantage of not being encumbered by preconceptions that limit human problem-solving to well-travelled paths.”



AI is often referred to as ‘machine intelligence’ to contrast it with human intelligence (Poole, Mackworth, Goebel 1998). AI has always held explainability as a strong core value, a value reflected in the enormous interest in, and practical success of, machine learning (ML). An early example is the “Advice Taker” proposed by McCarthy as “a program with common sense” (McCarthy 1960), the first time that common sense and allied reasoning abilities were taken to be core tenets of a successful AI. More recent algorithms, rather than simply solving pattern-recognition problems, can build causal models of the systems they are testing, accounting for variable changes and disparities with far greater efficiency than their human counterparts (Lake, Ullman, Tenenbaum, Gershman 2017).

ML is a very practical application of AI, in which the aim is to create software capable of automatically learning from previous data, gathering experience and improving its learning so as to make predictions on new data (Michalski, Carbonell, Mitchell 1984). The challenges lie in understanding context, making sense of data and making decisions under uncertainty (Holzinger, Biemann, Pattichis, Kell 2017). ML is the most common form of AI algorithm; its methods find application across the sciences and business, and have led to more evidence-based decision-making, driven by enormous progress in new statistical learning algorithms along with the availability of large data sets and low-cost computation (Jordan, Mitchell 2015). Deep Learning (DL) is a genus of the ML family, based on deep, layered neural networks with a long history (Schmidhuber 2015). An example is the work of the Thrun group, whose DL approach achieved performance on par with medical doctors, classifying skin cancer with a level of competence comparable to human dermatologists (Thrun et al. 2017). A further example is the promising results in identifying diabetic retinopathy and related eye diseases (Ting, Ling Lee et al. 2017). All of these are very good examples of the progress and usefulness of AI, but most proponents of these approaches emphasise that “usable intelligence is difficult to reach because we need not only to learn from prior data, to extract knowledge, to generalize, and to fight the curse of dimensionality, but to disentangle the underlying explanatory factors of the data in order to understand the context in an application domain” (Bengio, Courville, Vincent 2013), “where to date a doctor-in-the-loop is indispensable” (Holzinger 2016).

We now know enough to dive into the primary subjects of this paper.


As we are well aware, there is an ever-growing need to discover new antibiotics to tackle newly emergent antibiotic-resistant bacteria. A team of researchers from the Massachusetts Institute of Technology, Cambridge, and Harvard Medical School, Boston, trained a deep neural network (hereinafter referred to as the Hal AI) capable of predicting molecules with antibacterial activity.

They performed predictions on multiple chemical libraries and discovered a molecule from the Drug Repurposing Hub, Halicin (Hal), that is structurally divergent from conventional antibiotics and displays bactericidal activity against a wide spectrum of pathogens.

Further, from a discrete set of 23 empirically tested predictions from >107 million molecules curated from the ZINC15 database; their model identified eight antibacterial compounds that are structurally distant from known antibiotics. This work clearly highlights the utility of deep learning approaches to expand our antibiotic arsenal through the discovery of structurally distinct antibacterial molecules.

It becomes important here to understand how exactly Deep Neural Networks (DNNs) should be interpreted. DNNs have been demonstrated to be applicable to a wide range of problems, from image recognition (Simonyan, Zisserman 2014) and classification (Thrun et al. 2017) to movement recognition (Singh et al. 2017), and now targeted research; these applications are remarkable from a logical point of view since they mirror human processes. Typically, DNNs are trained via intensive supervision on large and carefully annotated data sets, yet the need for such data sets restricts the classes of problems that may be addressed through this approach.

There are many traditionally viable approaches to improving the efficacy of DNNs; the one employed here, however, is an unusual mix of analytical and empirical exploration.

Traditionally, molecules were represented by fingerprint vectors, which reflect the presence or absence of functional groups in the molecule, or by descriptors that comprise computable molecular properties and require specialised knowledge to construct in the first place (Mauri et al. 2006) (Moriwaki et al. 2018) (Rogers, Hahn 2010).

Even though the mapping from representation to property was, in these instances, learned automatically, the fingerprint vectors and descriptors themselves were designed manually. The innovation of the past decade in neural network approaches lies in their ability to learn these representations automatically, mapping molecules into continuous vectors that are subsequently used to predict their properties. These designs yield representations highly attuned to the desired property or properties, producing monumental gains in property-prediction accuracy over manually crafted representations (Yang et al. 2019).
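The idea of a manually designed fingerprint can be illustrated with a toy sketch. Real fingerprints such as ECFP encode graph substructures rather than SMILES substrings, so the pattern list and the substring matching below are illustrative assumptions only.

```python
# Each bit of the fingerprint records the presence or absence of a
# hand-chosen functional-group pattern -- the "manual design" the text
# refers to. Toy version: substring tests on a SMILES string.

PATTERNS = ["C(=O)O", "N", "c1ccccc1", "S", "Cl"]  # illustrative groups

def fingerprint(smiles):
    """Binary presence/absence vector over the hand-picked patterns."""
    return [int(p in smiles) for p in PATTERNS]

aspirin = "CC(=O)Oc1ccccc1C(=O)O"
print(fingerprint(aspirin))  # [1, 0, 1, 0, 0]
```

A learned representation would instead replace `PATTERNS` with continuous features optimised for the prediction task, which is precisely the shift the paragraph describes.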

Even though these advancements make the difference between analytical and empirical approaches far less significant, it remains noticeable. The approach taken here consisted of three stages.

  1. They trained a deep neural network model to predict growth inhibition of Escherichia coli using a collection of 2,335 molecules.
  2. They then applied the resulting model to several discrete chemical libraries, comprising >107 million molecules, to identify potential lead compounds with activity against E. coli.
  3. After ranking the compounds according to the model’s predicted score, they lastly selected a list of candidates based on a pre-specified prediction score threshold, chemical structure, and availability (Stokes et al. 2020.)
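The three stages above can be sketched in outline with a stand-in scoring model. The actual work used a directed message passing neural network trained on the 2,335 growth-inhibition measurements; every function and molecule below is a hypothetical placeholder for that pipeline.

```python
def train_model(training_set):
    """Stage 1 (stub): 'fit' a scorer on (molecule, inhibits_growth) pairs.
    Toy rule: score by overlap with characters seen in known inhibitors."""
    inhibitors = {m for m, inhibited in training_set if inhibited}
    features = set("".join(inhibitors))
    return lambda mol: len(set(mol) & features) / max(len(set(mol)), 1)

def screen(model, library, threshold=0.9):
    """Stages 2-3: score every library molecule, rank by predicted score,
    and keep only candidates past a pre-specified threshold."""
    scored = sorted(((model(m), m) for m in library), reverse=True)
    return [m for score, m in scored if score >= threshold]

training = [("CCO", True), ("CCN", True), ("CCCl", False)]   # stage 1 data
library = ["CCOC", "CCBr", "CON"]                            # stage 2 library
model = train_model(training)
print(screen(model, library, threshold=0.9))  # ['CON', 'CCOC']
```

The structure — train once, score a much larger library, then rank and cut at a threshold — is what lets the approach examine >107 million molecules while only a short list reaches empirical testing.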

Through this approach, they identified from the Drug Repurposing Hub the c-Jun N-terminal kinase inhibitor SU3327 (renamed Halicin), which is structurally divergent from conventional antibiotics, as a potent inhibitor of E. coli growth.

In the paper by Stokes et al., the following is stated:

“The World Health Organization designated A. baumannii as one of the highest priority pathogens against which new antibiotics are urgently required. In addition to halicin, from a distinct set of 23 empirically tested predictions from >107 million molecules found in the ZINC15 database, we readily discovered eight additional antibacterial compounds that are structurally distant from known antibiotics. Remarkably, two of these molecules displayed potent broad-spectrum activity and could overcome an array of antibiotic-resistance determinants in E. coli. This work highlights the significant impact that machine learning can have on early antibiotic discovery efforts by simultaneously increasing the accuracy rate of lead compound identification and decreasing the cost of screening efforts.” (Stokes et al. 2020.)

The team has already filed a patent application for Halicin, though not on the Artificial Intelligence’s behalf.


In 2019, Dr. Thaler patented (Thaler, 2019, US Patent 10423875) his latest improvement to the Creativity Machine model he had previously developed; this invention was nicknamed DABUS. He describes its workings as follows.

“It functions, starting as a swarm of many disconnected neural nets, each containing interrelated memories, perhaps of a linguistic, visual, or auditory nature. These nets are constantly combining and detaching due to carefully controlled chaos introduced within and between them. Then, through cumulative cycles of learning and unlearning, a fraction of these nets interconnect into structures representing complex concepts. In turn these concept chains tend to connect with other chains representing the anticipated consequences of any given concept. Thereafter, such ephemeral structures fade, as others take their place, in a manner reminiscent of what we humans consider stream of consciousness.”

“Thus, the enormous difference between Creativity Machines and DABUS is that ideas are not represented by the ‘on-off’ patterns of neuron activations, but by these ephemeral structures or shapes formed by chains of nets that are rapidly materializing and dematerializing. If per chance one of these geometrically represented ideas incorporates one or more desirable outcomes, these shapes are selectively reinforced (Figures 1 and 2), while geometries representing undesirable notions are weakened through a variety of mechanisms. In the end such ideas are converted into long term memories, eventually allowing DABUS to be interrogated for its cumulative inventions and discoveries.”

Figure 1

At one moment, neural nets containing conceptual spaces A, B, C, and D interconnect to create a compound concept. Concepts C and D jointly launch a series of consequences E, F, and G, the latter triggering the secretion of simulated reward neurotransmitters (red stars) that then strengthen the entire chain A through G.


Figure 2

An instant later, neural nets containing conceptual spaces H, I, J, K, L interconnect to create another compound concept that in turn connects to two consequence chains M, N, O, and P, Q. Terminal neural nets in both consequence chains trigger release of simulated reward neurotransmitters (red stars) that doubly strengthen all chains currently activated.

Since the DABUS architecture consists of a multitude of neural nets, with many ideas forming in parallel across multiple computers, some means must be provided to detect, isolate, and combine worthwhile ideas as they form. Detection and isolation of freshly forming concepts are both achieved using what are known as novelty filters (also known as GMFs), adaptive neural nets that absorb the status quo within any environment and emphasize any departures from such normalcy.
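The role of a novelty filter can be illustrated with a simple statistical stand-in that absorbs a stream’s “status quo” and scores departures from it. Thaler’s novelty filters are adaptive neural nets, so this running-statistics version is only an analogy for the detect-the-departure role; the class name and scoring rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class NoveltyFilter:
    """Absorbs the normalcy of an input stream; emphasises departures."""
    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.n = 0

    def absorb(self, x):
        """Update the learned status quo with a new observation."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.var += (delta * (x - self.mean) - self.var) / self.n

    def novelty(self, x):
        """Normalised distance from the status quo (higher = more novel)."""
        return float(np.sqrt(np.mean((x - self.mean) ** 2 / (self.var + 1e-9))))

nf = NoveltyFilter(4)
for _ in range(500):
    nf.absorb(rng.normal(0, 1, 4))        # ordinary background activity

usual = rng.normal(0, 1, 4)               # more of the status quo
strange = np.array([8.0, -8.0, 8.0, -8.0])  # a freshly forming departure
print(nf.novelty(usual) < nf.novelty(strange))  # True
```

Anything scoring far from the absorbed normalcy would be flagged for isolation, mirroring how the architecture detects worthwhile ideas as they form.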

In the final processing stage of identifying critical neural nets, so-called “hot buttons,” are incorporated into these chains then triggering the release of simulated neurotransmitters capable of reinforcing or destroying a given concept chain.

Finally, this patent introduces the concept of machine sentience, thus emulating a feature of human cognition that supplies a subjective feel for whatever the brain is perceiving or imagining. Such subjective feelings likewise form as chains that incorporate a succession of associated memories, so-called affective responses, that can ultimately trigger the release of simulated neurotransmitters that either enable learning of the freshly formed concept or destroy it, recombining its component ideas into alternative concept chains.

Clearly, the following assertions can be made based on the reasoning provided by Dr. Thaler:

  1. DABUS is an independent set of neural networks that creates and discards ideas entirely of its own accord, independent of human supervision except for the setting of a desired outcome through adaptive anomaly filters (GMFs).
  2. DABUS can work from a blank slate, learning by adapting to its environment rather than by scanning databases.

Led by Dr. Abbott of the University of Surrey, a team of researchers including Dr. Thaler filed two patent applications with the European Patent Office, with the sole inventor listed as the AI DABUS. Dr. Abbott’s reasoning is that granting a patent to the AI will “reward innovative activities,” he said, “and keep the patent system focused on promoting invention by encouraging the development of inventive AI, rather than on creating obstacles.”

The team behind DABUS argues further that a “machine rather than a person identified the novelty and salience of the instant invention. Inventors shouldn’t be restricted to ‘natural persons,’ and any machine that meets inventorship criteria should also qualify as an inventor as if it were a natural person.”

Most recently, in December 2020, the UK High Court upheld the decision of the UKIPO in this matter, denying the artificial intelligence inventorship rights and noting the following:

“I in no way regard the argument that the owner/controller of an artificially intelligent machine is the ‘actual deviser of the invention’ as an improper one. Whether the argument succeeds or not is a different question and not one for this appeal: but it would be wrong to regard this judgment as discouraging an applicant from at least advancing the contention, if so advised,”

Further, the judge said that to define “inventor” to include both persons and things would be “an unlikely construction of the 1977 Act,” and he rejected it. This did not mean that DABUS is not itself capable of an inventive concept, but that it cannot be named as the inventor within the meaning of the Act.

Dr. Thaler’s systems have already been making waves in the patents world with the cross-bristled design now owned by Oral-B; even though that application was not filed with the AI as the inventor, a patent granted for an application arising from the work of an independent AI is a great advancement for AI rights.

To conclude, we have reached a stage of development in AI where, finally, we can see independent thought that competes with human creativity and skill, at far lower financial cost and with far greater efficiency.


With developments in AI becoming increasingly rapid and complex, the question arises:

At what point will we incorporate them wholly into the invention process?

As mentioned previously, patents have been granted for AI-generated inventions in the past; however, neither the legislature nor the judiciary has ever considered the true long-lasting impact of granting such patents, since patent applications concern only the invention itself, with supporting proof of the originality or novelty of the idea; nowhere do applications ask for the method involved in creating the novelty or invention (Hattenbach, Glucoft 2015).

When considering the question of novelty, it is pertinent to note that, given the significant variations in the Hal AI’s data sets and in its outputs, it can be said to have made a novel discovery. Similarly, in the case of DABUS, where the algorithm generates ideas by adapting to its environment and is not reliant on any provided datasets, its creations are by definition novel (Vertinsky, Rice 2002).

However, S. 3 of the UK Patents Act 1977 states that an invention involves an inventive step if it “is not obvious to a person skilled in the art, having regard to any matter which forms part of the state of the art”.

Further, S. 101 of the U.S. Patent Act has been interpreted to include “anything under the sun that is made by man”. The Act specifies several further bars an invention must clear to be patentable: it must pass the tests of utility and patentable subject matter (35 U.S.C. § 101), novelty (35 U.S.C. § 102), and non-obviousness (35 U.S.C. § 103).

US law has codified a prohibition on discriminating on this basis, declaring that “patentability shall not be negated by the manner in which the invention was made.” (35 U.S.C § 103)

The WIPO Standing Committee on the Law of Patents stated that “Granting monopolies over obvious inventions would contribute little to society and prevent others from engaging in technological modifications and ordinary progresses” (WIPO Standing Committee on the Law of Patents study 2015).

Thus, even on a cursory consideration of the above-mentioned points, certain assertions become obviously true, prima facie:

  1. The inventive step is crucial for proof of inventorship.
  2. Creativity and inventiveness require more than novelty (Bundy 1994.)
  3. If the inventive step is to achieve its legal purpose, it must consider all of the tools available to an inventor, not simply personal knowledge and skill (Plotkin 2009.)

Assessing the existence of an inventive step concludes with determining whether “the differences between the inventive concept and the prior art constitute steps which would have been obvious to the person skilled in the art” (Windsurfing International Inc v Tabur Marine; Pozzoli SpA v BDMO SA).

It follows that whatever may be considered obvious would be unpatentable. Further, an invention that results from a program executing massive amounts of brute-force calculation would also fall within the umbrella of the obvious, since any human researcher given such a program would be able to produce the same result. If, going forward, AI assistance in the inventive process is not taken into account, certain human players will be able to monopolise entire markets and product classes by producing every possible obvious invention using algorithms; and since the current laws do not consider the method of invention, such patents would stand the test of law while being unfaithful to the very object of patent law, the protection of novel ideas.

On the flip side, however, raising the bar solely on the basis of the inventive processes employed by ever-advancing AIs would mean assessing obviousness purely on the level of processing power required to create the invention. A stringent application of such norms would also prove greatly detrimental to fields such as drug discovery, discussed previously, where inventions, even when made through algorithms, are made by employing robust traditional methods that brute-force results. Further, an AI’s superior ability to combine references across variations, unimpeded by human error, would raise the bar far too high for the ever-increasing number of inventions in which people simply combine two or more existing inventions.

Patentable Subject Matter

One of the main challenges of patenting creations by sophisticated AI systems can be found in U.S. patent law, which states that “Whoever invents or discovers any new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.” (35 U.S.C. § 101)

The question then becomes: is an AI to be regarded as a person, a “who”?

U.S. patent laws take only human inventors into account, defining “inventor” as “the individual or, if a joint invention, the individuals collectively who invented… the subject matter of the invention.” “Joint inventor” and “co-inventor” “mean any one of the individuals who invented or discovered the subject matter of a joint invention.” (35 U.S.C § 100) The law does not consider the possibility of a nonhuman inventor.

In the EU, though the precise definition of an inventor varies between member states, all of which are signatories to the European Patent Convention, the generally accepted definition is, loosely, that “the inventor is the creator of the invention”. Under Rule 19(1) of the EPC, as interpreted by the European Patent Office in its decision on applications EP 18 275 163 and EP 18 275 174 (the DABUS patents), the “designation of an inventor must contain the surname, first name and full address of the inventor. However, names given to things must not be equated with names of natural persons. Names given to natural persons enable them to exercise their rights and be part of their personality, and this also applies to mononymous persons. In contrast, things have no rights, especially no personal rights.”

Thus, considering the unwillingness of the judiciary to amend its rules, we must apply the present situation to the present rules.

The AIs highlighted in this paper enable the automatic transformation of functional descriptions of desired outcomes into tangible patentable inventions (Plotkin 2009). As discussed previously, following this process, AIs can produce a multitude of results within a broad class of inventions. Thus, where an AI like the Hal AI is concerned, it could be employed to monopolise market sectors; following the suggestion of Kohlhepp (Kohlhepp 2008), the person who feeds the algorithm the abstract question for determination should be allowed patent rights over the entire class of products the AI invents. The breadth of such inventions, in so far as they answer the initially posed abstract question, would more than make up for the risk of market failure in developing technology as complex and resource-heavy as this.

Thus, such a high-risk high reward situation would be beneficial for further study and development of AI algorithms.

The potential to obtain patent monopolies will incentivise innovators in the field to share their input questions with the public, further increasing the abstraction, variation, speed and efficacy of innovation. Of course, in line with patent law, the extent of the protection provided by such a patent would need to be demarcated, and the nature of the invention clearly distinct from any prior works in that field or class of products.

Owing to the many advances in algorithms that will follow, it has been suggested that inquiries from members of the public hoping to make use of such abstract questions be judged on whether an AI-enabled computer could reproduce the stipulated results. This, in turn, negates the public-notice function of the patent system, which would be better served by a human mind skilled in the art testing the inventive nature of the process.

It may, though, cause a flood of patents from those with an eye for opportunity and the resources to mould policy in this field, leading to patent hoarding, with the common inventor facing a gruelling, complicated process involving an unnecessary multitude of patents for the same abstract question; the prospect of expensive licensing is another major concern.

However, an abstract functional problem has many parallels to software patents, whose specifications concern how to create one (In re: Hayes Microcomputer Prods Inc. Patent Litigation). Even so, computer programs that produce technical results or contributions in themselves are patentable (EPO: T 1173/97 IBM/Computer Programs [2000] EPOR 219; T 935/97 IBM/Computer Programs [1999] EPOR 301).

Thus, it can reasonably be asserted that since most AIs either can, or in the near future will be able to, produce patentable technical results from common abstract questions, a framework for allowing ownership of the resulting patents already exists.



Can computers acting on their own autonomy, be granted inventorship?

We must first examine what it means to have an invention. Legislation of all kinds contains no explicit definition of the term; instead, it sets out the qualities such objects must have, and any applicable exclusions. Neither the European Patent Convention nor the Patents Act, 1977 explicitly requires the invention to be of human origin. U.S. patent law, however, is, as previously noted, extremely strict in this regard and has already made clear that the inventor of any invention must be a natural person.

There are many scenarios in which an autonomously inventing AI may prove particularly useful, such as deep-sea exploration, where tools and equipment that malfunction or break down can be replaced on board by the AI 3D-printing a replacement from the materials at hand. Whether this amounts to invention or mere 'regeneration' depends on whether the AI had the requisite blueprint beforehand. Where the materials required under the original instructions are lacking, or the instructions themselves are unavailable, the AI is forced to invent a novel solution, which the human supervisors of the project may then patent, depending on its commercial viability. In such cases, the law needs to evolve for continued progressive invention in the field. Even though a human element is embedded in the legal requirements, this demands re-examination in light of sophisticated autonomously inventing AI like DABUS, which generates completely original ideas wholly independent of human intervention.

Historically, patent law has continued to evolve and grow in response to new challenges and technological advances (Vaver 2003), with the resulting curve always favouring a wider ambit of patentability.

Identifying the Inventor

Internationally, applications seeking grants of patent rights are required to identify the inventor (Patents Act 1977); where the inventor's name is excluded from the application or the patent, a legal remedy is available to them, despite their having no ownership of the patent (Dutfield 2013).

It becomes important, then, to question why this must be a requirement in the first place.

It is said that one of the most important aspects of granting patent rights is the fulfilment of the desire for recognition and validation, further spurring creativity and innovation. In other words, in the absence of a means for the inventor to prove legitimacy of his inventorship, there would be no incentive for them to invent.

It has also been suggested, as a quid pro quo, that the inventor be identified where the employer holds the ownership rights over the patent. Many legislatures have incorporated this, or taken note of it, through legislative or judicial amendments to their core patent law over the past decade. The Patents Act, 1977, for instance, provides in Ss. 40-41 that "inventors are eligible under certain circumstances to receive compensation from the employer who has received outstanding benefit from an invention."

However, the most prevalent shift is the allocation of patent rights to impersonal entities such as corporations or teams, where patents serve as business assets. Requiring a named inventor in such cases is redundant, as it would also be with an AI acting autonomously.

This was challenged in the European Patent Office by Dr. Thaler's team in 2020, who contended that "Rule 19(1) EPC does not require that the inventor is a human being but serves only the purpose of properly identifying the inventor. The designation of the inventor filed in the present case fulfils this requirement, Stephen Thaler found. The provision that a designation must contain both a first name and a surname would deny persons with only one name (mononyme persons, e.g., Javanese names according to Wikipedia) the right to be named as inventor. Furthermore, the fact that an AI system has neither moral nor property rights is not an obstacle to being registered as an inventor.

The requirements for patentability are exclusively defined in Art 52 – 57 EPC. According to Dr. Thaler, a procedural requirement under Rule 19 EPC could therefore not introduce a substantive exclusion from patentability for inventions made by AI systems."

As mentioned previously, the EPO rejected this view, holding it necessary for the inventor to bear a family name and be a natural person.

Even though clearly outmoded and redundant, this requirement continues to exist to this day simply because nobody has been inconvenienced by it to the point of demanding its abolition.

The provision also greatly inconveniences companies and other corporate entities that fund the research behind such patentable inventions on their own dime, yet cannot retain ownership in the name of the corporate person.

On the contrary, in July 2021 the Federal Court of Australia, in an appeal hearing, allowed DABUS to be listed as an inventor in a patent application, finding that doing so would be "compatible with the goal to promote innovation" and that nothing in the patent act explicitly or impliedly prohibits listing an AI as an inventor. Further, the Indian Copyright Office has now allowed an AI named RAGHAV to be listed as co-author of a painting titled "Suryast".

Autonomous Computer as an Inventor

An inventor is anyone who contributes to the conception of the inventive idea underlying a patent or an application. We have already seen that both the EPO and the USPTO have held that being a natural person is a requirement; neither, however, prescribes the procedure to follow when a program or a computer supplants the natural person in such creation.

As our capacity to autonomously generate inventions increases and improves, we will eventually be forced to choose among the following three outcomes.

  1. Eliminate the requirement for identification of inventors altogether.
  2. Allow sufficiently sophisticated AI to be classified as legal persons.
  3. Assign patents to a natural person.

The third is already a very real outcome, and the one that judicatures across the world currently lean towards: in most cases, the person or people who developed the inventive algorithm are granted the patent for its creations. The obvious problem, however, is that such patents run against the very concept of patents, since the applications cannot accurately reflect real contribution to the inventive idea.

Another variant would assign the rights to whoever first notices the invention and evaluates it as such (Ex parte Smernoff, Bd. App. 1982). Here, the invention would only be patentable if subsequently discovered by a natural person. This too suffers from the same faults as the currently prevalent approach.

But what if the AI itself could be accorded patent rights?

The patent system would recognise the AI as the inventor, thereby accepting the AI as a legal person, which would in turn allow corporations to file patent claims in their own name, a much-needed amendment to the current state of patent legislation. Inanimate objects have been granted personality before (Pramatha Nath Mullick v Pradyumna Kumar Mullick). This change would mark a shift in societal perceptions of technology's symbiosis with the natural world.

Further, AI already exhibits traits of personhood, such as acting with intent (Calverley 2008). Allowing such systems inventorship would let businesses use the number of patents issued as a selling point for those systems, further spurring their innovation and commercialisation.

Finally, there is the option of eliminating the statutory requirement. It is by far the easiest, cleanest approach to this problem, solving not only the issue of inventorship in autonomously generated inventions but also group efforts and corporate applications in one fell swoop. It is not, however, without its lacunae: it eliminates the benefit of recognition, greatly affecting the innovators of this field, who would then lack the motivation to invent for greater renown, and thus greater compensation for future inventions.



Under S. 7 of the Patents Act, 1977, the inventor is the first owner of the patent unless someone else has a better right to it by virtue of employment or contract. In the USA, the National Commission on New Technological Uses of Copyrighted Works (CONTU) was established by the United States Congress in 1974 to address questions of copyright and intellectual property rights, and it remains, by any stretch, the only concerted attempt the U.S. has made at evolving IP legislation to better suit technological developments. Its recommendations were largely incorporated into U.S. copyright law in 1980, including a definition of the term "computer program", to which current AI algorithms are still subject: "A 'computer program' is a set of statements or instructions to be used directly or indirectly in a computer in order to bring about a certain result."

From what has been ascertained thus far, it becomes evident that a joint-authorship approach to inventorship rights would be the most utopian method of dealing with the issue at hand. CONTU had the following to say about this approach.

“Finally, we confront the question of who is the author of a work produced through the use of a computer. The obvious answer is that the author is one who employs the computer. The simplicity of this response may obscure some problems, though essentially, they are the same sort of problems encountered in connection with works produced in other ways. One such problem is that often a number of persons have a hand in the use of a computer to prepare, for example, a complex statistical table. They may have varying degrees and kinds of responsibility for the creation of the work. However, they are typically employees of a common employer, engaged in creating a work-for-hire, and the employer is the author. When the authors work together as a voluntary team and not as employees of a common employer, the copyright law with respect to works of joint authorship is as applicable here as to works created in more conventional ways, and the team itself may define by agreement the relative rights of the individuals involved”. (CONTU Final report 1979)

This explanation, however, does not clarify who CONTU thought the joint authors might be. Relying on standard doctrine, or on agreements the parties might make, begs the questions of what the applicable law on this issue is and what private parties should do to resolve it.

Further, it is difficult for the user and the programmer to qualify for recognition as joint authors under the existing statutory structure. Although joint authorship is not defined in the U.S. Copyright Act, it does define a joint work as "a work prepared by two or more authors with the intention that their contributions be merged into inseparable or interdependent parts of a unitary whole" (17 U.S.C. § 101).

Previously, and to a large extent today, no AI other than DABUS can be asserted to have had an intention of its own design; proving a common intention between the programmer or user and the program is therefore an insurmountably difficult task.

Thus, it falls to us, the interpreters and researchers, to formulate an equitable solution. As stated previously, the requirement that a natural person be identified as the inventor must be eliminated; in doing so, the legislature and the judicature are forced to consider ownership as the sole deciding factor in such applications.

In cases such as that of DABUS, where the program functions autonomously and is either recognised as a legal person or the requirement has been abolished, the first right of ownership would go to the program itself. All parties that brought about the creation of the program could then own its products in accordance with the distinctions, classifications, and divisions of ownership of the AI itself set out in the mother agreement. Though such a system would indubitably result in slower economic turnover, owing to the forced licensing of every single product generated, it would prove a great incentive not only to programmers and researchers to continue bettering the inventive process of such algorithms, but also to their promoters and financial backers, who could then enjoy a promised economic yield over longer periods of time, ensuring constant innovation in the field and the appeasement of all persons involved. Such a system would also produce inventions better streamlined and suited to whatever market the programmer seeks to enter with the abstract posed to the algorithm, by eventually narrowing the scope of the classes of products generated. The companies and financial backers previously mentioned would also gain the right to sue for infringement, creating still more economic incentive for this approach.


This paper has thus far discussed the various legal implications and rights arising from computer-generated inventions, and other issues related to the future development of autonomous generation technology, which necessitate urgent changes to present laws.

There are multiple considerations we must make, however. For instance, immense advances in product-invention methods would eventually render human inventorship methods ineffective by comparison, and would inevitably result in the value of patents as an incentive for innovation being outweighed by the costs of hampering competition (Abbott 2010).

Thus, the patent legislation in question would need to balance the changing incentives and costs of invention (Abbott 2010). This could mean regulating market prices in order to discourage patent thickets and patent trolls. Reducing patent terms may also help balance the labour-reward structure. Further, the legislation might raise the bar for patentability, so that fewer patents are filed and filing entities must choose among products, patenting only the most beneficial and economically viable of those generated, resulting in higher-quality patented inventions.

A preliminary suggestion for legislatures today is to require that the method of invention be stated in the application itself, while abolishing the requirement that only a natural person may file for such patents.

Further, a test must be created for identifying an AI as the author or inventor, loosely following these principles for determination.

  1. An AI can be said to have created eligible subject matter when the creation is original and developed independently of instructions provided by a programmer.
  2. An AI can be said to have caused the creation of a work or invention when there is very little human instruction and the AI’s creation process is not merely rote or mechanical.

Both of these are well-settled principles for determining patent rights arising out of inventions by natural persons; extending the same umbrella over the proposed legal personality of autonomously inventing AI would create a better-suited application of pre-existing IP laws to AI algorithms.
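The two principles above amount to a two-prong checklist. The sketch below, in Python, is purely illustrative: the record fields, the numeric instruction-level threshold, and the function names are hypothetical assumptions introduced only for exposition, not anything prescribed by statute or case law.

```python
from dataclasses import dataclass

@dataclass
class CreationRecord:
    """Facts about how an AI system produced a candidate invention."""
    is_original: bool                  # not copied from prior art or training data
    independent_of_instructions: bool  # developed beyond the programmer's instructions
    human_instruction_level: float     # 0.0 (none) .. 1.0 (fully human-directed)
    is_rote_or_mechanical: bool        # a merely mechanical transformation of inputs

def satisfies_prong_one(r: CreationRecord) -> bool:
    # Prong 1: the creation is original and was developed independently
    # of the instructions provided by a programmer.
    return r.is_original and r.independent_of_instructions

def satisfies_prong_two(r: CreationRecord, threshold: float = 0.2) -> bool:
    # Prong 2: very little human instruction, and the creation process is
    # not merely rote or mechanical.  The numeric threshold is illustrative;
    # any real inquiry would be qualitative, not quantitative.
    return r.human_instruction_level <= threshold and not r.is_rote_or_mechanical

def ai_qualifies_as_inventor(r: CreationRecord) -> bool:
    # Both prongs must be met before the AI is treated as the inventor.
    return satisfies_prong_one(r) and satisfies_prong_two(r)
```

On this framing, a DABUS-like record (original, self-directed, minimal human input) satisfies both prongs, while a system that merely executes detailed human instructions, or transforms inputs mechanically, fails the second.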

We are then also posed with the question of assignment of IP rights. As with regular patents, most authors work under agreements assigning the rights arising out of their works to businesses or other entities as the agreement provides, whether via employment contracts, commissioned works, or other such arrangements. Such assignment would also automatically limit the term of exclusivity for those rights.

The assignment may be as follows,

  1. License agreements: for the licence period, the licensee using the program would be granted part ownership of the invention.
  2. Explicit contracts: the ownership rights would be divided as specified within the contract.
  3. Implicit contracts: where the AI was employed for a specific purpose toward singular or bundled IP products, the IP rights would be granted in part to the user who purchased it specifically to achieve those ends.
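The three assignment routes above can be summarised in a short sketch. Everything here is a hypothetical assumption: the enum names, party labels, and fallback percentage splits are illustrative only, drawn from no contract, statute, or case.

```python
from enum import Enum, auto
from typing import Dict, Optional

class AssignmentMode(Enum):
    LICENSE = auto()   # licensee gains part ownership for the licence period
    EXPLICIT = auto()  # shares divided exactly as the contract specifies
    IMPLICIT = auto()  # commissioning user shares rights in the purpose-built output

def ownership_shares(mode: AssignmentMode,
                     contract_shares: Optional[Dict[str, float]] = None) -> Dict[str, float]:
    """Return an illustrative division of patent rights between the AI's
    owner and the party employing it (all percentages are hypothetical)."""
    if mode is AssignmentMode.EXPLICIT:
        # Explicit contracts must state the division themselves.
        if not contract_shares:
            raise ValueError("an explicit contract must state the division")
        return contract_shares
    if mode is AssignmentMode.LICENSE:
        # Part ownership for the licensee during the licence period.
        return {"ai_owner": 0.5, "licensee": 0.5}
    # Implicit contract: the user who purchased the AI for a specific
    # end takes a part interest in the resulting IP.
    return {"ai_owner": 0.6, "commissioning_user": 0.4}
```

A design point worth noting: only the explicit mode takes the split from the parties themselves; the other two fall back to a default allocation, mirroring how the article expects the mother agreement to fill gaps the parties leave open.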


To conclude, let us begin by answering the questions posed at the very outset of our journey.

  1. Whether Artificial Intelligence can act truly independent of external actors.
  2. Whether they should be taken under the umbrella of the term Legal persons.
  3. Whether creations by Artificial Intelligence should be protected by patent rights, or new sui generis rights, if at all.

The answer to the first we found in DABUS and in Hal AI, which show us the extent of autonomy and independent idea formulation we have achieved; though not completely independent, in that they are provided a goal or a filter, they are nonetheless autonomous and truly capable of "thought."

The second, this paper has answered in the affirmative, as the only viable and most efficient solution to the patent crisis that is about to follow.

Finally, this paper answers the third in the positive as well: creating a new right and applying existing patent legislation equally to autonomous inventors, be they natural or not.

In so far as existing legislation is concerned, the patent system must recognise the implications of, and be prepared to respond to, a technological reality in which leaps of human ingenuity are supplanted by AI and the ratio of human-to-machine contribution to inventive processes progressively shifts in favour of the machine (Wamsley 2011), leading to an eventual displacement of human inventors from the inventive process itself (Plotkin 2009).

If these issues are not resolved today, we will see a slew of patent cases in the very near future that will mould the existing legislation to better suit those who can afford to spend such legal and financial capital, alienating the human aspect of inventorship once again.

With the outcome being the same in either scenario, we must take affirmative action now to prevent both processes and ensure that patent protection remains what it was meant to be.

The author is a final year law student at Symbiosis Law School, Pune.

