The skilled machines disrupting drug design

Capable of identifying novel compounds for therapeutic use, AI is saving time and costs in a process that can take around 10 to 15 years and billions of pounds to complete.

For the pharmaceutical industry, which has traditionally relied on patents to protect innovation and fund R&D, this should be good news, but is an intelligent machine's output really patentable? This article explores just some of the issues. 

Clever algorithm or autonomous robot? 

"AI" is used to describe multiple technologies with a range of computer cognitive abilities, from clever algorithms, to autonomous computers with super-human intellect. Yet whilst the AIs at this latter end of the spectrum are, for the time being, confined to sci-fi, the AIs being developed for use in drug discovery are more advanced than some might think. Start-ups such as BenevolentAI and Healx are training their AIs to learn from public and proprietary resources – including scientific literature, clinical trials and compound libraries – to identify and plan the synthesis of new molecules or known molecules for new uses. Through machine learning, these AIs are able to analyse and learn from vast amounts of data to generate their own approaches to drug design. This is very different to programs currently used in biochemistry to model compounds or run simulations, which generally follow rules-based programming. 

AI – not just a buzzword in pharma 

The patentability of AI-derived innovations is a hot topic across many industries, but the way in which AI is being developed for use in drug discovery raises unique considerations. For one, AI is disrupting long-standing ecosystems in the life sciences. Where, traditionally, a pharma company's R&D has been kept in-house, behind closed doors, companies wanting to take advantage of the opportunities offered by AI are having to seek expertise from, and divest some of their R&D to, third parties, including novel players in the field such as tech start-ups. AI is also prompting the rise of much smaller – typically tech – companies in an industry that is known for its pharma giants. What's more, these "little players" are looking to bring the entire AI and R&D process in-house – a feat which could now be possible as AI promises to slash the costs of R&D. These changes will have consequences, not only for the way in which any resulting patents are divested and owned, but also for patentability.

Computer inventors?

Before we look at patentability, we need to touch on an underlying tenet of patent law; namely that the inventor of a patent is human. In a future where a completely autonomous AI is capable of invention without any human involvement, the idea that the inventor must be human will be deeply problematic (and undoubtedly, there will be issues with other legal structures besides this). For the time being, however, the pharmaceutical industry, as with many other industries, is being confronted with the prospect of investing in and using AIs that still require human input. Humans are still required to design and train the AI. Of course, the issue of the "computer inventor" is more complex than this, but for the purposes of assessing patentability, we will adopt the general proposition that, provided there is human input, even at a high level, it will be possible to attribute patentable outcomes produced by the AI to a human, and that therefore there will be a human inventor. 

Novel results, but how're you going to prove it?

Imagine that your AI has, after assimilating a wealth of resources, suggested five lead compounds for therapeutic use. Even before assessing the novelty and inventiveness of those compounds, a patentee needs to be able to show that it is plausible that they will work for the given therapeutic use. Whilst you might think it a foregone conclusion that compounds selected by a super-human intellect are bound to work, how do you show that when the AI's rationale is bound up in a highly complex neural network of algorithms? The problem is compounded by the fact that plausibility is to be assessed from the perspective of the notional person skilled in the art – a legal construct who is uninventive, of average skills and most definitely human at that. Would the algorithms underlying the AI's thinking be understandable to the skilled person? There is then the issue of access to that information. One of the ways that AI is being used in drug discovery is through collaborations between AI and pharmaceutical companies. But if the AI is owned by the AI company, will they want the pharmaceutical company delving into the AI's underlying code and disclosing it to a patent office?

Another means of demonstrating plausibility is to take the AI's selected compounds and test them further, whether in vitro or otherwise. It may be that all or some of the compounds identified by the AI can be shown to work and sufficiently disclosed in more recognisable terms, i.e. through test results and/or clinical studies, as opposed to algorithms. Yet, on this analysis, the patentable output is arguably not the group of compounds originally selected by the AI, but rather the compounds only once they have been shown to be effective by a human. This, then, is not so different to traditional R&D models, save that AI has cut down on a significant number of human work hours.

Inventiveness – it's in the bag … or is it?

Whether or not an AI-derived compound is inventive is again to be viewed from the skilled person's perspective. So far, the real-world reaction to the AIs being used in drug design is that they are coming up with totally unexpected results. This is hardly surprising when the AIs can access a lifetime's worth of data and assimilate information from across fields of knowledge that, in reality, would have no cause to cross over. When benchmarked against what the skilled person could achieve – a skilled person having access to just their common general knowledge – any output from a skilled machine is surely going to be inventive? Two key features of AI in drug development are challenging this: iterative development and routine testing.

An AI may have to be "trained" to solve a particular problem. Its first output might come close to, but fall short of, suggesting compounds for a particular target, and it could require further data sets against which to improve its machine learning. Yet the rate at which these iterations are produced is likely to be immeasurably quicker than any human, and each iteration can be so small that from one to the next it might be possible to see the inventive direction of travel. Before the patentee has had the opportunity to pick up the phone to its patent attorney, hundreds of iterations could have been produced, some of which may form part of the prior art. Of course, an answer to this is making sure that those early disclosures do not become part of the state of the art. This might be easy to do when the R&D is concentrated in just one pharma company, but if it is spread across a collaboration with, for example, an AI company, then it may be more difficult to control.

Routine testing also gives rise to potential issues with inventiveness. The UK law on routine testing was recently thrust into the spotlight when the Court of Appeal in Actavis v ICOS [2017] EWCA Civ 1671 decided that, as long as the course that is likely to be taken is obvious to the skilled person, then, no matter how surprising the result, it will not be inventive. (Re)enter the skilled person. If it becomes obvious to a skilled person, during the course of their drug discovery programme, to use an AI to produce the lead compounds, then the result – for example, a compound that has never before been tested for the therapeutic target – no matter how surprising, is not inventive.

This needs dissecting, of course. The first point to consider is that it must be obvious to use AI. At present, AI is being used by specialist companies. Access to AI is far from becoming standard, let alone part of the average skilled person's common toolkit. Yet even if the use of a particular AI were to become the norm, the decision in Actavis v ICOS does not preclude a new or inventive use of that AI – perhaps by putting it towards a new purpose, or pointing it towards a different set of data from the data that it is known to use, such as proprietary compounds which are not state of the art. This alone is interesting, because in years to come a court considering whether or not the use of an AI is routine might have to come to a view on which algorithms and/or data were at the heart of the invention derived by the AI.

What should AI and pharma companies be doing to protect their IP?

This all leads to some interesting questions on how to protect an AI-designed drug. Pharmaceutical companies looking to collaborate with tech companies to develop AI for use in drug discovery will clearly need to give careful thought to the collaboration agreement. Working out how to split the IP could be contentious. For instance, how do you draw the line around the patentable invention, when an AI might produce not just one innovation but multiple iterative advances? How should ownership be decided? If collaborators are unable to agree that just one of them should be the de facto owner of all patents arising from the AI's output, then output might instead need to be assigned according to whose proprietary data or know-how lies behind the inventive concept. Any company wishing to protect its AI-derived innovations will need to consider how to sufficiently (and plausibly) disclose the invention to a patent office. This may not be easy, particularly if another entity holds the necessary disclosure (because they designed the AI), or it is simply not possible to decipher the AI's nexus of thinking. Early in a collaboration, parties will need to consider how these points are to be dealt with and write them into their agreement.

And what about the future?

One cannot close the topic of AI in drug discovery without considering what it could mean for the enforcement of an AI-derived patent. Mirroring the difficulties patentees may have in demonstrating the plausibility and sufficiency of an AI-derived patent to a patent office, practitioners could face having to comb through reams of algorithms to prove or disprove the same, or, further, to demonstrate that the patent is invalid over the prior art. Whilst this may be familiar territory for practitioners heavily immersed in telecoms litigation, it is likely to be new for the life sciences.

No longer sci-fi

AI in drug discovery is here and it is prompting vital questions about patent protection and enforcement. Such questions should be considered now by industry players, law makers and practitioners alike.

"This article was first published in IP Magazine, October 2018"
