Although artificial intelligence (AI) can sound like the stuff of science fiction, we are already interacting with AI-based systems on a daily basis. Voice assistants, purchase prediction, fraud detection, chatbots and a whole host of other applications already implement a number of techniques that fall under the umbrella term “AI”.
Enabled by the convergence of big data, enhanced statistical and pattern recognition techniques, and cheap, readily available processing power, the adoption of AI-based solutions is set to increase rapidly. Business process automation and service personalisation, in particular, are set to be growth areas in the short term, with applications involving interaction with the physical world, such as autonomous driving, coming on stream in the medium to long term.
While each AI discipline is different in its specific implementation, a number of themes that are common to many modern AI systems give rise to particular intellectual property (IP) questions:
- By replicating aspects of human cognition, AI systems have the potential to engage in acts of content creation.
- Many AI systems, especially those using supervised machine-learning techniques, undergo a training process in which they develop their own decision-making algorithms and rules by practising decision making and using feedback to try to improve future decisions (see the sketch after this list).
- Training AI systems often requires large volumes of training data to ensure that the system develops its decision-making algorithms based on data that reflects the full range of scenarios that it might encounter when operating in the real world.
- AI systems are often used to sift through large volumes of input data to detect statistical features or patterns.
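By way of illustration, the following minimal sketch (plain Python, with invented toy data) shows that feedback loop in its simplest form: the programmer writes only the learning procedure, while the decision rule itself emerges as a set of numeric weights derived from the training data.

```python
# Illustrative only: a minimal supervised-learning loop. The decision rule
# (the weights) is derived from training data and feedback, not written
# directly by a programmer.

# Toy training data: inputs paired with known correct labels (1 or 0).
training_data = [((1.0, 0.0), 1), ((0.0, 1.0), 0),
                 ((1.0, 1.0), 1), ((0.0, 0.0), 0)]

weights = [0.0, 0.0]   # the "logic" the system develops for itself
bias = 0.0
learning_rate = 0.1

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

for epoch in range(20):
    for x, label in training_data:
        error = label - predict(x)        # feedback on each practice decision
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error     # nudge future decisions towards the labels

print(weights, bias)  # the learned decision rule resides in these values
```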
This article considers the IP issues arising from each of these four themes and how these issues are likely to arise in the context of developing and using AI systems, in particular:
- IP issues that need to be addressed contractually.
- Underlying ownership issues.
- Dealing with content created by an AI system.
- Patenting inventions where the original idea was generated by an AI system.
- Infringement issues.
CONTRACTUAL ISSUES
While some areas of AI research attempt to build bespoke hardware to replicate human brain function, most of today’s AI systems are implemented as software running on off-the-shelf computer hardware; that is, using general purpose central processing units or, increasingly, graphics processing units. The IP rights in AI systems will therefore be those that arise in the context of developing other types of software.
Commercial agreements to develop AI systems will also need to take account of all the usual issues associated with developing software. This will include ownership and licensing of: any background or pre-existing IP that is incorporated into the system; and rights arising in elements of the system that are developed as part of the project (known as foreground IP). Development agreements will also need to cover any indemnities relating to infringement of third-party IP rights by the AI system (see “IP infringement” below).
Collaborative model
The current state of the market for AI systems will make a number of these general IP issues particularly acute. For example, AI developers often need access to relevant training data in order to develop their solutions and bring them to market. Rather than pay to license the training data from a third party, a common approach is for an AI supplier to collaborate with an organisation that holds the type of data which the AI supplier needs. In return, the organisation will get access to the developed AI solution.
However, the AI supplier will also want to offer the solution to others and could be hindered if that organisation obtains rights in the IP that is developed as part of the collaboration, or if the organisation’s background IP is incorporated as part of the finished product. In this situation, the ownership and licensing of IP rights in the system will be of particular importance.
Use of customer data
Another common model for the deployment of AI systems is for a seller to supply an underlying technology platform that is trained using a customer’s data to provide a system which is specifically adapted to the customer’s business. This training process will result in a set of customer-specific analytics that will work together with the seller’s underlying AI platform.
This gives rise to an important question regarding the ownership of IP rights in the customer-specific analytics. For example, some AI-based cybersecurity systems detect network intrusions by comparing current network traffic with the customer’s baseline network traffic. Questions may arise as to who owns any copyright, database right or other IP rights in this baseline data.
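To make the question concrete, here is a deliberately simplified sketch (invented figures, plain Python) of what such customer-specific analytics might look like: a statistical baseline derived from one customer’s traffic, against which live traffic is compared. The baseline values are derived from the customer’s data but computed by the supplier’s platform, which is precisely where the ownership question bites.

```python
# Simplified illustration of customer-specific analytics: a baseline learned
# from one customer's network traffic, used to flag deviations from it.
import statistics

# Hypothetical per-minute request counts observed on the customer's network.
baseline_traffic = [120, 131, 118, 125, 140, 122, 135, 128]

# The "analytics": here just a mean and standard deviation, though a real
# system would derive a far richer statistical model from the same data.
mean = statistics.mean(baseline_traffic)
stdev = statistics.stdev(baseline_traffic)

def is_anomalous(current_rate, threshold=3.0):
    """Flag traffic that deviates sharply from this customer's baseline."""
    return abs(current_rate - mean) > threshold * stdev

print(is_anomalous(127))  # False: consistent with normal traffic
print(is_anomalous(450))  # True: a possible intrusion
```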
This issue would most likely come to the fore if the customer wanted to switch supplier and take with it the analytics generated by the original AI solution. The contractual wording in the original service agreement regarding IP ownership would come under close scrutiny as both parties look to see whether the original supplier can prevent the customer re-using the analytics with another supplier’s solution.
There are obviously conflicting interests here: it will be in a seller’s interest to lock the customer into its AI platform, whereas the customer will want the freedom to move its analytics to other sellers. The party that prevails when negotiating these terms in AI service agreements will depend on the parties’ bargaining power. At present, the number of well-established AI service providers is relatively small but, as the market matures, customers may increasingly demand the right to move their analytics between different AI service providers in much the same way that they expect to be able to move their data from one database provider to another.
UNDERLYING OWNERSHIP
Contractual IP issues could potentially arise in any scenario where a general purpose AI system is developed by one entity and subsequently adapted by a second entity for a specific purpose by training. The key issue will be whether the first or the second entity owns the IP rights in the resulting trained system. Underlying this is a potentially far-reaching question regarding the interaction between the way in which AI systems are developed and the ownership of IP rights in those systems.
Copyright
Traditional software implements logic that has been designed by a human author. This logic is embodied either in source code or in some other data structure that can be interrogated by the software during operation in order to determine the particular course of action that the system should take in response to a given set of inputs.
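The distinction can be seen in miniature below: a trivial, hedged example (invented rules) in which the logic lives in a hand-authored data structure that the program merely interrogates at runtime.

```python
# Illustration: human-designed logic embodied in a data structure rather than
# in control flow. The table is authored by a person; the program only looks
# the answer up during operation.
SHIPPING_RULES = {                 # a hand-written decision table
    ("UK", "standard"): 3.99,
    ("UK", "express"): 7.99,
    ("EU", "standard"): 9.99,
}

def shipping_cost(region, service):
    return SHIPPING_RULES.get((region, service))   # interrogate the authored rules

print(shipping_cost("UK", "express"))  # 7.99
```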
Ownership of the copyright in source code or other data structures is defined by reference to their human author, but a fundamental feature of many AI systems is that the underlying logic is developed by the system itself as a result of a training process. Under current law it is not possible for the AI to be considered the author of this work. This gives rise to a number of challenges in establishing both the authorship and the subsistence of copyright in the code or data structure that arises as a result of the training process.
Authorship. The first question in determining copyright is whether there is a human author of the work. This will depend on the particular implementation of the AI training process and the level of human involvement in the creation of the work in which the system’s logic resides. At one end of the spectrum it is clear that a human using a computerised tool, such as text editing software, to implement their design for the logic that the system should use will be considered to be the author of the work in which the logic resides. At the other end of the spectrum, a human who simply provides training data to an AI system and presses “go” is unlikely to be considered the author of any work in which the logic generated by the AI system resides.
Most cases are likely to lie somewhere in between, with a human providing some additional input to the training process that helps the AI to create its logic. For example, many AI development tools, such as TensorFlow, allow developers to review and edit various aspects of an AI system as it progresses through a training process.
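As a hedged sketch of that middle ground, the following uses TensorFlow’s Keras API (with random placeholder data) to show one point at which a developer can review a partly trained system and intervene; the model, data and metric here are all invented for illustration.

```python
# Sketch of human involvement during training: a Keras callback gives the
# developer a point at which to inspect, and act on, the model as it learns.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(100, 4)                  # placeholder training data
y_train = (x_train.sum(axis=1) > 2).astype(int)   # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

class ReviewProgress(tf.keras.callbacks.Callback):
    """A hook where a human can review the partly trained system."""
    def on_epoch_end(self, epoch, logs=None):
        print(f"epoch {epoch}: loss={logs['loss']:.3f}")
        # A developer might stop training, adjust the data or change
        # hyperparameters here, based on what they observe.

model.fit(x_train, y_train, epochs=5, callbacks=[ReviewProgress()], verbose=0)
```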
Computer-generated works. If no human author can be identified for the work, the second issue (under English law) will be to identify who made the arrangements necessary for the creation of the work, as that person will be considered the author of the work (section 9(3), Copyright, Designs and Patents Act 1988) (section 9(3)). This could be a difficult question if there are competing claims between the designer of the learning algorithm and the person who put the algorithm through a training process in order to create the trained system.
The designer of the learning algorithm might try to rely on Nova Productions Ltd v Mazooma Games Ltd where the Court of Appeal held that a person playing a computer game was not the author of screenshots taken while playing the game and had not undertaken any of the arrangements necessary for the creation of the images ([2007] EWCA Civ 219; www.practicallaw.com/7-314-1956). Instead, the court held that the persons who made the arrangements necessary for the creation of the screenshots were the game’s developers.
The person training the AI system would want to distinguish Nova Productions and argue that the training process either involved the use of their skill and labour so that they should be considered the author, or that they had undertaken the arrangements necessary for the creation of the work.
Relevance of authorship. The question of authorship cannot be altered by agreement as it is a matter of status and fact, although it is possible for parties developing AI systems to agree who will be the owner of any copyright that resides in the work in which the trained AI’s logic resides. However, authorship remains relevant for two reasons:
- The duration of copyright in works with a human author is determined with reference to that person’s lifetime, but the duration of copyright in computer-generated works under section 9(3) is defined with reference to the calendar year in which the work is created (expiring 50 years from the end of that year) and will always be shorter than for human-authored works.
- The provision under section 9(3) is specific to the UK and has an equivalent in only a very few other jurisdictions, such as Ireland and New Zealand. There may, therefore, be circumstances in which the work representing the logic of a trained AI system attracts copyright protection in the UK but not in other jurisdictions.
Computer programs. There is an underlying question as to whether the work in which a trained AI’s logic resides is capable of benefiting from copyright protection at all. In the UK, computer programs are protected as original literary works to the extent that skill, judgment and labour have gone into devising the form of expression of the program. At an EU level, the Software Directive (2009/24/EC) requires that EU member states protect a computer program if it is original in the sense that it is the author’s own intellectual creation.
It may therefore be possible to argue in some circumstances that a computer program generated as a result of an AI system being trained does not meet the originality requirement because the form in which the program is expressed does not represent a human author’s skill, judgment and labour.
Databases. An equivalent argument may also be possible in relation to copyright in a database generated through the process of training an AI system, which will only be considered original if the selection or arrangement of the contents of the database constitutes the author’s own intellectual creation. It may be possible to point to aspects of human skill, judgment and labour in the development of the underlying AI training algorithm or the training process itself, but the difficulty will be identifying the causal link between these activities and the form of expression of the program that the AI generates or the selection or arrangement of the contents of the database.
Patents
There are challenges with patenting AI systems and platforms but, despite these, many AI-related patents have been granted (see box “Growth in AI-related patent applications”).
Patentable subject matter. An AI system is often, at a very high level, mimicking a human task. For example, Microsoft’s InnerEye project is an AI system that helps oncologists target cancer treatment more quickly. It does this by using machine-learning techniques to analyse magnetic resonance imaging scans of patients and delineate tumours from surrounding healthy tissue and bone, a task that had previously, and laboriously, been completed by the oncologist drawing contours on 3D images by hand.
If a patent were applied for at this level of generality, the application would be refused because at least one of the fundamental requirements for patentability is not met: there is no description of how the invention works. The invention is, effectively, a black box and the application would be considered insufficient as not containing an enabling disclosure; that is, the patent does not teach the ordinary skilled person how to perform the claimed invention (see “Black boxes” below for further discussion on IP infringement).
This problem is avoided if, instead of claiming the high-level idea of applying AI to the 3D delineation task, the algorithms, source code or other description of how an AI system works is disclosed in the patent application. However, this can lead to a further impediment to patenting: a patent cannot be granted for, among other things, mathematical methods, methods for performing a mental act (such as a method of teaching reading), business methods and programs for computers “as such” (section 1(2), Patents Act 1977) (1977 Act) (section 1(2)).
Even before patent applications were being made for AI-related inventions, the exclusions in section 1(2) were a source of controversy, not least because the English courts and the European Patent Office (EPO) have applied different approaches, with the English courts being more ready to find that this kind of invention is not patentable.
Under English law, an applicant can avoid the exclusion in section 1(2) if it can be shown that the particular apparatus or method claimed in the patent involves a contribution in a technical (that is, non-excluded) field. The test for technical contribution was broken down into a four-step test by the Court of Appeal in Aerotel Ltd v Telco Holdings Ltd and Macrossan’s Application ([2006] EWCA Civ 1371; see News brief “Patenting inventions: the Court of Appeal’s four-step programme”, www.practicallaw.com/5-206-3960). The steps are to:
- Properly construe the claim.
- Identify the actual contribution.
- Ask whether the contribution falls solely within the excluded subject matter.
- Check whether the actual or alleged contribution is actually technical in nature.
When considering the computer program exclusion in section 1(2), it can also be helpful to consider the five signposts originally set out in AT&T Knowledge Ventures/CVON Innovations v Comptroller General of Patents and slightly amended by the Court of Appeal in HTC Europe Co Ltd v Apple Inc ([2009] EWHC 343 (Pat); [2013] EWCA Civ 451, www.practicallaw.com/9-532-4127). The signposts are whether:
- The claimed technical effect has a technical effect on a process that is carried on outside the computer.
- The claimed technical effect operates at the level of the architecture of the computer; that is, whether the effect is produced irrespective of the data being processed or the applications being run.
- The claimed technical effect results in the computer being made to operate in a new way.
- The program makes the computer a better computer in the sense of running more efficiently and effectively as a computer.
- The perceived problem is overcome by the claimed invention as opposed to merely being circumvented.
Palantir’s applications. There are, as yet, no court decisions that consider AI-related inventions. However, the Intellectual Property Office (IPO) has had to consider them and its decision in Palantir Technologies Inc gives an insight into how it handles them (BL O/358/16). Palantir’s three patent applications were refused as they fell within the computer program exclusion in section 1(2).
The hearing officer described the invention in the first patent application as providing a method for selecting, from a larger dataset, a subset of records that are related to a common entity by using a classifier which produces a matching probability value for each record in the larger dataset. The subset is created from records having a probability value above a set threshold. The invention can identify potentially related records in a dataset whose records do not have well-defined common attributes. The classifier uses a separate exemplar dataset to define the target entity. The classifier is preferably a machine-learning algorithm and needs to be initially trained on a training dataset.
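A much-simplified sketch of that method (invented records, with a trivial stand-in for the trained classifier) might look like this:

```python
# Score each record's probability of relating to a target entity defined by an
# exemplar, then keep the records whose score exceeds a set threshold.
records = [
    {"name": "J. Smith", "city": "Leeds"},
    {"name": "John Smith", "city": "Leeds"},
    {"name": "A. Jones", "city": "York"},
]
exemplar = {"name": "John Smith", "city": "Leeds"}  # defines the target entity

def match_probability(record, exemplar):
    """Stand-in for a trained classifier: the share of attributes that agree."""
    hits = sum(record[k] == exemplar[k] for k in exemplar)
    return hits / len(exemplar)

THRESHOLD = 0.5
subset = [r for r in records if match_probability(r, exemplar) > THRESHOLD]
print(subset)  # the records judged likely to relate to the same entity
```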
The classifier (or algorithm) itself was not claimed as inventive, rather, the claimed invention lay in the record matching method. Palantir relied on the fourth signpost in AT&T, stating that the invention made for a better computer because it lowered memory usage by reducing the number of mistaken aggregates of records. But the argument failed because there was no suggestion that the underlying computer was changed. As the hearing officer pointed out, since any program uses up resources on a computer, merely selecting a more efficient one cannot, of itself, be a relevant technical effect. Furthermore, none of the other AT&T signposts assisted Palantir: the method was at the application and user level and the records were user data rather than system data. Therefore, the contribution was not technical because it was nothing more than a computer program.
The inventions in Palantir’s second and third patent applications related to methods of comparing every record in a dataset with each of the other records, with the comparison being made only for selected attributes of each record. The claimed methods provide for greater computational throughput and acceptable memory consumption when compared to brute force direct matching which, with extremely large datasets, could exceed available resources. Palantir claimed that both the second and fourth signposts in AT&T were relevant. However, the argument based on the fourth signpost failed for the reasons already given and the argument based on the second signpost failed because the inventions did not operate at the architectural level, even though they were able to operate on generic data.
Patent ownership. The key issue in relation to the ownership of a patent is to identify who came up with the inventive concept. Under patent law, that person is entitled to be named as the inventor and is primarily granted ownership of the patent. The inventor is the actual deviser of the invention, which, as noted above, is framed in terms of the technical contribution.
The deviser of the programming logic or algorithms is most likely to be the inventor of any AI-related patent. The programmers may qualify if they have been involved in devising the actual invention, but may not qualify if they have merely been the means of putting it into practice, since a non-inventive contribution, such as painting something pink (given as an example in Stanelco Fibre Optics Ltd’s Applications [2004] EWHC 2263), does not qualify a person as an inventor. Similarly, merely providing training data to an AI system would not qualify someone to be an inventor.
Another issue is whether an AI system could be considered an inventor if, through the training process, it came up with a new way to make the computer a better computer in the sense of running more efficiently and effectively as a computer; that is, the fourth signpost in AT&T for gauging whether or not a computer-related invention is patentable. Before deciding this issue, it would have to be assumed that the humans using the AI system realise that an invention has been made and are able to describe it sufficiently for the purpose of claiming a patent, which is not a foregone conclusion given the black box nature of AI systems. But if that is assumed, under the current law, it is not possible for the AI itself to be considered the inventor. Inventorship, like authorship, is considered a human activity (see “Inventive AI systems” below).
Furthermore, since there is no deeming provision which mirrors that in section 9(3), there is nothing that provides that the person making the arrangements for the creation of an invention obtains ownership of the invention that is devised. In practice, therefore, the inventor could be the first person to observe and understand the invention but, if this is the case, the devisers of the underlying algorithms should arguably be considered joint inventors.
It could be said that, in attributing patents to people who are not the inventors, the real inventor, the AI system, is not being acknowledged. However, the consequence of adopting the position of AI as inventor is that, as the law presently stands, the invention could fall into the public domain.
Given the difficulties, it would therefore be sensible to provide for the question of ownership of any AI-related inventions by way of agreement rather than leaving it to the IPO and the courts to resolve the issue (see box “Best practice for contracting to develop AI systems”).
CREATIVE AI SYSTEMS
AI systems have the potential to be involved in acts of content creation that would usually be the sole preserve of a human. The use of certain types of AI systems in content creation has been common for a number of years; for example, speech-to-text word processing programs rely on AI speech recognition and natural language processing techniques. However, this category of AI system is merely used as a tool to assist a human in content creation and does not give rise to any specific IP issues.
Challenges arise, however, in the context of AI systems that remove the need for human input in the creation of certain types of content. This is happening already, but the balance between human and AI contributions to the works varies. At one end of the spectrum, AI systems can be used to detect and extract information from company financial reports and generate journalistic content by populating pre-prepared article templates. At the other end, The Next Rembrandt project trained a deep learning algorithm on 346 Rembrandt paintings and asked it to produce a new painting replicating the artist’s style and subject matter (www.nextrembrandt.com).
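The template-driven end of that spectrum is easy to picture: a sketch along the following lines (with invented figures), where the creative shape of the output is fixed in advance by a human and the system merely fills in the blanks.

```python
# Illustration: extracted data slotted into a pre-prepared article template.
# The human authored the template; the system supplies only the figures.
extracted = {"company": "Acme plc", "revenue": "£1.2bn", "change": "up 8%"}

TEMPLATE = ("{company} reported revenue of {revenue} for the year, "
            "{change} on the previous period.")

print(TEMPLATE.format(**extracted))
```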
In the context of copyright, the issues that arise from AI content creation are the same as those discussed above in the context of elements of AI systems that are generated automatically as a result of an AI training process (see “Copyright” above). These include whether the creation of a work by an AI system involves the exercise of sufficient skill, labour and judgment (under English law) or represents the author’s own intellectual creation (under the case law of the European Court of Justice) so that the output qualifies for copyright protection as an original work. This will depend on who is considered the author of the work and the extent of the author’s input into the creation of the content.
The starting point in this analysis will be that the AI itself is not a sentient being so is unable to exercise skill, labour and judgment or engage in intellectual creation. Therefore, anything that is to contribute to the originality of the work must come from a human. A human simply pressing a button labelled “write me a song” is unlikely to be sufficient, but a human carefully selecting specific inputs to provide to a trained AI with the intention that it will create a work of a certain nature could be enough to satisfy the originality requirement, even though the creation itself was performed by the AI.
This is perhaps analogous to the famous macaque monkey Naruto taking a selfie by pressing the shutter button on a carefully set up camera; any originality in that photograph arose from the skill, labour and judgment that the photographer, David Slater, used when setting up the camera, even though the exact composition and content of the photograph resulted from Naruto’s actions.
Another possible source of originality in AI-generated works could be the skill, labour and judgment or intellectual creation expended in the process of training the AI to be able to create certain types of content or in building the underlying learning algorithm.
If multiple people have been involved, the authorship of works generated by an AI system could potentially be contentious. Establishing which of the parties can be considered the human author will involve weighing their relative contributions to the creation of the work. In addition, if no author can be found, then consideration will have to be given to whether any of the parties could be considered to have made the arrangements necessary for the creation of the work in order to qualify as the author under section 9(3).
INVENTIVE AI SYSTEMS
The use of AI systems as part of research and development activities adds a layer of complexity to the question of who the inventor is and therefore who should own the resulting patent. For example, suppose A builds an AI system for identifying the best material to use given a series of input parameters relating to a particular engineering context. B takes the system and uses it to search for the best material to use in the design of a tail fin for an unmanned aerial drone. C then looks at the three potential options identified by the system and decides to file a patent covering the use of the third material for tail fins in unmanned aerial drones, as C recognises that its use has surprising technical benefits over other candidates. Arguably, it is the evaluation of the candidate material and realising its potential that is inventive and therefore C is the inventor, but would B and possibly A also qualify?
This example, although complex, is similar to ones that have already arisen, for example, in the pharmaceutical industry when using high throughput screening (HTS) of molecules against a target of interest to identify a new drug. Some HTS methods are patented, but the molecules themselves are usually in the public domain. Therefore, the inventions and the patents come out of the anticipated use for the molecule and the inventive step is in evaluating the bioavailability, pharmacokinetics, toxicity or specificity of a molecule and recognising its suitability for use.
The new issue raised by AI is deciding who owns a new idea that the AI has generated. With reference to the example given above, suppose the AI devised a new material for use in the tail fin of an unmanned aerial drone: who would be the inventor? This is not a far-fetched example. Although the first versions of an AI system designed to play the board game Go were trained using data from actual games, the latest version, AlphaGo Zero, was not given any human data; it was merely given the rules and started playing against itself. Within three days it had surpassed the level of AlphaGo, the version that had, in 2016, beaten world champion Lee Sedol in four out of five games. DeepMind, AlphaGo’s developer, reported that AlphaGo Zero developed unconventional strategies and creative new moves that echoed and surpassed the novel techniques that AlphaGo had played in the games against Lee Sedol.
If deep neural networks are capable of inventing new moves when playing Go, they are capable of inventing more. If an AI system invents patentable subject matter, who should own it? As noted previously in relation to the training of AI systems, there are no provisions in patent law specifically addressing this question (see “Patents” above). Inventing is seen as a human endeavour. Section 7 of the 1977 Act refers to the inventor being a person and to ownership vesting primarily in the inventor. Other tests in the 1977 Act reinforce this: an invention is required to be non-obvious to the person skilled in the art, and a patent must disclose the invention sufficiently to enable the person of ordinary skill in the art to perform it.
If an AI system were to be considered an inventor, a number of consequences would follow which would set these rules on their head, including:
- Needing to change the rules on ownership and consider at a policy level who should be considered the owner of the AI-invented patent. Candidates include the owner of the software, the programmer, the user, the inventor (if any) of any invention in the AI system, the observer of the new invention and the person who analyses it.
- Considering whether the tests of obviousness and insufficiency should still be rooted in what the skilled person in the art is taken to think, know and understand. If they are not to be so rooted, and AI is taken to be the judge, there would arguably be no need for the patent system at all; a somewhat fatal conclusion for the patent profession.
IP INFRINGEMENT
Training and using AI systems may infringe third-party IP rights. For example, a system could infringe copyright by using third-party data as part of its training process or as an input to its decision making (see also box “Text and data mining exceptions”).
As an AI system is not a legal person, it cannot incur liability. Liability for IP infringement by an AI system will therefore accrue to the person or entity that controls or directs the actions of the AI. However, one emerging feature of modern AI systems is that they can be given some degree of freedom to determine which data sources are of most use to them or even to seek out new data sources, for example, by searching online.
This gives rise to a potential liability management challenge, as the AI’s discretion could make it difficult to establish in advance whether an AI system is going to infringe a third-party IP right. Equally, it could be difficult to establish retrospectively whether an AI system has infringed a third-party right or in some circumstances whether it or those controlling or directing it had the requisite knowledge to infringe certain rights.
Black boxes
Underlying the potential issues around infringement of IP rights by AI systems is something known in AI circles as the “black box problem”. This problem arises from the fact that the way in which some AI systems store their decision-making algorithms is not in a form that is easily understood by a human. Human-implemented logic in the form of source code is generally set out in a logical fashion and annotated with human readable comments explaining the purpose of each element. This can be compared to the logic developed by an AI neural network which might be represented as a database containing a huge array of weightings for different artificial neurones.
While a human can easily reverse engineer human source code to work out why a particular decision was taken, an AI neural network is potentially immune to human scrutiny. The fundamental difference between the two is that with human source code it is generally possible to predict how a system will respond to a given input, but the only way to find out what will happen with a given input to a neural network is to apply that input to the network and see how it behaves.
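The contrast can be shown with a deliberately artificial example (invented rules and weights): the first function’s reasoning can be read off the page, while the second’s resides in numbers that explain nothing on their own.

```python
# Human-authored logic: the reason for any decision is visible in the code.
def approve_rules(income, debt):
    return income > 30000 and debt < 10000   # explicit, explainable conditions

# Logic as it might emerge from training: an (invented) array of weights.
# The only way to know what it does with an input is to run it.
learned_weights = [0.42, -1.37, 0.08, 2.91, -0.66]

def approve_network(features):
    score = sum(w * f for w, f in zip(learned_weights, features))
    return score > 0   # why this boundary? The weights do not say.

print(approve_rules(45000, 5000))                  # True, for stated reasons
print(approve_network([1.0, 0.2, 3.5, 0.1, 0.9]))  # a decision, but no reasons
```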
A number of institutions, including the Defence Advanced Research Projects Agency (in a project called Explainable Artificial Intelligence, or XAI), the University of California, Berkeley and the Georgia Institute of Technology, are working to crack open the black box. But if an AI system cannot explain why it created a particular work, used a method or made a specific decision, then it may be impossible to decide whether it or those humans associated with it (as users, programmers or owners) are liable. Resolving this question is becoming increasingly urgent, not least because of the issues of liability surrounding self-driving cars.
LOOKING FORWARD
In a policy paper published on 26 April 2018, the government stated that it wanted the UK to be a global leader in the emerging revolution in AI technology (www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal). The paper mentions the need to define technical standards to allow interoperability between AI systems and to share and use data, but is silent on IP issues. Nor were IP issues mentioned in the European Commission’s announcement, published on 25 April 2018, about a series of measures to put AI to use in the EU and boost its competitiveness (https://ec.europa.eu/digital-single-market/en/news/commission-presents-plan-make-most-artificial-intelligence). It will not be long, however, before there is an urgent need to address them, not least because both the UK and the EU view the creation, protection and exploitation of IP as a key plank of improving competitiveness.
This article first appeared in the July 2018 issue of PLC Magazine.