UK Supreme Court Rules that AI cannot be an ‘Inventor’ Under UK Patent Law

In Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49, the UK Supreme Court ruled that AI cannot be an ‘inventor’ for the purposes of UK patent law. The ruling concludes a series of appeals from Dr Stephen Thaler and his collaborators, who argued that an AI system called ‘DABUS’ should be named as the inventor of two inventions it had generated autonomously, relating to food and beverage packaging and light beacons. This was part of a series of test cases, which have had limited success globally, seeking to establish that AI systems can make inventions and that the owners of such systems can apply for and secure the grant of patents for those inventions. The judgment noted that the broader questions of whether an invention generated autonomously by AI ought to be patentable, or whether the meaning of the term ‘inventor’ should be expanded to include machines powered by AI, were matters of policy to be addressed by legislation.

The UK Supreme Court made three main findings:

  1. DABUS is not an ‘inventor’ under the Patents Act 1977 (“Patents Act”).
     * An ‘inventor’ within the meaning of the Patents Act must be a natural person (a human being). Since DABUS is a machine, not a natural person, it cannot be an ‘inventor.’
     * It was not Dr Thaler’s case that he was the inventor and had simply used DABUS as a highly sophisticated tool. Had Dr Thaler made that case and named himself as the inventor, the Court noted that its decision might have been different, but it was not the Court’s place to determine that question.
  2. Dr Thaler was not entitled to apply for and obtain a patent simply by virtue of his ownership of DABUS.
     * Dr Thaler sought to rely on the doctrine of accession, whereby the owner of existing property also owns new property generated by that existing property (just as a farmer owns both the cow and the calf). The Court held that the doctrine applies only to tangible property, not to intangible inventions, so title to the invention cannot pass as a matter of law from the machine that generated it to the owner of that machine. The argument also assumes that DABUS itself can be an inventor within the meaning of the Patents Act, which, as the Court had already established, it cannot.
  3. By failing to satisfy the requirements of the Patents Act, the two patent applications must be taken to have been withdrawn.
     * Because Dr Thaler had failed to name an inventor and had failed to state a valid right to apply for and obtain the patents, the UK Intellectual Property Office had been correct to find that his two patent applications would be taken to be withdrawn at the expiry of the 16-month period prescribed by UK patent law for this purpose.

Commentary

Dr Thaler’s UK patent applications were part of a project involving parallel applications to patent offices around the world. The UK Supreme Court’s ruling is unsurprising and follows similar decisions in the United States and Europe.

The ruling raises significant issues for the AI industry, but it is important to focus on what it confirms: that inventors must be natural persons for the purposes of UK patent law. The judgment does not affect the patentability of AI-generated inventions as such, since it does not necessarily preclude a person from securing a patent, provided that a human being is named as the inventor.

UK AI Regulation Bill Proposes New AI Regulator

While the focus of attention in the world of AI in recent weeks has been the EU AI Act (see “EU AI Act Agreed” below), there have also been other noteworthy legislative developments. On 22 November 2023, the Artificial Intelligence (Regulation) Bill (the “Bill”) was introduced to the UK Parliament and passed its first reading in the House of Lords. The Bill seeks to establish a central AI authority (“AI Authority”) to oversee the UK’s regulatory approach to AI. The proposal for an AI Authority follows the UK Government’s formal announcement of a UK AI Safety Institute at the global AI Safety Summit at Bletchley Park (summarised below).

Whilst the Bill largely reflects the approach of the UK Government, this is a Private Members’ Bill (“PMB”). PMBs are legislative proposals introduced into one of the UK Houses of Parliament by ‘backbench’ members (members who are not Government Ministers). Most PMBs fail to pass unless the UK Government steps in to support their progress through the legislative process.

Continue reading “UK AI Regulation Bill Proposes New AI Regulator”

EU AI Act Agreed

Late on Friday (8 December 2023), the European Union’s Commission, Parliament and Council concluded their “trilogue” negotiations for the EU Artificial Intelligence Act. The summary below is based on the information available to date. It will be some time before the definitive text is finalized and released, since it must still pass through various committee stages and have its legal language finalized in multiple languages.

Prohibited AI Applications

The following applications of AI will be prohibited:

Continue reading “EU AI Act Agreed”

Bletchley Park AI Safety Summit 2023

On 1 and 2 November 2023, world leaders, politicians, computer scientists and tech executives attended the global AI Safety Summit at Bletchley Park in the UK. Key political attendees included US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General António Guterres, and UK Prime Minister Rishi Sunak. Industry leaders also attended, including Elon Musk, Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, Amazon Web Services CEO Adam Selipsky, and Microsoft President Brad Smith.

Day 1: The Bletchley Declaration

On the first day of the summit, 28 countries and the EU signed the Bletchley Declaration (“Declaration”). The Declaration establishes an internationally shared understanding of the risks and opportunities of AI and the need for sustainable technological development to protect human rights and to foster public trust and confidence in AI systems. In addition to the EU, signatories include the UK, the US and, significantly, China. Nevertheless, there are notable absences, most obviously Russia.

Continue reading “Bletchley Park AI Safety Summit 2023”

Artificial Intelligence Briefing: FTC Holds Forum on Commercial Surveillance and Data Security

Our latest briefing explores the recent FTC commercial surveillance and data security forum (including discussion of the widespread use of AI and algorithms in advertising), California’s inquiry into potentially discriminatory health care algorithms, and the recent California Department of Insurance workshop that could shape future rulemaking regarding the industry’s use of artificial intelligence, machine learning and algorithms.

Continue reading “Artificial Intelligence Briefing: FTC Holds Forum on Commercial Surveillance and Data Security”

NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment

The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.

NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective "risk footprint." Although the NIST AI RMF is a voluntary framework, it has the potential to impact legislation: NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).

The AI RMF was designed and is intended for voluntary use to address potential risks in "the design, development, use and evaluation of AI products, services and systems." NIST envisions the AI RMF as a "living document" that will be updated regularly as technology and approaches to AI reliability evolve and change over time.

Continue reading “NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment”

©2024 Faegre Drinker Biddle & Reath LLP. All Rights Reserved. Attorney Advertising.