In the final article in our series looking at how Generative AI is impacting trade marks and designs, we explore the evolving regulatory framework surrounding copyright in AI-generated content. (For further information, and to view the other articles in the series, please visit our Generative AI hub.)
The looming European AI Act does not address whether AI-generated content merits copyright protection, or whether training on third-party material constitutes copyright infringement, but it does oblige developers to disclose publicly, and in detail, when they have trained their LLMs (large language models) on third-party content, and such training will have to be carried out using approved safeguards. Furthermore, publishers who have used AI wholly or partially to generate or edit content will also have to disclose the fact.
UK businesses are clearly anticipating similar rules applying in the UK. The media are already disclosing when news articles have been edited using AI (inviting readers to email the publisher with any errors they detect), and marketing and professional advisory firms using AI tools are already disclosing the scope and nature of such use (and the safeguards imposed on it) in AI policies that echo the privacy policies required for GDPR compliance. Indeed, the concerns raised when GDPR came in, that compliance was in many ways impossible and that the regulatory burden would stifle investment and deter US companies, are already being levelled at the AI Act. Emmanuel Macron has warned, “… we will regulate things that we will no longer produce or invent. This is never a good idea.” Like GDPR, the EU AI Act also looks set to be rapidly outstripped by the exponential evolution in machine-learning capability.
If EU leaders are worried, US business and political figures are even more concerned, and the differences in regulatory philosophy between the EU and the US seem increasingly likely to create friction and confusion for those wishing to do business in both territories. As touched on above, this was evident in the contrast between the EU’s GDPR and the US’s approach to privacy (a few States excepted), and seems certain to be the case with the way the two jurisdictions regulate AI. Understanding the differences between these two approaches is therefore not only of legal interest but essential for any business operating in the field of AI.
Unlike in the EU, but as in the UK and elsewhere, the overall national response in the US has yet to crystallise. With no comprehensive Federal legal or regulatory framework currently in place, AI is to date regulated through regulatory or judicial application of non-AI-specific State statutes or AI-specific State privacy legislation. How the US develops may depend on the views of the new administration, but legislative history would imply that where the EU’s AI Act exemplifies a precautionary approach, prioritising consumer protection and ethical AI use, the US may lean towards a more innovation-friendly, laissez-faire stance. The Republicans have already voiced opposition to what they perceive as overly restrictive AI governance. This divergence reflects broader regulatory philosophies: the US often favours market-driven, innovation-centric policies, whereas the EU has tended to prioritise rights protection and regulatory oversight.
As noted above, AI-specific legislation currently in force in the US has been restricted to measures at State level. However, efforts to harmonise AI laws in the US are underway, as evidenced by Kamala Harris’ March 2024 announcement, which emphasised the need for federal legislation to address the potential for bias in AI used for political, commercial, and healthcare purposes. This builds on President Joe Biden’s October 2023 Executive Order, which signalled an international cooperation approach by aligning with the International Convention on AI.
Further regulation in the US therefore seems likely. Some big companies in the sector have agreed to internal and external testing of their systems before release, partly to allay public concern but perhaps also to get ahead of future regulation. Such voluntary oversight could well end up being made mandatory. There have also been indications that Federal restrictions might be imposed on the use of AI technology that generates human voices.
The Senate public hearings in September 2023 shed light on what any such legislation might include, with measures such as the regulation of AI in political advertisements, the regulation of AI used in the surveillance of employees, and the protection of individuals’ voices and visual likenesses from generative AI creation. On the copyright front, a US District Court has ruled that human authorship is an essential part of a valid copyright claim, and the Copyright Office will refuse to register a work unless it has been created by a human. This contradicts the UK approach covered in Who owns the content generated by AI?. Furthermore, when it comes to litigation over whether training LLMs on third-party content constitutes copyright infringement, the US is proving to be the biggest battleground of all.
Kamala Harris’ March 2024 announcement revealed that the last Administration was particularly concerned that AI tools could be susceptible to bias when used for political, commercial, recruitment and healthcare purposes, and much of the State legislation that has come onto the statute books addresses these concerns. On the other hand, the Republican Party’s position in July 2024 was: “We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.” There is clearly some attention-grabbing, electioneering language at play here, but a more business-friendly, regulation-light approach looks likely under the new presidency.