theregister.com

On the issue of AI copyright, Blair Institute favors tech bros over Cool Britannia

Opinion Former UK prime minister Tony Blair became famous for standing shoulder to shoulder with allies, even though the fallout from the Iraq war forever sullied his reputation. Nonetheless, the institute that bears his name makes it clear who it stands with when it comes to using copyrighted material to fuel the expansion of machine learning into every human domain.

Part-funded by Oracle – itself no stranger to disputes over intellectual property – the Tony Blair Institute for Global Change outlined what it called "an ambitious program for cementing the UK's leadership in frontier-AI development and the creative industries."

In doing so, it claims to tackle the thorny issue of using copyrighted material for training machine learning models. It comes down in favor of UK government proposals to allow exceptions to copyright rules in the case of text and data mining (TDM) needed to feed the AI industry's voracious appetite, with an option given to content producers to opt out.

"While a TDM exception with opt-out will require careful implementation to be effective, we believe it is sound policy for legal, economic, and geopolitical reasons," it said.

The "we" here is doing some heavy lifting and presumably refers to the report's authors, all drawn from the science and technology community; none of them represents artists, writers, musicians, or anyone else with an interest in having their rights enforced.

To rewind, the major LLMs stupefying the popular and political debate on machine learning have been trained on vast amounts of copyrighted material, something publishers have become understandably vexed by. The Blair Institute report authors reply with an argument that is close to the idea of "fair use" as applied in the US.

"To argue that commercial AI models cannot learn from open content on the web would be close to arguing that knowledge workers cannot profit from insights they get when reading the same content," the report said.

But anyone whose content feed has been flooded with Studio Ghibli fakes since OpenAI turned on the mimicry feature in its ChatGPT image generation tool might see the problem. It is not about a knowledge worker gaining insight, rather it is global reproduction on a massive scale at the touch of a button.

"I'm honestly pretty shocked at how brazenly pro-big tech the final version of the [Blair Institute] report is, and that their proposed solutions are an academic center and a tax on consumers," responded Ed Newton-Rex.

Taking to social media platform Bluesky, the CEO of Fairly Trained, a nonprofit organization that certifies which companies take a more consent-based approach to ML training data, said the report parroted the false claim that there is uncertainty over copyright law and AI in the UK.

He also said it was untrue that a TDM exception with an opt-out would increase rights holders' control over their work.

"They suggest an opt-out scheme would give rights holders more control over how their works are used than they currently have (this is false; licensing is currently required by law)," he said.

Here, it is worth remembering that LLMs have benefited from the unauthorized use of creative work. As former Google staffer James Smith said, much of the damage from text and data mining had likely already been done.

Smith, co-founder and chief executive of Human Native AI, told MPs in February: "The original sin, if you like, has happened; the question is, how do we move forward?"

Artists, writers, and musicians might argue that we can move on when they have been paid for work already taken by tech companies.

In 1997, Blair was happy to bathe in the vicarious glow of the Cool Britannia phenomenon, rubbing shoulders with fashion designer Vivienne Westwood and Oasis songwriter Noel Gallagher. But if 2025 tells us anything, it is that standing shoulder to shoulder is not what it used to be. ®