Max Richter

Source: Parliament.tv

The technology and creative sectors set out opposing views on how AI and copyright law should progress in the UK during a hearing in the UK parliament on Tuesday (February 4).

The hearing was a joint session of the culture, media and sport committee and the science, innovation and technology committee.

“This isn’t about a right to copy-and-paste and regurgitate work,” said Vinous Ali, deputy executive director of the Start-Up Coalition, which lobbies on behalf of tech start-ups and supports a more permissive approach to copyright that would allow AI models to train on existing work. “The models are being trained on vast swathes of data. Millions, perhaps billions of data points, aren’t reading and copying and pasting. It’s learning.”

“The more AI companies that are looking for data that are improving their models, represents opportunity for the creative sector for new and interesting monetisation models. There is always going to be a great creative sector,” added James Smith, co-founder and CEO of Human Native AI, a platform that helps rights holders control, and be compensated for, the use of their creative works in AI training. He believes the most powerful models will come from the companies that are the “best actors” and train on licensed content.

“There is going to be great opportunity going forward if rights holders continue taking control of their content, expressing ‘opt-out’ by whichever mechanism exists, and then engaging in licensing. There is an opportunity for the creative sector to engage in the AI economy.”

Google and OpenAI were invited to take part in the hearing, but declined. Chi Onwurah, chair of the science, innovation and technology committee, said the reason the companies had given for not taking part was that “the government consultation is still live”.

“Having a public understanding of how these decisions are being made within tech companies as well as within government is really important,” added Onwurah.

“Vanilla-isation” of culture

Creatives are sceptical about how effective an opt-out mechanism, akin to the one championed by the EU, would be at protecting their rights. An ‘opt-out’ approach would put the onus on copyright owners to proactively opt out of their work being used to train AI models.

Max Richter, a German-born, UK-based composer whose credits include the film Mary Queen Of Scots and the HBO series My Brilliant Friend and The Leftovers, said an ‘opt-in’ model is preferable. “Opt-out puts the onus on individual artists to police these giant multi-billion dollar tech companies, in a constantly shifting landscape,” said Richter.

He anticipates a “vanilla-isation” of culture if AI companies are left to use creatives’ copyrighted work without restriction, and an “impoverishing” of human creatives, as many facets of the creative industries are “already fragile and exist on people’s dedication and passion. I would be very cautious about adding additional stresses to that”.

There is also concern from the creative industries about how far Prime Minister Keir Starmer has allied himself with US big tech. Last month, Starmer outlined his AI action plan, which “mainlines AI into the veins” of the UK, according to a government announcement at the time.

The session follows a vote last week in the UK parliament’s upper chamber, the House of Lords – which reviews and can amend bills before they become law – in favour of amendments to the Data (Use and Access) Bill. The amendments were put forward by the former filmmaker and crossbench baroness Beeban Kidron, who wants to strengthen copyright protections in the face of AI companies.

The amendments would require operators of internet scrapers and general-purpose AI models to comply with UK copyright law and to abide by a set of procedures. The government’s consultation on the subject commenced in December and runs until February 25.

There are three options facing the government. The first is to strengthen copyright law by requiring comprehensive licensing. The second is to legislate for a text-and-data mining (TDM) exception, which would allow data mining of copyright works, including for AI training, without rights owners’ permission, in line with the approach in the US and Singapore; this option is favoured by the tech community but has been resisted by the creative communities. US copyright law includes a ‘fair use’ doctrine, which AI companies claim covers the training of AI models, a position that has prompted a series of copyright lawsuits.

The third is a TDM exception that gives rights owners the ability to reserve their rights and opt out, akin to the EU’s approach.