Artificial intelligence, or AI, is now an inextricable part of our lives. We’ve all no doubt had our fun with it, especially when apps like ChatGPT were first widely released.
I asked it to write two columns, one in my own style and the other as if by the former UK prime minister Boris Johnson. Mine was fairly serious and geopolitical; Mr Johnson’s was eerily close to his inimitable prose style. “Bananas, dear readers, are not just a fruit,” ChatGPT Boris wrote. “They are a symbol of our national character – strong, straight and slightly bent at times.” Somewhere on my phone I still have a video my elder son created of US President Donald Trump supposedly saying how much he liked reading this newspaper.
We were always told that AI would move scarily fast – and it has. So much so that we have now entered an era of unprecedented litigation over the extreme dangers AI can pose to content creators of every kind. Robert Thomson, the chief executive of News Corp, was direct about this when I saw him being interviewed at a conference in Singapore last week. As head of News Corp, which includes The Wall Street Journal, The Times and The Sun in the UK, the book publisher HarperCollins and real estate businesses, Mr Thomson was asked what opportunities he saw from AI.
“Yeah – suing people,” he replied. “Which we do a lot of.” Mr Thomson laughed, as did the audience; it wasn’t the answer they were expecting. He could be forgiven for being momentarily upbeat over such a serious issue, however, as last month News Corp was part of the biggest copyright settlement in history, when the AI firm Anthropic agreed to pay $1.5 billion to settle a class action by authors who said their work had been stolen to train the firm’s AI models.
It’s just one of many cases. Encyclopaedia Britannica and Merriam-Webster are suing Perplexity AI for allegedly scraping their websites and copying and reproducing their articles without permission. News Corp is also suing Perplexity – Mr Thomson hopes that revealing internal emails from the firm will emerge during the legal action, saying at the conference: “there’s one way to stop them being made public, which is: pay us a lot of money”.
The New York Times and others have brought a copyright lawsuit against OpenAI, which, they say, used their content without payment or permission to train ChatGPT. Rolling Stone magazine’s parent company is suing Google over its AI summaries, which it says have caused major drops in traffic to its website and, with them, its revenue. Getty Images has sued an AI firm for allegedly copying and reproducing millions of images; and there’s a long list of others.
Last year Abba’s Björn Ulvaeus, the president of CISAC – the International Confederation of Societies of Authors and Composers – joined 15,000 creators to issue this statement: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
Mr Ulvaeus has talked in the past of how tricky it can be to ensure that money follows the talent. What if AI created a song based not just on his music, but on the works of 10 other musicians as well? Who could tell? And wouldn’t it count as what is legally known as “fair use”? (Probably not, according to a US court ruling earlier this year that was welcomed by copyright holders.)
If all that sounds rather dire, even those who warn most strongly of AI’s risks to content creators can also see its potential benefits. Mr Ulvaeus, for instance, is writing a musical with the assistance of AI songwriting tools, although they often “come out with garbage”, he says, and are “very bad at lyrics”.
Last year, News Corp signed a five-year deal worth more than $250 million with OpenAI to allow the tech firm to use News Corp’s content. “It was indicative to me of the way we have to have dialogues with a range of companies who are frankly potential partners but at the same time potential foes,” said Mr Thomson in his Singapore interview. From his side, he said, “it was us trying to visualise products that didn’t exist, and also OpenAI understanding that what we couldn’t allow was the creation of a cannibalisation engine that would actually undermine the very act of creation”. It was a question of grappling with “unknowables”, Mr Thomson said. But “we’re at a moment in business history when ‘unknowables’ are real”.
Both he and his interviewer agreed that the stage we are at with AI has many similarities to the period from the late 1990s when the internet rose from being a curious new helper – I remember a colleague at a London newspaper telling me at the time “there’s a search engine I use called ‘Google’” – to a disrupter that overturned, and ravaged much of, the news media industry in particular.
So these are testing times for content creators – a term that journalists shunned in those days, but which most would now embrace as part of a wider community that stands to gain or lose from this latest technological revolution.
On one point, Mr Thomson was surely on the money. Don’t leave AI to the IT department, he said. If it isn’t infused across the whole institution, you could be more vulnerable to AI risk; just as importantly, he said, “you’re probably not going to be conscious of AI opportunity”.
And if you’re not sure what that could be for you? Well, you could always ask Copilot.