In a recent development in the ongoing copyright fight between music publishers and Anthropic over its Claude chatbot, a US district judge has approved an agreement granting the court authority to intervene if the chatbot spits out song lyrics without licensing fees being paid to rights holders.
A Small Win for Music Publishers
On Thursday, music publishers secured a small win in their fight against Anthropic. The deal, reached between Anthropic and the publisher plaintiffs who license some of the most popular songs on the planet, resolves one aspect of the dispute. Under the agreement, Anthropic admitted no wrongdoing and agreed to maintain its current strong guardrails on its AI models and products throughout the litigation.
Guardrails: A Crucial Aspect
Anthropic has repeatedly claimed in court filings that its guardrails effectively prevent outputs containing actual song lyrics from hits like Beyoncé's "Halo," Spice Girls' "Wannabe," Bob Dylan's "Like a Rolling Stone," or any of the 500 songs at the center of the suit. The deal stipulates that Anthropic will apply equally strong guardrails to any new products or offerings.
Court Authority and Intervention
The agreement grants the court authority to intervene should publishers discover more allegedly infringing outputs. Before seeking such an intervention, publishers may notify Anthropic of any allegedly harmful outputs, including:
- Partial or complete song lyrics
- Derivative works mimicking the lyrical style of famous artists
After conducting an expeditious review, Anthropic will provide a "detailed response" explaining any remedies or clearly stating its intent not to address the issue.
Ongoing Dispute: AI Training on Lyrics
The deal does not settle publishers' more substantial complaint alleging that Anthropic's training of its AI models on their works violates copyright law. With the question of whether Anthropic's current guardrails adequately block allegedly harmful outputs now resolved, the court can focus on arguments regarding "publishers' request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers' lyrics to train future AI models."
Anthropic’s Reversal and Deal
In an earlier court filing, Anthropic had argued that the relief publishers sought, preventing harmful outputs "in response to future users' queries," was "moot." The company reversed course to reach the deal, however. Anthropic had also accused publishers of "purposefully" engineering prompts on an outdated version of Claude to produce outputs that no ordinary Claude user would allegedly ever see.
Expert Testimony and Evidence
Publishers sought help from an expert, Ed Newton-Rex, the CEO of Fairly Trained, a non-profit that certifies generative AI companies that license the data they train on or rely on public domain or openly licensed data. Newton-Rex conducted his own testing of Claude's current guardrails and surveyed public comments, allegedly finding two "simple" jailbreaks that allowed him to generate whole or partial lyrics from 46 songs named in the suit.
Anthropic’s Response and Rebuttal
At first, Anthropic tried to get this evidence tossed, arguing that publishers were attempting to "shoehorn" "improper" new evidence into the record. Publishers dug in, however, arguing that they needed to "correct the record" regarding Anthropic's claims about its supposedly effective current guardrails in order to prove "ongoing harm" necessitating an injunction.
The Importance of Fair Use
The question of whether AI training is a fair use of copyrighted works remains hotly disputed in court. For Anthropic, the stakes are high: a loss could trigger more than $75 million in damages, as well as an order possibly forcing Anthropic to reveal and destroy all of the copyrighted works in its training data.
Conclusion
The lyrics battle between music publishers and Anthropic continues. The new agreement gives the court authority to intervene if Claude outputs song lyrics without licensing fees being paid to rights holders, but the larger question of whether training AI models on copyrighted works is fair use remains unresolved.
Statements from Anthropic
- "Claude isn’t designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement." – Anthropic’s spokesperson
- "We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use." – Anthropic’s statement