We all want to know about American libel law, now that President Trump has launched his pre-action missile at the BBC. If he pursues his claim it will be under Florida law, where his defamation action will not be statute-barred. In the UK such claims must be commenced within one year of publication; Florida allows two. There are other significant differences between the English and American defamation systems, which I will explore in this and the following post. Whatever the outcome of Trump v the BBC, the question occupying libel lawyers in the US at the moment does not concern a human-run journalistic enterprise, whatever its flaws. It is the collision between antiquated libel laws the world over and the runaway publication machine called Artificial Intelligence.
No UK court has yet issued a judgment in a libel or defamation claim concerning AI-generated content, but several cases and legal actions are emerging and the issue is widely anticipated to reach the courts soon. I will discuss these later. There is rather more activity on this front across the pond. American defamation law is very different from ours, but we can see the enormous problems that arise when a technology provider is presented with a libel writ in respect of a statement distributed by AI that has caused serious harm to a person’s reputation. A recent example is set out in an article in The New York Times by Ken Bensinger, who reports that a solar contractor in Minnesota called Wolf River Electric noticed a dramatic fall-off in sales.
“When they pressed their former customers for an explanation, the answers left them floored.
The clients said they had bailed after learning from Google searches that the company had settled a lawsuit with the state attorney general over deceptive sales practices. But the company had never been sued by the government, let alone settled a case involving such claims.
Confusion became concern when Wolf River executives checked for themselves. Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.
Unsurprisingly, Wolf River executives decided they had no choice but to sue Google for defamation. This is just one of half a dozen libel claims filed in the US over the past two years concerning content produced by AI tools that generate text and images. Another case, dating back to 2023, involved a talk radio host and Second Amendment advocate (the constitutional right to bear arms) who found out that AI had falsely accused him of embezzlement – the falsehood was discovered by a journalist looking up the radio presenter’s name on the internet.
Unlike UK law, American defamation law requires the claimant to prove intent. Obviously it is impossible to know what is going on inside the algorithms that drive AI models like ChatGPT and Perplexity. As the radio host’s lawyer put it:
“Frankenstein can’t make a monster that runs around murdering people and then claim he had nothing to do with it,”
As it turns out, the court dismissed that libel claim because the journalist in question had not trusted the ChatGPT allegations. The judge ruled that “If the individual who reads a challenged statement does not subjectively believe it to be factual, then the statement is not defamatory.”
Most of these claims in America are settled by the tech giants, who worry that a verdict finding a company liable for the output of its AI model could open the floodgates to litigation from others who discover falsehoods about themselves put out by large language models. And then there are the costs implications of a full-scale court battle. These claims have to be defended, at great expense, even if the AI company prevails. As a result, AI companies that produce such software may find it impossible to get liability insurance.
So the companies don’t want to risk fighting these claims – but how long can they afford to go on settling?
Private companies like Wolf River have a good chance of establishing that defamation has occurred, because under American law they only have to show that the tech company acted negligently, rather than with “reckless disregard” for the truth, the higher standard that applies to public figures. Wolf River can also prove a quantifiable loss, in terminated contracts, as a result of the misstatements made about it.
This is becoming particularly significant as AI programmes are integrated into search engines and other products, so that users tend to trust the results in a way they might not if they had put the same question to an AI chatbot directly. As one American defamation specialist noted,
“even if users realize that AI programs are no more reliable than, say, rumor or gossip, the law generally recognizes that rumor and gossip can be quite damaging, and can therefore be actionable.” (Eugene Volokh, “Large Libel Models? Liability for AI Output”, Journal of Free Speech Law, Vol. 3, 2023)
The relevant section of the US Code, 47 U.S.C. § 230, is unlikely to provide AI companies with immunity for material composed and communicated by their AI programmes. Section 230 forms part of the Communications Decency Act of 1996 and provides protections for online intermediaries in respect of content posted by third parties. The provision states that
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
But AI output is composed by the programmes themselves; it is not a quotation from existing sites. In drafting the provision the way it did, Congress did not intend to provide immunity to companies that deploy software which itself creates messages or pictures that have never been expressed by third parties. Text distributed by an AI company will tend to be associated, in the average reader’s mind, with the credibility of the programme and the company. Equally, there is no liability in defamation for a user (like Wolf River’s cancelled clients) who believes an AI programme’s false output about a person or a company and as a result declines to do business with them. The liability lies squarely with the AI company.
In US libel law, the threshold inquiry is whether the “challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact.” (Takieh v. O’Meara, 497 P.3d 1000, 1006 (Ariz. Ct. App. 2021))
As Volokh comments,
After OpenAI promotes its superiority to 90% of test-takers at producing answers to complicated questions, it can’t then turn around and, in a libel lawsuit, argue that it’s all just Jabberwocky.
Libel law looks at the natural and probable effect of assertions on the average lay reader, not at how they would be perceived by a technical expert. For this reason, no immunity arises from the fact that everyone understands that AI programmes are not perfect.
Nor is it possible, in US law in any event, for tech companies to rely on the inclusion in their AI programmes of disclaimers that stress the risk that their output will contain errors. American defamation law has long treated false, potentially reputation-damaging assertions about people as actionable even when it is evident that the assertions might be false.
“Publication” in the context of AI is particularly problematic because publication happens each time an AI programme responds to a user prompt, and the Restatement (Second) of Torts notes that liability arises in respect of separate aggregate publications on different occasions: “In these cases the publication reaches a new group and the repetition justifies a new cause of action”.
The Restatement makes clear that “publication” in libel cases is a legal term of art:
Publication of defamatory matter is its communication intentionally or by a negligent act to one other than the person defamed.
There is no doubt then that a statement put out to a user by an AI programme is “publication” to that user.
Most state courts treat written defamatory publications as actionable without proof of economic loss, although the First Amendment (the right to free speech) limits this doctrine where the speech concerns a matter of public interest and the falsehood is merely negligent rather than reckless or knowing. In cases such as Wolf River, where the speech is on matters of private concern to a private company, damages need not be proven (although obviously in their case they were able to prove economic loss).
Negligence, in both the US and the UK, has always been an exception to the pure economic loss rule, which militates against damages claims in the context of defective products that have caused only financial loss rather than physical damage.
Can an AI company avoid liability by drawing an analogy with a traditional newspaper, where a human or humans write, edit and distribute the libellous stories, whereas no human at an AI company would have written, edited, or even typeset the assertions? The answer is that newspapers, bookstores and newsstands are still liable if they have reason to know of the defamatory character of the material they are distributing.
There is, moreover, a question that many of us users of AI have been asking: if an AI programme can spot errors in its own output when asked for more detail, as we know it frequently does, that suggests a reasonable alternative design. Why can’t the AI be programmed to recheck its work automatically? That would obviate the need for the “apologies” we so often receive when double-checking a response to a request.
And those of us who remember the recent nonsensical responses by Google’s Gemini AI system to certain questions, which revealed the baked-in “woke” biases of the programmers, might wonder why these tech companies, so determined to prevent their software from generating offensive content, cannot deal with content that the law has long recognised as potentially highly damaging (i.e. libellous). Eugene Volokh highlights the fact that a one-hundred-page OpenAI document proclaims the model’s avoidance of incitement to violence, instructions for finding illegal content, harassing or hateful content, erotic content and encouragement of self-harm. But nowhere in that document is there a reference to libel, defamation or reputation.
Volokh concludes his article by saying that the courts can themselves revise the common-law tort rules in light of the special features of AI technology. Courts made the common-law rules in a pre-AI era, and they can change those rules if they think they have become inapt in the face of new technological developments.
But of course we must not go overboard in this regard: the utility of AI programmes would be diminished if their functionality had to be sharply reduced in order to prevent libel.