Last week a Georgia state court granted summary judgment in favor of OpenAI, ending a closely watched defamation lawsuit over false information—sometimes called “hallucinations”—generated by its generative AI product, ChatGPT.  The plaintiff, Mark Walters, is a nationally syndicated radio host and prominent gun rights advocate who sued OpenAI after ChatGPT produced output incorrectly stating that he had been accused of embezzlement in a lawsuit filed by the Second Amendment Foundation (“SAF”).  Walters is not, and never was, a party to that case. 

The incident began on May 3, 2023, when an editor of AmmoLand.com used ChatGPT to summarize a real complaint filed by SAF against the Attorney General of Washington.  In response to a series of prompts, ChatGPT generated a summary falsely alleging that Walters had embezzled funds from SAF, an entirely fictitious claim.  The editor quickly recognized the error, verified the complaint's actual content, and never published the false information.  Nonetheless, Walters sued OpenAI, claiming that this statement in ChatGPT's output constituted defamation.[1]

The Court’s Ruling

The Superior Court of Gwinnett County granted summary judgment in favor of OpenAI, dismissing Walters’ defamation claim on three separate grounds. 

No Defamatory Meaning: The Court ruled that no "reasonable reader" in the editor's position would have believed the statement to be factually accurate.  There were multiple "red flags" warning the editor that mistaken output was a real possibility.  Before producing the "hallucination" about Walters, ChatGPT warned the editor that (1) it could not access the internet or the link provided to the complaint, and (2) the complaint was filed after ChatGPT's "knowledge cutoff date," i.e., during a period for which it had no information.  ChatGPT's terms of service also warned users generally that it can and does at times provide inaccurate information.  And the editor knew from past experience that ChatGPT could provide "flat-out fictional responses."  The Court held that these warning signs "objectively established to any reasonable reader that the challenged ChatGPT output was not stating actual facts."

No Negligence or Malice by OpenAI:  The Court also held that Walters failed to establish the requisite degree of fault by OpenAI.  In defamation cases, the required showing turns on whether the plaintiff is a public figure or public official, or instead merely a private citizen.  Private citizens must show only "ordinary negligence," i.e., that the defendant failed to exercise the care that a "reasonable publisher in its position would have employed prior to publishing" the fact at issue.  Public figures and public officials, on the other hand, must meet the higher "actual malice" standard, which requires clear and convincing evidence that the defendant either "knew that the allegedly defamatory statements were false" or published the defamatory content with reckless disregard for its truth.[2]

Here, the Court held that Walters' defamation claim failed under either standard.  The Court first held that Walters qualifies as a public figure due to his prominence as a radio host and commentator who describes himself as "the loudest voice in America fighting for gun rights."[3]  Even so, Walters failed to show "actual malice" because there was no evidence that OpenAI subjectively knew that the challenged output was false at the time it was published, or recklessly disregarded the possibility that it might be false and published it anyway.  Rather, the Court found that the undisputed evidence showed that OpenAI "has gone to great lengths to reduce hallucination in ChatGPT" and put the public on notice of the potential for factually inaccurate outputs.

The Court further held that even if "actual malice" did not apply, Walters' claim would still fail because he could not satisfy the "ordinary negligence" standard.  Specifically, Walters failed to provide evidence identifying the procedures a reasonable publisher in OpenAI's position would have employed, let alone evidence that OpenAI deviated from that standard.

No Damages:  The Court found that no damages were warranted because Walters conceded that he suffered no harm or economic injury: the output was shown only to the editor who prompted it, and the editor never believed it or republished it.  Punitive damages were likewise unavailable because Walters failed to request a correction or retraction as required under Georgia law.  Finally, the Court rejected any claim for presumed damages, emphasizing that no injury occurred and that the statement involved a matter of public concern.

Key Takeaways

This ruling is the first of its kind to address the viability of a defamation claim based on the output of a generative AI product.  But as a test case, the dismissal of Walters' claims is not surprising given the absence of any broader dissemination and the fact that no one was misled by the output.

It remains to be seen whether a defamation claim over generative AI output could succeed on different facts.  For example, even if material were ultimately published to a wider audience, it is not clear that generative AI can possess the requisite knowledge or intent.  This decision suggests that companies should not be liable merely because they know their AI is capable of generating defamatory outputs, as that would "impose a standard of strict liability, not [ordinary] negligence," given that anyone is capable of uttering a false statement.  Similarly, unlike the statements of company employees, which can serve as a basis for liability against a company, generative AI products are not human and may not be said to "knowingly" create a false statement or "recklessly" disregard the truth in the same way an employee might under the "actual malice" standard.

There is also a significant question as to who is responsible for the dissemination of false and potentially defamatory AI outputs.  In the context of newsgathering organizations, reporters are responsible for fact-checking their sources before publication.  Like information learned from witnesses and internet searches, information obtained from ChatGPT or other large language models should be vetted for accuracy and reliability before inclusion in an article or other publication.  Where a publication fails to do so and publishes anyway, particularly given warnings about the propensity of generative AI models to hallucinate, a court may find that the reporter or publication that disseminated the false statement, rather than the AI model that generated it, is responsible for the alleged defamation and resulting reputational harm.  Until case law on potential liability for false generative AI outputs is more developed, companies would be well-served to ensure that they have robust customer-facing terms of service outlining the risk of factually incorrect outputs, and anyone considering using or citing AI-generated outputs in published materials should continue to take reasonable steps to confirm the accuracy of that information, just as they would for any other source.


[1] Defamation comes in two forms:  written defamation constitutes libel, and spoken defamation constitutes slander.  In this case, the Court did not distinguish between the two, but because the alleged defamation took the form of a written output from ChatGPT, it would have been appropriately categorized as libel.

[2] The “actual malice” standard was first enunciated by the Supreme Court in New York Times v. Sullivan, a landmark decision intended to expand the protections for freedom of the press and ensure its ability to criticize public officials under the First Amendment.  376 U.S. 254 (1964).   

[3] The Court further held that the "actual malice" standard would apply in any event because, at a minimum, Walters qualifies as a "limited-purpose" public figure due to his involvement in public controversies surrounding the Second Amendment and gun rights.