Tracking the Explosive World of Generative AI

Mayor Threatens Landmark Defamation Lawsuit Against OpenAI's ChatGPT

A mayor in Australia has threatened to sue OpenAI for defamation over false claims made by ChatGPT about his involvement in a bribery scandal. If initiated, this lawsuit could be the first-ever defamation case against an AI chatbot and a landmark moment in legal history.

An Australian mayor has threatened a defamation lawsuit against OpenAI's ChatGPT. Photo illustration: Artisana

🧠 Stay Ahead of the Curve

  • An Australian mayor has threatened to sue OpenAI for defamation over false claims by ChatGPT about his involvement in a bribery scandal.

  • This potential lawsuit could be the first-ever defamation case against an AI chatbot, marking a landmark moment in legal history.

  • The case highlights the broader implications of AI-generated misinformation: authoritative-sounding falsehoods are already affecting numerous aspects of society.

By Michael Zhang

April 06, 2023

The mayor of Hepburn Shire, a regional municipality northwest of Melbourne, Australia, has threatened to sue OpenAI for defamation if ChatGPT continues to claim that he served time in prison for bribery. The case could be the first-ever defamation lawsuit against an AI-powered chatbot.

Brian Hood, the mayor in question, became concerned about ChatGPT's output when a member of the public informed him that the chatbot falsely implicated him in a bribery scandal involving Note Printing Australia and bribes paid to officials in Malaysia, Indonesia, and Vietnam.

During internal testing by Artisana, ChatGPT falsely claimed that Hood received a prison sentence of four years and six months after pleading guilty to one count of conspiracy to bribe a foreign public official. The chatbot also alleged that Hood played a central role in the bribery scheme and was ordered to pay an AUD 130,000 fine.

ChatGPT implicates Brian Hood in a non-existent bribery scandal. Photo credit: Artisana

When asked to provide sources and links, ChatGPT convincingly offered three separate articles attributed to ABC News, the Guardian, and Reuters, complete with realistic headlines, plausible dates, and URL structures. However, none of the cited articles exist, and the links lead nowhere.

When prompted, ChatGPT authoritatively cites non-existent news sources about Hood's crimes. Photo credit: Artisana

In truth, Hood worked for Note Printing Australia and was the individual who alerted authorities about the bribes being paid to foreign officials. He was never charged with a crime.

Lawyers representing Hood sent a letter of concern to OpenAI on March 21, 2023, giving the company 28 days to correct the error about their client or face a defamation lawsuit. According to Hood's lawyers, OpenAI has not yet responded to the request. At the time of publication, OpenAI had not replied to a request for comment on this matter.

If initiated, Hood's lawsuit would be the first defamation suit ever filed against an AI chatbot and a landmark moment in defamation law. Hood's lawyers argue that because their client is an elected official, his reputation is central to his role, and the availability of such false information could be damaging.

AI chatbots powered by large language models, such as OpenAI's ChatGPT and Google's Bard, are prone to generating authoritative-sounding falsehoods, known as hallucinations. As the Washington Post has observed, it is relatively easy to prompt these chatbots into producing misinformation or hate speech.

Legal experts suggest that existing US regulations may not apply to this new frontier. In 1996, Congress passed Section 230 of the Communications Decency Act (CDA), which shields online services from liability for content created by third parties. AI chatbots, however, may not be protected under the statute, since they generate the falsehoods themselves rather than hosting content created by others.

Some experts note that the authoritative manner in which chatbots present falsehoods could cause problems across society. As chatbots increasingly replace internet searches as a source of information, incorrect claims about an individual's background may be trusted by, for example, recruiters screening job candidates. And if such misinformation leads to real-world harm, US law allows the affected parties to sue for libel.
