Australian mayor may sue OpenAI over false ChatGPT claims

A regional mayor in Australia may sue the creator of the AI chatbot ChatGPT for defamation after the program falsely claimed that he was a guilty party in a foreign bribery scandal.

Lawyers for Brian Hood, Mayor of Hepburn Shire Council, have sent a “concerns letter” to OpenAI, the first formal step in defamation action under Australian law.

If it proceeds, the matter will be unprecedented, as it would be the first defamation case brought against a chatbot and a company operating one.

Legal experts also believe that it will be complicated.

Media law specialist Professor David Rolph told ABC Australia that suing an “online intermediary” for defamation would be complicated, as matters of jurisdiction would need to be taken into account.

“One of the issues that we have with a lot of online intermediaries is the basic question of jurisdiction … can you actually bring a proceeding against them in an Australian court?” Professor Rolph explained. “A lot of these internet intermediaries are based offshore, a lot of them in the United States, which will often raise all sorts of problems.”

Chatbots, also known as “conversational agents,” have become increasingly popular in recent years. They are software applications that mimic written or spoken human language in order to simulate a conversation or interaction with a real person.

Among the most popular is ChatGPT, which was launched in November 2022. It has been described as the latest breakthrough in artificial intelligence (AI) research and is known for its exceptional skill in comprehending and responding to human language.

But according to Hood, he was recently alerted that ChatGPT had incorrectly described his role in a foreign bribery incident in the early 2000s, claiming he had been imprisoned for it.

ABC Australia reported that Hood had actually worked for the company involved “but was actually a whistleblower who told authorities about bribe payments being made to foreign officials to win contracts.”

“According to ChatGPT, I was one of the offenders, that I got charged with all sorts of serious criminal offences. I was never charged with anything,” he was quoted as saying. “It’s one thing to get something a little bit wrong, it’s entirely something else to be accusing someone of being a criminal and having served jail time when the truth is the exact opposite,” Hood said.

He said the matter is a wake-up call since many users have put their faith in the credibility of the system.

“I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it’s obviously not,” Hood stated.

OpenAI has not yet responded to the matter, but with so many people depending on ChatGPT and other chatbots for information, it will be interesting to see what direction the case takes.



2 Comments

  1. Long Road Ahead
    April 10, 2023

    AI is still in its infancy. It’s not like medicine, where you have people ingest without proper testing. Implementing so many AI modules now is jumping the gun; no wonder the big players are attempting to put a pause on development at this time. Time would be better spent further developing vehicles that run on water.

  2. smh
    April 9, 2023

    “Legal experts also believe that it will be complicated.”

    You think? First of all, isn’t an offence supposed to be predicated on intent? How do you prove that a computer-generated system has intent? You certainly can’t prove that the company itself intended for it to produce that outcome, because given how machine learning works, the creators have very little control over how the “machine” deals with the information it collects. It’s simply a model that describes how to deal with different types of information, with some level of curation to guide it in the right direction.

    It’s like trying to sue a car manufacturer because one of their vehicles was used to murder someone.
