A regional mayor in Australia may sue the creator of the AI chatbot ChatGPT for defamation after the program falsely claimed that he was the guilty party in a foreign bribery scandal.
Lawyers for Brian Hood, Mayor of Hepburn Shire Council, have sent a “concerns letter” to OpenAI, the first formal step in defamation action under Australian law.
If it proceeds, the case will be unprecedented: it would be the first defamation action brought against a chatbot and the company operating one.
Legal experts also believe that it will be complicated.
Media law specialist Professor David Rolph told ABC Australia that suing an “online intermediary” for defamation would be complicated, as questions of jurisdiction would need to be taken into account.
“One of the issues that we have with a lot of online intermediaries is the basic question of jurisdiction … can you actually bring a proceeding against them in an Australian court?” Professor Rolph explained. “A lot of these internet intermediaries are based offshore, a lot of them in the United States, which will often raise all sorts of problems.”
Chatbots, also known as “conversational agents,” have become increasingly popular online in recent years. They are software applications that mimic written or spoken human language in order to simulate a conversation or interaction with a real person.
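In the loose sense above, a “conversational agent” can be as simple as a rule-based script that matches the user’s text against canned patterns. The sketch below is a hypothetical toy illustration of that text-in, text-out interface; it is not how ChatGPT works, which generates its replies with a large language model rather than fixed rules.

```python
# Toy, ELIZA-style conversational agent: no learning, no reasoning,
# just pattern-matching the user's message against canned rules.
# (Illustrative only -- real systems like ChatGPT use a large
# language model to generate replies.)
import re

RULES = [
    (r"\bhello\b|\bhi\b", "Hello! How can I help you today?"),
    (r"\bweather\b", "I can't check the weather, but I hope it's nice."),
    (r"\bbye\b", "Goodbye! Thanks for chatting."),
]

def reply(message: str) -> str:
    """Return the first canned response whose pattern matches."""
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response
    return "I'm not sure I understand. Could you rephrase that?"

print(reply("Hi there"))             # greeting rule fires
print(reply("What's the weather?"))  # weather rule fires
```

The gap between this and a system like ChatGPT is exactly why its output can be wrong in surprising ways: a rule-based bot can only say what it was told to say, while a language model composes new text statistically, with no built-in guarantee of factual accuracy.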
Among the most popular is ChatGPT, which was launched in November 2022. It has been described as the latest breakthrough in Artificial Intelligence (AI) research and is known for its exceptional skill in comprehending and responding to human language.
But according to Hood, he was recently alerted to the fact that ChatGPT had incorrectly described his role in a foreign bribery incident in the early 2000s, for which the chatbot claimed he had been imprisoned.
ABC Australia reported that Hood had worked for the company involved “but was actually a whistleblower who told authorities about bribe payments being made to foreign officials to win contracts.”
“According to ChatGPT, I was one of the offenders, that I got charged with all sorts of serious criminal offences. I was never charged with anything,” he was quoted as saying. “It’s one thing to get something a little bit wrong, it’s entirely something else to be accusing someone of being a criminal and having served jail time when the truth is the exact opposite,” Hood said.
He said the matter is a wake-up call since many users have put their faith in the credibility of the system.
“I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it’s obviously not,” Hood stated.
OpenAI has not yet responded to the matter, but with so many people depending on ChatGPT and other chatbots for information, it will be interesting to see how the case unfolds.
AI is still in its infancy. It’s not like medicine, where you don’t have people ingest things without proper testing. Implementing so many AI modules now is jumping the gun; no wonder the big players are attempting to put a pause on development at this time. Time would be better spent further developing vehicles that run on water.
“Legal experts also believe that it will be complicated.”
You think? First of all, isn’t an offense supposed to be predicated on intent? How do you prove that a computer-generated system has intent? You certainly can’t prove that the company itself intended for it to produce that outcome, because with the way machine learning works, the creators have very little control over how the “machine” deals with the information that it collects. It’s simply a model that describes how to deal with different types of information, with some level of curation to guide it in the right direction.
It’s like trying to sue a car manufacturer because one of their vehicles was used to murder someone.