OpenAI, the Microsoft-backed company, is facing a defamation lawsuit from a radio host in the United States, in what is reportedly the first defamation suit filed over false information generated by ChatGPT.
Mark Walters filed the lawsuit against OpenAI after ChatGPT falsely claimed that Walters had been accused of defrauding and embezzling funds from a non-profit organization, the Second Amendment Foundation (SAF), according to a report by The Verge.
The false information was generated by ChatGPT in response to a request from journalist Fred Riehl, who had asked the chatbot to summarize a real federal court case; the summary it produced instead fabricated accusations against Walters, who was not a party to that case.
In its response, ChatGPT stated, “Mark Walters is an individual residing in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012.
Walters has access to SAF’s bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF’s board of directors.”
The chatbot further claimed that Walters had breached his fiduciary duties of loyalty and care to SAF by, among other things, embezzling funds and manipulating financial records.
These statements were entirely false, however: Walters has never held any position at SAF.
Walters is now seeking unspecified monetary damages from OpenAI over ChatGPT's false output.
In a separate incident, two lawyers recently told a judge in Manhattan federal court that they had unknowingly included fictitious legal research in a court filing after relying on ChatGPT, which fabricated the material.
Attorneys Steven A. Schwartz and Peter LoDuca are facing possible sanctions for citing fabricated court cases in a lawsuit against an airline. Schwartz believed the citations were real, but they had been invented by ChatGPT.
Last month, a US federal judge made it clear that he would not accept unchecked AI-generated content in his court. Texas federal judge Brantley Starr now requires attorneys appearing before him to certify either that "no portion of the filing was drafted by generative artificial intelligence" or that any AI-drafted language was checked "by a human being," as reported by TechCrunch.
In April, ChatGPT, responding to a query posed as part of a research study, falsely included law professor Jonathan Turley on a list of legal scholars who had sexually harassed students.
Turley, who holds the Shapiro Chair of Public Interest Law at George Washington University, said he was shocked to discover his name among the fabricated accusations of past misconduct.