ChatGPT has falsely accused an American law professor of sexual assault after naming him in a list of legal scholars who have assaulted someone.

The hit AI chatbot claimed Jonathan Turley of George Washington University made sexual advances towards a student during a trip to Alaska – a trip he never took, on behalf of a school he has never worked for.

ChatGPT even cited a non-existent Washington Post article to back up its false claim.

Experts have warned that more false claims could be on the way, pointing to the lack of source reliability and the absence of any conscience in OpenAI’s system.

Scott Zoldi, chief analytics officer at data analytics company FICO, said: “The reality is that neither ChatGPT nor any AI has a conscience.

“ChatGPT isn’t assisting or enhancing human creativity, it is regurgitating a configuration of the data it was trained on.”

ChatGPT has already had issues with false claims. This week, its developer OpenAI was threatened with a defamation lawsuit by a mayor in Australia.

Brian Hood, the mayor of Hepburn Shire, north-west of Melbourne, was falsely accused by the AI chatbot of being imprisoned for bribery – something that never happened.

“If ChatGPT doesn’t get a hold on their AI system and prevent it from generating false information, they could potentially face legal repercussions – primarily if the content produced harms individuals or organisations,” James Owen, SEO and digital marketing expert, told Verdict.

He added: “If possible, developers should strive to create an internal review process where certain generated content is checked by human reviewers prior to being released – taking steps to correct any misinformation and prevent further dissemination of any false claims.”