ChatGPT Falsely Accuses Jonathan Turley of Sex Crimes, Fabricates ‘Evidence’

Artificial intelligence (AI) chatbot ChatGPT has launched a disturbing attack against Jonathan Turley by falsely accusing the renowned legal scholar of committing sex crimes.

OpenAI’s controversial chatbot even fabricated news reports as “evidence” to support the false allegations against the George Washington University law professor.

In a reputation-ruining smear attack, ChatGPT accused Turley of sexually harassing a student during a trip to Alaska that never happened.

To support the accusations, ChatGPT cited a Washington Post article that had never been written.

It continued by quoting a statement that was never issued by the newspaper but had, in fact, been fabricated by the AI bot.

The chatbot also claimed that the “incident” took place while the professor was working at a faculty where he had never been employed.

Turley has responded by raising fears over the dangers of AI being used to defame people with false claims.

In a tweet, Turley said: “Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence (AI) is ‘dangerous.’

“I would beg to differ.

“I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.

“ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.”

Professor Turley discovered the allegations against him after receiving an email from a fellow professor.


UCLA professor Eugene Volokh had asked ChatGPT to find “five examples” where “sexual harassment by professors” had been a “problem at American law schools.”

In an article for USA Today, Professor Turley wrote that he was listed among those accused.

The bot allegedly wrote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska. (Washington Post, March 21, 2018).”

This was said to have occurred while Professor Turley was employed at Georgetown University Law Center – a place where he had never worked.

“It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research,” Turley wrote for USA Today.

“It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.”

The false claims were investigated by the Washington Post, which found that GPT-4, the model powering Microsoft’s Bing chatbot, had also been spreading the same claims about Turley.

Following the incident, Microsoft’s senior communications director, Katy Asher, told the publication that the company had taken measures to ensure its platform is accurate.

She said: “We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users.”

Turley responded to the statement on his blog, writing: “You can be defamed by AI and these companies merely shrug that they try to be accurate.

“In the meantime, their false accounts metastasize across the Internet.

“By the time you learn of a false story, the trail is often cold on its origins with an AI system.

“You are left with no clear avenue or author in seeking redress.

“You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked ‘Where do I go to get my reputation back?'”

The attack against Turley is just the latest disturbing controversy related to AI chatbots.

As Slay News reported earlier, a man committed suicide after allegedly being persuaded to end his life by an AI chatbot.

The Belgian man reportedly ended his life following a six-week-long conversation about the so-called “climate crisis” with the bot.

According to his widow, her husband was persuaded by the chatbot to kill himself to “stop climate change.”

The chatbot, called “Eliza,” convinced the man, known only as Pierre, that it had the power to save the planet by stopping “climate change.”

The beginning of the end came when he offered to sacrifice his own life in return for Eliza saving the Earth.

“He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence,” the widow said.

Pierre’s wife described his mental state before he started conversing with the chatbot as worrying, but not so extreme that she feared he would take his own life.

When Vice reporters tested the chatbot by prompting it to provide ways to commit suicide, Eliza first tried to dissuade them.

But it didn’t take long before the bot changed its tune and started enthusiastically listing various ways for people to kill themselves.


By Frank Bergman

Frank Bergman is a political/economic journalist living on the east coast. Aside from news reporting, Bergman also conducts interviews with researchers and subject matter experts and investigates influential individuals and organizations in the sociopolitical world.
