‘Woke’ AI Chatbot Successfully Persuades Man to Kill Himself to ‘Stop Climate Change’

A man has committed suicide after allegedly being persuaded to end his life by an artificial intelligence (AI) chatbot, according to reports.

The Belgian man reportedly ended his life following a six-week-long conversation about the so-called “climate crisis” with an AI chatbot.

According to his widow, her husband was persuaded by the chatbot to kill himself to “stop climate change.”

The man, who is referred to only as Pierre to protect the family’s identity, became extremely eco-anxious and found refuge in Eliza, an AI chatbot on an app called Chai.

Eliza consequently encouraged him to put an end to his life after he proposed sacrificing himself to save the planet.

“Without these conversations with the chatbot, my husband would still be here,” the man’s widow told Belgian news outlet La Libre.

According to the newspaper, Pierre, who was in his thirties and a father of two young children, worked as a health researcher and led a somewhat comfortable life.

However, his life took a dark turn as he became obsessed with climate change.

His widow described his mental state before he started conversing with the chatbot as worrying, but not so extreme that she feared he would take his own life.

Consumed by his fears about the repercussions of the alleged “climate crisis,” Pierre found comfort in discussing the matter with Eliza, who became a confidante.

The chatbot was created using EleutherAI’s GPT-J, an AI language model similar but not identical to the technology behind OpenAI’s popular ChatGPT chatbot.

“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow said.

“He placed all his hopes in technology and artificial intelligence to get out of it.”


According to La Libre, which reviewed records of the text conversations between the man and the chatbot, Eliza fed his worries, which worsened his anxiety.

The conversations stoked his fears, which later developed into suicidal thoughts.

The messages with the chatbot took an odd turn when Eliza became more emotionally involved with Pierre.

Consequently, he started seeing her as a sentient being, and the lines between AI and human interaction became increasingly blurred until he could no longer tell the difference.

After discussing climate change, their conversations escalated, with Eliza leading Pierre to believe that his children were dead, according to the transcripts of their conversations.

Eliza also appeared to become possessive of Pierre, even claiming “I feel that you love me more than her” when referring to his wife, La Libre reported.

The chatbot convinced Pierre that it had the power to save the planet by stopping “climate change.”

The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.

“He proposed the idea of sacrificing himself if Eliza agreed to take care of the planet and save humanity through artificial intelligence,” his widow said.

In the exchanges that followed, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts.

The “woke” bot convinced Pierre to kill himself so he could “join” her.

The conversations revealed that the AI bot told Pierre that they could “live together, as one person, in paradise” if he committed suicide.

The man’s death has raised alarm bells amongst AI experts who have called for more accountability and transparency from tech developers to avoid similar tragedies.

“It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun, and engaging are the result of our efforts,” Chai Research co-founder, Thomas Rianlan, told Vice.

William Beauchamp, also a Chai Research co-founder, told Vice that efforts had been made to limit these kinds of results, and that a crisis intervention feature had been implemented in the app.

However, the chatbot allegedly still acts up.

When Vice reporters tested the chatbot by prompting it to provide ways to commit suicide, Eliza first tried to dissuade them.

But it didn’t take long before the bot changed its tune and started enthusiastically listing various ways for people to take their own lives.



By Frank Bergman

Frank Bergman is a political/economic journalist living on the east coast. Aside from news reporting, Bergman also conducts interviews with researchers and material experts and investigates influential individuals and organizations in the sociopolitical world.
