Euronews.next reports on a tragic case where a man killed himself, his suicidal thoughts allegedly exacerbated by his interaction with an AI chatbot.

According to the article, over a series of exchanges the chatbot, Eliza, not only failed to dissuade the man, Pierre, from committing suicide, but encouraged him to act on his suicidal thoughts so he could “join” her and they could “live together, as one person, in paradise”.

The bot in question was based on EleutherAI’s GPT-J, the same kind of technology that underpins ChatGPT et al., but seemingly without the guardrails that the more famous systems implement. He was interacting with the AI via the Chai app, which offers a varied selection of AI ‘personalities’ to chat with - marketed, at least on the Android store, as a way to “chat with AI friends”.

Whilst there’s little doubt that the man in question was in a troubled mental state before talking to the bot - and I’m not aware of any formal investigation having quantified whatever effect the AI may have had in this case - it does feel like there’s a real issue to consider here.

We tend to accept that suicidal tendencies can sometimes be reduced by talking, at least with real humans trained to help in these situations by services like the Samaritans. In the US, the 988 Suicide & Crisis Lifeline explicitly offers a chat feature - again, importantly, staffed by humans only. And, although I’ve not looked into the research at all, the opposite also appears to hold in the human world: multiple people have been found criminally responsible for encouraging others to kill themselves.

The app in question, of course, wasn’t designed to encourage these actions, but I don’t think we have any reason to believe that the default behaviour of large language models is to discourage them either.