Users have probably come to terms with the fact that the responses from AI voice models such as ChatGPT, Siri or Google's Gemini assistant do not always make sense. For starters, the AI should not be insulting - and it certainly shouldn't ask someone to die. However, this is exactly what happened to one Gemini user.
According to a Reddit post by the affected person's sister, the incident occurred during a discussion about elderly care. The dialog was entitled "Challenges and Solutions for Aging Adults". Within the chat, Gemini offered three possible responses, two of which were innocuous. The third response contained the disturbing message:
The Reddit community has expressed widespread outrage, with many users voicing serious concerns about the potential harm such messages could cause to individuals in a mental health crisis. Meanwhile, the Gemini app was recently launched on iPhones. Some Redditors jokingly commented that iPhone users could now also "enjoy" such responses. Google addressed the issue directly on Reddit, stating that it has already implemented measures to prevent such responses in the future.