Fabricated facts, invented quotes, sources that never existed: AI can be incredibly useful, but it still carries the risk of hallucinations. According to OpenAI researchers, one key factor is a reward mechanism in training that encourages the model to guess rather than admit uncertainty. A study published on arXiv.org on October 3 suggests that users themselves may also play a part in triggering these hallucinated responses.
The study, titled “Mind the Gap: Linguistic Divergence and Adaptation Strategies in Human-LLM Assistant vs. Human-Human Interactions,” suggests that many so-called AI hallucinations may originate in the way users communicate. The researchers analyzed more than 13,000 human-to-human conversations and 1,357 real interactions between people and AI chatbots. The findings show that users write very differently when talking to an AI: messages are shorter, less grammatical, less polite, and drawn from a more limited vocabulary. These differences can influence how clearly and confidently language models respond.
The analysis focused on six linguistic dimensions, including grammar, politeness, vocabulary range and information content. Grammar scores were more than 5% higher and politeness more than 14% higher in human-to-human conversations, while the amount of information conveyed remained nearly identical. In other words, users share the same content with AIs, just in a noticeably harsher tone.
The researchers call this a “style shift.” Because large language models such as ChatGPT or Claude are trained largely on well-structured, polite language, a sudden change in tone or style can lead to misinterpretations or fabricated details. Put simply, AIs are more likely to hallucinate when they receive unclear, impolite or poorly written input.
Possible solutions on both the AI and user side
If AI models are trained to handle a wider range of language styles, their ability to understand user intent improves by at least 3%, according to the study. The researchers also tested a second approach: automatically paraphrasing user input in real time. This slightly reduced performance, however, because emotional and contextual nuances were often lost in the rewrite. The authors therefore recommend making style-aware training a standard part of AI fine-tuning.
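To make the paraphrasing approach concrete, here is a minimal sketch of what such a rewrite-then-answer pipeline could look like. This is an illustration under our own assumptions, not the study's implementation: the `call_llm` stub, the function names and the rewrite instruction are all hypothetical.

```python
# Minimal sketch of real-time input paraphrasing before an LLM call.
# All names and prompts here are illustrative assumptions.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; echoes the prompt for demo purposes."""
    return f"<model response to: {prompt!r}>"

REWRITE_INSTRUCTION = (
    "Rewrite the following message as complete, polite, grammatical "
    "sentences. Preserve every factual detail:\n\n"
)

def paraphrase(user_message: str) -> str:
    """Expand a terse user message into well-formed prose before answering."""
    return call_llm(REWRITE_INSTRUCTION + user_message)

def answer(user_message: str, rewrite_first: bool = True) -> str:
    """Answer the user, optionally routing the message through the rewriter."""
    prompt = paraphrase(user_message) if rewrite_first else user_message
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("fix login bug asap"))
```

The sketch also hints at why this approach underperformed in the study: whatever tone, urgency or emotional context the original message carried can be flattened by the rewriting step before the assistant ever sees it.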
If you want your AI assistant to produce fewer made-up answers, the study's advice amounts to treating it more like a human conversation partner: write in complete sentences, use proper grammar, keep a clear style and adopt a polite tone.
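As a purely illustrative contrast (our example, not drawn from the study's data), here is the same request in the two styles the researchers compared:

```python
# The same request in two styles; only the phrasing differs.
# Illustrative example, not taken from the study.
terse_prompt = "py sort dict by val desc how"

careful_prompt = (
    "Could you please show me how to sort a Python dictionary "
    "by its values in descending order?"
)
```

Both prompts carry the same information, but the second matches the well-formed, polite register the models were trained on.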
Source(s)
“Mind the Gap: Linguistic Divergence and Adaptation Strategies in Human-LLM Assistant vs. Human-Human Interactions,” arXiv.org