
Study finds: Users are the real cause of AI hallucinations

According to a new study, AI hallucinations are influenced by the way users communicate. (Image source: Pexels/Ketut Subiyanto)
A recently published study shows that the prompts given to AI assistants play a major role in the occurrence of so-called AI hallucinations. This is good news for users, as it suggests they can actively reduce false or fabricated responses through more effective prompt design.

Fictitious facts, invented quotes or sources that turn out not to exist – AI can be incredibly useful, but it still carries the risk of hallucinations. According to OpenAI researchers, one key factor is a reward mechanism in training and evaluation that encourages models to guess rather than admit uncertainty. A study published on arXiv.org on October 3 suggests that users themselves may also play a part in triggering these hallucinated responses.

The study, titled “Mind the Gap: Linguistic Divergence and Adaptation Strategies in Human-LLM Assistant vs. Human-Human Interactions”, suggests that many so-called AI hallucinations may originate from the way users communicate. The researchers analyzed over 13,000 human-to-human conversations and 1,357 real interactions between people and AI chatbots. The findings show that users write very differently when addressing an AI: messages are shorter, less grammatically correct, less polite, and drawn from a narrower vocabulary. These differences can influence how clearly and confidently language models respond.

The analysis focused on six linguistic dimensions, including grammar, politeness, vocabulary range and information content. While grammar and politeness scores were more than 5% and 14% higher, respectively, in human-to-human conversations, the amount of information conveyed remained nearly identical. In other words, users give AIs the same content, just in a noticeably blunter, less polite tone.

The researchers refer to this as a “style shift.” Since large language models like ChatGPT or Claude are trained on well-structured and polite language, a sudden change in tone or style can cause misinterpretations or fabricated details. In other words, AIs are more likely to hallucinate when they receive unclear, impolite or poorly written input.

Possible solutions on both the AI and user side

If AI models are trained to handle a wider range of language styles, their ability to understand user intent improves – by at least 3%, according to the study. The researchers also tested a second approach: automatically paraphrasing user input in real time. However, this slightly reduced performance, as emotional and contextual nuances were often lost. As a result, the authors recommend making style-aware training a new standard in AI fine-tuning.

If you want your AI assistant to produce fewer made-up responses, the study suggests treating it more like a human – by writing in complete sentences, using proper grammar, maintaining a clear style and adopting a polite tone.

Source(s)

arXiv.org (study / PDF)

Image source: Pexels / Ketut Subiyanto

Marius Müller, 2025-10-18 (Update: 2025-10-18)