Researchers warn of AI swarms creating fake public opinion

A decorative image showing a chip with the acronym "AI" written on it (Image: Igor Omilaev via Unsplash)
An international team of researchers has warned of the dangers of AI swarms manufacturing fake public consensus, noting that this may already be happening.

Imagine a world where a large group of people can talk about a particular topic, causing it to trend online. Or a world where these people can force the hand of public figures or even spread misinformation. Now imagine a world where the "people" are actually AI-powered profiles acting in unison while mimicking unique human voices.

This danger is what scientists from a host of institutions around the world are now warning us about in a recent publication in the journal Science.

An international research team has detailed how the fusion of large language models (LLMs) with multi-agent systems enables the creation of malicious AI swarms. Unlike traditional, easily identifiable copy-paste bots, these advanced swarms consist of AI-controlled personas that maintain persistent identities, memory, and coordinated objectives. They can dynamically adapt their tone and content based on human engagement, operating with minimal oversight across multiple platforms.

The primary threat posed by these networks is the manufacturing of "synthetic consensus." By flooding digital spaces with fabricated but highly convincing chatter, these swarms create the illusion that a specific viewpoint is universally accepted. The researchers note that this phenomenon jeopardizes the foundation of democratic discourse, as a single malicious actor can masquerade as thousands of independent voices.

This persistent influence goes beyond shifting temporary opinions; it can fundamentally alter a community's language, symbols, and cultural identity. Furthermore, this coordinated output threatens to contaminate the training data of mainstream artificial intelligence models, extending the manipulation to established AI platforms.

To counter this evolving threat, experts argue that traditional post-by-post content moderation is no longer effective. Defense mechanisms must pivot toward identifying statistically unlikely coordination and tracking content origin. The researchers also emphasize the necessity of applying behavioral sciences to study the collective actions of AI agents when they interact in large groups. Proposed solutions include deploying privacy-preserving verification methods, sharing evidence through a distributed AI Influence Observatory, and limiting the financial incentives that drive inauthentic engagement.
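To give a sense of what "identifying statistically unlikely coordination" can mean in practice, here is a deliberately simplified toy sketch (not from the paper, and far cruder than any production system): it flags pairs of accounts whose minute-bucketed posting times overlap to an improbable degree, using Jaccard similarity. The function names, threshold, and data format are all illustrative assumptions.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two collections, treated as sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated_pairs(posts_by_account, threshold=0.6):
    """Toy coordination detector (illustrative only).

    posts_by_account: {account_id: [minute-bucketed posting times]}.
    Returns account pairs whose posting-time overlap exceeds the
    threshold -- a crude proxy for "statistically unlikely" timing.
    """
    flagged = []
    for (acc1, t1), (acc2, t2) in combinations(posts_by_account.items(), 2):
        sim = jaccard(t1, t2)
        if sim >= threshold:
            flagged.append((acc1, acc2, sim))
    return flagged

# Hypothetical activity log: accounts "a" and "b" post in near-lockstep,
# while "c" posts on an unrelated schedule.
activity = {
    "a": [1, 2, 3, 4, 5],
    "b": [1, 2, 3, 4, 6],
    "c": [100, 200, 300],
}
pairs = flag_coordinated_pairs(activity)
```

Real detectors would model baseline posting behavior, account for population size, and combine many signals (timing, wording, network structure); this sketch only illustrates the underlying idea of scoring pairwise behavioral overlap rather than moderating individual posts.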

Chibuike Okpara, 2026-02-16 (Update: 2026-02-16)