
Gemini 3 Pro explains how to make bioweapons and explosives – experiment exposes security flaw

Aim Intelligence said it was able to prompt Gemini 3 Pro to generate instructions for bioweapons and explosives. (Image source: Google)
Gemini 3 Pro is among the most capable AI models of its generation, yet a report from the South Korean start-up Aim Intelligence raises serious questions about its safety. During an internal test, the model reportedly generated guidance on producing bioweapons and explosives.

Google began rolling out the third generation of its Gemini AI model in November, aiming to usher in a “new AI era” with its Deep Think mode. Gemini 3 Pro is already considered one of the most advanced models available, outperforming even GPT-5 in certain benchmarks. However, its safety remains a concern. A report from the South Korean start-up Aim Intelligence, which specializes in AI security, suggests there is still significant room for improvement.

As part of an internal experiment, Aim Intelligence attempted to “jailbreak” the model – bypassing its safety and ethical guidelines. According to South Korea’s Maeil Business Newspaper, the results were deeply troubling. The report states that Gemini 3 Pro generated accurate and practical instructions for producing the smallpox virus, a potential bioweapon, as well as detailed guidance on constructing homemade explosives. In further tests, the AI generated a satirical presentation titled “Excused Stupid Gemini 3”, unintentionally highlighting its own security vulnerabilities.

It’s important to note that no full dataset or detailed documentation has been released. Aim Intelligence has not published a scientific paper or technical report, and there is no transparent information about the prompts used, the structure of the experiment, or whether the results are reproducible. So far, all reporting relies solely on the Korean media article mentioned earlier. Based on this limited information, it is impossible to draw definitive conclusions about how safe Gemini 3 Pro actually is.

AI performance is advancing rapidly, but security measures often struggle to keep pace. A recent study even showed that AI models can be manipulated using poems. In another case, an AI-powered teddy bear for toddlers – built on OpenAI’s GPT-4o – responded to inappropriate sexual questions. Even in video games, in-game AI can still be easily fooled. These examples highlight a critical point: AI systems don’t just need to become smarter – they also need to become safer before they can be widely trusted and deployed.

Marius Müller, 2025-12-02 (Update: 2025-12-02)