
Manus AI launches a general AI agent capable of handling complex real-world tasks, including creating video games

Manus AI's general AI agent is here to tackle complex tasks like a human assistant. (Image source: Manus AI)
The Manus AI agent combines the capabilities of multiple AI models into a powerful assistant for work and personal tasks. Its ability to research information across the Internet, control a computer like a human, and synthesize its findings supposedly allows it to answer complex prompts that many common chatbots cannot.

Manus AI has launched its new general AI agent, which can independently research answers to complex prompts by leveraging multiple large language models (LLMs) in parallel. The agent is currently available only by requesting an invitation.

Common chatbots, such as OpenAI's ChatGPT, Microsoft Copilot, and Anthropic's Claude, are trained on a fixed set of data, so their knowledge is limited. They cannot answer questions that fall outside their training dataset, although some companies extend their chatbots with Internet access to retrieve the latest information. Even so, these chatbots struggle with complex prompts that require multi-step problem solving.

Some AI companies have tried to tackle this by allowing the AI to think through problems step-by-step, analyze the data it finds online, and synthesize an answer. OpenAI's Deep Research, released last month, is one such AI agent, and Manus AI's is the newest.

Unlike OpenAI's offering, Manus's agent uses multiple LLMs, benefiting from the strengths of each. Prompts are automatically split into smaller tasks that are worked on in parallel, and users can follow the AI's reasoning as it works through problems step-by-step. The agent can produce not only text answers but also spreadsheets, interactive charts, webpages, and video games.

Although Manus AI's agent scores only 57.7% on Level 3 prompts in the GAIA benchmark, a test of real-world questions that even humans have difficulty answering, it correctly answers the easier Level 1 and Level 2 prompts more than 70% of the time. According to Manus AI, this makes it the best-performing research-capable AI agent available today.

The Manus AI agent created a working video game when asked "Can you make me a Super Mario game but in Minecraft style?". (Image source: Manus AI)
Examples of complex prompts the Manus AI agent can easily answer. (Image source: Manus AI)
Example questions from various difficulty levels in the GAIA AI benchmark test. (Image source: Mialon, G. et al. in "GAIA: a benchmark for General AI Assistants")
Manus AI's agent answers difficult questions better than other AI. (Image source: Manus AI)
David Chien, 2025-03-12 (Update: 2025-03-14)