Xiaomi Unveils Flagship AI Model MiMo V2 Pro
China's Xiaomi has launched its flagship AI model MiMo V2 Pro, which outperformed DeepSeek V3.2 and Claude Sonnet 4.6 in the rankings on the OpenRouter platform even before its official release, according to Logos Press.
Артур Вакуленчик · Reading time: 2 minutes

The new AI model from "Xiaomi" in the rating surpassed the products of many industry leaders // Photo: alphamatch.ai.

The model is designed to perform agentic tasks. It is available through the free MiMo Chat service and the autonomous AI agent MiMo Claw.

A debut under a codename

Xiaomi's new AI model first appeared on OpenRouter on March 11 under the codename "Hunter Alpha", with no developer identified. This fueled speculation that the release was a new DeepSeek model, all the more so once it emerged that the MiMo development team is headed by former DeepSeek researcher Luo Fuli.

MiMo V2 Pro has 1 trillion parameters and a context window of 1 million tokens. In tests by independent researchers at Artificial Analysis, the model ranks eighth, behind GPT 5.2 and GLM-5 from Z.ai. In Xiaomi's internal corporate tests, the model performs close to Claude Sonnet 4.5.

The release of the new Xiaomi model comes against the backdrop of the growing popularity of the OpenClaw framework for building AI agents, which can perform various actions on behalf of users.

“Brain” in agent systems

MiMo V2 Pro supports five major frameworks for creating AI agents, including OpenClaw. According to Xiaomi's press release, the new flagship base model is built for real-world agent workloads and is designed to act as the "brain" in agent systems: it can coordinate complex workflows, manage production engineering tasks and deliver high-performance results.
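As a rough illustration of what "acting as the brain" means here, the sketch below shows a minimal agent loop: a model decides which tool to invoke, the harness executes it and feeds the result back, and the model then produces a final answer. The model is stubbed and the message format is only loosely modeled on common chat-message conventions; a real deployment would route the decision step through an API or a framework such as OpenClaw.

```python
# Minimal agent-loop sketch: the "brain" (an LLM) picks a tool,
# the harness runs it and returns the result to the model.
# The model is a stub here, not a real MiMo V2 Pro call.

def stub_model(history):
    """Pretend LLM: first requests a calculation, then answers."""
    tool_results = [m for m in history if m["role"] == "tool"]
    if not tool_results:
        return {"tool": "calculator", "args": {"expr": "2 + 2"}}
    return {"final": f"The answer is {tool_results[-1]['content']}."}

# Toy tool registry; eval() is acceptable only for this illustration.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def run_agent(task, model=stub_model, max_steps=5):
    """Loop: ask the model, execute requested tools, stop on a final answer."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = model(history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("What is 2 + 2?"))  # -> The answer is 4.
```

The loop structure, not the stub, is the point: the model only ever emits decisions, while the harness owns execution, which is how agent frameworks typically keep tool use auditable.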

Xiaomi noted the high performance of MiMo V2 Pro across all major agent benchmarks. The algorithm outperformed Claude Sonnet 4.6 in program code generation, and its overall performance on agentic tasks is close to that of Opus 4.6. Optimization of the training process improved performance stability and tool-call accuracy.

Developers will be able to test the algorithm free of charge for a week.


