OpenCode Adds Xiaomi MiMo v2.5 Models to Go for Agentic Coding

The v2.5 series is designed for agent-centric workloads, meaning it is built for autonomous systems that plan and execute tasks, and for reasoning across long-chain operations. By providing a specialized coding variant, v2.5 Pro, OpenCode offers a high-performance alternative for software engineering. This follows the platform's earlier integration of 1M-context models.
You can access both models immediately through the OpenCode Go interface or API, with no change to existing pricing. The v2.5 Pro is optimized for complex coding, while the standard v2.5 supports multimodal inputs. All interactions remain covered by zero-data-retention privacy standards for enterprise security.
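As a rough illustration of what API access might look like, the sketch below builds a chat request for the Pro variant. The endpoint URL, model identifier (`mimo-v2.5-pro`), and request shape are all assumptions modeled on the common OpenAI-compatible format; consult OpenCode's own API documentation for the actual values.

```python
import json

# Hypothetical endpoint -- replace with the URL from OpenCode's API docs.
API_URL = "https://api.opencode.example/v1/chat/completions"


def build_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat payload (assumed request shape)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# "mimo-v2.5-pro" is a placeholder model ID, not a confirmed identifier.
payload = build_request("mimo-v2.5-pro", "Refactor this function for readability.")
print(json.dumps(payload, indent=2))

# To send the request you would POST `payload` to API_URL with your API key,
# e.g. via `requests.post(API_URL, json=payload, headers=...)`.
```

Because the payload builder is a pure function, you can swap in `mimo-v2.5` for general agentic tasks without changing any other code.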
Frequently asked questions
- What is the difference between MiMo v2.5 and MiMo v2.5 Pro?
- MiMo v2.5 is a natively multimodal model designed for general agentic tasks, while MiMo v2.5 Pro is specifically optimized for complex software engineering and coding. Both models are agent-centric, meaning they are built to handle multi-step reasoning and maintain stability during long-chain autonomous workflows within frameworks like OpenCode.
- What is the context window for the Xiaomi MiMo v2.5 models?
- Both MiMo v2.5 and MiMo v2.5 Pro feature a 1-million-token context window. This large capacity allows the models to process massive amounts of information, such as entire codebases, in a single interaction. It is particularly useful for agentic coding tasks that require both a comprehensive understanding of a project's structure and stability across long-chain operations.
- How much does it cost to use MiMo v2.5 on OpenCode Go?
- OpenCode has stated that the pricing for the new MiMo v2.5 and MiMo v2.5 Pro models remains unchanged on the Go platform. Users can access these upgraded capabilities through their existing subscription plans or API access without incurring additional costs compared to the previous v2 series models available on the service.
- Is MiMo v2.5 a multimodal model?
- Yes, MiMo v2.5 is a natively multimodal model that supports full-modal perception. This means it can understand and process text, images, and video inputs directly. This capability allows the model to serve as a versatile engine for agents that need to interact with diverse data types beyond simple text-based instructions.
- Are the MiMo v2.5 models on OpenCode Go private?
- Yes, all models accessed through the OpenCode Go service, including the new MiMo v2.5 series, are covered by zero data retention agreements. This enterprise-grade privacy standard ensures that your proprietary code and data are not stored or used for future model training, providing a secure environment for professional software development.
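To make the multimodal answer above concrete, here is a sketch of how a mixed text-and-image request might be structured. It assumes the OpenAI-style convention of a `content` array with typed parts; the actual field names OpenCode accepts may differ, so treat this as illustrative only.

```python
def build_multimodal_request(model: str, text: str, image_url: str) -> dict:
    """Build a chat payload mixing text and an image reference.

    Assumes OpenAI-style typed content parts; verify the real schema
    against OpenCode's API documentation before use.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


# "mimo-v2.5" is a placeholder model ID, not a confirmed identifier.
payload = build_multimodal_request(
    "mimo-v2.5",
    "Describe the UI bug visible in this screenshot.",
    "https://example.com/screenshot.png",
)
```

An agent could use the same structure to pass diagrams or UI screenshots alongside its text instructions, which is the kind of full-modal perception the FAQ describes.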


