Alibaba Qwen Releases Qwen3.6-27B Outperforming Its Previous 397B Flagship in Coding

Agentic Coding · LLM · Benchmark · Multimodal · Performance · Qwen

Alibaba has released Qwen3.6-27B under the Apache 2.0 license: a dense model featuring native multimodality and "extended thinking" tokens. It follows the launch of Qwen3.6-Plus and the sparse Qwen3.6-35B-A3B as part of the team's latest model family.

The 27B model outperforms the previous generation's 397B-parameter MoE flagship on coding benchmarks. Delivering flagship-level agentic coding (AI that autonomously plans and executes multi-step software tasks) in a smaller footprint removes the deployment complexity associated with massive mixture-of-experts models (architectures that activate only a fraction of their parameters per token).

You can access the model via Qwen Studio or Alibaba Cloud's API, or download the weights from Hugging Face. It includes a new preserve_thinking feature that maintains reasoning traces across multi-turn interactions, making it a drop-in backend for agents like Claude Code. The model supports a 200K-token context window and handles multimodal inputs.
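As a concrete sketch, a request to an OpenAI-compatible endpoint might look like the following. The model id (`qwen3.6-27b`) and the exact name and placement of the `preserve_thinking` parameter are assumptions based on this announcement, not a confirmed API contract.

```python
import json

# Sketch of a chat request to Qwen3.6-27B through an OpenAI-compatible
# endpoint. The model id and the `preserve_thinking` field are assumed
# names taken from the announcement, not a verified wire format.
def build_chat_request(messages, preserve_thinking=True):
    """Assemble a chat-completions-style payload (hypothetical schema)."""
    return {
        "model": "qwen3.6-27b",  # assumed model id
        "messages": messages,
        # Hypothetical flag: keep reasoning traces across turns,
        # as described for agentic multi-turn use.
        "preserve_thinking": preserve_thinking,
    }

payload = build_chat_request(
    [{"role": "user", "content": "Plan a refactor of utils.py."}]
)
print(json.dumps(payload, indent=2))
```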


Frequently asked questions

What is Qwen3.6-27B?
Qwen3.6-27B is a dense, 27-billion-parameter multimodal model released by Alibaba. Unlike mixture-of-experts models, which activate only a subset of their parameters per token, its dense architecture is straightforward to deploy. It supports text, image, and video inputs and features an internal reasoning process called thinking mode to handle complex logic and agentic coding tasks with high accuracy.
How does Qwen3.6-27B compare to the larger Qwen3.5-397B-A17B?
Despite being roughly 15 times smaller in total parameter count, Qwen3.6-27B outperforms the previous 397-billion-parameter flagship across all major agentic coding benchmarks. This includes higher scores on SWE-bench Verified and Terminal-Bench 2.0. Its dense architecture makes it easier to serve and fine-tune compared to the massive mixture-of-experts structure of the older generation.
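The size comparison can be checked with simple arithmetic; by the family's naming convention, the "-A17B" suffix denotes the number of parameters activated per token.

```python
# Parameter-count comparison from the FAQ, checked numerically.
total_flagship_b = 397   # Qwen3.5-397B-A17B: total parameters, in billions
active_flagship_b = 17   # the "A17B" suffix: parameters active per token
dense_b = 27             # Qwen3.6-27B is dense, so all 27B are active

ratio_total = total_flagship_b / dense_b   # ~14.7, i.e. "roughly 15x smaller"

print(f"{ratio_total:.1f}x smaller in total parameters")
print(f"active parameters per token: {active_flagship_b}B (MoE) vs {dense_b}B (dense)")
```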
Is Qwen3.6-27B open source?
Yes, Qwen3.6-27B is released under the Apache 2.0 license, making it fully open for community use and commercial applications. The model weights are available for download on Hugging Face and ModelScope. This permissive licensing allows developers to self-host the model on their own infrastructure or integrate it into private development environments without proprietary restrictions.
What is the preserve_thinking feature in the Qwen3.6-27B API?
The preserve_thinking feature allows the model to maintain its internal reasoning traces from previous conversation turns during multi-step interactions. This is specifically recommended for agentic tasks where the model must remember its logic and planning steps across a long dialogue. By keeping these thinking tokens in context, the model achieves more reliable performance in complex autonomous workflows.
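A hedged sketch of what preserving reasoning across turns might look like in the message history. The `reasoning_content` field name is hypothetical, chosen for illustration; the announcement does not specify the wire format.

```python
# Sketch of carrying the model's reasoning trace from one turn into the
# next. The `reasoning_content` key is a hypothetical field name.
history = [
    {"role": "user", "content": "Find the bug in parser.py."},
    {
        "role": "assistant",
        "content": "The tokenizer drops the final token; patching read().",
        "reasoning_content": "Step 1: reproduce. Step 2: bisect to read().",
    },
]

def next_turn(history, user_msg, preserve_thinking=True):
    """Build the next request's messages; optionally drop reasoning traces."""
    msgs = []
    for m in history:
        m = dict(m)
        if not preserve_thinking:
            m.pop("reasoning_content", None)  # without the feature, traces are dropped
        msgs.append(m)
    msgs.append({"role": "user", "content": user_msg})
    return msgs

msgs = next_turn(history, "Now write a regression test.")
print(len(msgs), "messages;", "trace kept" if "reasoning_content" in msgs[1] else "trace dropped")
```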
Which coding tools are compatible with Qwen3.6-27B?
Qwen3.6-27B is designed for deep integration with popular agentic coding assistants. It is natively compatible with OpenClaw, Qwen Code, and Claude Code via the Anthropic-compatible API protocol. Developers can use it as a backend for terminal-based agents that autonomously navigate codebases, execute commands, and edit files across multiple steps to solve complex software engineering issues.
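An Anthropic-compatible endpoint accepts requests shaped like the Anthropic Messages API; the sketch below builds such a body. The base URL and model id are placeholders, and pointing a tool like Claude Code at a third-party endpoint is typically done with a base-URL environment override rather than in code.

```python
import json
import os

# Sketch: a Messages-API-shaped request body that an Anthropic-compatible
# Qwen endpoint would accept. Base URL and model id are placeholders.
os.environ["ANTHROPIC_BASE_URL"] = "https://example-qwen-endpoint/v1"  # placeholder URL

body = {
    "model": "qwen3.6-27b",  # assumed model id
    "max_tokens": 1024,      # the Messages API requires max_tokens
    "messages": [
        {"role": "user", "content": "List the failing tests and fix them."}
    ],
}
print(json.dumps(body, indent=2))
```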