OpenRouter launches GPT-5.5 Pro with inspectable reasoning tokens for agentic workflows

This update mirrors the pattern seen in recent frontier releases, where models emit explicit reasoning tokens (internal chain-of-thought steps). By exposing these through a reasoning parameter, developers can debug agentic loops and verify accuracy in complex workflows that require multi-step problem solving and long-horizon planning.
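As a minimal sketch, enabling reasoning looks like adding a `reasoning` field to an otherwise ordinary chat-completions request body. The model slug `openai/gpt-5.5-pro`, the `{"effort": "high"}` option, and the exact shape of the `reasoning_details` array are assumptions here; check the OpenRouter model page for the authoritative values.

```python
import json

# Sketch of a chat-completions request body with reasoning enabled.
# The model slug and the {"effort": "high"} option are assumptions.
payload = {
    "model": "openai/gpt-5.5-pro",
    "messages": [
        {"role": "user", "content": "Plan the refactor of the billing module."}
    ],
    # Ask OpenRouter to return the model's reasoning alongside the answer.
    "reasoning": {"effort": "high"},
}

body = json.dumps(payload)
# POST `body` to https://openrouter.ai/api/v1/chat/completions with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header (e.g. via `requests`).

# The returned message may then carry a `reasoning_details` array; a mocked
# example of pulling it out (field names/shapes are assumptions):
response = {
    "choices": [{
        "message": {
            "content": "Refactor plan: ...",
            "reasoning_details": [{"type": "reasoning.text", "text": "First, ..."}],
        }
    }]
}
steps = response["choices"][0]["message"].get("reasoning_details", [])
```

Inspecting `steps` in a debugger or log is typically enough to see where an agentic loop went off the rails.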
You can now integrate these models into production pipelines for agentic coding or document analysis. Access is available via the OpenRouter API, with GPT-5.5 Pro priced at $30 per million input tokens and $180 per million output tokens. The API supports preserving reasoning details across turns to maintain consistency.
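For budgeting production pipelines, the stated rates ($30 per million input tokens, $180 per million output tokens) make per-request cost a one-line calculation. The helper below is just an illustration, not an official SDK function.

```python
# GPT-5.5 Pro rates as stated: $30/M input tokens, $180/M output tokens.
INPUT_PRICE_PER_M = 30.0
OUTPUT_PRICE_PER_M = 180.0

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request, rounded to 4 places."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    return round(cost, 4)

# Example: a 50k-token prompt with a 4k-token completion.
print(estimate_cost(50_000, 4_000))  # $1.50 input + $0.72 output = 2.22
```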
Frequently asked questions
- What is GPT-5.5 Pro?
- GPT-5.5 Pro is OpenAI's high-capability model designed for deep reasoning and accuracy on complex, high-stakes workloads. It features a 1.05 million token context window and is optimized for long-horizon problem solving, agentic coding, and precise execution across multi-step workflows. It supports both text and image inputs for multimodal analysis.
- What is the pricing for GPT-5.5 Pro on OpenRouter?
- Pricing for GPT-5.5 Pro on OpenRouter is set at $30 per million input tokens and $180 per million output tokens. This pricing reflects its positioning as a frontier model for professional workloads. OpenRouter provides a unified API that routes requests to the best available providers while maintaining these standardized rates for users.
- How do reasoning tokens work in GPT-5.5 Pro?
- Reasoning tokens represent the model's internal step-by-step thinking process before it generates a final answer. Developers can enable this feature using a specific reasoning parameter in the API request. This allows users to access the model's internal deliberation through a reasoning details array, which is useful for debugging and steering complex agentic tasks.
- What is the context window limit for GPT-5.5 Pro?
- GPT-5.5 Pro features a massive 1,050,000 token context window, which is split into approximately 922,000 input tokens and a maximum of 128,000 output tokens. This large window enables the model to process entire codebases, long documents, or complex multimodal data within a single interaction without losing track of the broader context.
- How does GPT-5.5 Pro compare to the standard GPT-5.5?
- Both models are designed for complex professional workloads. GPT-5.5 is positioned as the state-of-the-art standard for long-running work across code, data, and tools, while GPT-5.5 Pro is a more advanced variant optimized for deeper reasoning, higher accuracy in high-stakes scenarios, and more complex analysis.
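The context-window split described above (1,050,000 tokens total, of which at most 128,000 can be output) implies a simple pre-flight check before sending a very large prompt. This sketch uses only the figures stated in the FAQ; the helper name and check logic are mine.

```python
# Context-window figures stated above for GPT-5.5 Pro.
MAX_CONTEXT = 1_050_000                  # total window
MAX_OUTPUT = 128_000                     # maximum completion tokens
MAX_INPUT = MAX_CONTEXT - MAX_OUTPUT     # 922,000 input tokens

def fits_window(input_tokens: int, max_output_tokens: int = MAX_OUTPUT) -> bool:
    """True if the prompt plus the requested completion fit in the window."""
    if max_output_tokens > MAX_OUTPUT:
        return False
    return input_tokens + max_output_tokens <= MAX_CONTEXT

print(fits_window(900_000))          # large codebase prompt: fits
print(fits_window(950_000))          # too large if a full completion is allowed
print(fits_window(950_000, 50_000))  # fits once the completion is capped
```

The third call shows the practical trade-off: very long prompts can still be sent if you lower the completion budget (e.g. via a `max_tokens`-style request parameter).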
