OpenAI Reports 56 Percent Token Efficiency Gain for GPT-5.5 in Perplexity Workflows

Tags: Agentic Coding · Computer Use · GPT · AI Economics · Performance

OpenAI shared performance data from Perplexity, the AI-powered search engine, demonstrating the efficiency of its new GPT-5.5 model in agentic workflows. Using Codex, OpenAI's platform for agentic coding, Perplexity built a new internal tool in under an hour, illustrating how the model enables rapid development of internal infrastructure.

The update highlights a 56 percent reduction in token usage for "Perplexity Computer" workflows, tasks involving computer use (AI controlling desktop interfaces). This efficiency matters because agentic loops often suffer from high latency: every token generated adds to the wait. By using fewer tokens, GPT-5.5 creates faster feedback loops while completing the same complex tasks.

You can leverage these efficiency gains by accessing GPT-5.5 through the Codex API or desktop application. The model is optimized for technical builds where reducing token volume translates to faster execution. Access is currently available for paid users on ChatGPT and Codex plans.


Frequently asked questions

What is GPT-5.5?
GPT-5.5 is an OpenAI model optimized for agentic tasks and computer use within the Codex platform. It is designed to autonomously plan and execute multi-step actions, such as writing code or navigating digital environments. The model focuses on efficiency, allowing it to complete complex technical tasks with significantly fewer tokens than previous versions.
How much more efficient is GPT-5.5 compared to previous models?
In real-world workflows at Perplexity, GPT-5.5 demonstrated a 56 percent reduction in token usage for the same complex tasks. Because token efficiency directly reduces the amount of data processed during agentic loops, this translates into faster feedback loops for users and more streamlined execution of multi-step computer workflows.
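The arithmetic behind that figure is straightforward. A minimal sketch of what a 56 percent reduction means in practice; the baseline token count below is a hypothetical stand-in, as only the 56 percent figure comes from the reported data:

```python
def tokens_after_reduction(baseline_tokens: int, reduction: float = 0.56) -> int:
    """Tokens needed for the same task after the reported reduction."""
    return round(baseline_tokens * (1 - reduction))

# Hypothetical baseline: 100,000 tokens for a multi-step computer-use task.
baseline = 100_000
after = tokens_after_reduction(baseline)
print(f"{baseline} tokens -> {after} tokens")  # 100000 tokens -> 44000 tokens
```

If latency scales roughly with tokens generated, the same task would spend proportionally less time in the agentic loop, which is the "faster feedback" effect described above.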
What can developers build with GPT-5.5 and Codex?
Developers can use GPT-5.5 within Codex to build internal tools and automate engineering workflows. For example, Perplexity used the model to develop a new internal tool in less than one hour. The model is particularly effective for computer-use tasks, where an AI agent must interact with software interfaces to achieve a goal.
How does the token reduction in GPT-5.5 benefit users?
The 56 percent reduction in token usage primarily benefits users by creating faster feedback loops. Because the model requires fewer tokens to complete complex tasks, the time spent in the agentic loop is reduced. This leads to quicker responses and more efficient interactions when using AI agents for multi-step computer and coding workflows.