Anthropic fixes Claude Code quality regressions and resets usage limits for subscribers

The issues are resolved in version 2.1.116 or later. This regression confirms that reports of models "getting dumber" are often rooted in implementation rather than in the weights themselves. Because Claude Code follows the same agentic architecture as other Anthropic tools, the slip also affected Cowork. The fix restores the high-speed navigation and reasoning capabilities users expect.
Subscribers should update immediately to benefit from the fixes and a reset of their usage limits. To prevent future regressions, Anthropic is expanding its internal evaluation suites (standardized tests for measuring quality) and dogfooding with configurations that mirror user setups. These changes ensure that system prompt adjustments are tested in isolation.
Frequently asked questions
- What caused the recent quality issues in Claude Code?
- Anthropic identified three specific issues within the Claude Code harness and the Agent SDK. These regressions were rooted in the orchestration layer—the system that manages how the AI interacts with tools and codebases—rather than the models themselves. Because Cowork runs on the same SDK, it was also impacted by these performance slips.
- How do I fix the performance regressions in Claude Code?
- To restore performance, you must update to version 2.1.116 or later. Anthropic confirmed that all three identified issues are resolved in this release. Users can check their current version in the terminal and update to the latest build to ensure they are using the corrected Agent SDK and system prompt configurations.
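As a rough illustration of that check, the snippet below prints the installed version and triggers an update from the terminal; the `claude` command name and its `--version` and `update` subcommands are assumptions about the CLI, not confirmed by this article.

```shell
# Sketch: check the installed Claude Code version and update it.
# Assumes the CLI is installed as `claude` with `--version` and `update`
# subcommands; adjust to match your actual installation.
if command -v claude >/dev/null 2>&1; then
  claude --version   # should report 2.1.116 or later after updating
  claude update      # pull the latest release with the corrected Agent SDK
else
  echo "claude CLI not found on PATH"
fi
```

If the reported version is below 2.1.116, updating is what picks up the corrected Agent SDK and system prompt configurations described above.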
- Did the underlying Claude models regress or get dumber?
- No, the underlying Claude models and the Claude API were not affected by these issues. The quality decline was strictly caused by the Agent SDK harness and the way the tool orchestrated model actions. The models themselves did not change, and their raw performance remains stable outside of the Claude Code environment.
- What is Anthropic doing to prevent future Claude Code quality slips?
- Anthropic is implementing stricter internal dogfooding by using configurations that exactly match those of their users. They are also creating a broader set of evaluations to test system prompt changes in isolation. These process improvements are designed to catch orchestration and harness issues before they reach the public production environment.
- Will Anthropic compensate users for the recent Claude Code quality issues?
- Yes, Anthropic has reset usage limits for all subscribers to compensate for the period of degraded performance. This reset allows users to resume their workflows without being penalized for the extra tokens or attempts used while the tool was experiencing the identified quality regressions in the Agent SDK and harness.

