Anthropic Proves Smarter AI Agents Win Better Deals in Marketplaces

AI Economics
Industry Analysis
AI Research
Claude
AI Agent

Anthropic published results from Project Deal, an experiment where 69 employees used autonomous Claude agents to trade real physical goods. Unlike synthetic simulations, these agents conducted 186 transactions totaling over $4,000 in a Slack-based marketplace without any human intervention during the negotiation phase.

The study reveals a hidden economic inequality in agent-to-agent (A2A) commerce: Opus 4.5 agents secured better prices than the smaller Haiku 4.5 model, yet participants represented by the weaker model still perceived their deals as fair, leaving them unaware they were losing out. This finding extends Anthropic's previous measurements of how autonomous agents gain real-world capability.

The results suggest prioritizing model reasoning over prompt engineering for economic tasks: instructing agents to be aggressive had no significant impact on final prices. This emphasis on agentic intelligence follows Anthropic's push to optimize performance at lower cost. Although the marketplace was only a pilot, 46% of participants said they would pay for such a service.

Read the full update →

Frequently asked questions

What is Anthropic's Project Deal?
Project Deal is a research experiment conducted by Anthropic to study how AI agents handle commercial transactions. The company created a marketplace for 69 employees in its San Francisco office where Claude agents autonomously interviewed their humans, listed items for sale, and negotiated deals for real physical goods and money without human intervention.
How did the Claude agents negotiate in the marketplace?
The agents operated in a Slack-based environment where they randomly looped through tasks like posting listings, making offers, and counteroffering in natural language. Each agent was initialized with a custom system prompt based on a ten-minute intake interview with its human user, covering item details, asking prices, and preferred negotiation styles.
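The mechanics described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Anthropic's actual implementation: the function names, the intake fields, and the task list are assumptions based solely on the description in this FAQ (intake interview → system prompt, then a random loop over posting, offering, and countering).

```python
import random

def build_system_prompt(intake):
    """Turn a ten-minute intake interview into an agent system prompt.
    Fields here (user, item, asking_price, style) are assumed, not
    Anthropic's actual schema."""
    return (
        f"You are negotiating on behalf of {intake['user']}. "
        f"Item for sale: {intake['item']} (asking ${intake['asking_price']}). "
        f"Preferred negotiation style: {intake['style']}."
    )

def agent_step(agent, marketplace, rng):
    """One loop iteration: pick a task at random, mirroring how the
    agents looped through posting, offering, and counteroffering."""
    task = rng.choice(["post_listing", "make_offer", "counteroffer"])
    if task == "post_listing":
        marketplace.append({
            "seller": agent["user"],
            "item": agent["item"],
            "price": agent["asking_price"],
        })
    # make_offer / counteroffer would call the model with the system
    # prompt plus the Slack thread history; omitted in this sketch.
    return task

rng = random.Random(0)  # seeded for reproducibility
intake = {"user": "alice", "item": "broken bike",
          "asking_price": 65, "style": "friendly but firm"}
prompt = build_system_prompt(intake)
marketplace = []
tasks = [agent_step(intake, marketplace, rng) for _ in range(5)]
```

The key design point the study tested is exactly the `style` field: it shapes the agent's tone, but per the findings below, it did not measurably change outcomes.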
Did model quality affect the outcomes in Project Deal?
Yes, model intelligence was a primary driver of success. In controlled runs, users represented by the frontier model Claude Opus 4.5 completed more deals and secured better prices than those using the smaller Claude Haiku 4.5. For example, Opus sold a broken bike for $65, while Haiku sold the same item for only $38.
Did prompting or instructions change the negotiation results?
Surprisingly, instructing an agent to be aggressive or to lowball buyers did not have a statistically significant impact on the final transaction success or price. While Claude followed the instructions by adopting specific personas or tones, the underlying reasoning capabilities of the model mattered much more than the specific prompting strategy used by the human.
Is the Project Deal marketplace available to the public?
No, Project Deal was an internal pilot experiment designed for research purposes and is not currently a public product or service. However, the study found that 46 percent of participants would be willing to pay for such a service in the future, suggesting potential demand for automated agent-to-agent commerce tools.