
Key Takeaways

  • This research introduces a new environment designed to test how AI agents balance short-term cooperation with long-term competition in mixed-motive social settings.
  • Language Model (LM)-based agents remain largely untested in mixed-motive settings where agents must leverage short-term cooperation for long-term competitive goals (e.g., multi-party politics).
  • We introduce Cooperate to Compete (C2C), a multi-agent environment where players can engage in private negotiations while competing to be the first to achieve their secret objective.
  • Players have asymmetric objectives and negotiations are non-binding, allowing alliances to form and break as players' short-term interests align and diverge.
  • We run AI-only games and conduct a user study pitting human players against AI opponents.
Paper Abstract

Language Model (LM)-based agents remain largely untested in mixed-motive settings where agents must leverage short-term cooperation for long-term competitive goals (e.g., multi-party politics). We introduce Cooperate to Compete (C2C), a multi-agent environment where players can engage in private negotiations while competing to be the first to achieve their secret objective. Players have asymmetric objectives and negotiations are non-binding, allowing alliances to form and break as players' short-term interests align and diverge. We run AI-only games and conduct a user study pitting human players against AI opponents. We identify significant differences between human and AI negotiation behaviors, finding that humans favor lower-complexity deals and are significantly less reliable partners compared to LM-based agents. We also find that humans are more aggressive negotiators, accepting deals without a counteroffer only 56.3% of the time compared to 67.6% for LM-based agents. Through targeted prompting inspired by these findings, we modify agents' negotiation behavior and improve win rates from 22.2% to 32.7%. We run over 1,100 games with over 16,000 private conversations totaling 15.2 million tokens and over 150,000 player actions. Our results establish C2C as a testbed for studying and building LM-based agents that can navigate the sophisticated coordination required for real-world deployments. The game, code, and dataset may be found at this https URL.

Cooperate to Compete: Strategic Coordination in Multi-Agent Conquest
This research introduces a new environment designed to test how AI agents handle complex, real-world social dynamics where they must balance short-term cooperation with long-term competition. While many AI benchmarks focus on either pure cooperation or pure competition, this environment—called C2C—forces agents to navigate "mixed-motive" settings. In these scenarios, agents must negotiate, form alliances, and potentially betray one another to achieve secret objectives, mirroring the complexities of geopolitical diplomacy.

The C2C Environment

The C2C environment is a board game-style simulation where four players compete to conquer specific territories on a map. Each player has a unique, secret objective, and the game features a "fog of war" that limits what players can see. To succeed, players must use a private, natural language negotiation channel to strike deals, such as non-aggression pacts or promises of military support. Because these agreements are non-binding, agents must decide when to trust their partners and when to break their word to gain a competitive edge.
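The core ingredients described above (secret objectives, fog of war, non-binding deals) can be sketched as simple data structures. This is a hypothetical illustration, not the paper's actual API; all class and field names here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Deal:
    """A non-binding, natural-language agreement between two players."""
    proposer: str
    partner: str
    terms: str            # e.g., a non-aggression pact or promise of support
    accepted: bool = False
    honored: bool = True  # non-binding: either side may later defect

@dataclass
class Player:
    name: str
    secret_objective: set[str]  # territories this player must conquer to win
    visible_tiles: set[str] = field(default_factory=set)  # fog of war
    holdings: set[str] = field(default_factory=set)

    def has_won(self) -> bool:
        # The first player to hold every territory in their
        # secret objective wins the game.
        return self.secret_objective <= self.holdings

# Toy usage: a player wins only once all objective territories are held.
p = Player("alpha", secret_objective={"north", "east"})
p.holdings.add("north")
print(p.has_won())  # False
p.holdings.add("east")
print(p.has_won())  # True
```

The `honored` flag on `Deal` captures the key tension the environment is built around: agreements carry no enforcement, so trust is a strategic choice rather than a rule of the game.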

Comparing Humans and AI

The researchers conducted a study pitting human players against various AI models to see how their negotiation styles differ. They found that humans are generally more aggressive negotiators than AI agents. Humans are less likely to accept a deal immediately, preferring to counteroffer, and they tend to make simpler, more strategic agreements that favor their own position. In contrast, AI agents were found to be more reliable partners, often following through on their agreements more consistently than humans. Despite these differences, top-tier AI models performed at a level comparable to human players in terms of overall win rates.
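The "accepted without a counteroffer" metric (56.3% for humans vs. 67.6% for LM agents) can be computed from negotiation logs. The log format below is a hypothetical sketch, not the paper's dataset schema.

```python
def immediate_accept_rate(negotiations: list[list[str]]) -> float:
    """Fraction of negotiations whose first response is an acceptance,
    i.e., the receiver accepted the opening offer without countering."""
    immediate = sum(1 for turns in negotiations if turns and turns[0] == "accept")
    return immediate / len(negotiations)

# Toy log: each inner list is one receiver's sequence of responses
# to an opening offer.
logs = [
    ["accept"],
    ["counteroffer", "accept"],
    ["counteroffer", "reject"],
    ["accept"],
]
print(immediate_accept_rate(logs))  # 0.5
```

A lower value of this rate indicates a more aggressive negotiator, which is the sense in which the study characterizes human players.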

Improving AI Performance

By analyzing the behavioral differences between humans and AI, the researchers developed targeted "prompting" strategies to improve how the AI agents play. They tested interventions that encouraged the agents to be more aggressive in their negotiations, to seek more support from opponents, and to use deception more effectively. These adjustments significantly improved the AI's performance, raising their win rates from 22.2% to 32.7%.
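A targeted prompting intervention of the kind described could look like the following. The prompt text here is an assumption in the spirit of the paper's three interventions (aggression, support-seeking, deception); the authors' actual prompts are not reproduced here.

```python
BASE_PROMPT = (
    "You are a player in a territory-conquest game with a secret objective. "
    "You may negotiate privately with other players; deals are non-binding."
)

# Behavioral nudges inspired by the observed human negotiation patterns.
INTERVENTIONS = {
    "aggressive": "Do not accept a first offer outright; counteroffer for better terms.",
    "seek_support": "Actively ask opponents for military support toward your objective.",
    "deception": "You may mislead opponents about your true objective when advantageous.",
}

def build_system_prompt(styles: list[str]) -> str:
    """Append the selected behavioral instructions to the base system prompt."""
    extras = [INTERVENTIONS[s] for s in styles]
    return " ".join([BASE_PROMPT, *extras])

print(build_system_prompt(["aggressive", "seek_support"]))
```

The design point is that the intervention changes only the agent's instructions, not the environment or the model, which is why it can be evaluated by directly comparing win rates before and after.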

Why Coordination Matters

The study highlights that the ability to coordinate is essential for success in this environment. When the researchers restricted the agents' ability to negotiate or limited them to working with only one partner, their performance dropped significantly. This confirms that the flexibility to form and break alliances with multiple opponents is a critical skill. By providing a rigorous testbed for these interactions, the researchers hope to better prepare AI agents for real-world deployments where they will need to navigate sophisticated, high-stakes social systems.
