
Key Takeaways

  • Optimization modeling is essential for industries like logistics and energy, but translating natural-language requirements into correct mathematical models remains difficult for current LLMs.
  • The paper proposes Agora-Opt, a modular agentic framework for optimization modeling that combines decentralized debate with a read-write memory bank.
  • This design is flexible across both backbones and methods: it reduces base-model lock-in, transfers across different LLM families, and can be layered onto existing pipelines with minimal coupling.
  • Across public benchmarks, Agora-Opt achieves the strongest overall performance among all compared methods, outperforming strong zero-shot LLMs, training-centric approaches, and prior agentic baselines.
  • Our code and data are available at this https URL.
Paper Abstract

Optimization modeling underpins real-world decision-making in logistics, manufacturing, energy, and public services, but reliably solving such problems from natural-language requirements remains challenging for current large language models (LLMs). In this paper, we propose Agora-Opt, a modular agentic framework for optimization modeling that combines decentralized debate with a read-write memory bank. Agora-Opt allows multiple agent teams to independently produce end-to-end solutions and reconcile them through an outcome-grounded debate protocol, while memory stores solver-verified artifacts and past disagreement resolutions to support training-free improvement over time. This design is flexible across both backbones and methods: it reduces base-model lock-in, transfers across different LLM families, and can be layered onto existing pipelines with minimal coupling. Across public benchmarks, Agora-Opt achieves the strongest overall performance among all compared methods, outperforming strong zero-shot LLMs, training-centric approaches, and prior agentic baselines. Further analyses show robust gains across backbone choices and component variants, and demonstrate that decentralized debate offers a structural advantage over centralized selection by enabling agents to refine candidate solutions through interaction and even recover correct formulations when all initial candidates are flawed. These results suggest that reliable optimization modeling benefits from combining collaborative cross-checking with reusable experience, and position Agora-Opt as a practical and extensible foundation for trustworthy optimization modeling assistance. Our code and data are available at this https URL.

From Soliloquy to Agora: Memory-Enhanced LLM Agents with Decentralized Debate for Optimization Modeling
Optimization modeling is essential for industries like logistics and energy, but using Large Language Models (LLMs) to translate natural-language requirements into accurate mathematical models remains difficult. Current methods often rely on either training a model specifically for this task—which makes it hard to upgrade to newer, better models—or using agentic systems that lack the ability to learn from past mistakes. This paper introduces Agora-Opt, a framework that solves these issues by combining decentralized debate with a read-write memory bank, allowing multiple AI agents to collaborate and improve over time without needing constant retraining.

A Flexible Agentic Foundation

Agora-Opt treats the underlying LLM as an interchangeable component. Instead of locking the system into one specific model, the framework uses a role-structured pipeline where agents move from problem text to formulation, code generation, and solver execution. Because the system is modular, users can swap in newer or different LLM backbones without having to redesign the entire process or perform expensive parameter retuning. This design ensures the framework remains portable and adaptable as AI technology evolves.
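The backbone-agnostic design described above can be sketched as a pipeline that takes any text-in/text-out callable as its model. This is a minimal illustration, not the paper's actual API; the class and prompt strings are hypothetical, and the solver step is a stand-in for real solver execution.

```python
from typing import Callable

# Assumption: a backbone is modeled as a plain text -> text callable,
# so any LLM family can be plugged in without retuning the pipeline.
Backbone = Callable[[str], str]

class RolePipeline:
    """Illustrative role-structured pipeline: problem text -> formulation
    -> code generation -> solver execution, each step delegated to the
    interchangeable backbone."""

    def __init__(self, backbone: Backbone):
        self.backbone = backbone

    def run(self, problem_text: str) -> str:
        formulation = self.backbone(f"Formulate as a math program:\n{problem_text}")
        code = self.backbone(f"Write solver code for:\n{formulation}")
        return self.execute(code)

    def execute(self, code: str) -> str:
        # Stand-in for real solver execution (e.g. an LP/MIP solver call).
        return f"solver-result({code})"

# Swapping backbones requires no other changes to the pipeline:
echo_a = lambda prompt: f"A::{prompt.splitlines()[-1]}"
echo_b = lambda prompt: f"B::{prompt.splitlines()[-1]}"
result_a = RolePipeline(echo_a).run("maximize profit subject to capacity")
result_b = RolePipeline(echo_b).run("maximize profit subject to capacity")
```

Because only the `Backbone` callable changes between runs, upgrading to a newer model family is a one-line substitution rather than a pipeline redesign.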

Decentralized Debate for Better Accuracy

To overcome "single-model myopia"—where a single AI might confidently produce an incorrect solution—Agora-Opt uses a decentralized debate protocol. Rather than relying on a central moderator (which can inherit the biases of its own backbone), the framework runs multiple agent teams on different models. These teams work independently to produce solutions. The system only accepts an answer when the solver-verified outcomes align or when a set number of rounds is reached. This approach uses objective, quantitative results from solvers to reach a consensus, allowing the agents to cross-check each other and even recover correct formulations when initial attempts are flawed.

Learning Through Read-Write Memory

Unlike traditional systems that are "read-only" or static, Agora-Opt features a memory bank that actively records experiences. It consists of two parts: generation memory, which stores verified solutions and debugging steps to speed up future tasks, and debate memory, which tracks how previous disagreements were resolved. By saving these "lessons learned," the system becomes more capable with every use. This allows the agents to improve their performance between runs and ensures that valuable, solver-verified knowledge is preserved even when the underlying model is upgraded.
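The two-part memory bank described above can be sketched as a small data structure. The schema here is assumed for illustration and is not the paper's exact design: generation memory is keyed by a problem signature and only accepts solver-verified artifacts, while debate memory is an append-only log of disagreement resolutions.

```python
class MemoryBank:
    """Sketch of a read-write memory bank (hypothetical schema):
    generation memory caches solver-verified solutions and debugging
    steps; debate memory records how disagreements were resolved."""

    def __init__(self):
        self.generation = {}   # problem signature -> verified artifact
        self.debate = []       # (disagreement, resolution) records

    def write_generation(self, signature, artifact, verified):
        # Only solver-verified artifacts are persisted, so stored
        # knowledge stays reliable even when the backbone is upgraded.
        if verified:
            self.generation[signature] = artifact

    def read_generation(self, signature):
        return self.generation.get(signature)

    def write_debate(self, disagreement, resolution):
        self.debate.append((disagreement, resolution))

bank = MemoryBank()
bank.write_generation(
    "knapsack-v1", "maximize 3x + 2y subject to x + y <= 4", verified=True)
bank.write_generation("bad-try", "infeasible model", verified=False)
bank.write_debate("teams split on integrality",
                  "binary variables won via solver check")
```

Gating writes on verification is what makes the memory useful between runs: unverified attempts never pollute the cache, while resolved disagreements accumulate as reusable precedent.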

Proven Performance

Experiments across six public benchmarks demonstrate that Agora-Opt outperforms existing zero-shot LLMs, training-centric models, and previous agentic baselines. The research highlights that this framework is not only effective but also highly robust, showing consistent gains across different backbone choices. By combining collaborative cross-checking with a persistent memory of past successes and failures, Agora-Opt provides a practical, extensible foundation for reliable optimization modeling.
