LLMs as ASP Programmers: Self-Correction Enables Task-Agnostic Nonmonotonic Reasoning

Recent large language models (LLMs) are powerful, but they often struggle with complex logical problems, leading to high costs and inconsistent results. This paper introduces "LLM+ASP," a framework that improves reasoning by combining the natural language capabilities of LLMs with Answer Set Programming (ASP), a symbolic logic system. Unlike previous methods that required custom engineering for each specific task, this framework works across diverse problems without manual, task-specific instructions. By using an automated feedback loop in which the ASP solver's error messages guide the LLM to correct its own code, the system achieves significantly higher accuracy on difficult logical tasks.

Bridging Language and Logic

The core of this approach is Answer Set Programming, a form of "nonmonotonic" logic. Unlike standard logic, which cannot easily handle exceptions or changing information, ASP is designed to manage default rules, such as assuming that birds fly unless they are penguins. By translating natural language problems into ASP code, the LLM offloads the heavy lifting of logical search to a specialized solver. This allows the model to handle complex constraints and multiple possible outcomes more effectively than it could by relying on its internal reasoning alone.

The Power of Self-Correction

A major finding of the research is that the LLM does not need to be a perfect programmer on its first attempt. The framework uses an iterative "self-correction" loop: the LLM generates an ASP program, the ASP solver executes it and reports any errors, and the LLM uses that feedback to refine its code. The cycle repeats until the program runs correctly. This process is the primary driver of the system's performance, effectively replacing the need for humans to write complex, domain-specific knowledge modules for every new problem.
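The birds-fly-unless-penguins default is the textbook ASP rule flies(X) :- bird(X), not penguin(X), where "not" is negation as failure. As a rough illustration of why this makes the logic nonmonotonic, here is a minimal Python sketch; the helper is purely illustrative and not part of the paper's framework:

```python
# Sketch of the ASP default rule
#   flies(X) :- bird(X), not penguin(X).
# "not" is negation as failure: flies(x) is derived unless
# penguin(x) can be derived, which makes the rule nonmonotonic.

def flies(x, birds, penguins):
    """Derive flies(x) under the default: birds fly unless known to be penguins."""
    return x in birds and x not in penguins

birds = {"tweety", "pingu"}
penguins = set()

# With no penguin facts, both birds are assumed to fly.
assert flies("tweety", birds, penguins)
assert flies("pingu", birds, penguins)

# Adding the fact penguin(pingu) retracts an earlier conclusion:
# the set of conclusions shrinks as knowledge grows (nonmonotonicity).
penguins.add("pingu")
assert flies("tweety", birds, penguins)
assert not flies("pingu", birds, penguins)
```

Standard (monotonic) logic has no counterpart to this retraction: once a conclusion follows, adding facts can never remove it.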
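The generate-execute-refine cycle described above can be sketched in a few lines of Python. Everything here is a hedged illustration, not the paper's actual code: call_llm and run_solver are hypothetical stand-ins for the real LLM API and for an invocation of an ASP solver, and the prompt wording is invented.

```python
# Illustrative sketch of the self-correction loop, assuming hypothetical
# call_llm(prompt) -> str and run_solver(program) -> (ok, feedback) helpers.

def self_correct(problem, call_llm, run_solver, max_rounds=5):
    """Generate an ASP program for `problem`, then refine it with solver feedback."""
    program = call_llm(f"Translate this problem into an ASP program:\n{problem}")
    for _ in range(max_rounds):
        ok, feedback = run_solver(program)  # feedback: error text or answer sets
        if ok:
            return program, feedback        # solver accepted the program
        # Feed the solver's error messages back to the LLM and retry.
        program = call_llm(
            f"The solver rejected this program:\n{program}\n"
            f"Error:\n{feedback}\nReturn a corrected program."
        )
    return program, None                    # give up after max_rounds
```

The key design point is that the loop's stopping signal comes from the solver, not the model: the LLM never has to judge its own output, it only has to react to concrete error messages.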
The "Context Rot" Phenomenon

The researchers discovered that providing LLMs with too much documentation can actually hinder performance, a phenomenon they call "context rot." When comparing a verbose, 22,000-token manual to a compact, 2,600-token reference guide, the shorter version consistently led to better results. This suggests that while LLMs benefit from a reference for ASP syntax and conventions, excessive information can distract the model and interfere with its ability to follow constraints.

Performance and Efficiency

When tested across six diverse benchmarks, including logic puzzles, Sudoku variants, and planning tasks, the LLM+ASP framework significantly outperformed standalone LLMs. On the most difficult problems, where baseline models often "give up" or show a sharp drop in accuracy, LLM+ASP maintained high performance. The system also proved more efficient: while standard LLMs consume vast amounts of computing power trying to navigate search spaces directly, LLM+ASP spends its resources generating compact code that guides the solver to the answer, maintaining efficiency even as problem complexity increases.