Modeling Clinical Concern Trajectories in Language Model Agents

Large language model (LLM) agents used in clinical environments often struggle to communicate risk effectively. Typically, these agents operate on a "threshold" basis: they remain silent until a specific, predefined danger level is reached, at which point they trigger an abrupt alert. This behavior lacks the nuance of human clinical practice, where medical professionals act on gradually mounting concern rather than on sudden, isolated events. This paper explores how to make LLM agents more "clinically legible" by surfacing these pre-escalation signals, enabling better human-in-the-loop monitoring without handing actual clinical authority to the AI.

The Problem with Threshold-Driven Agents

In current clinical AI deployments, stateless agents often exhibit "escalation cliffs." Because these agents maintain no sense of history or evolving risk, they provide no visibility into the buildup of a patient's condition. They function as binary switches: either everything is fine, or an emergency is triggered. This opacity makes it difficult for clinicians to intervene early or to understand the context behind an agent's sudden alarm, since the agent never communicates the sustained unease that often precedes a critical event.

Introducing Explicit State Dynamics

To address this, the authors propose a lightweight architecture that integrates the output of a memoryless clinical risk encoder over time. By applying first- and second-order dynamics to this signal, the system generates a continuous "escalation pressure" value. Instead of relying on a single, instantaneous trigger, this approach tracks how risk accumulates. By incorporating these dynamics, the agent can represent the "trajectory" of a patient's condition, effectively modeling both the duration and the intensity of rising concern.
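The update described above can be sketched as a leaky integrator with a trend term. This is a minimal illustration under assumed coefficients, not the paper's implementation; the names `EscalationState`, `step`, and the specific `decay`/`gain`/`trend_gain` values are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EscalationState:
    pressure: float = 0.0   # accumulated concern ("escalation pressure")
    prev_risk: float = 0.0  # last instantaneous risk, for the trend term

def step(state: EscalationState, risk: float,
         decay: float = 0.9, gain: float = 0.1,
         trend_gain: float = 0.5) -> EscalationState:
    """One monitoring step.

    First-order term: leaky accumulation of the instantaneous risk score,
    so sustained unease builds pressure even while each individual reading
    sits below any alert threshold.
    Second-order term: a bonus for a rising trend (positive derivative),
    so the trajectory anticipates deterioration instead of lagging it.
    """
    trend = risk - state.prev_risk
    pressure = (decay * state.pressure
                + gain * risk
                + trend_gain * max(trend, 0.0))
    return EscalationState(pressure=pressure, prev_risk=risk)
```

With `gain = 1 - decay`, a constant risk level settles toward that same level as its steady-state pressure, while a rising series produces a smoothly climbing curve that clinicians can watch well before any threshold is crossed.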
Smoother Trajectories and Better Monitoring

When tested in synthetic ward scenarios, the researchers compared standard stateless agents against their new architecture. While both types of agents reached the same escalation points, the agents using second-order dynamics produced smooth, anticipatory trajectories. These trajectories make the buildup of risk visible, allowing human clinicians to observe the "escalation pressure" as it rises. This visibility enables more informed interventions, giving the human team a window of time to act before a crisis point is reached.

Improving Clinical Legibility

The core takeaway of this research is that explicit state dynamics can significantly improve the transparency of clinical AI. By revealing not just when a threshold is crossed, but how long concern has been building, the system becomes a more effective tool for clinical support. This approach keeps the AI a helpful monitor that provides context, rather than a black box that only speaks up when an emergency is already underway.
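The synthetic-ward comparison can be illustrated with a toy deterioration scenario. Everything here is an assumption for demonstration: the risk series, the alert threshold `ALERT`, the salience level `WARN`, and the leaky-integrator coefficients are invented, not taken from the paper's experiments:

```python
ALERT = 0.7  # instantaneous-risk level at which a stateless agent fires (assumed)
WARN = 0.4   # pressure level at which buildup becomes salient to clinicians (assumed)

def run(risks, decay=0.9, gain=0.1, trend_gain=0.5):
    """Replay a risk series and record when each signal first becomes visible.

    Returns (first_warn, first_alert): the step at which the trajectory's
    escalation pressure reaches WARN, and the step at which a stateless
    threshold agent would fire on the raw risk score.
    """
    pressure, prev = 0.0, 0.0
    first_warn = first_alert = None
    for t, r in enumerate(risks):
        # Leaky integration plus a positive-trend bonus, as sketched earlier.
        pressure = decay * pressure + gain * r + trend_gain * max(r - prev, 0.0)
        prev = r
        if first_warn is None and pressure >= WARN:
            first_warn = t
        if first_alert is None and r >= ALERT:
            first_alert = t
    return first_warn, first_alert

# A slowly deteriorating patient: risk climbs linearly from 0.1 to 0.9.
risks = [0.1 + 0.05 * t for t in range(17)]
warn_t, alert_t = run(risks)
```

In this toy series the pressure crosses the warning band a few steps before the raw risk crosses the alert threshold, which is the "window of time to act" the evaluation describes: both agents escalate at the same point, but only the trajectory makes the approach visible.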