# Exploring Interaction Paradigms for LLM Agents in Scientific Visualization

This paper investigates how different types of artificial intelligence agents perform when tasked with creating scientific visualization workflows. By testing eight representative agents across 15 standard visualization tasks, the researchers aim to understand the strengths and weaknesses of various interaction methods (such as writing code, using software interfaces, or employing specialized tools) and to determine how these systems can be better designed for complex scientific data analysis.

## Comparing Agent Paradigms

The study categorizes AI agents into three primary groups to see how they handle visualization workflows:

* Domain-Specific Agents: These use structured tools and APIs to perform tasks. They are highly efficient and stable but lack the flexibility to handle unexpected situations.
* General-Purpose Coding Agents: These agents write and execute code to complete tasks from start to finish. They are the most successful at completing tasks but require significantly more computational power and token usage.
* Computer-Use Agents: These interact with software through graphical user interfaces (GUIs), similar to how a human would. While they are good at individual steps, they struggle with long, multi-step projects, suggesting that their main hurdle is planning over long horizons rather than understanding the interface itself.

## The Impact of Memory and Planning

The research highlights that persistent memory, the ability of an agent to "remember" previous attempts, consistently improves performance. Agents with memory enabled were more efficient and more successful because they could avoid repeating past mistakes. Furthermore, the study found that breaking complex tasks into smaller, manageable steps significantly improved the performance of GUI-based agents. By focusing on one step at a time, these agents avoided the compounding errors that often occur when trying to manage an entire workflow at once.
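The memory-and-decomposition findings above can be sketched in code. This is a minimal illustration, not the paper's implementation: all names (`PersistentMemory`, `run_workflow`, `propose`, `execute`) are hypothetical, and it assumes an agent that proposes one approach per sub-task while a small JSON store remembers which approaches failed on previous runs.

```python
# Hypothetical sketch: a step-wise agent loop that consults a persistent
# memory of past attempts, so repeated runs avoid known failures and
# per-step execution avoids compounding errors across the workflow.
import json
from pathlib import Path

class PersistentMemory:
    """Stores outcomes of past step attempts across runs (JSON on disk)."""
    def __init__(self, path="agent_memory.json"):
        self.path = Path(path)
        self.records = json.loads(self.path.read_text()) if self.path.exists() else {}

    def failed_approaches(self, step):
        return self.records.get(step, {}).get("failures", [])

    def record(self, step, approach, success):
        entry = self.records.setdefault(step, {"failures": []})
        if success:
            entry["solution"] = approach
        elif approach not in entry["failures"]:
            entry["failures"].append(approach)
        self.path.write_text(json.dumps(self.records, indent=2))

def run_workflow(steps, propose, execute, memory):
    """Run each sub-task in isolation instead of planning the whole workflow at once."""
    for step in steps:
        # Ask the agent for an approach, excluding ones memory says have failed.
        approach = propose(step, avoid=memory.failed_approaches(step))
        success = execute(step, approach)
        memory.record(step, approach, success)
        if not success:
            return False  # surface the failure instead of drifting onward
    return True
```

The design choice mirrored here is that memory pays off between trials: a second run over the same workflow skips approaches the first run already proved wrong.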
## Key Tradeoffs in Design

The findings reveal that there is no "one-size-fits-all" solution for scientific visualization. A clear tradeoff exists between the high success rates of general-purpose coding agents and the high efficiency of domain-specific agents. While coding agents are more reliable for complex tasks, they are also more expensive to run. The researchers conclude that the next generation of scientific visualization systems should be hybrid, combining the structured, deterministic nature of tool-use agents with the adaptability of GUI-based systems and the long-term learning provided by persistent memory.

## Limitations and Future Directions

The study notes that while some agents achieve high success rates, consistent performance across repeated trials remains a challenge. Even the most capable agents show variability in their execution, meaning their success often relies on favorable intermediate decisions rather than on a perfectly stable reasoning process. Future development should focus on hierarchical workflow decomposition and better supervision of intermediate states, allowing for more robust and reliable AI-assisted scientific discovery.
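The hybrid design the authors advocate can be sketched as a simple dispatcher: prefer a deterministic, domain-specific tool when one matches the sub-task, and fall back to a flexible but costlier general-purpose path otherwise. Everything here (`HybridRouter`, the `isosurface` tool, the fallback lambda) is an illustrative assumption, not an interface from the paper.

```python
# Hypothetical sketch of a hybrid agent: route each sub-task to a cheap,
# deterministic tool when available; otherwise fall back to a flexible
# (but more expensive) general-purpose coding agent.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class HybridRouter:
    tools: Dict[str, Callable[[dict], str]]       # domain-specific: efficient, stable
    coding_agent: Callable[[str, dict], str]      # general-purpose: adaptable, costly

    def dispatch(self, task: str, params: dict) -> str:
        if task in self.tools:
            return self.tools[task](params)       # structured, deterministic path
        return self.coding_agent(task, params)    # flexible fallback

# Toy instance: one registered tool plus a stand-in for a coding agent.
router = HybridRouter(
    tools={"isosurface": lambda p: f"isosurface(level={p['level']})"},
    coding_agent=lambda task, p: f"generated code for {task!r}",
)
```

In this framing, the tradeoff the paper measures (success rate versus cost) becomes a routing policy: the more sub-tasks that can be covered by structured tools, the less often the expensive fallback is invoked.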