
Key Takeaways

  • NeuroAgent is an LLM-driven agentic framework that automates preprocessing and analysis of heterogeneous neuroimaging data (sMRI, fMRI, dMRI, and PET) and supports interactive downstream analysis through natural-language queries.
  • Multimodal neuroimaging analysis often involves complex, modality-specific preprocessing workflows that require careful configuration, quality control, and coordination across heterogeneous toolchains.
  • We evaluate the system on 1,470 subjects pooled across all ADNI phases (CN=1,000, AD=470), where all subjects have sMRI and tabular data, with subsets also having Tau-PET (n=469), fMRI (n=278), and DTI (n=620).
  • Pipeline ablation studies across multiple LLM backends show that capable models reach up to 100% intent-parsing accuracy, with the strongest backend (Qwen3.5-27B) reaching 84.8% end-to-end preprocessing step correctness.
  • Automated recovery limits manual intervention to edge cases where human review is required via the Human-In-The-Loop interface.
Paper Abstract

Multimodal neuroimaging analysis often involves complex, modality-specific preprocessing workflows that require careful configuration, quality control, and coordination across heterogeneous toolchains. Beyond preprocessing, downstream statistical analysis and disease classification commonly require task-specific code, evaluation protocols, and data-format conventions, creating additional barriers between raw acquisitions and reproducible scientific analysis. We present NeuroAgent, an LLM-driven agentic framework that automates key preprocessing and analysis steps for heterogeneous neuroimaging data, including sMRI, fMRI, dMRI, and PET, and supports interactive downstream analysis through natural-language queries. NeuroAgent employs a hierarchical multi-agent architecture with a feedback-driven Generate-Execute-Validate engine: agents autonomously generate executable preprocessing code, detect and recover from runtime errors, and validate output integrity. We evaluate the system on 1,470 subjects pooled across all ADNI phases (CN=1,000, AD=470), where all subjects have sMRI and tabular data, with subsets also having Tau-PET (n=469), fMRI (n=278), and DTI (n=620). Pipeline ablation studies across multiple LLM backends show that capable models reach up to 100% intent-parsing accuracy, with the strongest backend (Qwen3.5-27B) reaching 84.8% end-to-end preprocessing step correctness. Automated recovery limits manual intervention to edge cases where human review is required via the Human-In-The-Loop interface. For Alzheimer's Disease classification using automatically preprocessed multimodal data, our agent ensemble achieves an AUC of 0.9518 with four modalities, outperforming all single-modality baselines. These results show that NeuroAgent can reduce the manual effort required for neuroimaging preprocessing and enable end-to-end automated analysis pipelines for neuroimaging research.

NeuroAgent: LLM Agents for Multimodal Neuroimaging Analysis and Research
Neuroimaging research often requires complex, manual workflows to turn raw brain scans into usable data. Researchers must navigate different software tools, file formats, and quality control steps, which can be time-consuming and prone to human error. NeuroAgent is an AI-driven framework designed to automate this entire process. By using a hierarchical team of "agents" powered by Large Language Models (LLMs), the system can autonomously handle the preprocessing of various brain imaging types—such as sMRI, fMRI, dMRI, and PET—and perform downstream analysis based on natural-language requests from researchers.

How the System Works

NeuroAgent functions as an intelligent orchestrator rather than a static script. It uses a "Generate-Execute-Validate" engine to manage the research pipeline. When a researcher provides a goal, a central planning agent breaks it down into specific tasks and determines the necessary steps. Specialized agents then generate the code needed to process the data using established neuroimaging tools. If a tool encounters an error, the system automatically reads the error logs, adjusts its approach, and retries the task. This loop continues until the output is validated for quality and structural integrity.
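The feedback loop described above can be sketched in a few lines. This is an illustrative outline only: the function names, retry policy, and feedback format are assumptions for clarity, not NeuroAgent's actual API.

```python
def generate_execute_validate(task, generate, execute, validate, max_retries=3):
    """Generate code for a task, run it, and retry with error feedback.

    `generate`, `execute`, and `validate` stand in for the specialized
    agents: generate(task, feedback) -> code string, execute(code) ->
    (ok, output), validate(output) -> bool.
    """
    feedback = None
    for attempt in range(max_retries):
        code = generate(task, feedback)      # agent drafts executable code
        ok, output = execute(code)           # run it against the data
        if ok and validate(output):          # check output integrity
            return output
        feedback = output                    # feed the error log back in
    raise RuntimeError(f"Task {task!r} failed after {max_retries} attempts")
```

On each failed attempt the error log becomes the feedback for the next generation step, which is what lets the system "read the error logs, adjust its approach, and retry" without human input.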

Multimodal Integration

A key strength of NeuroAgent is its ability to handle heterogeneous data. Because different types of brain scans (like structural MRI and functional MRI) often have interdependent requirements, the system automatically builds a dependency graph. For example, if a researcher requests an fMRI analysis, the system recognizes that it must first process the structural MRI to provide an anatomical reference. By integrating these disparate modalities into a single, organized dataset, the system allows for more sophisticated analyses, such as classifying Alzheimer’s disease using a combination of different scan types.
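The dependency resolution described above can be sketched with a topological sort over a small modality graph. The dependency table below is a plausible example (fMRI, dMRI, and PET all anchoring to the structural MRI), not the system's actual configuration.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency table: which modalities must be preprocessed
# before another can run (e.g. fMRI needs the sMRI anatomical reference).
DEPENDENCIES = {
    "sMRI": set(),
    "fMRI": {"sMRI"},
    "dMRI": {"sMRI"},
    "PET":  {"sMRI"},
}

def preprocessing_order(requested):
    """Expand a request to include its prerequisites, then order them."""
    needed, stack = set(), list(requested)
    while stack:
        modality = stack.pop()
        if modality not in needed:
            needed.add(modality)
            stack.extend(DEPENDENCIES[modality])
    graph = {m: DEPENDENCIES[m] & needed for m in needed}
    return list(TopologicalSorter(graph).static_order())
```

Given a request for only fMRI, this yields sMRI first, mirroring the example in the text where the structural scan must be processed before the functional analysis can run.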

Research Performance

The researchers evaluated NeuroAgent using 1,470 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The system demonstrated high reliability, with the most capable model achieving 84.8% correctness in end-to-end preprocessing steps. In tests for Alzheimer’s disease classification, the agent ensemble achieved an AUC of 0.9518, outperforming models that relied on only a single type of imaging data. These results suggest that the framework can significantly reduce the manual labor currently required for neuroimaging research while maintaining high scientific accuracy.

Human-in-the-Loop Oversight

While NeuroAgent is designed for autonomy, it includes a "Human-in-the-Loop" interface to ensure safety and reliability. This feature allows researchers to supervise the process, approve critical decisions, and intervene if the system encounters an edge case it cannot resolve on its own. This hybrid approach balances the efficiency of automated AI agents with the necessary oversight required for clinical and scientific research, ensuring that the system acts as a helpful assistant rather than a "black box."
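The escalation behavior can be sketched as a wrapper that exhausts automated recovery before handing a step to a reviewer. The queue and the status labels here are illustrative assumptions, not the interface's real schema.

```python
def run_step(step, max_retries=3, review_queue=None):
    """Attempt a step with automated recovery; escalate edge cases.

    `step` is a callable representing one pipeline stage; failures are
    retried, and only unrecoverable ones reach the human review queue.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return {"status": "done", "result": step()}
        except Exception as err:
            last_error = err  # retained for the escalation record
    # Automated recovery exhausted: hand off to the human reviewer.
    if review_queue is not None:
        review_queue.append({"step": step.__name__, "error": str(last_error)})
    return {"status": "needs_review", "error": str(last_error)}
```

The design choice is that humans only see the residue of automation: routine failures are absorbed by retries, and the review queue holds just the edge cases the agents could not resolve.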
