Towards Agentic Investigation of Security Alerts

Key Takeaways

  • Security analysts are overwhelmed by the volume of alerts and the low context provided by many detection systems.
  • Early-stage investigations require manual, time-consuming correlation of data across multiple log sources.
  • The proposed workflow moves away from simply asking an LLM to analyze raw data, which can be unreliable for high-volume, unstructured information, and instead gives the model constrained tool access.
Paper Abstract

Security analysts are overwhelmed by the volume of alerts and the low context provided by many detection systems. Early-stage investigations typically require manual correlation across multiple log sources, a task that is usually time-consuming. In this paper, we present an experimental, agentic workflow that leverages large language models (LLMs) augmented with predefined queries and constrained tool access (structured SQL over Suricata logs and grep-based text search) to automate the first stages of alert investigation. The proposed workflow integrates queries that provide an overview of the available data with LLM components that select which queries to use based on the overview results, extract raw evidence from the query results, and deliver a final verdict on the alert. Our results demonstrate that the LLM-powered workflow can investigate log sources, plan an investigation, and produce a final verdict with significantly higher accuracy than a verdict produced by the same LLM without the proposed workflow. Recognizing the inherent limitations of directly applying LLMs to high-volume, unstructured data, we combine the existing investigation practices of real-world analysts with a structured approach that leverages LLMs as virtual security analysts, thereby assisting analysts and reducing their manual workload.

Security analysts are currently overwhelmed by the sheer volume of security alerts and the lack of context provided by detection systems. Investigating these alerts is a manual, time-consuming process that requires correlating data across multiple log sources. The paper "Towards Agentic Investigation of Security Alerts" introduces an experimental, agentic workflow designed to automate the initial stages of these investigations by using Large Language Models (LLMs) as virtual security analysts.

Automating the Investigation Process

The proposed workflow moves away from simply asking an LLM to analyze raw data, which can be unreliable when dealing with high-volume, unstructured information. Instead, the researchers created a structured approach that gives the LLM "constrained tool access." The model is equipped with predefined queries, specifically structured SQL for Suricata logs and grep-based text search. By using these tools, the LLM can interact with log data in a controlled manner rather than attempting to process it all at once.
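A constrained tool layer of this kind might look like the sketch below. The function names, the SQLite backing store, and the log schema are illustrative assumptions, not the paper's actual interfaces; the point is that the model only ever calls narrow, read-only tools rather than ingesting raw logs.

```python
import sqlite3
import subprocess

# Hypothetical tool layer: the paper describes structured SQL over Suricata
# logs and grep-based text search; the schema and signatures here are assumed.

def query_suricata(db_path: str, sql: str) -> list:
    """Run a read-only SQL query against Suricata logs loaded into SQLite."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

def grep_logs(pattern: str, log_file: str, max_lines: int = 50) -> list:
    """Return up to max_lines case-insensitive matches from a text log."""
    result = subprocess.run(
        ["grep", "-i", pattern, log_file],
        capture_output=True, text=True,
    )
    return result.stdout.splitlines()[:max_lines]
```

Capping the number of returned lines is one way such a layer keeps query results small enough to fit in the model's context window.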

How the Agentic Workflow Functions

The system operates through a multi-step cycle that mimics the behavior of a human analyst:

  • Data Overview: The workflow first runs queries to generate a high-level summary of the available data.

  • Strategic Planning: Based on the results of that overview, the LLM selects the most relevant subsequent queries to perform.

  • Evidence Extraction: The model extracts raw evidence from the query results to build a case.

  • Final Verdict: Finally, the LLM synthesizes the gathered evidence to provide a definitive verdict on the security alert.
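The four steps above can be sketched as a single loop. The prompts, query list, and helper names below are assumptions for illustration; `llm` stands in for any chat-completion client and `run_query` for the constrained tool layer, neither of which matches the paper's actual implementation.

```python
# Minimal sketch of the four-step cycle; all prompts and names are illustrative.

OVERVIEW_QUERIES = [
    "SELECT COUNT(*) AS n, alert_signature FROM alerts GROUP BY alert_signature",
    "SELECT DISTINCT src_ip, dest_ip FROM alerts",
]

def investigate_alert(alert: dict, run_query, llm) -> str:
    # 1. Data overview: predefined queries summarize the available data.
    overview = {q: run_query(q) for q in OVERVIEW_QUERIES}

    # 2. Strategic planning: the LLM picks follow-up queries from the overview.
    plan = llm(f"Alert: {alert}\nOverview: {overview}\n"
               "List the SQL queries to run next, one per line.")
    followups = [q for q in plan.splitlines() if q.strip()]

    # 3. Evidence extraction: the LLM pulls raw evidence from the results.
    results = {q: run_query(q) for q in followups}
    evidence = llm(f"Extract the evidence relevant to {alert} from: {results}")

    # 4. Final verdict: the LLM synthesizes the evidence into a verdict.
    return llm(f"Given evidence: {evidence}\n"
               "Verdict for the alert, one of: malicious, benign.")
```

Keeping each step as a separate LLM call with a narrow prompt mirrors how an analyst works incrementally, and it keeps each prompt's context small.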

Significant Improvements in Accuracy

The researchers compared the performance of their agentic workflow against an LLM operating without this structured, tool-augmented approach. The results demonstrated that the proposed workflow significantly increases the accuracy of the final verdicts. By combining the reasoning capabilities of LLMs with the structured investigation practices used by real-world analysts, the system effectively reduces the manual workload required to triage security alerts.

A Practical Approach to Security

The study highlights the importance of recognizing the limitations of LLMs when applied directly to complex security data. By providing the model with a structured environment and specific tools, the authors show that LLMs can be effectively repurposed as virtual assistants. This approach helps bridge the gap between the high-level reasoning of AI and the technical, data-heavy requirements of modern cybersecurity operations.
