Towards Lawful Autonomous Driving: Deriving Scenario-Aware Driving Requirements from Traffic Laws and Regulations

Key Takeaways

  • Driving in compliance with traffic laws and regulations is a basic requirement for human drivers, yet autonomous vehicles (AVs) can violate these requirements in diverse real-world scenarios.
  • To encode law compliance into AV systems, conventional approaches use formal logic languages to explicitly specify behavioral constraints, but this process is labor-intensive, hard to scale, and costly to maintain.
  • With recent advances in artificial intelligence, it is promising to leverage large language models (LLMs) to derive legal requirements from traffic laws and regulations.
  • However, without explicitly grounding and reasoning in structured traffic scenarios, LLMs often retrieve irrelevant provisions or miss applicable ones, yielding imprecise requirements.
Paper Abstract

Driving in compliance with traffic laws and regulations is a basic requirement for human drivers, yet autonomous vehicles (AVs) can violate these requirements in diverse real-world scenarios. To encode law compliance into AV systems, conventional approaches use formal logic languages to explicitly specify behavioral constraints, but this process is labor-intensive, hard to scale, and costly to maintain. With recent advances in artificial intelligence, it is promising to leverage large language models (LLMs) to derive legal requirements from traffic laws and regulations. However, without explicitly grounding and reasoning in structured traffic scenarios, LLMs often retrieve irrelevant provisions or miss applicable ones, yielding imprecise requirements. To address this, we propose a novel pipeline that grounds LLM reasoning in a traffic scenario taxonomy through node-wise anchors that encode hierarchical semantics. On Chinese traffic laws and the OnSite dataset (5,897 scenarios), our method improves law-scenario matching by 29.1% and increases the accuracy of derived mandatory and prohibitive requirements by 36.9% and 38.2%, respectively. We further demonstrate real-world applicability by constructing a law-compliance layer for AV navigation and developing an onboard, real-time compliance monitor for in-field testing, providing a solid foundation for future AV development, deployment, and regulatory oversight.

Towards Lawful Autonomous Driving: Deriving Scenario-Aware Driving Requirements from Traffic Laws and Regulations
Autonomous vehicles (AVs) often struggle to follow traffic laws, leading to safety incidents and regulatory challenges. While human drivers learn to interpret laws through training and testing, AVs typically learn driving behaviors implicitly from data without a formal mechanism to understand legal requirements. This paper introduces a new pipeline that uses Large Language Models (LLMs) to automatically translate complex traffic laws into actionable, machine-readable driving requirements, ensuring that AVs can navigate public roads in compliance with legal standards.

Bridging the Gap Between Laws and Scenarios

A major challenge in making AVs lawful is that traffic laws are highly context-dependent. A rule that applies at a T-intersection might not apply on a highway, and laws can vary based on weather, road markings, or traffic signs. Existing methods often fail because they feed the LLM generic, unstructured text, causing it to hallucinate or retrieve irrelevant rules. This research addresses the problem by grounding legal reasoning in a "traffic scenario taxonomy." By organizing traffic environments into a structured hierarchy—covering everything from road geometry to weather conditions—the system provides the LLM with a clear, semantic map to identify exactly which legal provisions apply to a specific driving situation.
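To make the idea of a scenario taxonomy concrete, here is a minimal, purely illustrative sketch in Python. The category names (`road_geometry`, `t_intersection`, etc.) and the two-level structure are invented for illustration; the paper's actual taxonomy is not reproduced here. The sketch shows the core operation such a hierarchy enables: mapping a scenario's observed tags onto root-to-leaf taxonomy paths.

```python
# Hypothetical traffic-scenario taxonomy. Node names and structure are
# illustrative assumptions, not the paper's actual taxonomy.
TAXONOMY = {
    "road_geometry": {
        "intersection": ["t_intersection", "crossroad", "roundabout"],
        "segment": ["highway", "urban_road", "rural_road"],
    },
    "environment": {
        "weather": ["clear", "rain", "fog", "snow"],
        "signage": ["traffic_light", "stop_sign", "lane_markings"],
    },
}

def taxonomy_paths(tree, prefix=()):
    """Enumerate every root-to-leaf path in the taxonomy."""
    if isinstance(tree, dict):
        for key, sub in tree.items():
            yield from taxonomy_paths(sub, prefix + (key,))
    else:  # leaf level: a list of terminal categories
        for leaf in tree:
            yield prefix + (leaf,)

def match_scenario(scenario_tags):
    """Return the taxonomy paths whose leaf matches one of the scenario's tags."""
    tags = set(scenario_tags)
    return [p for p in taxonomy_paths(TAXONOMY) if p[-1] in tags]

# A rainy T-intersection with a traffic light activates three taxonomy paths.
paths = match_scenario({"t_intersection", "rain", "traffic_light"})
```

The hierarchical paths (rather than flat tags) are what give the LLM semantic context: "t_intersection" arrives qualified as a kind of intersection under road geometry, which is exactly the information a provision about intersections needs.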

How the Pipeline Works

The proposed method uses "taxonomy-guided anchoring" to connect real-world scenarios to legal text. During training, the system learns "anchors"—specialized prompts for each node in the scenario taxonomy—that help the model understand the specific conditions under which a law is relevant. When the AV encounters a scenario, the system uses these anchors to retrieve the correct legal provisions. It then employs a "Chain-of-Thought" reasoning process to translate those laws into specific, actionable constraints, such as speed limits, yielding requirements, or turning prohibitions, which the vehicle can then use to guide its navigation and control systems.
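The retrieval step described above can be sketched as a toy lookup, shown below. Everything here is a hypothetical simplification: in the paper, anchors are learned prompts and retrieval is done by an LLM, whereas this sketch reduces each anchor to a text label and each provision to a set of anchors plus a pre-derived rule, purely to illustrate the data flow from active anchors to scenario-aware requirements.

```python
# Illustrative sketch of taxonomy-guided anchoring. Provision ids, anchor
# texts, and rules are invented for this example, not taken from the paper.
ANCHORS = {
    "t_intersection": "ego vehicle is approaching a T-shaped intersection",
    "rain": "road surface is wet due to rainfall",
    "traffic_light": "a signal-controlled junction is present",
}

# Hypothetical provision index: id -> (anchors required to apply, derived rule)
PROVISIONS = {
    "art_38": ({"traffic_light"},
               {"type": "mandatory", "action": "stop_on_red"}),
    "art_42": ({"rain"},
               {"type": "mandatory", "action": "reduce_speed"}),
    "art_51": ({"t_intersection"},
               {"type": "prohibitive", "action": "no_overtaking"}),
    "art_67": ({"highway"},
               {"type": "prohibitive", "action": "no_reversing"}),
}

def derive_requirements(active_anchors):
    """Retrieve provisions whose anchor conditions are all satisfied and
    return their scenario-aware driving requirements."""
    active = set(active_anchors)
    return {pid: rule
            for pid, (needed, rule) in PROVISIONS.items()
            if needed <= active}

# The highway-only provision (art_67) is correctly filtered out.
reqs = derive_requirements({"t_intersection", "rain", "traffic_light"})
```

The point of the sketch is the filtering behavior: anchoring restricts the candidate provisions to those whose scenario conditions actually hold, which is what reduces the irrelevant retrievals and hallucinations described earlier.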

Significant Improvements in Accuracy

The researchers tested their pipeline using Chinese traffic laws and the OnSite dataset, which contains 5,897 real-world driving scenarios. Compared to standard LLM approaches, this method improved law-scenario matching by 29.1%. Specifically, the accuracy of deriving mandatory driving requirements (what the car must do) improved by 36.9%, while the accuracy for prohibitive requirements (what the car must avoid) increased by 38.2%. These results suggest that grounding AI in a structured understanding of the environment is far more effective than relying on the model’s internal, unguided knowledge.

Real-World Applications

Beyond theoretical testing, the researchers demonstrated the practical utility of their work in two key areas. First, they built a "law-compliance layer" for digital navigation, which allows AVs to plan routes and maneuvers that are legally sound. Second, they developed an onboard, real-time compliance monitor for field testing. This tool acts as a digital supervisor, allowing developers and regulators to monitor whether an AV is adhering to traffic laws during operation. This provides a scalable foundation for future AV deployment and helps bridge the gap between rapid technological advancement and necessary regulatory oversight.
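A minimal sketch of what such an onboard compliance check might look like, assuming the requirements have already been derived for the current scenario. The state fields, rule actions, and thresholds below are hypothetical placeholders, not the paper's monitor implementation; the sketch only illustrates the supervisory pattern of comparing vehicle state against machine-readable requirements in real time.

```python
# Hypothetical compliance check. Field names ("speed_kmh", "light",
# "overtaking") and rule formats are illustrative assumptions.
def check_compliance(state, requirements):
    """Return the ids of requirements violated by the current vehicle state."""
    violations = []
    for rid, rule in requirements.items():
        if rule["action"] == "reduce_speed" and state["speed_kmh"] > rule["limit_kmh"]:
            violations.append(rid)
        elif (rule["action"] == "stop_on_red"
              and state["light"] == "red" and state["speed_kmh"] > 0):
            violations.append(rid)
        elif rule["action"] == "no_overtaking" and state["overtaking"]:
            violations.append(rid)
    return violations

# Example: driving at 45 km/h through a red light, without overtaking.
requirements = {
    "art_42": {"action": "reduce_speed", "limit_kmh": 30},
    "art_38": {"action": "stop_on_red"},
    "art_51": {"action": "no_overtaking"},
}
state = {"speed_kmh": 45, "light": "red", "overtaking": False}
violations = check_compliance(state, requirements)
```

In a deployed monitor this loop would run on each perception update, with violations logged or surfaced to developers and regulators, which is the supervisory role the paper describes.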
