Towards Lawful Autonomous Driving: Deriving Scenario-Aware Driving Requirements from Traffic Laws and Regulations
Autonomous vehicles (AVs) often struggle to follow traffic laws, leading to safety incidents and regulatory challenges. While human drivers learn to interpret laws through training and testing, AVs typically learn driving behavior implicitly from data, with no formal mechanism for understanding legal requirements. This paper introduces a pipeline that uses Large Language Models (LLMs) to automatically translate complex traffic laws into actionable, machine-readable driving requirements, so that AVs can navigate public roads in compliance with legal standards.
Bridging the Gap Between Laws and Scenarios
A major challenge in making AVs lawful is that traffic laws are highly context-dependent. A rule that applies at a T-intersection might not apply on a highway, and laws can vary based on weather, road markings, or traffic signs. Existing methods often fail because they rely on generic text, causing LLMs to hallucinate or retrieve irrelevant rules. This research solves this by grounding legal reasoning in a "traffic scenario taxonomy." By organizing traffic environments into a structured hierarchy—covering everything from road geometry to weather conditions—the system provides the LLM with a clear, semantic map to identify exactly which legal provisions apply to a specific driving situation.
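The taxonomy described above can be pictured as a tree of labeled scenario attributes. The following is a minimal, hypothetical sketch, not the paper's actual taxonomy, which is far richer (road geometry, markings, signage, weather, and more); the node labels here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One node in a scenario taxonomy, e.g. a road type or weather condition."""
    label: str
    children: list = field(default_factory=list)

    def find(self, label):
        """Depth-first search for a node by label; returns None if absent."""
        if self.label == label:
            return self
        for child in self.children:
            hit = child.find(label)
            if hit:
                return hit
        return None

# A tiny illustrative slice of a scenario hierarchy.
root = TaxonomyNode("scenario", [
    TaxonomyNode("road_geometry", [
        TaxonomyNode("t_intersection"),
        TaxonomyNode("highway"),
    ]),
    TaxonomyNode("weather", [
        TaxonomyNode("clear"),
        TaxonomyNode("rainy"),
    ]),
])

print(root.find("t_intersection").label)  # → t_intersection
print(root.find("roundabout"))            # → None
```

Organizing scenario attributes this way gives the retrieval step a fixed, semantic vocabulary to anchor against, rather than free-form scene descriptions.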
How the Pipeline Works
The proposed method uses "taxonomy-guided anchoring" to connect real-world scenarios to legal text. During training, the system learns "anchors"—specialized prompts for each node in the scenario taxonomy—that help the model understand the specific conditions under which a law is relevant. When the AV encounters a scenario, the system uses these anchors to retrieve the correct legal provisions. It then employs a "Chain-of-Thought" reasoning process to translate those laws into specific, actionable constraints, such as speed limits, yielding requirements, or turning prohibitions, which the vehicle can then use to guide its navigation and control systems.
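The retrieve-then-translate flow can be sketched roughly as follows. This is a toy stand-in, not the paper's method: the keyword sets play the role of the learned anchors, the lookup table plays the role of the LLM's chain-of-thought translation step, and all rules and names are invented for illustration.

```python
# Each taxonomy node carries an "anchor" (here a keyword set standing in for
# a learned prompt) plus the legal provisions filed under it. All entries are
# hypothetical examples, not real legal text.
ANCHORED_PROVISIONS = {
    "t_intersection": {
        "anchor": {"t-intersection", "junction", "yield"},
        "provisions": ["Yield to through traffic before entering."],
    },
    "school_zone": {
        "anchor": {"school", "children", "crossing"},
        "provisions": ["Reduce speed to 30 km/h when children are present."],
    },
}

def retrieve(scenario_words):
    """Score each node by anchor-keyword overlap; return the best node's provisions."""
    words = set(scenario_words)
    node, entry = max(ANCHORED_PROVISIONS.items(),
                      key=lambda kv: len(kv[1]["anchor"] & words))
    if not (entry["anchor"] & words):
        return None, []          # no anchor matched the scenario at all
    return node, entry["provisions"]

def to_constraints(node):
    """Stand-in for the chain-of-thought step: map a node to actionable constraints."""
    table = {
        "t_intersection": {"must": ["yield_to_through_traffic"], "must_not": []},
        "school_zone": {"must": ["speed_limit_kph_30"], "must_not": ["overtake"]},
    }
    return table.get(node, {"must": [], "must_not": []})

node, provisions = retrieve(["approaching", "school", "crossing", "ahead"])
print(node, to_constraints(node))
```

The key design idea the sketch preserves is the separation of concerns: anchoring decides *which* provisions apply to the scenario, and translation decides *what* concrete mandatory and prohibitive constraints those provisions imply.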
Significant Improvements in Accuracy
The researchers tested their pipeline using Chinese traffic laws and the OnSite dataset, which contains nearly 6,000 real-world driving scenarios. Compared to standard LLM approaches, this method significantly reduced errors in matching laws to scenarios. Specifically, the accuracy of deriving mandatory driving requirements (what the car must do) improved by 36.9%, while the accuracy for prohibitive requirements (what the car must avoid) increased by 38.2%. These results suggest that grounding AI in a structured understanding of the environment is far more effective than relying on the model’s internal, unguided knowledge.
Real-World Applications
Beyond theoretical testing, the researchers demonstrated the practical utility of their work in two key areas. First, they built a "law-compliance layer" for digital navigation, which allows AVs to plan routes and maneuvers that are legally sound. Second, they developed an onboard, real-time compliance monitor for field testing. This tool acts as a digital supervisor, allowing developers and regulators to monitor whether an AV is adhering to traffic laws during operation. This provides a scalable foundation for future AV deployment and helps bridge the gap between rapid technological advancement and necessary regulatory oversight.
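The onboard monitor described above amounts to checking a snapshot of vehicle state against the derived constraints. Below is a hedged sketch under invented assumptions; the constraint fields and state keys are hypothetical, not the researchers' actual interface.

```python
def check_compliance(constraints, state):
    """Return a list of violated requirements (an empty list means compliant)."""
    violations = []
    limit = constraints.get("speed_limit_kph")
    if limit is not None and state["speed_kph"] > limit:
        violations.append(f"speed {state['speed_kph']} kph exceeds limit {limit}")
    for action in constraints.get("prohibited", []):
        if action in state["active_maneuvers"]:
            violations.append(f"prohibited maneuver: {action}")
    for action in constraints.get("required", []):
        if action not in state["active_maneuvers"]:
            violations.append(f"missing required action: {action}")
    return violations

# Example: a school-zone constraint set against a non-compliant state snapshot.
constraints = {"speed_limit_kph": 30, "prohibited": ["overtake"], "required": []}
state = {"speed_kph": 42, "active_maneuvers": {"overtake"}}
print(check_compliance(constraints, state))
```

Run at each control cycle, such a check gives developers and regulators a per-timestep compliance log, which is the "digital supervisor" role the paper assigns to its field-testing monitor.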
