Press Release

New Verification Framework Uncovers Safety Lapses in Self-Driving System Autoware

Using a newly developed verification framework, researchers uncovered safety limitations in an open-source self-driving system during high-speed driving and sudden cut-ins, raising concerns for real-world deployments

A new study introduces a verification framework that rigorously evaluates the safety performance of autonomous driving systems against established industry standards. Using this framework, experiments with the open-source system Autoware revealed that it sometimes fails to prevent collisions in critical situations, such as high-speed cut-ins and abrupt lane changes. Given Autoware's current use in public transportation services, these results underscore the critical need for further development to ensure the safety and reliability of autonomous vehicles for widespread public adoption.

Imagine a future where you can sit back and relax while your car drives you safely to your destination. Self-driving cars promise this convenience, but before we fully embrace this technology, one crucial question remains: Can we truly trust them to handle anything the road throws their way?

To address this question, Research Assistant Professor Duong Dinh Tran from Japan Advanced Institute of Science and Technology (JAIST) and his team, including Associate Professor Takashi Tomita and Professor Toshiaki Aoki at JAIST, put the open-source autonomous driving system Autoware through a rigorous verification framework, revealing potential safety limitations in critical traffic situations. To thoroughly check how safe Autoware is, the researchers built a dedicated virtual testing system. This system, described in their study published in the journal IEEE Transactions on Reliability, acted like a digital proving ground for self-driving cars.

Using a language called AWSIM-Script, they could create simulations of various tricky traffic situations, modeled on real-world dangers that car safety experts in Japan have identified. During these simulations, a tool called Runtime Monitor kept a detailed record of everything that happened, much like the black box in an airplane. Finally, another verification program, AW-Checker, analyzed these recordings to see if Autoware followed the rules of the road, as defined by the Japan Automobile Manufacturers Association (JAMA) safety standard. This standard provides a clear and structured way to evaluate the safety of Autonomous Driving Systems (ADSs).

The researchers focused on three particularly dangerous and frequently encountered scenarios defined by the JAMA safety standard: cut-in (a vehicle abruptly moving into the ego vehicle's lane), cut-out (a vehicle ahead suddenly changing lanes), and deceleration (a vehicle ahead suddenly braking). They compared Autoware's performance against JAMA's "careful driver model," a benchmark representing the minimum expected safety level for ADSs.
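To give a flavor of what checking a recorded trace against a careful-driver benchmark can look like, here is a minimal, hypothetical sketch in Python. It is not the authors' AW-Checker or the actual JAMA model: the reaction time, braking limit, and trace format are illustrative assumptions. The idea is simply to flag trace states from which even a prompt, hard-braking driver could no longer avoid hitting the vehicle ahead.

```python
# Hypothetical trace checker in the spirit of a careful-driver benchmark.
# REACTION_TIME and MAX_DECEL are illustrative values, not JAMA's actual parameters.

REACTION_TIME = 0.75   # s, assumed reaction delay of the careful driver
MAX_DECEL = 7.0        # m/s^2, assumed maximum braking deceleration

def collision_avoidable(gap, ego_speed, lead_speed):
    """Return True if a driver who starts braking at MAX_DECEL after
    REACTION_TIME could still avoid hitting the lead vehicle, assuming
    (for simplicity) the lead vehicle holds its current speed."""
    closing = ego_speed - lead_speed  # m/s, relative closing speed
    if closing <= 0:
        return True  # not closing in; no collision risk from this state
    # Gap remaining after the reaction delay, while still closing
    gap_after_reaction = gap - closing * REACTION_TIME
    if gap_after_reaction <= 0:
        return False
    # Distance needed to shed the closing speed at constant MAX_DECEL
    braking_gap = closing ** 2 / (2 * MAX_DECEL)
    return gap_after_reaction > braking_gap

def check_trace(trace):
    """Scan a recorded trace (list of state snapshots) and return the
    timestamps where a collision was already unavoidable for the
    careful driver model."""
    return [s["t"] for s in trace if not collision_avoidable(
        s["gap"], s["ego_speed"], s["lead_speed"])]

trace = [
    {"t": 0.0, "gap": 40.0, "ego_speed": 16.0, "lead_speed": 16.0},
    {"t": 1.0, "gap": 12.0, "ego_speed": 16.0, "lead_speed": 4.0},  # sudden cut-in
]
print(check_trace(trace))  # → [1.0]
```

In this toy trace, the cut-in at t = 1.0 leaves too small a gap for the assumed reaction time and braking limit, so the state is flagged; the real framework evaluates far richer state recordings against the standard's formal requirements.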

These experiments revealed that Autoware did not consistently meet the minimum safety requirements as defined by the careful driver model. As Dr. Tran explained, "Experiments conducted using our framework showed that Autoware was unable to consistently avoid collisions, especially during high-speed driving and sudden lateral movements by other vehicles, when compared to a competent and cautious driver model."

One significant reason for these failures appeared to be errors in how Autoware predicted the movement of other vehicles. The system often predicted slow and gradual lane changes. However, when faced with vehicles making fast, aggressive lane changes (like in the cut-in scenario with high lateral velocity), Autoware's predictions were inaccurate, leading to delayed braking and subsequent collisions in the simulations.

Interestingly, the study also compared the effectiveness of different sensor setups for Autoware. One setup used only lidar, while the other combined data from both lidar and cameras. Surprisingly, the lidar-only mode generally performed better in these challenging scenarios than the camera-lidar fusion mode. The researchers suggest that inaccuracies in the machine learning-based object detection of the camera system might have introduced noise, negatively impacting the fusion algorithm's performance.

These findings have important real-world implications, as some customized versions of Autoware have already been deployed on public roads to provide autonomous driving services. "Our study highlights how a runtime verification framework can effectively assess real-world autonomous driving systems like Autoware. Doing so helps developers identify and correct potential issues both before and after the system is deployed, ultimately fostering the development of safer and more reliable autonomous driving solutions for public use," noted Dr. Tran.

While this study provides valuable insights into Autoware's performance in specific traffic disturbances on non-intersection roads, the researchers plan to expand their work to include more complex scenarios, such as those at intersections and involving pedestrians. They also aim to investigate the impact of environmental factors like weather and road conditions in future studies.

pr20250526-1e.png

Image title: A scene from the cut-in scenario.
Image caption: A simulated traffic scenario from the experiments with Autoware, where a red car in the adjacent lane cuts into the path of the autonomous vehicle.
Image credit: Duong Dinh Tran from JAIST
License type: Original Content
Usage restrictions: Cannot be reproduced without permission.

Reference

Title of original paper: Safety Analysis of Autonomous Driving Systems: A Simulation-based Runtime Verification Approach
Authors: Duong Dinh Tran, Takashi Tomita, and Toshiaki Aoki
Journal: IEEE Transactions on Reliability
DOI: 10.1109/TR.2025.3561455

Funding information

This work was supported by JST, CREST Grant Number JPMJCR23M1.

May 23, 2025
