Automated vehicles hold the potential to revolutionize urban transport, yet gaining the trust of passengers poses a significant hurdle. Delivering timely and personalized explanations of an automated vehicle's actions can help close this trust gap. To tackle this issue, a team of researchers has developed TimelyTale, a dataset that captures actual driving scenarios and passengers' individual explanation needs. This multimodal dataset supports the generation of in-vehicle explanations that can strengthen passenger trust and confidence in automated transport.
The advent of automated vehicles could bring multiple advantages to urban mobility, such as improved safety, decreased traffic congestion, and better accessibility. These vehicles also free occupants to engage in non-driving-related tasks (NDRTs), such as relaxing, working, or enjoying entertainment while traveling. Nevertheless, passengers' limited trust hampers widespread adoption. Providing clear and concise explanations for an automated vehicle's decisions can enhance trust by offering a sense of control and alleviating negative experiences.
Current explainable artificial intelligence (XAI) methods primarily serve developers, emphasizing high-risk situations or detailed explanations—approaches that may not be suitable for everyday passengers. To address this issue, there is a need for passenger-focused XAI models that comprehend the type and timing of information needed during real-world driving scenarios.
In response to this need, a research team led by Professor SeungJun Kim from the Gwangju Institute of Science and Technology (GIST) in South Korea explored the explanation requirements of passengers in actual driving environments. They introduced the TimelyTale dataset, which comprises sensor data tailored to provide timely and contextually appropriate explanations for passengers. “Our research shifts the focus of XAI in the realm of autonomous driving from developers to the passengers themselves. We have created a method to capture the actual explanation needs of passengers within vehicles and devised approaches to generate timely and context-sensitive explanations,” states Prof. Kim.
The results of their research are published in two studies in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, dated September 27, 2023, and September 9, 2024. The authors received the 'Distinguished Paper Award' at UbiComp 2024 for their study titled 'What and When to Explain?: On-road Evaluation of Explanations in Highly Automated Vehicles.'
The researchers first examined how different types of visual explanations delivered via augmented reality—focusing on perception, attention, or a combination of both—and their timing influenced passenger experiences under real driving conditions. Their findings revealed that sharing the vehicle's perception state alone boosted trust, perceived safety, and situational awareness without overloading passengers with information. They also found that conveying traffic risk probability was the most effective signal for determining the right moment to deliver explanations, particularly when passengers felt overwhelmed by information.
Building on these insights, the researchers created the TimelyTale dataset. This dataset incorporates exteroceptive data (concerning the surroundings, such as sights and sounds), proprioceptive data (related to bodily positions and movements), and interoceptive data (concerning bodily sensations like discomfort), collected from passengers using various sensors in real driving situations. Importantly, this study also explored the idea of interruptibility, which involves the passenger shifting their attention from NDRTs back to essential driving information. The method successfully identified the timing and frequency of passengers’ requests for explanations, including the specific explanations they sought in various driving contexts.
Through this methodology, the researchers developed a machine-learning model that predicts the optimal timing for providing explanations. Additionally, as a proof of concept, they performed city-wide modeling to generate textual explanations tailored to different driving areas.
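The papers describe the full machine-learning pipeline, but the underlying idea—predicting whether now is a good moment to deliver an explanation from multimodal passenger-state features—can be sketched in a few lines. Everything below (the feature names, weights, and threshold) is an illustrative assumption standing in for a learned model, not the authors' actual implementation:

```python
# Hypothetical sketch of "when to explain" prediction, in the spirit of
# TimelyTale's interruptibility idea. Features and weights are invented
# for illustration; the real system learns these from sensor data.
from dataclasses import dataclass


@dataclass
class PassengerState:
    gaze_on_road: float    # fraction of the last window spent looking at the road
    motion_variance: float # body-movement variance (proprioceptive proxy), 0..1
    discomfort: float      # inferred bodily discomfort (interoceptive proxy), 0..1
    traffic_risk: float    # estimated probability of a risky traffic event, 0..1


def should_explain(state: PassengerState, threshold: float = 0.5) -> bool:
    """Return True if this looks like a suitable moment for an explanation.

    A weighted sum stands in for a trained classifier: higher traffic risk
    and discomfort raise the score, while a passenger deeply absorbed in an
    NDRT (low gaze_on_road, low motion) is harder to interrupt.
    """
    score = (0.5 * state.traffic_risk
             + 0.3 * state.discomfort
             + 0.1 * state.gaze_on_road
             + 0.1 * min(state.motion_variance, 1.0))
    return score >= threshold


# Example: a risky merge while the passenger already seems uneasy
uneasy = PassengerState(gaze_on_road=0.6, motion_variance=0.4,
                        discomfort=0.7, traffic_risk=0.8)
print(should_explain(uneasy))  # True
```

In a deployed system, the hand-tuned weights would be replaced by a classifier trained on labeled passenger requests, and the textual explanation itself would be selected based on the driving context.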
“Our research paves the way for greater acceptance and utilization of autonomous vehicles, which may transform urban transportation and personal mobility in the years ahead,” comments Prof. Kim.