
Collective Perception Virtual Safety Validation in Urban Environments: Scenarios, Tools, Metrics

  • Open Access
  • 2026
  • OriginalPaper
  • Book chapter

Abstract

This chapter investigates the challenges and requirements of virtual safety validation for collective perception (CP) in urban environments for connected and automated vehicles (CAVs). It examines the need for a framework that addresses gaps in metrics, scenarios and simulation tools, with the focus placed on the perception layer of CP. The text discusses various CP datasets, both simulated and real-world, and their suitability for testing CP modules. It also outlines the requirements for validating urban scenario-based CP tests, including the creation of scenarios, metrics and test environments. The chapter underlines the importance of accounting for perception uncertainty and compromised information in CP systems. Finally, the challenges and future work in applying these requirements to a Bayesian CP module in the CARLA simulation environment are discussed.

1 Introduction

The on-board sensors of connected and automated vehicles (CAVs) are limited by their range and by their inability to see around corners or into blind spots [1]. Collective (or cooperative) perception (CP) is a multi-agent system [2] in which agents share perceptual information such as their state (e.g., vehicle position, pose, speed, acceleration), their tracked object list or even their tracked objects' intentions. CP is currently standardized by the European Telecommunications Standards Institute (ETSI) as a second-generation Vehicle-to-Everything (V2X) communication service. In that context, CP testing has mostly been studied as an instantiation of a Vehicular Ad-hoc NETwork (VANET), via a combination of traffic and network simulation environments, targeting questions such as how many connected agents a topology can support, which communication protocols to use, and at what latency and frequency messages should be broadcast, without studying how the collective perception content is generated (e.g., fusion techniques) or how the derived (collective) object detection should be assessed.
Based on the above, the problem is cast as a multi-agent perception testing problem in which the focus is on the shared information content (pre-processed sensor data) and on how it affects CP information fusion on the CAV side (i.e., what sensory data needs to be shared, at which resolution, etc.) in the presence of occlusions and sensor measurement uncertainties (uncertainty propagation). The CAV control layer, i.e., which cooperative strategies should be implemented, is out of scope (for a broader specification of cooperative driving automation goals, see SAE J3216), since the present work focuses only on the perception layer.
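As a minimal illustration of uncertainty propagation in late fusion, the sketch below fuses position estimates reported by two observers via inverse-variance weighting, assuming independent Gaussian errors. The function name and the numbers in the usage example are illustrative, not part of any standard or of the chapter's module:

```python
import numpy as np

def fuse_estimates(means, variances):
    # Inverse-variance weighted fusion of independent Gaussian
    # estimates of the same quantity (a minimal late-fusion model;
    # assumes uncorrelated measurement errors across observers).
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * np.asarray(means, dtype=float)).sum()
    return fused_mean, fused_var

# Two CAVs report the same pedestrian's x-position with different accuracy.
mean, var = fuse_estimates([10.2, 9.8], [0.25, 1.0])
```

Note that the fused variance is always smaller than the best individual one; this is precisely the property that breaks down when shared estimates are correlated, which motivates the more careful fusion schemes discussed later.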
Following this trend, the goal of the present analysis is to explore requirements for building a virtual safety validation framework for urban CP testing, concretely addressing current gaps with respect to (a) metrics for assessing CP from a perception point of view when late fusion is employed, i.e., when detected bounding boxes and confidence scores are shared, following the ETSI CPM formulation, (b) scenarios of interest for (a), and (c) the types of simulation tools necessary for (a).
Several high-quality surveys have been released in the last two years covering the research and development challenges of the CP task, proving the growing interest of both the automated driving (AD) perception and the VANET communities [3–5]. In parallel, two surveys on object-level perception and CP safety assessment [6, 7] have also appeared, inspired by the recent ISO work on SOTIF [8]. Reviewing the aforementioned surveys and some of the works they include, the following sub-sections highlight certain aspects relevant to our CP testing context.
In recent years, there have been many efforts to introduce large-scale CP datasets for the automotive context; these are mostly generated by simulation and are appropriate for testing CP modules. This advent is partially due to the introduction of, and community support for, the CARLA open-source driving simulator [9], which makes the generation of virtual sensor data easy and offers a variety of ground-truth data (incl. instance semantic segmentation data, which yields a unique pixel value for every object in a scene, and pedestrian skeleton data). In July 2018, version 0.9.0 introduced multi-client multi-agent support, which opened the road for cooperative agents. CARLA supports basic sensor modeling for several sensors such as cameras, depth cameras, LiDAR (simulated ray cast), IMU and RADAR.
CP datasets generated in simulation: The first big contribution was made in 2021 by the UCLA Mobility Lab with the release of the OpenCDA dataset and simulation benchmark [10], which supported testing both at the individual-autonomy level and at the traffic level and provided a co-simulation platform (with CARLA as part of it), a full-stack prototype cooperative driving system, and a scenario manager. OpenCDA also offered benchmark testing scenarios, state-of-the-art benchmark algorithms for all modules of an AD stack, benchmark testing road maps, and benchmark evaluation metrics, but focused on cooperative driving applications rather than on CP. It is also noted that the types of data exchanged among traffic agents are freely explored and not restricted to the messages already standardized by ETSI. OpenCDA has grown since then and today constitutes an open-source ecosystem for cooperative driving automation research, with an active community around the globe developing parts of it [11]. Two similar efforts followed, focusing on multi-agent perception: OPV2V [12] and V2X-Sim [13]. Both datasets provide object-level annotations from CARLA towns in order to support detection, tracking and segmentation perception tasks, deploying early-fusion/feature-fusion/late-fusion approaches for CP content generation. OPV2V is implemented via the OpenCOOD framework, which supports multi-GPU training and is used by V2XSet [14]. The scenarios supported by all the simulation frameworks above focus on the urban and rural, and more seldom on the highway, operational domain and include V2X in straight and curvy urban road segments, urban intersections, rural areas where only V2V applies, and highway on-ramp scenarios. The aforementioned CP simulation benchmarks are equipped with RGB cameras and LiDAR, allowing the collection of more than 10,000 frames, and each scene contains at least two vehicles.
As discussed in V2X-ViT [14], different perception error models can be introduced in simulation (pose error, agents' synchronization error, time delay in V2X communication).
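Such error models can be approximated in a few lines; the sketch below perturbs a shared detection with Gaussian pose noise and a random transmission delay. Function name and parameter values are hypothetical and would need to be fitted to real sensor and channel measurements:

```python
import random

def perturb_detection(x, y, timestamp, pose_sigma=0.2,
                      delay_range=(0.0, 0.1), rng=None):
    # Apply a simple perception/communication error model to one
    # shared detection: Gaussian pose noise plus a uniform V2X delay.
    # pose_sigma (m) and delay_range (s) are placeholder values.
    rng = rng or random.Random()
    noisy_x = x + rng.gauss(0.0, pose_sigma)
    noisy_y = y + rng.gauss(0.0, pose_sigma)
    delayed_t = timestamp + rng.uniform(*delay_range)
    return noisy_x, noisy_y, delayed_t

# Perturb a detection at (5.0, -2.0) m stamped at t = 100.0 s.
nx, ny, t = perturb_detection(5.0, -2.0, 100.0, rng=random.Random(42))
```

Seeding the generator, as above, keeps perturbed scenario runs reproducible, which matters when the same scenario must be replayed across test campaigns.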
CP datasets from real-world recordings (incl. infrastructure data): DAIR-V2X, OpenDAIR-V2X and the soon-to-be-released V2X-Seq [15] are the first real-world datasets for research on V2X AD. They comprise image data and LiDAR point-cloud data from different observers. Notably, they support early-fusion and late-fusion CP methods, with feature-fusion support planned.
Notably, in the initial OpenCDA benchmark, metrics for CAD safety assessment are proposed, but metrics focused on CP are missing. Metrics for perception or for CP based on sensor-data late-fusion methods are provided in OPV2V, V2X-Sim and V2X-ViT [14], depending on the CP algorithms in use. Still, as argued in [6], metrics and criteria that explicitly distinguish safety-relevant from safety-irrelevant perception errors are still under-explored.
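One simple starting point for separating safety-relevant from safety-irrelevant errors is a distance-based relevance filter around the ego vehicle. The sketch below is a toy stand-in for the richer criteria argued for in [6]; the radius value and function name are hypothetical:

```python
import math

def is_safety_relevant(obj_xy, ego_xy, radius=30.0):
    # Flag a perception error as safety-relevant when the affected
    # object lies within `radius` metres of the ego vehicle.
    # The fixed radius is a placeholder; real criteria would also
    # consider ego speed, heading and predicted paths.
    return math.dist(obj_xy, ego_xy) <= radius

# Two missed detections: one close to the ego vehicle, one far away.
misses = [(5.0, 3.0), (80.0, 40.0)]
relevant = [m for m in misses if is_safety_relevant(m, (0.0, 0.0))]
```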
Fig. 1.
Proposed CP simulation framework conceptual architecture with identified subsystems. Images on top are captured from ICCS customized versions of CARLA v0.9.4. CARLA is open-source software available at https://carla.org.

3 CP in an AD Scenario-Based Virtual Validation Pipeline

Starting with the three core subsystems, namely the task manager, the simulator and the simulation model validation, identified in the SUNRISE harmonized V&V simulation framework conceptual architecture [16], the System-under-Test (SuT) and problem formulation for the CP assessment task are discussed hereafter with the help of the diagram of Fig. 1.
In our setting, the SuT is the local or distributed CP subsystem of the CAV under test, as part of two possible CP system configurations, in line with [17]:
  • a) Multi-local CP system: allows sparsely distributed agents to form a global view on a common spatially distributed problem without any direct access to global knowledge, based only on a combination of locally perceived information. The main question is how to share and combine the estimated information to achieve the most precise global estimate in the least possible time.
  • b) Global and multi-local CP system: allows sparsely distributed agents to form a global view on a common spatially distributed problem based on a combination of locally perceived information and (partially available) global knowledge provided by a monitoring sensor-equipped infrastructure node.

Studying CP from a perception point of view implies that the focus is on object detection, data association from multiple observers and scene understanding tasks, and that networking aspects can be omitted from the present study for reasons of simplicity (no network co-simulation is employed; ETSI-like CP messages are assumed available, with frequency/delay that can vary and with encoding that can be customized to the perception task at hand). Realistic traffic generation (through a traffic simulator) can also be omitted in the first proofs-of-concept, where multi-agent scenarios may include at most three virtual traffic agents sharing the same urban area. SOTIF and cybersecurity considerations for CP imply the consideration of perception uncertainty created by fusing information from multiple moving observers, as well as the consideration of compromised information (injected by a remote attacker). In current approaches, the CP problem is commonly addressed from the subject vehicle's perspective. However, a system-of-systems approach shall be considered in the near future for holistically tackling CP SOTIF/FuSA aspects.
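For the multi-local configuration, one well-known answer to "how to combine estimates without access to global knowledge" is covariance intersection, which remains consistent even when the cross-correlation between agents' estimates is unknown (e.g., when agents re-share already-fused information). A minimal sketch, not taken from the chapter's Bayesian module:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    # Fuse two estimates (mean x, covariance P) whose cross-correlation
    # is unknown. omega in [0, 1] weights the sources; in practice it
    # is often chosen to minimise the trace of the fused covariance.
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P_inv = omega * P1_inv + (1.0 - omega) * P2_inv
    P = np.linalg.inv(P_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# Two agents track the same object's 2D position with different accuracy.
x, P = covariance_intersection(
    np.array([1.0, 2.0]), np.eye(2) * 0.5,
    np.array([1.2, 1.8]), np.eye(2) * 1.0)
```

Unlike naive inverse-variance fusion, covariance intersection never claims more certainty than its inputs justify, which is the conservative behaviour one wants when agents may be echoing each other's data.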

4 CP High-Level Validation Requirements

The goal of this section is to propose high-level validation requirements for urban CP scenario-based testing.

4.1 Scenario Generation

Support for multi-agent scenarios is required, where mobile and static traffic agents can be a) non-connected, b) connected and transmitting their state information, or c) connected and transmitting their state and perception information. For complex urban ODDs, scenarios shall include a sufficiently large number of traffic agents, e.g., vehicles, parked buses and vulnerable road users. Additionally, it should be possible to add smart infrastructure nodes, with or without sensors, as static "connectivity" agents.
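The three agent connectivity classes above can be captured in a small scenario-description model. The sketch below uses hypothetical type and field names to show how a scenario generator might tag agents and select which ones act as CP sources:

```python
from dataclasses import dataclass
from enum import Enum

class ConnectivityLevel(Enum):
    # The three agent classes required for CP scenarios.
    NON_CONNECTED = 0          # a) no V2X capability
    STATE_ONLY = 1             # b) transmits own state
    STATE_AND_PERCEPTION = 2   # c) transmits state + perceived objects

@dataclass
class TrafficAgent:
    agent_id: str
    agent_type: str            # e.g. "vehicle", "parked_bus", "vru", "rsu"
    is_static: bool
    connectivity: ConnectivityLevel

scenario_agents = [
    TrafficAgent("cav_1", "vehicle", False, ConnectivityLevel.STATE_AND_PERCEPTION),
    TrafficAgent("bus_7", "parked_bus", True, ConnectivityLevel.NON_CONNECTED),
    TrafficAgent("rsu_0", "rsu", True, ConnectivityLevel.STATE_AND_PERCEPTION),
]
cp_sources = [a for a in scenario_agents
              if a.connectivity is ConnectivityLevel.STATE_AND_PERCEPTION]
```

Varying the share of each connectivity class per scenario is also how the CAV/AV penetration-rate conditions of Sect. 4.2 can be parameterized.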
A small list of urban safety-related test scenarios in three target operational domains, namely urban roads, intersections and roundabouts, is outlined in D7.1 [18].

4.2 Metrics and Considerations for CP Evaluation

With respect to object-level CP performance, a set of Key Performance Indicators (KPIs) can be considered, such as (a) fused object localization performance, (b) fused object detection performance, (c) fused object classification performance, (d) fused object clutter rate, (e) fusion runtime performance, (f) robustness to perception errors, and (g) robustness against CP message delays. All KPIs shall be measured under various CAV/AV penetration rates. Models for object-level perception data uncertainties generated by virtual road agents or infrastructure need to be assessed against perception error models from real sensors across various environmental conditions.
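KPI (b), fused object detection performance, can be measured by matching fused detections to ground truth, e.g., greedily by intersection-over-union. The sketch below is a minimal version of such a metric, not the evaluation protocol of any specific benchmark:

```python
def iou(a, b):
    # Axis-aligned IoU of boxes given as (x_min, y_min, x_max, y_max).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_kpis(detections, ground_truth, iou_thr=0.5):
    # Greedy IoU matching -> precision/recall for fused detections.
    unmatched_gt = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched_gt, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= iou_thr:
            unmatched_gt.remove(best)
            tp += 1
    precision = tp / len(detections) if detections else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    return precision, recall

# One fused detection overlaps a ground-truth box; the other is clutter.
p, r = detection_kpis([(0, 0, 2, 2), (10, 10, 12, 12)],
                      [(0.1, 0.1, 2.1, 2.1), (5, 5, 7, 7)])
```

Recomputing the same metric per penetration-rate condition, and separately on safety-relevant ground-truth objects only, would connect this KPI to the relevance criteria discussed in Sect. 2.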
Extra KPIs for simulation validity may include the correctness of the assumed perception uncertainty/error models. In that regard, a set of KPIs can be used when a real vehicle is part of the simulation scenario (hybrid testing setup) in order (including but not limited to) to i) assess the runtime capability of the hybrid setup, e.g., by measuring the vehicle-to-simulation-environment uni-directional or bi-directional real-time information-sharing latency (ms), and ii) compare synthetic object-detection noise in simulation against proving-ground tests with a vehicle equipped with a perception platform.

4.3 Test Environment and Tools

Three test environments are considered for CP testing and validation:
Physical testing: Python or ROS modules and custom visualization, ideally using real-world data.
Simulation only: all data from all cooperative nodes are simulated. CARLA is a good candidate for the environment/driving simulation, as it binds well with the external ROS modules needed for CP synthetic data generation.
Hybrid setup: a CAV tested physically in a controlled field area that is mapped into CARLA, exchanging real-time data with a (cloud) simulation environment (digital twin). MATLAB RoadRunner is a good candidate for map and scenario generation.

5 Challenges and Conclusions

In the present work, collective perception validation challenges were discussed in detail based on a comprehensive state-of-the-art review. Moreover, a preliminary set of high-level safety validation requirements for collective perception scenario-based testing in simulation was derived, focusing on urban environments.
Future work will include applying these requirements to assess a Bayesian-based CP module supporting virtual agents that can exchange ETSI CPM information, and the corresponding CP system safety validation in CARLA.

Acknowledgment

The work of this paper has been done in the context of the SUNRISE project, which is co-funded by the European Commission's Horizon Europe Research and Innovation Programme under grant agreement number 101069573. Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or the European Climate, Infrastructure and Environment Executive Agency (CINEA). Neither the European Union nor the granting authority can be held responsible for them.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Title
Collective Perception Virtual Safety Validation in Urban Environments: Scenarios, Tools, Metrics
Authors
Anastasia Bolovinou
Ilias Panagiotopoulos
Athanasios Ballis
Angelos Amditis
Copyright Year
2026
DOI
https://doi.org/10.1007/978-3-032-06763-0_113
1. Allidina, T., Deka, L., Paluszczyszyn, D., Elizondo, D.: Selecting non-line of sight critical scenarios for connected autonomous vehicle testing. Software 1(3), 244–264 (2022)
2. Dorri, A., Kanhere, S.S., Jurdak, R.: Multi-agent systems: a survey. IEEE Access 6, 28573–28593 (2018)
3. Caillot, A., Ouerghi, S., Vasseur, P., Boutteau, R., Dupuis, Y.: Survey on cooperative perception in an automotive context. IEEE Trans. Intell. Transp. Syst. 23(9), 14204–14223 (2022)
4. Han, Y., Zhang, H., Li, H., Jin, Y., Lang, C., Li, Y.: Collaborative perception in autonomous driving: methods, datasets and challenges. arXiv preprint arXiv:2301.06262 (2023)
5. Malik, S., Khan, M.J., Khan, M.A., El-Sayed, H.: Collaborative perception - the missing piece in realizing fully autonomous driving. Sensors 23(18), 7854 (2023)
6. Hoss, M., Scholtes, M., Eckstein, L.: A review of testing object-based environment perception for safe automated driving. Autom. Innov. 5, 223–250 (2022)
7. Schiegg, F.A., Llatser, I., Bischoff, D., Volk, G.: Collective perception: a safety perspective. Sensors 21(1), 159 (2021)
8. Technical Committee ISO/TC 22, Road vehicles, Subcommittee SC 32, Electrical and electronic components and general system aspects: ISO/PAS 21448:2019 Road vehicles - Safety of the intended functionality (2019)
9. CARLA, open-source simulator for autonomous driving research. https://carla.org/
10. Xu, R., Guo, Y., Han, X., Xia, X., Xiang, H., Ma, J.: OpenCDA: an open cooperative driving automation framework integrated with co-simulation. In: 24th IEEE International Conference on Intelligent Transportation Systems (ITSC) Proceedings, pp. 1155–1162, Indianapolis, USA (2021)
11. Xu, R., Xiang, H., Han, X., Xia, X., Meng, Z., Chen, C.J., Correa-Jullian, C., Ma, J.: The OpenCDA open-source ecosystem for cooperative driving automation research. IEEE Trans. Intell. Veh., 1–13 (2023)
12. Xu, R., Xiang, H., Xia, X., Han, X., Li, J., Ma, J.: OPV2V: an open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. In: 2022 International Conference on Robotics and Automation (ICRA) Proceedings, pp. 2583–2589, Philadelphia, USA (2022)
13. Li, Y., et al.: V2X-Sim: multi-agent collaborative perception dataset and benchmark for autonomous driving. IEEE Robot. Autom. Lett. 7(4), 10914–10921 (2022)
14. Xu, R., Xiang, H., Tu, Z., Xia, X., Yang, M.H., Ma, J.: V2X-ViT: vehicle-to-everything cooperative perception with vision transformer. In: Computer Vision – ECCV 2022, Lecture Notes in Computer Science, vol. 13699. Springer, Cham (2022)
16.
17. Bai, Z., Wu, G., Barth, M.J., Liu, Y., Sisbot, E.A., Oguchi, K., Huang, Z.: A survey and framework of cooperative perception: from heterogeneous singleton to hierarchical cooperation. arXiv preprint arXiv:2208.10590 (2022)