Real-time multi-sensor fusion for object detection and localization in self-driving cars: A Carla simulation
Email:
daotoan@utc.edu.vn
Keywords:
Camera-LiDAR Fusion, Real-Time, Object Detection, Object Localization, Self-Driving Cars, CARLA
Abstract
Research on integrating camera and LiDAR in self-driving car systems is of significant scientific interest in the context of Industry 4.0 and the growing application of artificial intelligence. The work contributes to improving the accuracy of recognizing and localizing objects in complex environments, which is an important foundation for further research on optimizing response time and improving the safety of self-driving systems. This study proposes a real-time multi-sensor data fusion method, termed "Multi-Layer Fusion," for object detection and localization in autonomous vehicles. The fusion process leverages pixel-level and feature-level integration, ensuring seamless data synchronization and robust performance under adverse conditions. Experiments were conducted on the CARLA simulator. The results show that the method significantly improves environmental perception and object localization, achieving a mean detection accuracy of 95% and a mean distance error of 0.54 meters across diverse conditions, with real-time performance at 30 FPS. These results demonstrate its robustness in both ideal and adverse scenarios.
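To make the pixel-level stage of camera-LiDAR fusion concrete, the sketch below shows one way it can be set up with the CARLA 0.9.x Python API: LiDAR points are projected into the RGB camera image, and the points falling inside a 2D detection box are used to estimate the object's distance. This is a minimal illustration under assumed settings, not the paper's "Multi-Layer Fusion" implementation; `lidar_sensor` and `camera_sensor` stand for already-spawned CARLA sensor actors, and `bbox` for a box produced by any 2D detector (e.g., YOLOv4 [10]).

```python
# Minimal sketch of pixel-level camera-LiDAR fusion in CARLA (assumed >= 0.9.10,
# where each LiDAR point is stored as x, y, z, intensity). Illustrative only.
import numpy as np


def camera_intrinsics(width, height, fov_deg):
    """Pinhole intrinsic matrix for a CARLA RGB camera."""
    focal = width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    return np.array([[focal, 0.0, width / 2.0],
                     [0.0, focal, height / 2.0],
                     [0.0, 0.0, 1.0]])


def project_lidar_to_image(lidar_meas, lidar_sensor, camera_sensor, K):
    """Project LiDAR points into camera pixels; return (u, v, depth) arrays."""
    # Raw LiDAR buffer: (x, y, z, intensity) in the LiDAR sensor's local frame.
    pts = np.frombuffer(lidar_meas.raw_data, dtype=np.float32).reshape(-1, 4)
    xyz1 = np.hstack([pts[:, :3], np.ones((pts.shape[0], 1))])

    # LiDAR frame -> world frame -> camera frame (Unreal convention:
    # x forward, y right, z up).
    lidar_to_world = np.array(lidar_sensor.get_transform().get_matrix())
    world_to_cam = np.array(camera_sensor.get_transform().get_inverse_matrix())
    cam_pts = (world_to_cam @ lidar_to_world @ xyz1.T)[:3]

    # Reorder axes to the usual camera convention (x right, y down, z forward).
    cam_std = np.array([cam_pts[1], -cam_pts[2], cam_pts[0]])

    # Keep points in front of the camera, then apply the pinhole model.
    front = cam_std[2] > 0.1
    cam_std = cam_std[:, front]
    uvw = K @ cam_std
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return u, v, cam_std[2]


def distance_from_bbox(u, v, depth, bbox):
    """Median depth of the LiDAR points falling inside a 2D detection box."""
    x1, y1, x2, y2 = bbox
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return float(np.median(depth[inside])) if inside.any() else None
```

Running CARLA in synchronous mode (`settings.synchronous_mode = True` with a fixed `fixed_delta_seconds`) makes the camera and LiDAR callbacks from the same `world.tick()` share a frame number, which is one straightforward way to keep the two streams aligned before fusing them with the detector output.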
References
[1]. Allied Market Research, Autonomous Vehicle Market by Level of Automation, Component, Application, and Region: Global Opportunity Analysis and Industry Forecast, 2025–2035, (2021). https://www.alliedmarketresearch.com/autonomous-vehicle-market
[2]. S. Grigorescu, B. Trasnea, T. Cocias, G. Macesanu, A survey of deep learning techniques for autonomous driving, IEEE Transactions on Intelligent Transportation Systems, 21 (2020) 909-922. https://ieeexplore.ieee.org/document/8793489
[3]. Y. Kim, J. Kim, Multi-Object Tracking with Camera-LiDAR Fusion for Autonomous Vehicles: A Survey, IEEE Access, 10 (2022) 38819-38835. https://ieeexplore.ieee.org/document/10591139
[4]. Waymo, Our Journey, Waymo Official Website. https://waymo.com
[5]. Tesla, Inc., Autopilot, Tesla Official Website. https://www.tesla.com/autopilot
[6]. T. T. H. T. Nguyen, T. T. Dao, T. B. Ngo, V. A. Phi, Self-Driving Car Navigation with Single-Beam LiDAR and Neural Networks Using JavaScript, IEEE Access, (2024). https://doi.org/10.1109/ACCESS.2024.3511572
[7]. X. Chen et al., Multi-view 3D Object Detection Network for Autonomous Driving, IEEE CVPR, (2017). https://doi.org/10.1109/CVPR.2017.691
[8]. J. Ku et al., Joint 3D proposal generation and object detection from view aggregation, IEEE IROS, (2017). https://doi.org/10.48550/arXiv.1712.02294
[9]. A. Geiger, P. Lenz, R. Urtasun, Are we ready for autonomous driving? The KITTI vision benchmark suite, IEEE CVPR, (2012). https://doi.org/10.1109/CVPR.2012.6248074
[10]. A. Bochkovskiy et al., YOLOv4: Optimal Speed and Accuracy of Object Detection, (2020). https://doi.org/10.48550/arXiv.2004.10934
[11]. A. Dosovitskiy et al., CARLA: An Open Urban Driving Simulator, arXiv, (2017). https://arxiv.org/abs/1711.03938
[12]. CARLA, The CARLA Autonomous Driving Challenge, (2023). https://leaderboard.carla.org/challenge/

Received
10/12/2024
Revised
06/01/2025
Accepted
10/01/2025
Published
15/01/2025
Type
Research Article
How to Cite
Trung Thi Hoa Trang, N., Thanh Toan, D., & Thanh Binh, N. (2025). Real-time multi-sensor fusion for object detection and localization in self-driving cars: A Carla simulation. Transport and Communications Science Journal, 76(1), 64-79. https://doi.org/10.47869/tcsj.76.1.6