Lidar and camera data fusion in self-driving cars
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1239-1253
Sensor fusion is one of the important solutions to the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that models with high environment perception cannot perform in a real-time manner. Our article is concerned with camera and Lidar data fusion for better environment perception in self-driving cars, considering three main classes: cars, cyclists, and pedestrians. We fuse the output of a 3D detector model that takes its input from the Lidar with the output of a 2D detector that takes its input from the camera, to give a better perception output than either of them separately, while ensuring real-time operation. We addressed the problem using a 3D detector model (Complex-YOLOv3) and a 2D detector model (YOLOv3), wherein we applied an image-based fusion method that combines Lidar and camera information through a fast and efficient late fusion technique, discussed in detail in this article. We used the mean average precision (mAP) metric to evaluate the object detection models and to compare the proposed approach against the individual detectors. Finally, we show results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16-channel Lidar and Leopard USB cameras. We used Python to develop the algorithm and validated it on the KITTI dataset, and ROS 2 with C++ to verify it on the dataset obtained from our hardware configuration, which showed that the proposed approach gives good results and works efficiently in practical situations in real time.
Keywords: autonomous vehicles, self-driving cars, sensor fusion, Lidar, camera, late fusion, point cloud, images, KITTI dataset, hardware verification.
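The late fusion step described in the abstract can be illustrated with a minimal sketch. The helper names and the matching rule below (project the Lidar detector's 3D boxes into the image with the camera projection matrix, match them to camera 2D boxes of the same class by IoU, and keep the higher confidence on a match) are assumptions made for illustration, not the authors' exact implementation:

```python
# Minimal late-fusion sketch (hypothetical helper names, not the authors' exact code):
# 3D Lidar detections are projected into the image plane and merged with 2D camera
# detections of the same class whenever the projected and detected boxes overlap enough.
import numpy as np

def project_to_image(corners_3d, P):
    """Project 3D box corners (8x3, camera frame) with a 3x4 projection matrix P
    and return the enclosing axis-aligned 2D box [x1, y1, x2, y2]."""
    pts = np.hstack([corners_3d, np.ones((8, 1))]) @ P.T   # homogeneous points -> 8x3
    pts = pts[:, :2] / pts[:, 2:3]                         # perspective divide
    x1, y1 = pts.min(axis=0)
    x2, y2 = pts.max(axis=0)
    return np.array([x1, y1, x2, y2])

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def late_fusion(dets_3d, dets_2d, P, iou_thr=0.5):
    """dets_3d: list of (corners_3d, class, score) from the Lidar detector;
    dets_2d: list of ([x1, y1, x2, y2], class, score) from the camera detector."""
    fused = []
    for corners, cls3, s3 in dets_3d:
        box3 = project_to_image(corners, P)
        score = s3
        for box2, cls2, s2 in dets_2d:
            if cls2 == cls3 and iou(box3, box2) >= iou_thr:
                score = max(score, s2)   # both sensors agree -> keep the higher confidence
        fused.append((box3, cls3, score))
    return fused
```

On KITTI, the 3x4 matrix P would typically be the left color camera projection matrix supplied with each frame's calibration file. Unmatched 3D detections are simply kept with their original Lidar confidence here, which is only one of several possible fusion policies.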
The reliability of automated control systems and the safety of autonomous vehicles rest on the assumption that, if the computer vision system installed on the vehicle can identify the objects in its field of view and the control system can reliably estimate the intent and predict the behavior of each of those objects, then the vehicle can safely operate without a driver. But what about objects that are not visible?
In this article we consider a problem with two parts: (1) a static one (concerning potential blind zones) and (2) a dynamic, real-time one (concerning the identification of objects in blind zones and informing road users about such objects). The problem is considered in the context of urban intersections.
Keywords: autonomous vehicles, connected vehicles, connected intersections, blind zones, I2V, DSRC.
Intersections present a very demanding environment for all the parties involved. Challenges arise from complex vehicle trajectories; occasional absence of lane markings to guide vehicles; split phases that prevent determining who has the right of way; invisible vehicle approaches; illegal movements; and simultaneous interactions among pedestrians, bicycles and vehicles. Unsurprisingly, most demonstrations of AVs are on freeways; but the full potential of automated vehicles (personalized transit, driverless taxis, delivery vehicles) can only be realized when AVs can sense the intersection environment to efficiently and safely maneuver through intersections.
AVs are equipped with an array of on-board sensors to interpret and suitably engage with their surroundings. Advanced algorithms use the data streams from these sensors to support the movement of autonomous vehicles through a wide range of traffic and climatic conditions. However, there exist situations in which additional information about the upcoming traffic environment would be beneficial to better inform the vehicle's built-in tracking and navigation algorithms. A potential source of such information is in-pavement sensors at an intersection, which can be used to differentiate between motorized and non-motorized modes and to track road user movements and interactions. This type of information, in addition to signal phasing, can be provided to the AV as it approaches an intersection and incorporated into an improved prior for the probabilistic algorithms used to classify and track movement in the AV's field of vision.
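As a rough illustration of how such an intersection-supplied prior could enter the AV's on-board probabilistic classification, the sketch below folds an external report into the detector's class likelihood via Bayes' rule; the two-class setup and all numbers are assumptions made for the example, not results from the paper:

```python
# Hypothetical sketch: fold an intersection-supplied prior into the AV's on-board
# classification via Bayes' rule. The class set and all numbers are invented for
# the example and are not taken from the paper.
def posterior(likelihood, prior):
    """Normalize likelihood[c] * prior[c] over all classes c."""
    unnorm = {c: likelihood[c] * prior[c] for c in likelihood}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# The on-board detector is unsure whether a partly occluded track is a pedestrian
# or a cyclist ...
likelihood = {"pedestrian": 0.55, "cyclist": 0.45}
# ... while in-pavement sensors at the intersection report a crossing pedestrian.
i2v_prior = {"pedestrian": 0.9, "cyclist": 0.1}

print(posterior(likelihood, i2v_prior))   # pedestrian probability rises to about 0.92
```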
This paper is concerned with the situation in which there are objects that are not visible to the AV. The driving context is that of an intersection, and the lack of visibility is due to other vehicles that obstruct the AV's view, creating blind zones. Such obstruction is commonplace at intersections.
Our objectives are (a minimal geometric sketch of the blind-zone check is given after this list):
1) to inform a vehicle crossing the intersection about its potential blind zones;
2) to inform the vehicle about the presence of agents (other vehicles, bicyclists or pedestrians) in those blind zones.
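The static part of the problem reduces to simple occlusion geometry. Assuming the AV and the outline of an obstructing vehicle are known in a common planar frame (the coordinates below are invented for the example), an agent is flagged as being in a blind zone if it lies farther from the AV than the occluder and inside the angular sector the occluder subtends at the AV:

```python
# Hypothetical 2D sketch of the static blind-zone check at an intersection: an agent
# is flagged as hidden from the AV if it lies farther away than an obstructing vehicle
# and inside the angular sector that vehicle subtends at the AV (no angle wrap-around
# is handled here). All coordinates are invented for the example.
import math

def angle_to(origin, point):
    return math.atan2(point[1] - origin[1], point[0] - origin[0])

def in_blind_zone(av, occluder_corners, agent):
    """av, agent: (x, y) positions; occluder_corners: outline points of the occluding vehicle."""
    angles = [angle_to(av, c) for c in occluder_corners]
    a_agent = angle_to(av, agent)
    inside_sector = min(angles) <= a_agent <= max(angles)
    behind_occluder = math.dist(av, agent) > max(math.dist(av, c) for c in occluder_corners)
    return inside_sector and behind_occluder

av = (0.0, 0.0)                                              # AV at the stop line
truck = [(8.0, 2.0), (8.0, 5.0), (11.0, 2.0), (11.0, 5.0)]   # outline of a stopped truck
pedestrian = (14.0, 5.0)                                     # road user behind the truck
print(in_blind_zone(av, truck, pedestrian))                  # True: hidden from the AV
```

The dynamic, real-time part would then amount to running such a check for each approaching vehicle against the road users currently tracked at the intersection and communicating the result to the vehicle over I2V, in line with the objectives above.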