Lidar and camera data fusion in self-driving cars
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1239-1253

Sensor fusion is one of the key solutions to the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that models with high environment-perception quality cannot run in real time. Our article is concerned with camera and Lidar data fusion for better environment perception in self-driving cars, considering three main classes: cars, cyclists and pedestrians. We fuse the output of a 3D detector that takes its input from the Lidar with the output of a 2D detector that takes its input from the camera, to give better perception output than either of them separately while ensuring real-time operation. We addressed the problem using a 3D detector model (Complex-YOLOv3) and a 2D detector model (YOLOv3), applying an image-based fusion method that combines Lidar and camera information with a fast and efficient late-fusion technique, discussed in detail in this article. We used the mean average precision (mAP) metric to evaluate our object detection models and to compare the proposed approach with them. Finally, we show results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16 Lidar and Leopard USB cameras. We used Python to develop our algorithm and validated it on the KITTI dataset; we then used ROS 2 along with C++ to verify the algorithm on a dataset obtained from our hardware configuration, which showed that the proposed approach gives good results and works efficiently in practical situations in real time.

Keywords: autonomous vehicles, self-driving cars, sensor fusion, Lidar, camera, late fusion, point cloud, images, KITTI dataset, hardware verification.
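The late-fusion idea described above can be sketched in a few lines: project the Lidar detector's 3D boxes onto the image plane, match them to camera detections by intersection-over-union, and merge matched pairs while passing unmatched detections through. This is a minimal illustration of the general technique, not the paper's implementation; the names (`Detection`, `late_fuse`) and the max-score merge rule are assumptions.

```python
# Minimal sketch of image-plane late fusion of 2D (camera) and projected
# 3D (Lidar) detections. Illustrative only; thresholds and names assumed.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple      # (x1, y1, x2, y2) in image pixels
    score: float    # detector confidence
    label: str      # "car", "cyclist" or "pedestrian"

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def late_fuse(dets_2d, dets_3d_projected, iou_thr=0.5):
    """Match camera detections with Lidar detections projected onto the
    image plane; matched pairs keep the higher score, unmatched ones pass
    through so neither sensor's detections are lost."""
    fused, used = [], set()
    for d2 in dets_2d:
        best, best_iou = None, iou_thr
        for j, d3 in enumerate(dets_3d_projected):
            if j in used or d3.label != d2.label:
                continue
            o = iou(d2.box, d3.box)
            if o >= best_iou:
                best, best_iou = j, o
        if best is not None:
            used.add(best)
            d3 = dets_3d_projected[best]
            fused.append(Detection(d2.box, max(d2.score, d3.score), d2.label))
        else:
            fused.append(d2)
    fused.extend(d for j, d in enumerate(dets_3d_projected) if j not in used)
    return fused
```

Because the fusion acts only on detector outputs (boxes and scores), its cost is negligible next to the detectors themselves, which is what makes late fusion attractive for real-time operation.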
-
Review of algorithmic solutions for deployment of neural networks on lite devices
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1601-1619

In today's technology-driven world, lite devices such as Internet of Things (IoT) devices and microcontrollers (MCUs) are becoming increasingly common. These devices are more energy-efficient and affordable, but often come with reduced capabilities compared to standard versions, such as limited memory and processing power. Modern machine learning models can contain millions of parameters, resulting in a large memory footprint. This complexity not only makes it difficult to deploy large models on resource-constrained devices, but also increases the risk of latency and inefficiency in processing, which is critical in cases where real-time responses are required, such as autonomous driving or medical diagnostics.
In recent years, neural networks have seen significant advances in model optimization techniques that help deployment and inference on these small devices. This narrative review offers a thorough examination of the progress and latest developments in neural network optimization, focusing on key areas such as quantization, pruning, knowledge distillation, and neural architecture search. It examines how these algorithmic solutions have evolved and how new approaches have improved upon existing techniques, making neural networks more efficient. The review is intended for machine learning researchers, practitioners, and engineers who may be unfamiliar with these methods but wish to explore the available techniques. It highlights ongoing research in optimizing networks to achieve better performance, lower energy consumption, and faster training times, all of which play an important role in the continued scalability of neural networks. Additionally, it identifies gaps in current research and provides a foundation for future studies aimed at enhancing the applicability and effectiveness of existing optimization strategies.

Keywords: quantization, neural architecture search, knowledge distillation, pruning, reinforcement learning, model compression.
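Of the techniques the review surveys, quantization is the simplest to illustrate: weights are mapped from floats to small integers and stored with a single scale factor, shrinking the memory footprint (e.g. 4x for float32 to int8). The sketch below shows symmetric per-tensor 8-bit post-training quantization in pure Python; it is a generic illustration under assumed conventions, not any particular framework's API.

```python
# Minimal sketch of symmetric per-tensor post-training quantization.
# Weights are mapped to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]
# and reconstructed as q * scale; the rounding error is at most scale/2.

def quantize(weights, num_bits=8):
    """Return integer codes and the float scale for a list of weights."""
    qmax = 2 ** (num_bits - 1) - 1
    # Scale so the largest magnitude maps to qmax (1.0 guards all-zero input).
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from integer codes."""
    return [x * scale for x in q]
```

Real deployments layer more on top of this (per-channel scales, zero points for asymmetric ranges, quantization-aware training), but the storage saving and the bounded reconstruction error are already visible in this toy version.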
-
The reliability of automated driving systems (ADS) and the safety of autonomous vehicles rest on the assumption that if the computer vision system mounted on the vehicle can identify the objects in its field of view, and the ADS can reliably estimate the intent and predict the behavior of each of these objects, then the vehicle can safely drive itself. But what about objects that are not visible?
In this article we consider a problem with two parts: (1) a static part, concerning potential blind zones, and (2) a real-time dynamic part, concerning the identification of objects in blind zones and the notification of road users about such objects. The problem is considered in the context of urban intersections.
Keywords: autonomous vehicles, connected vehicles, connected intersections, blind zones, I2V, DSRC.

Intersections present a very demanding environment for all the parties involved. Challenges arise from complex vehicle trajectories; occasional absence of lane markings to guide vehicles; split phases that prevent determining who has the right of way; invisible vehicle approaches; illegal movements; and simultaneous interactions among pedestrians, bicycles and vehicles. Unsurprisingly, most demonstrations of AVs are on freeways; but the full potential of automated vehicles — personalized transit, driverless taxis, delivery vehicles — can only be realized when AVs can sense the intersection environment to efficiently and safely maneuver through intersections.
AVs are equipped with an array of on-board sensors to interpret and suitably engage with their surroundings. Advanced algorithms utilize data streams from such sensors to support the movement of autonomous vehicles through a wide range of traffic and climatic conditions. However, there exist situations in which additional information about the upcoming traffic environment would be beneficial to better inform the vehicles' in-built tracking and navigation algorithms. A potential source of such information is in-pavement sensors at an intersection, which can be used to differentiate between motorized and non-motorized modes and to track road user movements and interactions. This type of information, in addition to signal phasing, can be provided to the AV as it approaches an intersection and incorporated into an improved prior for the probabilistic algorithms used to classify and track movement in the AV's field of vision.
This paper is concerned with the situation in which there are objects that are not visible to the AV. The driving context is that of an intersection, and the lack of visibility is due to other vehicles that obstruct the AV’s view, leading to the creation of blind zones. Such obstruction is commonplace in intersections.
Our objectives are:
1) to inform a vehicle crossing the intersection about its potential blind zones;
2) to inform the vehicle about the presence of agents (other vehicles, bicyclists or pedestrians) in those blind zones.
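The static part of the problem can be pictured as a shadow computation: from the AV's viewpoint, an occluding vehicle blocks a wedge of angles, and anything inside that wedge and farther away than the occluder is hidden. The sketch below is a simplified 2D illustration of that idea under stated assumptions (the occluder is given by its footprint corners and does not straddle the ±π angle wraparound); the function names are ours, not the authors'.

```python
# Simplified 2D blind-zone test: an agent is hidden from the AV if it lies
# inside the angular wedge blocked by an occluder's footprint and is farther
# from the AV than the occluder. Assumes the wedge does not cross +/- pi.
import math

def angular_interval(av, occluder_corners):
    """Angle range (as seen from the AV) blocked by an occluder footprint."""
    angles = [math.atan2(y - av[1], x - av[0]) for x, y in occluder_corners]
    return min(angles), max(angles)

def in_blind_zone(av, occluder_corners, agent):
    """True if the agent point falls inside the occluder's shadow wedge."""
    lo, hi = angular_interval(av, occluder_corners)
    ang = math.atan2(agent[1] - av[1], agent[0] - av[0])
    if not (lo <= ang <= hi):
        return False
    # Hidden only if farther away than the nearest occluder corner.
    occ_dist = min(math.hypot(x - av[0], y - av[1]) for x, y in occluder_corners)
    return math.hypot(agent[0] - av[0], agent[1] - av[1]) > occ_dist
```

The dynamic part then amounts to running such a test against agents detected by intersection-side sensors and broadcasting the result (e.g. over I2V) to the approaching vehicle.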
The journal is indexed in Scopus.
The full-text version of the journal is also available on the website of the eLIBRARY.RU scientific electronic library.
The journal is part of the Russian Index of Scientific Citation.
The journal is included in the Russian Science Citation Index (RSCI) database on the Web of Science platform.