Deep Learning Based 3D Perception for Autonomous Driving
Project code: PN-III-P4-PCE-2021-1134
The aim of this project is to develop new computational models for 3D perception, with applications in the field of Autonomous Mobile Systems, based on artificial vision and especially on deep learning techniques. 3D perception is the process of organizing, identifying, interpreting, and understanding sensory information represented as a 3D point cloud; the associated activities include semantic segmentation of the 3D point cloud, object detection and representation through 3D cuboids, object tracking, and motion forecasting.

The main problems to be solved are: multiple redundancy at the level of the sensory system; the use of artificial intelligence algorithms and solutions based on deep learning; multiple redundancy at the algorithmic level; independent perception solutions for each type of sensor; and fusion of geometric, semantic, motion, and thermal data at different levels of granularity.

Based on the study of the current state of the art and the experience gained, we propose the following objectives: innovative key technologies for perception based on deep learning; independent 3D perception solutions based on deep learning; a solution based on multi-sensor fusion at different levels of granularity; and configuring a demonstration vehicle for data acquisition, testing, comparison, and evaluation.
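To make the perception tasks listed above more concrete, the following is a minimal, hedged sketch (not the project's actual code) of the data structures such a pipeline typically produces: a point cloud with per-point semantic labels, 3D cuboid detections with track identities, and a simple constant-velocity motion forecast. All class and function names (SegmentedPointCloud, Cuboid3D, forecast_positions) are hypothetical and chosen only for illustration.

    # Illustrative sketch of 3D perception outputs, assuming only NumPy.
    from dataclasses import dataclass, field
    import numpy as np


    @dataclass
    class SegmentedPointCloud:
        """3D point cloud with one semantic class label per point."""
        points: np.ndarray   # (N, 3) array of x, y, z coordinates
        labels: np.ndarray   # (N,) array of integer class IDs


    @dataclass
    class Cuboid3D:
        """Oriented 3D bounding box representing a detected, tracked object."""
        center: np.ndarray   # (3,) x, y, z of the box center
        size: np.ndarray     # (3,) length, width, height
        yaw: float           # heading angle around the vertical axis (rad)
        class_id: int        # semantic class of the object
        track_id: int        # identity maintained by the tracker
        velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))  # (3,) m/s


    def forecast_positions(cuboids, horizon_s, dt):
        """Constant-velocity motion forecast: future centers per track ID."""
        steps = np.arange(dt, horizon_s + dt, dt)  # prediction times, e.g. 0.5 s .. 3.0 s
        return {
            c.track_id: c.center[None, :] + steps[:, None] * c.velocity[None, :]
            for c in cuboids
        }


    if __name__ == "__main__":
        # Toy example: one tracked vehicle moving forward at 5 m/s.
        car = Cuboid3D(
            center=np.array([10.0, 2.0, 0.8]),
            size=np.array([4.5, 1.9, 1.6]),
            yaw=0.0,
            class_id=1,
            track_id=7,
            velocity=np.array([5.0, 0.0, 0.0]),
        )
        future = forecast_positions([car], horizon_s=3.0, dt=0.5)
        print(future[7])  # predicted centers at 0.5 s intervals up to 3 s

In the project itself, the segmentation labels, cuboids, and forecasts would be produced by deep learning models operating on fused sensor data; the sketch only fixes the shape of the intermediate representations these tasks exchange.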