Objectives


 

Objective 1

Cross-calibration of the sensors
The aim of this objective is a highly accurate estimation of the extrinsic parameters of each sensor relative to a single coordinate system (the ego-vehicle coordinate system). The stereovision sensor will be used as the reference. For the FIR camera and the LiDAR sensor, novel semi-automatic calibration methods will be proposed.
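The project's semi-automatic calibration methods are not detailed here; as a minimal illustrative sketch, one standard building block for extrinsic estimation is rigid alignment of corresponding 3D points (the Kabsch/Procrustes algorithm), e.g. target points seen both by the reference stereovision sensor and by the LiDAR. The function name and setup below are hypothetical.

```python
import numpy as np

def estimate_extrinsics(src, dst):
    """Estimate a rigid transform (R, t) such that dst ~= R @ src + t,
    given corresponding 3D points (Nx3) in the two sensor frames (Kabsch)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In practice the correspondences would come from a calibration target detected in both modalities, and the estimate would be refined by minimizing reprojection error.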
 

Objective 2

Multisensory data alignment and 3D spatio-temporal and appearance-based low-level representation
The aim is the pairwise registration of the 2D/3D sensory data, a low-level sensor fusion that provides a 3D spatio-temporal and appearance-based representation characterized by increased density, redundancy, information content and aggregation power. If dense 3D data from a sensor with known extrinsic parameters (e.g. stereo) is available, the registration can be performed by projecting the 3D points of that sensor onto the 2D image (e.g. of the FIR camera). Otherwise, a correlation approach based on epipolar geometry will be used.
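The projection step described above can be sketched with a standard pinhole camera model: 3D points are transformed into the target camera's frame with the calibrated extrinsics (R, t) and then mapped to pixels with the intrinsic matrix K. The function name and the example intrinsics are assumptions for illustration only.

```python
import numpy as np

def project_points(pts_3d, R, t, K):
    """Project Nx3 points from a 3D sensor's frame into another camera's image.
    R, t: extrinsics mapping the sensor frame into the camera frame.
    K: 3x3 intrinsic matrix of the target camera. Returns Nx2 pixel coords."""
    cam = (R @ pts_3d.T).T + t          # points in the camera frame
    uv = (K @ cam.T).T                  # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]       # perspective division

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```

A point on the optical axis projects to the principal point; each stereo 3D point projected this way can then be annotated with the FIR intensity at the resulting pixel.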
 

Objective 3

Enhanced multi-redundant obstacle detection and classification and object-level fusion
The aim is to extract relevant information from the multisensory data and use it to provide multi-redundant detection and classification of the objects in the scene. Both the raw sensory data and the 3D spatio-temporal and appearance-based representation will be used to produce multi-redundant object detections and classifications. Object-level fusion of the detections coming from the multiple modalities will provide better accuracy and higher confidence. The new detection and classification algorithms will be built on top of state-of-the-art machine learning methods.
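The fusion scheme itself is left open in the objective; one simple baseline for combining per-modality classifications of the same object is a naive-Bayes-style product of class posteriors, which assumes the sensors are conditionally independent given the class. This is only a hedged sketch of that idea, not the project's method.

```python
import numpy as np

def fuse_class_probs(prob_list):
    """Fuse per-sensor class posteriors (each a 1D array summing to 1)
    under a conditional-independence assumption: multiply and renormalize."""
    fused = np.prod(np.vstack(prob_list), axis=0)
    return fused / fused.sum()

# Two modalities agreeing that class 0 (e.g. 'pedestrian') is more likely
fused = fuse_class_probs([np.array([0.7, 0.3]), np.array([0.6, 0.4])])
```

Agreement between modalities sharpens the fused posterior toward the common class, which is the "higher confidence" effect described above; disagreement flattens it.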
 

Objective 4

Proof of concept on demonstrators and dissemination of the results
The original methods developed in this project will be implemented as real-time algorithms and deployed on a mobile platform (vehicle) for testing and demonstration in real-life scenarios. The demonstrators will also serve for data acquisition, offline testing, evaluation and improvement. The intermediate and final results will be described in scientific papers submitted to relevant publications in the related fields.