Project Objectives
General objective: to design sensorial systems, models, and detection and measurement algorithms for tracking the head, eyes and facial features, able to work both indoors and outdoors, without constraints on the user's behavior.
Specific objectives:
O1: Sensorial system setup: the project will develop an original hardware setup based on three cameras to accurately estimate and track the features of interest. Two regular, monochrome cameras with a large field of view (FOV) will form the stereovision system used to track head movements, extract depth information and analyze body posture. Based on the data gathered from these two cameras, a pan-tilt unit will orient the third camera, which has a narrower FOV, so that it observes only the head area (an illustrative aiming sketch is given after the list of objectives). The third camera is a color, high-speed camera used to estimate facial features and to accurately detect rapid movements such as microsaccades and micro-expressions.
O2: Development of an efficient sensorial system calibration methodology: the three cameras, along with the pan-tilt unit, will be calibrated into a common reference frame. The calibration will not depend on the user, but will be intrinsic to the sensorial system. In order to minimize the setup effort, an efficient reference target and a software tool for the automatic identification of the target features will be developed (an illustrative calibration sketch is given after the list of objectives).
O3: Stereovision for head and face tracking: this objective aims at finding the most efficient dense stereovision solutions for 3D point reconstruction in the specific application of head and facial feature tracking. First, the existing stereovision solutions available to our group will be tested and their accuracy in measuring the 3D properties of a face will be evaluated. Improved stereovision solutions will be developed if needed (a baseline dense stereo sketch is given after the list of objectives).
O4: Head geometry and pose modeling and tracking using stereo information: the head, along with its posture, will be modeled as a parametric 3D object. The head will be tracked in a probabilistic estimation framework (Kalman filtering, particle filtering), using the 2D and 3D data delivered by stereovision as measurement information (a minimal tracking sketch is given after the list of objectives). The position, attitude and motion of the head are important features in themselves, and the result will allow the third camera to lock onto the face and the eyes.
O5: Detection and tracking of facial features: using the high-speed camera focused on the subject's face, facial features will be perceived: the pupil size will be analyzed in order to derive several pupillometric measures, such as pupillograms and the percent change of pupil size; eye movements and saccade dynamics (amplitude, peak velocity, duration, latency) will be measured; eye gaze will be estimated and tracked; and the motions of other facial features, such as the eyebrows or lips, will be analyzed for micro-expression estimation (a sketch of two such measures is given after the list of objectives).
O6: Demonstrator applications: the detected and tracked features will be integrated into a demonstrator application such as driver attention monitoring, gaze tracking for a human-computer interface, or the detection of several emotional states.
O7: Dissemination: articles and patents describing the most important technological achievements of the project will be written and submitted.
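Aiming sketch for O1: the fragment below is only an illustration of how the stereo-derived 3D head position could be converted into pan and tilt commands for the unit carrying the narrow-FOV camera; the coordinate convention and the function name are assumptions made for this example, not part of the final design.

```python
import math

def pan_tilt_angles(x, y, z):
    """Compute pan/tilt angles (radians) that point the narrow-FOV camera
    at a 3D head position (x, y, z) expressed in the pan-tilt unit frame.
    Convention assumed here: x right, y down, z forward (optical axis)."""
    pan = math.atan2(x, z)                    # rotate left/right toward the target
    tilt = math.atan2(-y, math.hypot(x, z))   # rotate up/down toward the target
    return pan, tilt

# Example: head detected 0.2 m to the right, 0.1 m above and 1.5 m in front
# of the pan-tilt unit (values are purely illustrative).
pan, tilt = pan_tilt_angles(0.2, -0.1, 1.5)
print(math.degrees(pan), math.degrees(tilt))
```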
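Calibration sketch for O2: one possible realization of the calibration methodology, using OpenCV and a planar chessboard as the reference target; the target geometry (9x6 inner corners, 25 mm squares), the restriction to one stereo pair and the helper names are assumptions for illustration only, since the project may adopt a different target and must also include the pan-tilt unit in the common reference frame.

```python
import cv2
import numpy as np

pattern = (9, 6)   # inner corners of the assumed chessboard target
square = 0.025     # square size in meters (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

def detect_corners(images):
    """Automatically identify the target features in grayscale images."""
    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, pattern)
        if found:
            corners = cv2.cornerSubPix(
                img, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)
    return obj_pts, img_pts

def calibrate_stereo_pair(left_images, right_images, image_size):
    """Intrinsic calibration of each camera, then extrinsic calibration of the
    pair into a common reference frame (here, the left camera frame).
    The left and right views are assumed to correspond pairwise."""
    objL, ptsL = detect_corners(left_images)
    objR, ptsR = detect_corners(right_images)
    _, K1, d1, _, _ = cv2.calibrateCamera(objL, ptsL, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(objR, ptsR, image_size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        objL, ptsL, ptsR, K1, d1, K2, d2, image_size)
    return K1, d1, K2, d2, R, T
```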
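Dense stereo sketch for O3: a baseline using OpenCV's semi-global block matching, representative of the existing stereovision solutions that will be evaluated first; the matcher parameters are illustrative and not tuned for face reconstruction.

```python
import cv2
import numpy as np

def dense_face_depth(rect_left, rect_right, Q):
    """Compute a dense disparity map on rectified grayscale images and
    reproject it to 3D points (left-camera frame) using the reprojection
    matrix Q obtained from stereo rectification."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,        # must be a multiple of 16
        blockSize=5,
        P1=8 * 5 * 5,
        P2=32 * 5 * 5,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2)
    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return disparity, points_3d, valid
```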
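Tracking sketch for O4: a minimal constant-velocity Kalman filter over the 3D head centroid measured by stereovision, standing in for the full parametric head and posture tracker; the state vector, motion model and noise levels are assumptions chosen only to illustrate the predict/update cycle.

```python
import numpy as np

class HeadCentroidKalman:
    """Minimal constant-velocity Kalman filter over the 3D head centroid.
    The full O4 tracker would add head orientation and the parametric
    head model; the noise covariances below are illustrative."""

    def __init__(self, dt=1 / 30.0):
        # State: [x, y, z, vx, vy, vz]
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                     # constant-velocity model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # position-only measurement
        self.Q = np.eye(6) * 1e-3                           # process noise (assumed)
        self.R = np.eye(3) * 5e-3                           # stereo measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        """z: 3D head centroid delivered by the stereo reconstruction."""
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```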
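Facial-measure sketch for O5: sample computations for two of the listed measures, the percent change of pupil size relative to a baseline and a simple velocity-threshold saccade detector; the sampling rate, the 30 deg/s threshold and the one-dimensional gaze signal are assumptions for illustration.

```python
import numpy as np

def pupil_percent_change(pupil_diameters, baseline):
    """Percent change of pupil size relative to a baseline diameter
    (both in the same units, e.g. pixels or millimeters)."""
    pupil = np.asarray(pupil_diameters, dtype=float)
    return 100.0 * (pupil - baseline) / baseline

def detect_saccades(gaze_deg, fs=500.0, velocity_threshold=30.0):
    """Velocity-threshold saccade detection on a 1D gaze-position signal in
    degrees, sampled at fs Hz (a high-speed camera rate is assumed).
    Returns (onset, offset, amplitude_deg, peak_velocity_deg_per_s) tuples."""
    gaze = np.asarray(gaze_deg, dtype=float)
    velocity = np.gradient(gaze) * fs              # deg/s
    fast = np.abs(velocity) > velocity_threshold
    saccades, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i
        elif not f and start is not None:
            amplitude = gaze[i - 1] - gaze[start]
            peak_vel = np.max(np.abs(velocity[start:i]))
            saccades.append((start, i - 1, amplitude, peak_vel))
            start = None
    return saccades
```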