
Moreover, it is possible to exhibit high levels of yawning without necessarily being in a hypovigilance state [48]. Therefore, facial muscle activity (including yawning and eyebrow raising) offers little predictive information about sleep onset [14]; in fact, sleep can occur without yawning, or even before any significant change in muscle activity or tonicity [14]. It has also been shown in [49] that head movement distance and velocity correlate more strongly (>80%) with sleepiness than the correlations reported in [47] for changes in facial expression (60–80%). For these reasons, and because the percentage of time the eyes are closed (i.e., the eyelids cover at least 80% of the pupils) over a given period, known as PERCLOS [14], shows a significantly stronger correlation with fatigue [15], efforts should be directed at improving head and eye tracking methods.
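To make the PERCLOS measure concrete, the following is a minimal sketch of how it can be computed from per-frame eyelid-closure ratios, assuming such ratios are already produced by an eye tracker. The 80% closure threshold follows the definition above, while the window length, frame rate, and function name are illustrative assumptions rather than the implementation used in the cited works.

```python
import numpy as np

def perclos(eye_closure, fps, window_s=60.0, closed_threshold=0.8):
    """PERCLOS: fraction of time within the most recent window during
    which the eyelids cover at least `closed_threshold` of the pupils.

    eye_closure : per-frame closure ratios in [0, 1]
                  (0 = fully open, 1 = fully closed)
    fps         : camera frame rate (frames per second)
    window_s    : evaluation window length in seconds (assumed value)
    """
    closure = np.asarray(eye_closure, dtype=float)
    window = max(1, int(window_s * fps))
    recent = closure[-window:]                  # most recent window only
    closed = recent >= closed_threshold         # frames counted as "closed"
    return float(closed.mean())

# Example: a 30 fps stream where the eyes are closed for the last 12 of 60 s
signal = np.concatenate([np.zeros(30 * 48), np.ones(30 * 12)])
print(perclos(signal, fps=30))                  # -> 0.2
```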

Furthermore, recent works [50,51] confirm that, among the different ocular variables, PERCLOS is the most effective for preventing errors or accidents caused by low-vigilance states, confirming the original observations and findings reported in [14,15]. In this context, the contributions and novelty of this paper can be summarized as follows. A kinematic model of the driver’s motion is introduced to obtain the driver’s pose, described by five degrees of freedom (lateral tilt, nod, and yaw of the head about the neck, and frontal and lateral tilt of the torso). The use of this kinematic model yields outstanding performance, with an eye-tracking rate of nearly 100%.
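As an illustration of how such a five-degree-of-freedom chain can be composed, the sketch below builds the head pose from two torso angles and three head angles. The rotation order, axis conventions, and link lengths are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

def rot_x(a):  # rotation about the x-axis (nod / frontal tilt)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation about the y-axis (yaw)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation about the z-axis (lateral tilt)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def head_pose(torso_frontal, torso_lateral,
              head_tilt, head_nod, head_yaw,
              torso_to_neck=0.45, neck_to_head=0.15):
    """Compose the 5-DOF chain (torso -> neck -> head) and return the
    head orientation (3x3) and position (3,) in the seat frame.
    Link lengths are illustrative placeholders in metres."""
    R_torso = rot_x(torso_frontal) @ rot_z(torso_lateral)
    p_neck = R_torso @ np.array([0.0, torso_to_neck, 0.0])
    R_head = R_torso @ rot_z(head_tilt) @ rot_x(head_nod) @ rot_y(head_yaw)
    p_head = p_neck + R_head @ np.array([0.0, neck_to_head, 0.0])
    return R_head, p_head
```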

A high tracking rate is key to the computation of PERCLOS, since it requires knowing where the eyes are and whether they are open or closed. Another contribution of this work is the use of the driver’s observed interpupillary distance (IPD) to estimate the distance from the driver’s head to the camera (up to a scale factor), so that the approach yields the driver’s motion in 3D space. It is shown that tracking the back-projected salient points in 3D space (from the 2D image to 3D) is equivalent to tracking points in the 2D image when the distance between the driver and the camera is known. Therefore, when the salient points are assumed to be coplanar and to lie on the facial tangent plane, a result equivalent to tracking them in 3D space can be obtained by tracking points in 2D together with the computed driver-camera distance.
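The distance estimate from the observed IPD follows directly from a pinhole camera model; the sketch below illustrates it, together with a back-projection of 2D salient points onto a plane at the estimated depth. The population-average IPD value, the fronto-parallel approximation of the facial tangent plane, and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

MEAN_IPD_M = 0.063   # assumed population-average interpupillary distance (m)

def head_distance(ipd_px, focal_px, ipd_m=MEAN_IPD_M):
    """Pinhole approximation: Z ~ f * IPD / ipd_px.
    With an assumed physical IPD the result is metric; otherwise it is
    correct only up to the unknown scale factor IPD_true / ipd_m."""
    return focal_px * ipd_m / ipd_px

def back_project(points_px, depth, focal_px, principal_point):
    """Back-project 2D salient points (N, 2) onto a plane at the given
    depth, here approximated as fronto-parallel to the image plane."""
    cx, cy = principal_point
    pts = np.asarray(points_px, dtype=float)
    x = (pts[:, 0] - cx) * depth / focal_px
    y = (pts[:, 1] - cy) * depth / focal_px
    return np.stack([x, y, np.full(len(pts), depth)], axis=1)

# Example: pupils 90 px apart with an 800 px focal length -> ~0.56 m
z = head_distance(ipd_px=90, focal_px=800)
print(z, back_project([[300, 240], [390, 240]], z, 800, (320, 240)))
```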
