Drowsy Driver Detection System

International Journal on Recent and Innovation Trends in Computing and Communication, ISSN: 2321-8169, Volume 4, Issue 10, pp. 184-188, October 2016. Available at http://www.ijritcc.org

Sudeep Deepak Ghate, Department of Information Technology, MCT's Rajiv Gandhi Institute of Technology, Mumbai, Maharashtra, India. ghatesudip@gmail.com
Vaibhav Jaiprakash Kheraria, Department of Information Technology, MCT's Rajiv Gandhi Institute of Technology, Mumbai, Maharashtra, India. kherariav@gmail.com
Mayur Prashant Vanmali, Department of Information Technology, MCT's Rajiv Gandhi Institute of Technology, Mumbai, Maharashtra, India. vanmalimayur@gmail.com
Diyog Sevalal Yadav, Department of Information Technology, MCT's Rajiv Gandhi Institute of Technology, Mumbai, Maharashtra, India. diyogyadav@gmail.com

Under the guidance of Prof. Rashmi Chawla

Abstract — Driver fatigue is one of the key causes of road accidents in the world, and detecting drowsiness is one of the surest ways of quantifying it. In this project we aim to develop a prototype drowsiness detection system. The system works by monitoring the driver's eyes and sounding an alarm when he or she appears heavy-eyed. It is a non-intrusive, real-time monitoring system whose focus is on improving driver safety. The driver's eye blinks are detected, and if the eyes remain closed for more than a certain span of time the driver is assumed to be tired and an alarm is sounded. The system is implemented in OpenCV, using Haar cascades for the detection of facial features.

Keywords — fatigue; quantifying; prototype; detection; non-intrusive; alarm; facial

I. INTRODUCTION

Accidents caused by drowsy driving have occurred persistently for decades, and many researchers and specialists have devoted considerable effort to the problem [1-3]. Techniques for determining driver drowsiness can be separated into three main classes [1]. The first is centred on the driver's current state, involving eye and eyelid movements, the duration of eye closure, and fluctuations in physiological state. The second is based on vehicle performance, e.g. driving speed. The third combines the driver's current state with driving performance [1]. For fatigue detection, the driver's drowsy status can be estimated from closed eyes and head gestures [2-3] by means of facial image processing. In this work, NIR-based facial image processing is used first to locate the eyes precisely, after which the drowsy condition can be detected effectively whether or not the driver wears glasses. Finally, a fatigue alert is generated to warn the driver.

II. THE PROPOSED SYSTEM

The proposed system is split into two cascaded computational procedures: (1) driver eye detection and (2) drowsy driver detection, detailed as follows.

2.1 Driver Eye Detection

To cope with varying illumination conditions, an NIR camera is used to capture the driver's facial images.
Only gray-scale images are processed, without using colour information, so the system functions effectively both by day and by night. The driver eye detection scheme comprises five functions: pre-processing, face detection, face boundary detection, eyeglasses-bridge detection, and eye detection. Fig. 1 shows the processing flow. In the pre-processing step, the gray-scale images are filtered with Sobel filters and then processed by erosion and dilation. For face detection, a two-stage facial region detection scheme based on Haar-like features [4] is used: at the Level-1 stage, a candidate facial region is detected with a simple black-and-white Haar-like facial feature; the search range is then narrowed, and at the Level-2 stage a more accurate facial region is recognized with a smaller Haar-like feature (Fig. 1). Integral images are used to speed up the Haar-like feature search.
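As a rough illustration, a pre-processing and face-detection stage of this kind could be sketched in Python with OpenCV as below. The stock frontal-face cascade stands in for the paper's custom two-stage Haar-like feature scheme, and the kernel sizes and detector parameters are illustrative assumptions, not the authors' settings.

    # Sketch of the pre-processing and Haar-based face-detection stage (Sec. 2.1).
    # Assumes Python + OpenCV; all parameter values are illustrative.
    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def preprocess(gray):
        """Sobel edge filtering followed by erosion and dilation."""
        gx = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
        edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                                cv2.convertScaleAbs(gy), 0.5, 0)
        kernel = np.ones((3, 3), np.uint8)
        return cv2.dilate(cv2.erode(edges, kernel), kernel)

    def detect_face(gray):
        """Coarse-to-fine face search: a loose first pass over the whole frame,
        then a stricter pass restricted to the first candidate region."""
        coarse = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=3)
        if len(coarse) == 0:
            return None
        x, y, w, h = coarse[0]
        roi = gray[y:y + h, x:x + w]
        fine = face_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        if len(fine) == 0:
            return (x, y, w, h)
        fx, fy, fw, fh = fine[0]
        return (x + fx, y + fy, fw, fh)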
Based on the results of facial region detection, the facial size and the eye search region are obtained through facial boundary detection, using the centring and symmetry of facial features. The eye-location scheme then splits into two processing flows, depending on whether the driver wears glasses. For the eyeglasses-bridge detection, the number of edges in the bridge region is larger when glasses are worn than when they are not. For eye detection with glasses, a horizontal projection is used and its average is computed to find the axis of symmetry; since the glasses frame is mostly circular or rectangular, the horizontal and vertical symmetry axes of the left and right frames can be found, and their intersection gives the probable eye location. For eye detection without glasses, an inverted-triangle eye filter with local gradient patterns [5] is used to find the exact eye location within the candidate eye region (Fig. 1).

2.2 Drowsy Driver Detection

Based on the eye detection, the region of interest (ROI) around the eye location is extracted for subsequent processing. The drowsy driver detection comprises five functions: edge filtering in the ROI, binarization, iris location, open/closed eye detection, and drowsiness detection. Fig. 2 illustrates the processing flow. For edge filtering, the derivative of a Gaussian filter is convolved with the ROI to obtain edge information; to reduce the computational complexity, the 2-D convolution is replaced by two 1-D convolutions. Next, the edge map is processed with histogram statistics, and the mean edge value is taken as the threshold for binarization. For iris location, the circle Hough transform [6] is applied: every point on the edge of the iris can vote for the iris centre, so a three-dimensional cone accumulator is used to accumulate the edge points. If the edge points belong to the iris, the circles centred on them intersect at the centre of the target circle, and the centre and radius of the iris are found at the location with the strongest intersection. Given the iris location and radius, open/closed eye detection is performed: a closed-eye measurement index is obtained by computing the difference between the sums of gray-scale pixels in the inner and outer donut-shaped regions, and when this difference falls below a threshold a closed-eye state is detected. Fatigue level is an important issue, and its measurement is a concrete problem for road safety [1]. Based on the closed-eye measurement index, the drowsy condition is detected by evaluating the duration of eye closure: when the closed-eye period exceeds a pre-defined threshold (e.g. 30 frames in Fig. 3), an alert is produced to warn the driver.
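A minimal sketch of this stage, again assuming Python with OpenCV, is given below. The kernel size, Hough parameters, closed-eye index threshold and the 30-frame limit are illustrative values, and cv2.HoughCircles performs its own internal edge detection, so the explicit edge map only mirrors the binarization step described above.

    # Sketch of the drowsy-detection stage (Sec. 2.2); parameter values are illustrative.
    import cv2
    import numpy as np

    def edge_map(eye_gray):
        """Derivative-of-Gaussian edge filtering done as two 1-D convolutions,
        then binarized with the mean edge value as threshold."""
        g = cv2.getGaussianKernel(9, 1.5)            # 1-D Gaussian (column vector)
        dg = np.gradient(g[:, 0]).reshape(-1, 1)     # its 1-D derivative
        ex = cv2.sepFilter2D(eye_gray, cv2.CV_32F, dg, g)
        ey = cv2.sepFilter2D(eye_gray, cv2.CV_32F, g, dg)
        mag = cv2.magnitude(ex, ey)
        return (mag > mag.mean()).astype(np.uint8) * 255

    def locate_iris(eye_gray):
        """Circle Hough transform over the eye ROI; returns (cx, cy, r) or None."""
        circles = cv2.HoughCircles(eye_gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=100, param2=15, minRadius=4, maxRadius=20)
        return None if circles is None else tuple(circles[0][0])

    def closed_eye_index(eye_gray, cx, cy, r):
        """Difference of gray-level sums between the iris disc and the surrounding
        ring; a small difference suggests a closed eye."""
        h, w = eye_gray.shape
        yy, xx = np.mgrid[0:h, 0:w]
        dist = np.hypot(xx - cx, yy - cy)
        inner = eye_gray[dist <= r].sum()
        outer = eye_gray[(dist > r) & (dist <= 2 * r)].sum()
        return abs(int(outer) - int(inner))

    CLOSED_LIMIT = 30        # consecutive closed-eye frames before the alarm
    closed_frames = 0

    def update(eye_gray, index_threshold=5000):   # threshold is a placeholder value
        """Per-frame update: count consecutive closed-eye frames, fire the alarm."""
        global closed_frames
        iris = locate_iris(eye_gray)
        closed = iris is None or closed_eye_index(eye_gray, *iris) < index_threshold
        closed_frames = closed_frames + 1 if closed else 0
        return closed_frames >= CLOSED_LIMIT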
III. ALGORITHM DESCRIPTION

First, the driver's face is located in the acquired frames by means of the Viola-Jones algorithm. False positives are avoided by selecting, among possibly multiple face candidates, the one whose size and position best match the typical location and dimensions of the driver's head (a sketch of this selection step appears below). Once the rectangular area containing the driver's face has been identified, it becomes the region of interest (ROI) in which the eyes are searched for. Because of the position of the web camera, which is not exactly in front of the driver but placed to his right, as shown in Fig. 3 (or to his left in a right-hand-drive car), only the right eye is tracked, which also reduces the computational load of the algorithm.

Fig. 3. Webcam position in the cockpit [11]
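The false-positive filtering could be sketched as follows; the expected head position and size are hypothetical calibration values that depend on the camera placement, not figures from the paper.

    # Sketch of the face-candidate selection of Sec. III.
    # EXPECTED_CENTER and EXPECTED_SIZE are hypothetical calibration values.
    import numpy as np

    EXPECTED_CENTER = (400, 220)   # typical head centre in the frame (pixels)
    EXPECTED_SIZE = 180            # typical face width in pixels

    def pick_driver_face(candidates):
        """From the rectangles returned by detectMultiScale, keep the one whose
        centre and size are closest to the expected head position and dimension."""
        if len(candidates) == 0:
            return None
        def score(rect):
            x, y, w, h = rect
            cx, cy = x + w / 2, y + h / 2
            pos_err = np.hypot(cx - EXPECTED_CENTER[0], cy - EXPECTED_CENTER[1])
            size_err = abs(w - EXPECTED_SIZE)
            return pos_err + size_err
        return min(candidates, key=score)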
As with the face, the eye area is first coarsely located by means of the Viola-Jones technique [11]. The outcome of this procedure is a rectangular region surrounding the eye, as shown in Fig. 4(a). Human morphology can also be exploited, since the eyes are usually positioned, vertically, in a region occupying one third of the face area, slightly below the top of the head. To identify the eyes and their features precisely inside this rectangular region, a first version of the project used an AAM (active appearance model) eye detection scheme. After producing a suitable training set of face images to build the model, each picture was manually annotated by placing landmark points on the eye and eyebrow (Fig. 4b). The ICA algorithm was then used to fit the best active appearance model (Fig. 4c), after initializing the model parameters according to the location and size of the most recently determined eye region [11].

Figure 4. (a) Determination of the eye region by the Viola-Jones method; (b) manual annotation of eye and eyebrow with landmark points (AAM technique); (c) eye tracking result with the ICA method [11]

In practice, however, this approach proved too slow for our purposes: on average only about seven frames per second were processed, a rate insufficient for reliable detection of eye blinks. We therefore opted for a simpler and faster (but still effective) algorithm, essentially a combination of the Viola-Jones technique and template matching (a normalized squared-difference matching method). To reduce the computational load, the algorithm is applied to an area A centred on the eye position found in the previous frame and twice the size of the eye rectangle (Fig. 5a). The algorithm can be summarized as follows:

1. The eye is searched for within A using the Viola-Jones technique.
2. If the eye is found, a copy of it is saved as a template.
3. If the eye is not found, the last valid template is used in a template-matching step.
4. After a certain number of consecutive template-matching iterations (i.e. consecutive failures of the Viola-Jones step), a re-initialization (face detection, etc.) is carried out, to avoid a general failure of the tracking caused by a sequence of incorrect matches.

The strength of this algorithm is that it exploits the good qualities of both methods, Viola-Jones and template matching, to reduce the error rate: when the Viola-Jones step fails for any reason, tracking proceeds with template matching against a constantly updated template. A sketch of such a tracker is given below.
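The following is a minimal sketch of a detector-plus-template-matching tracker of this kind, assuming Python with OpenCV; the stock haarcascade_eye.xml cascade and the re-initialization limit are assumptions rather than the authors' exact settings.

    # Sketch of the combined Viola-Jones / template-matching eye tracker (Sec. III).
    import cv2

    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    template = None          # last eye patch found by Viola-Jones
    misses = 0               # consecutive frames tracked only by template matching
    MAX_TEMPLATE_FRAMES = 15 # illustrative re-initialization limit

    def track_eye(gray, search_area):
        """search_area = (x, y, w, h): region centred on the previous eye position,
        twice the size of the eye rectangle. Returns the eye rectangle in image
        coordinates, or None when a full re-initialization is required."""
        global template, misses
        x, y, w, h = search_area
        roi = gray[y:y + h, x:x + w]

        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) > 0:                                   # steps 1-2: detector hit
            ex, ey, ew, eh = eyes[0]
            template = roi[ey:ey + eh, ex:ex + ew].copy()   # save as template
            misses = 0
            return (x + ex, y + ey, ew, eh)

        misses += 1
        if template is None or misses > MAX_TEMPLATE_FRAMES:
            misses = 0
            return None                                     # step 4: re-initialize

        # step 3: fall back to the last valid template (TM_SQDIFF_NORMED: best = min)
        res = cv2.matchTemplate(roi, template, cv2.TM_SQDIFF_NORMED)
        _, _, min_loc, _ = cv2.minMaxLoc(res)
        th, tw = template.shape
        return (x + min_loc[0], y + min_loc[1], tw, th)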
The eye status is assessed only when the driver's head is still and facing forward, since a moving or turned head generally indicates a state of wakefulness. The eye test is therefore performed only when the face is detected in a position of "rest". To recognize this posture, the mean eye location over the last n frames is computed. This position becomes the centre of a region used to test both the status of the eyes and the inclination of the head: the vertical position of the eye inside this area indicates how much the driver's head is tilted, allowing the identification of potentially risky situations (Fig. 5b and Fig. 5c) [11].

Figure 5. (a) Inner rectangle: eye area found in the previous frame; outer rectangle: eye search area in the current frame; (b) downward tilt of the driver's head; (c) upward tilt of the driver's head [11]

If the eye lies inside the rest area, the blink test is carried out. For this purpose, the eye image is first converted to grayscale and then thresholded to obtain a binary image (Fig. 6). Since the binary image (especially under infrared lighting) is often affected by reflections on the pupil, which may compromise the estimate of the degree of openness, additional processing is required: after edge detection, the closed boundary with the largest area is selected and filled in, so as to remove the gaps produced by pupil reflections; this also reduces noise due to shadows and portions of the eyebrows. The vertical projection of the histogram of the binary image is then computed. To eliminate micro-peaks created by eyelash edges, the histogram is levelled by replacing each column with the mean value of the previous ten columns.
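A rough sketch of this blink test, assuming Python with OpenCV 4, is shown below; the binarization threshold, the inversion direction and the use of a short centred moving average (the text uses the previous ten columns) are illustrative choices.

    # Sketch of the blink test: binarize the eye patch, fill the largest contour to
    # suppress pupil reflections, then use the smoothed vertical projection.
    import cv2
    import numpy as np

    def eye_projection(eye_gray, thresh=60):
        """Return the smoothed vertical projection of the binarized eye image."""
        _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)
            cv2.drawContours(binary, [largest], -1, 255, thickness=-1)  # fill gaps
        proj = binary.sum(axis=0).astype(np.float32)       # one value per column
        kernel = np.ones(10, np.float32) / 10.0            # ten-column levelling
        return np.convolve(proj, kernel, mode="same")

    def openness(eye_gray, open_baseline):
        """Ratio between the current mean projection and a stored 'fully open eye'
        value, plus the peak column, which roughly tracks the pupil position."""
        proj = eye_projection(eye_gray)
        pupil_column = int(proj.argmax())
        return proj.mean() / open_baseline, pupil_column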
The peak of the resulting histogram corresponds to the location of the pupil. The degree of eye openness can be assessed by computing the average value over all columns and comparing it with the value obtained for the fully open eye (Fig. 7). The peak position also gives a rough indication of the driver's horizontal gaze direction. When the eye remains completely or partly closed for longer than a given timeout, the driver is assumed to have fallen asleep, and appropriate action must be taken immediately [11].

Fig. 6(a) and 6(b) [11]; Fig. 7(a) and 7(b) [11]

IV. EXPERIMENTS

To assess the performance of the eye detection and tracking algorithm, several tests were performed in different environments (even though the default setting is a car interior). In particular, three settings were considered: (1) a Ferrari California; (2) other cars; and (3) a well-lit room (a college laboratory) with fluorescent lighting. In each setting, the experiments were carried out under different lighting conditions: (1) daylight, (2) low light / zero lux, and (3) changing light. Two USB web cameras were used, depending on the lighting conditions: a Hercules Classic Silver in condition (1) and an X-33850 Extreme Night Camera in conditions (2) and (3). The first device is a traditional web camera; the second is equipped with six infrared LEDs and an infrared sensor, so it can also acquire images in low light. The maximum frame rate and resolution are 30 frames per second at 640 x 480 for both devices. Each camera is connected through a USB port to a Vostro 1310 notebook with the following main characteristics:

1) Processor: Intel Core 2 Duo T8100 (2.1 GHz)
2) Memory: 4 GB, 2 DIMM (DDR2-667)
3) HDD: 160 GB, 5400 RPM
4) Graphics: 128 MB NVIDIA GeForce 8400M GS
5) Operating system: Microsoft Windows XP

Five sets of data were collected, relating to the correct recognition of slow/fast blinks and to the accurate tracking of slow and fast eye movements. Four main types of errors were considered:

1) tracking miss: the tracking is incorrect over consecutive frames;
2) detection miss: the user's face or eye is not detected because of shadows or occlusions;
3) blink miss: an eye blink is not detected (typically with rapid blinks);
4) blink fail: a blink is mistakenly detected (because of particular head postures).

Only a limited number of tests were carried out in the Ferrari (30 video clips lasting 20 seconds each), while more experiments were performed in the other two settings (100 video clips, lasting 20 seconds each). All tests were performed with the same subject, a twenty-five-year-old male. Tables 1-4 report the success percentages for the recognition of slow/fast blinks and eye movements, whereas Table 5 shows the failure percentages of blink detection. During the experimental sessions in settings 1 and 2, the tester was instructed to behave normally while driving the car.
In the third setting, this behaviour could of course only be simulated (although prolonged blinks could be tested, and all of them were recognized correctly) [11].

V. CONCLUSION

Although only one tester was used in this preliminary experimental phase, the results obtained are fairly convincing. We are also considering improving the performance of the system in low-light conditions by adding an array of infrared LEDs to the night-vision camera. Thanks to the combination of the Viola-Jones technique and template matching, the proposed algorithm appears effective in detecting the eye blinks of a driver. Our next goal is to determine clear correlations between eye-blink data (such as duration and speed) and the driver's state of vigilance.
ACKNOWLEDGMENT

We would like to express our special gratitude to our guide, Prof. Rashmi Chawla, and to our Head of Department, Prof. Sunil Wankhade, who gave us the opportunity to work on this project on the Drowsy Driver Detection System. The project also involved a great deal of research, through which we learned many new concepts, and for this we are truly thankful to them.

REFERENCES
[1] Q. Wang, J. Yang, M. Ren, and Y. Zheng, "Driver Fatigue Detection: A Survey," 6th World Congress on Intelligent Control and Automation, pp. 8587-8591, June 21-23, 2006.
[2] J. F. Xie, M. Xie, and W. Zhu, "Driver Fatigue Detection Based on Head Gesture and PERCLOS," International Conference on Wavelet Active Media Technology and Information Processing, pp. 128-131, Dec. 17-19, 2012.
[3] A. Singh and J. Kaur, "Driver Fatigue Detection Using Machine Vision Approach," IEEE 3rd International Advance Computing Conference, pp. 645-650, Feb. 22-23, 2013.
[4] P. Viola and M. J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, May 2004.
[5] B. Jun, I. Choi, and D. Kim, "Local Transform Features and Hybridization for Accurate Face and Human Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1432-1436, June 2013.
[6] REFLECT project website. Available: http://reflec