Article

Advanced Alarm Method Based on Driver’s State in Autonomous Vehicles

Department of Design Science, Graduate School of Techno Design, Kookmin University, 77 Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea
* Author to whom correspondence should be addressed.
Submission received: 29 September 2021 / Revised: 9 November 2021 / Accepted: 11 November 2021 / Published: 15 November 2021
(This article belongs to the Special Issue Human Computer Interaction and Its Future)

Abstract

In autonomous driving vehicles, the driver can engage in non-driving-related tasks and does not have to pay attention to the driving conditions or engage in manual driving. If an unexpected situation arises that the autonomous vehicle cannot manage, the vehicle should notify the driver and help them prepare to retake manual control. Several effective notification methods based on multimodal warning systems have been reported. In this paper, we propose an advanced method that adapts visual and auditory alarms in autonomous vehicles to the driver's state by analyzing the differences in drivers' responses under specific conditions. Using a driving simulation, we carried out human-in-the-loop experiments with 38 drivers and two scenarios (drowsiness and distraction), each of which included a control-switching stage for implementing an alarm during autonomous driving. Reaction time, gaze indicator, and questionnaire data were collected, and electroencephalography measurements were performed to verify drowsiness. Based on the experimental results, the drivers exhibited high alertness to the auditory alarms in both the drowsy and distracted conditions, and the change in the gaze indicators was higher in the distracted condition. The results of this study show a distinct difference between the driver's responses to alarms signaled in the drowsy and distracted conditions. Accordingly, we propose an advanced notification method and future goals for further investigation of vehicle alarms.

1. Introduction

Driving is a complicated task comprising a range of activities such as pathfinding, potential risk detection, and longitudinal and lateral vehicle operation in a continuously changing traffic environment [1]. Accordingly, drivers must be able to take appropriate actions based on the information collected from the various driving environments [2].
With technological developments, large amounts of information can be provided through advanced driver assistance systems, in-vehicle information systems, and other digital devices for convenience and to ensure the safety of the driver [3]. According to a 2019 report by the National Highway Traffic Safety Administration, 9% of the fatal road accidents that occurred in the United States in 2017 were reportedly caused by drivers’ distractions. Here, “distraction” refers to a situation in which the person’s attention to driving is diverted by several factors, including digital devices, such as cellular phones and radio, eating while driving, and conversing with fellow passengers [4].
Autonomous driving can reduce the accident risk attributed to human errors (including driver distraction) and, in doing so, assist in ensuring safe driving. Moreover, when autonomous driving is activated, the driver can participate in activities other than driving [5]. Autonomous vehicles are divided into levels ranging from 0 to 5, based on their automation levels. Level 0 is the lowest automation level, wherein no automation function exists, and a driver must control the entire vehicle. A higher level indicates more advanced automation. For example, level 5 involves complete automation in which the driver’s participation in the driving is not required at all. Level 3 is a conditional automation level in which autonomous driving is enabled within specific environments. If autonomous driving is conducted under the driving conditions satisfying level 3 requirements, then the driver does not need to monitor the driving environment. However, when the conditions change to those in which autonomous driving is not possible, the vehicle provides a takeover request (TOR) to the driver, who must then drive the vehicle manually [6].
The representative alarm of a partially autonomous vehicle (i.e., the TOR) typically triggers when the vehicle enters a situation beyond its operational design domain or when a system error or unexpected change in the driving environment occurs [7]. In an autonomous driving situation, the driver may engage in non-driving-related tasks (NDRTs) such as social networking or watching a video. In these situations, a TOR can be a source of confusion when signaled. When a TOR is issued, the driver performing NDRTs should recognize the situation immediately and prepare themselves for manual driving [8].
To assist the driver in achieving situational recognition, it is important to deliver information to them effectively. Accordingly, the development of human–machine interfaces (HMIs) for vehicles has emerged as a focal point of research on autonomous driving technology. Yun et al. [9] researched visual, auditory, and tactile alarm-based methods for the effective recognition of the TOR by the driver during autonomous system errors. They demonstrated that optimum results were achieved by combining the visual, auditory, and tactile alarm methods. Naujoks et al. [10] designed an HMI that combined a visual display and an auditory alarm, and performed experiments using a driving simulator to determine the effectiveness of the information-delivery method for a driver engaged in NDRTs. They concluded that providing both a visual display and a female voice-guided auditory alarm reduced the workload of the driver, making this the most preferred approach. Zhang et al. [11] studied effective information-delivery methods by combining no information, static voice information, dynamic voice information, static image information, and dynamic image information. Providing voice and image cues simultaneously was effective for static information; for dynamic information, no difference was detected between providing both voice and image and providing only voice information. Campbell et al. [12] proposed guidelines for a vehicle information system that considered human factors. A series of experiments compared providing an auditory alarm alone, a visual alarm alone, and both auditory and visual alarms together. The results showed that the last method (auditory and visual alarms together) shortened the reaction time of the driver and resulted in only a few errors. Campbell et al. [13] reported a method for providing visual, auditory, and tactile information in vehicles, i.e., the modality information provision method, which is based on three strategies: (i) a visual information delivery strategy that includes various visual factors such as color, location, size, and flashing; (ii) an auditory information provision approach that includes auditory signals such as voice, beeping sounds, high volume, and frequency; and (iii) a tactile information provision method that includes vibration location and size. Telpaz et al. [14] studied the usability of haptic seats in TOR scenarios requiring lane changes. A driving simulator-based experiment was conducted with 26 participants to collect and analyze behavioral and gaze data. The results showed that the haptic seat shortened the driver's reaction time and aided faster situational recognition [14]. Borojeni et al. [15] studied the effect of using an ambient light source placed in the driver's peripheral vision to deliver the TOR. In this case, a light signal was emitted from a light-emitting diode (LED) strip located behind the steering wheel, and an audible warning was also transmitted. The experiment was conducted with 21 participants, and the reaction times along with the time-to-collision data were collected and analyzed. Dynamic LEDs located behind the steering wheel reportedly shortened reaction times and aided safer driving, compared to keeping all of the LEDs in a static ON state.
As discussed above, the previously reported studies analyzed the effects of visual, auditory, and tactile alarms provided by autonomous vehicles to drivers. However, most of these studies either did not consider the driver's state or examined the effect of an alarm for only one state (e.g., distraction). A driver's behavior differs depending on their state [16], and as a result, their degree of situational recognition differs as well [17]. Therefore, an alarm that incorporates the driver's state while issuing a warning in an autonomous vehicle is essential for safety and convenience.
In this study, we analyzed the difference between the reactions of drivers in drowsy and distracted states in a traffic environment, evaluated the influence of the different reactions on the information provided to the driver in a level 3 autonomous vehicle, and proposed an advanced alarm method for each state. The alarm method was designed based on the results of the previously reported studies. The driving simulator-based experiments included 38 participants, and each experiment was conducted twice (once for the drowsiness scenario and once for the distraction scenario). To collect data related to the driver's reaction to the alarm, a control-switching stage was included in each scenario, and the driver's reaction to the TOR was examined. The driver's reaction time, gaze indicators, and survey data were collected and analyzed, and the drowsy state was additionally verified through electroencephalography (EEG).

2. Alarm Design

Based on the results of the previously reported studies, the alarm was designed by combining visual and auditory alarms, and different alarm methods were developed depending on the driver's status. Given that the characteristics of a TOR for autonomous driving differ from those of a general alarm, the optimized timing of the alarm was also calculated. Campbell et al. [18,19] proposed a design method for multimodal warning messages, consisting of two or more of the following signal types: visual, tactile, and auditory modalities. According to the report, multimodal warning messages help drivers understand the situation quickly and reliably. The report also outlines the different types of multimodal displays and some of the associated factors that need to be considered for delivering such messages. Examples of the visual displays include the head-up display (HUD), high head-down display (HHDD), low head-down display (LHDD), and instrument panel (IP) displays. Examples of the auditory displays are voice messages and simple tones, whereas haptic displays include vibrotactile seats, steering wheel torque, vibrotactile steering wheels, and other haptic/tactile displays. Next, the authors proposed interface design guidelines for multimodal warning messages. The reported visual interface design elements included messages (e.g., text, icon, functional, etc.), location (e.g., in the IP, HUD, etc.), color (e.g., red, yellow, green, etc.), size of icons and text (e.g., visual angle, height, distance from viewer to display, etc.), and temporal characteristics (e.g., flash rate, duty cycle, complex flash, etc.). Auditory display types (e.g., simple tones, earcons, auditory icons, speech messages, etc.), urgency, annoyance, and the size of auditory signals were suggested as the auditory interface design elements. Finally, haptic display type (e.g., accelerator pedal counterforce, vibrotactile seat, seatbelt vibration, etc.), tactile sensitivity, and direction were proposed as the haptic interface design elements. Prinzel et al. [20] proposed design factors such as viewing angle, size of the HUD box, and other required information to solve problems arising from human factors, such as the increased spatial disorientation caused by an airplane HUD. The International Organization for Standardization has established standards for various graphic symbols; design requirements such as symbol shape, size, and color were presented in [21]. Gold et al. [22] studied the optimal advance warning time in the TOR situation. The TOR-time was defined as the time from the moment the TOR started until the vehicle would collide with another vehicle. In the experiment, participants were divided into two TOR-time groups (5 s and 7 s). The results showed that a shorter TOR-time produced a faster reaction from the subject, although the driving performance worsened and the risk of collision became higher. Kim et al. [23] investigated the optimal TOR timing threshold (the time from the start of the TOR until controlling the vehicle becomes impossible) in four different TOR situations. When the driver was distracted, the optimal TOR timing threshold was 4.43 s. Gonçalves et al. [24] divided the actions that the driver can take in the TOR situation into three categories: left lane change, right lane change, and keep-lane maneuver (KLM). A function for calculating the optimal TOR time point during the KLM was proposed as well.

2.1. Visual Alarm

The visual alarm is provided via an HUD and a center infotainment display (CID). The design elements considered in this study included the alarm provision method, its color, location, and size, as well as a flashing method. The details are as follows:
  • Provision method:
    - A visual alarm should be used in conjunction with an auditory alarm.
  • Color:
    - Orange or yellow colors should be used to attract the driver’s attention.
    - Red should be used in dangerous scenarios where the driver needs to take an immediate action.
    - Green should be used when the system is normal.
  • Location:
    - Alarms requiring a quick response from the driver must be provided on the HUD within ±5° of the driver’s gaze.
  • Size:
    - The text and icon sizes of the provided information should be more than 0.5” in diameter, and in the case of an emergency, more than 1” in diameter.
  • Flashing method:
    - The optimal flashing rate in an emergency is 3–5 Hz.

2.2. Auditory Alarm

The auditory alarm was provided using a beeping sound and a pre-recorded female voice via a speaker. The associated design elements considered in this study included the alarm provision method, voice message, and beeping sound, as follows:
  • Provision method:
    - The auditory alarm should be delivered within 0.5 s after the relevant event.
    - The auditory alarm should be supplementary to the visual alarm.
  • Voice message:
    - The voice message should use the same phrases as the text of the visual alarm.
    - A beeping sound should be used before providing the voice information.
    - In the case of an emergency, a rate of 150–200 words per minute should be used.
  • Beeping sound:
    - In case of an emergency, a beeping sound should be used at a volume of up to 115 dB; in other scenarios, the applied volume should be up to 90 dB.

2.3. Visual/Auditory Alarm-Raising Method According to the Driver’s State

Different alarms were employed depending on the drowsy/distracted state of the driver (Table 1 and Table 2). A visual alarm was provided through the HUD and CID (Figure 1 and Figure 2). In conjunction with the visual alarm, an auditory alarm and a warning sound were delivered as well. The alarm was designed to trigger 7 s before the collision.
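As an illustration of how these state-dependent parameters could be organized in software, the sketch below encodes the Table 1 and Table 2 values as configuration objects. This is a hypothetical sketch: the class and field names are illustrative and are not part of the experimental system described in this paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VisualAlarm:
    icon_size_hud_in: float  # minimum icon size on the HUD, inches
    text_size_hud_in: float  # minimum text size on the HUD, inches
    flash_per_s: int         # flash rate, times per second
    location_deg: int        # maximum offset from the driver's field of view, degrees
    color: str

@dataclass
class AuditoryAlarm:
    voice_db: int            # voice volume, dB
    beep_db: Optional[int]   # beep volume, dB (Table 2 lists only a single 90 dB value)
    voice_wpm: int           # speech rate, words per minute
    beep_per_s: int          # beeps per second
    beep_hz: int             # beep tone frequency, Hz

# Parameter values transcribed from Table 1 (drowsy) and Table 2 (distracted).
ALARMS = {
    "drowsy": (VisualAlarm(1.0, 0.5, 4, 5, "red"),
               AuditoryAlarm(90, 115, 250, 4, 1500)),
    "distracted": (VisualAlarm(0.5, 0.3, 3, 15, "orange"),
                   AuditoryAlarm(90, None, 200, 3, 1000)),
}

def describe_alarm(state: str) -> str:
    """Summarize the state-specific alarm; the beep precedes the voice, per the tables."""
    visual, auditory = ALARMS[state]
    return (f"[{state}] {visual.color} icon flashing {visual.flash_per_s}/s "
            f"within {visual.location_deg} deg of the driver's view; "
            f"beep {auditory.beep_hz} Hz, then voice at {auditory.voice_wpm} wpm")
```

Keeping both states in one lookup table would also make it straightforward to add further driver states later without changing the alarm-triggering logic.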

3. Experimental Design

To investigate the different reactions of the driver to the pre-designed alarm, in the drowsy and distracted states, human-in-the-loop experiments were carried out. The primary experimental assumptions in this case were as follows.

3.1. Hypothesis

Hypothesis 1 (H1).
Survey, reaction time, and gaze indicators linked to the TOR differ depending on the driver’s state.

3.2. Independent Variable

The driver’s state was adopted as the independent variable in the experiments. In total, two driver states (drowsy and distracted) were set up.

3.3. Dependent Variables

The dependent variables were as follows (summarized in Table 3); a computation sketch for the quantitative indicators is given after the list.
  • Visual recognition (survey): The degree to which the TOR was visually recognized. This was assessed using the five-point Likert scale questionnaire results and was designed with the following options: (1) not recognized at all; (2) not recognized; (3) intermediate; (4) recognized; and (5) fully recognized.
  • Auditory recognition (survey): The degree of auditory TOR recognition. This was evaluated based on a five-point Likert scale questionnaire assessment and was designed with the following options: (1) not recognized at all; (2) not recognized; (3) intermediate; (4) recognized; and (5) fully recognized.
  • Reaction time (s): The time from the point of TOR generation to the completion of control takeover by the driver.
  • Blink (count/s): The number of eye blinks per second from the point of TOR generation to the completion of control takeover by the driver.
  • Gaze distance (mm): The driver’s gaze distance per second from the point of TOR generation to the completion of control takeover by the driver.
  • Pupil diameter (mm): Changes in the driver’s pupil diameter per second from the point of TOR generation to the completion of control takeover by the driver.
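As referenced above, the quantitative indicators reduce to simple per-second rates over the TOR-to-takeover window. The minimal sketch below assumes gaze samples logged at the eye tracker's 50 Hz rate (Section 3.4) and blink events detected separately; the function and field names are illustrative, not the authors' analysis code.

```python
import math

def gaze_indicators(samples, blink_count, window_s):
    """Per-second indicators over the window from TOR onset to takeover completion.

    samples: list of (x_mm, y_mm, pupil_mm) gaze samples logged in the window.
    blink_count: number of blinks detected in the window.
    window_s: window duration in seconds (i.e., the reaction time).
    """
    # Blink count: blinks per second.
    blink_rate = blink_count / window_s

    # Gaze distance: summed point-to-point gaze travel, per second.
    travel = sum(math.hypot(x2 - x1, y2 - y1)
                 for (x1, y1, _), (x2, y2, _) in zip(samples, samples[1:]))
    gaze_distance = travel / window_s

    # Pupil diameter: summed absolute sample-to-sample variation, per second.
    variation = sum(abs(p2 - p1)
                    for (_, _, p1), (_, _, p2) in zip(samples, samples[1:]))
    pupil_change = variation / window_s

    return blink_rate, gaze_distance, pupil_change
```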

3.4. Experiment Equipment

The simulation was visualized on three monitors. For the steering wheel and other controls, a Logitech G27 racing wheel was used, which was operated directly by the participants. UC-win/Road v.12 (Tokyo, Japan) was used as the simulation software to build and run the experimental road and vehicle models. To provide the alarm to the driver, the dashboard HUD and CID, implemented as displays connected to the simulation computer, were placed at the center of the simulator cockpit (Figure 2). The designed visual alarm was displayed on the HUD and CID, along with the audible alarm.
The Tobii Pro Glasses 2 model was used as the gaze-tracking device, in which the glasses-type equipment and the analysis module were connected through cables. The gaze-related indicators, such as gaze point and pupil diameter, were measured, and the video from the driver’s perspective was recorded. The data were recorded using a sampling frequency of 50 Hz (Figure 2).
The BIOS-S24 model of Biobrain was used as the EEG sensor, which can measure the brainwaves using 21 channels (Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, Oz, and O2) of an internal electrode. The corresponding data were recorded using a sampling frequency of 250 Hz (Figure 2). The associated hat-type gears and signal-measuring equipment were connected through cables, and the EEG signal was used as a reference for identifying whether actual drowsiness was experienced during the drowsy state experiments.

3.5. Experiment Procedure

The experimental scenarios were divided into two situations: drowsiness and distraction. Each scenario included a situation in which the takeover alarm was delivered to the participant (driver) during the level 3 autonomous driving, under drowsiness/distraction (Figure 3).
The participants completed both the drowsiness and distraction scenario experiments, which were conducted on different days. For the drowsiness scenario, the participants’ sleep the night before the experiment was limited to less than 4 h, and smoking, drinking, and taking stimulants were prohibited.

3.5.1. Drowsiness Scenario

  • Normal manual driving (30–60 min). Prior to the experiment, the participants limited their sleep to less than 4 h the night before the evaluation. Within 30 min of starting the simulator session, the participants drove manually along a monotonous road at 80 km/h until drowsiness set in; drowsiness was judged by the participants themselves. If a participant did not feel drowsy even after driving for more than 60 min, the experimenter stopped the experiment.
  • Drowsy autonomous driving (20 min). Upon recognizing that they felt drowsy, the participants passed driving control to the vehicle by pressing the “autonomous driving” button; thereafter, they performed no driving-related tasks or NDRTs but remained comfortable in the autonomous driving state.
  • Takeover request (5 s, three repetitions). System failure occurred on the road and the autonomous driving system provided a visual and auditory takeover alarm. When the participants recognized the alarm (the alarm repeated three times with a ringing duration >5 s), they held the steering wheel with both hands and looked forward.
  • Manual driving (5 min). The participants, upon transferring control of the vehicle back to themselves, manually drove the vehicle for approximately 5 min before the scenario ended.
  • Usability evaluation questionnaire (10 min). After completing the scenario, a questionnaire related to different aspects such as the effectiveness and recognition of the drowsiness alarm was provided to the participants.

3.5.2. Distracted Scenario

  • Normal manual driving (10 min). Using the simulator, the participants drove at 80 km/h in a normal state without performing any other tasks.
  • Distracted autonomous driving section (20 min). The participants pressed the “autonomous driving” button and transferred driving control to the vehicle. During the autonomous driving, the participants listened to a song and entered the lyrics of the song on a smartphone (a visual/auditory/manual task). In this situation, the participants were not looking in the forward direction, their hearing was focused on the music, and their hands were away from the steering wheel, placing them in a visual/auditory/manual distraction state.
  • Takeover request (5 s, three repetitions). When system failure occurred on the road, the autonomous driving system provided a visual and auditory takeover alarm to the participants. Furthermore, when the participants recognized the alarm (the alarm repeated thrice with a ringing duration >5 s), they held the steering wheel with both hands and looked forward.
  • Manual driving (5 min). After retaking control of the vehicle, the participant manually drove the vehicle for roughly 5 min before the scenario ended.
  • Usability evaluation questionnaire (10 min). After completing the scenario, a questionnaire was provided to the participants to evaluate various aspects such as the effectiveness and recognition of the distraction alarm.

3.6. Participants

The experiments were carried out with 38 participants (29 males and 9 females). The average age was 36.4 y (standard deviation (SD): 13.6 y), and the average driving experience was 9 y. All the participants possessed driver’s licenses, owned a vehicle, and had more than 3 y of driving experience; subjects with sufficient driving experience, and hence no trouble with vehicle operation, were deliberately recruited. The gender ratio was not considered since, in this study, we did not analyze the differences between males and females. The experiment was approved by the Institutional Review Board (IRB; 7001988-201907-HR-652-02), and all of the IRB regulations were followed.

4. Results and Discussion

First, descriptive statistical analysis was performed on the dependent variables, and normality was tested using the Kolmogorov–Smirnov test. Second, if the normality condition was satisfied, a parametric test (an independent two-sample t-test) was applied; otherwise, a non-parametric test (the Mann–Whitney U test) was applied to analyze the statistically significant differences. The means and SDs of the dependent variables are summarized in Table 4, and a boxplot is presented to visualize the differences among the dependent variables (Figure 4).
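This decision procedure can be sketched with SciPy as follows. This is an illustrative reimplementation, not the authors' analysis script; in particular, z-scoring the data before the Kolmogorov–Smirnov test is a simplifying assumption.

```python
from scipy import stats

def compare_states(drowsy, distracted, alpha=0.05):
    """KS normality check on both groups, then t-test or Mann-Whitney U test."""
    # Simplified normality check: standardize, then test against N(0, 1).
    normal = all(
        stats.kstest(stats.zscore(x), "norm").pvalue > alpha
        for x in (drowsy, distracted)
    )
    if normal:
        result = stats.ttest_ind(drowsy, distracted)      # parametric
    else:
        result = stats.mannwhitneyu(drowsy, distracted)   # non-parametric
    return ("t-test" if normal else "Mann-Whitney"), result.pvalue
```

For example, called on the reaction-time samples of the two groups, the function would return the test used and the p-value that is then compared against α = 0.05.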

4.1. Results of Visual and Auditory Recognition

Based on the five-point Likert scale measurements, the questionnaire was designed with the following options: (1) not recognized at all; (2) not recognized; (3) intermediate; (4) recognized; and (5) fully recognized.
The participants in the drowsiness (mean = 3.60, SD = 1.02) and distraction (mean = 3.92, SD = 1.19) states provided a score between intermediate (3) and recognized (4) as the responses to the visual recognition option, indicating a positive assessment. Although the score of the distraction-state participants was slightly higher, no significant differences were observed.
The participants in the drowsiness (mean = 4.44, SD = 0.82) and distraction (mean = 4.58, SD = 0.64) states assigned a score between recognized (4) and fully recognized (5) to the auditory recognition options, indicating a positive assessment. Similar to the visual recognition scores, no significant differences were found between the scores of participants in the two states.
Compared with the visual recognition, the auditory recognition received a more positive evaluation in both the drowsy and distracted states. This suggests that the auditory information was recognized more quickly than the visual information for the TOR, irrespective of the driver’s status.

4.2. Results of Reaction Time Detection

Based on the Kolmogorov–Smirnov test, the reaction-time data were not normally distributed (p = 0.00, α = 0.05). Since the normality requirement was not satisfied, a non-parametric test (the Mann–Whitney U test) was conducted. The results showed a statistically significant difference (p = 0.03, α = 0.05); that is, the reaction time in the drowsy state (mean = 4.15, SD = 1.66) was faster than that in the distracted state (mean = 5.63, SD = 2.84).
At the assessment stage, the participants’ opinions about the reaction time were collected using the questionnaire, and the difference between the drowsy and distracted states was derived. First, in the drowsy state, the participants answered that they could not identify the traffic situation when the alarm sounded. As a result, they were startled, experienced a sense of danger, and prepared themselves to drive. By contrast, in the distracted state, they could prepare for driving at leisure because they were able to identify the traffic situation sufficiently quickly when the alarm rang. This can be attributed to the difference in a driver’s safety margin when relying on visual perception [25]. In the drowsy state, where the driving situation was not visually recognized, the participants experienced a sense of danger and, accordingly, exhibited a small psychological safety margin. Conversely, in the distracted state, they believed they had sufficiently recognized the driving situation visually and, hence, exhibited a relatively larger safety margin.

4.3. Results of Blink Counting

The normality test results derived by the Kolmogorov–Smirnov test showed normality (p = 0.34, α = 0.05). Therefore, the parametric test (an independent two-sample t-test) was applied, and no statistically significant difference was obtained (p = 0.20, α = 0.05), although the blink count in the drowsy state (mean = 0.25, SD = 0.24) was lower than that in the distracted state (mean = 0.35, SD = 0.29).
The blink count is related to cognitive load (i.e., as the cognitive load increases, the blink count increases as well) [26]. In these experiments, the drivers recognized the alarm and judged the surrounding circumstances, which increased their cognitive load. Compared to the drivers in the drowsy state, those in the distracted state experienced a larger increase in cognitive load after the alarm was signaled.

4.4. Results of Gaze Distance Measurements

The normality test results obtained from the Kolmogorov–Smirnov test indicated normality (p = 0.24, α = 0.05). Therefore, the parametric test (an independent two-sample t-test) was performed, and the results showed a statistically significant difference (p = 0.03, α = 0.05); that is, the gaze distance in the drowsy state (mean = 9.84, SD = 3.13) was smaller than that in the distracted state (mean = 13.22, SD = 7.44).
Visual distraction increases a driver’s gaze movement [27]. In this study, the drivers in the distracted state were using a cellular phone (NDRT) when the alarm sounded. To judge the driving situation, the driver’s gaze moved back and forth between the cellular phone and the front of the vehicle. In contrast, the drivers in the drowsy state, whose eyes were closed, moved their gaze to the front of the vehicle when the alarm sounded. This suggested that due to the visual distraction, drivers in the distracted state exhibited a relatively larger gaze distance.

4.5. Results of Pupil-Diameter-Change Measurements

The normality results of the Kolmogorov–Smirnov test indicated normality (p = 0.91, α = 0.05); as a result, the parametric test (the independent two-sample t-test) was conducted, showing a statistically significant difference (p = 0.02, α = 0.05); that is, the pupil diameter change in the drowsy state (mean = 0.0075, SD = 0.0027) was smaller than that in the distracted state (mean = 0.0097, SD = 0.0038).
The pupil diameter is considered a representative indicator of attentional concentration. If a person is distracted by non-driving-related (secondary) tasks while driving, the pupil diameter increases [28]. This explains the larger changes in the pupil diameter of the drivers in the distracted state.

4.6. EEG-Based Drowsiness Verification Results

To verify the state of the participants in the drowsiness scenario, EEG measurements were performed using the 21 channels (Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, Oz, and O2) of the internal electrode in the EEG system. The ratio of the alpha (α) band power to the combined alpha and beta (β) band power (i.e., α/(α + β)) was used as the analysis indicator since these waves are highly correlated with the drowsy state: as drowsiness increases, the activation degree of the α wave increases, whereas that of the β wave decreases [29,30]. Therefore, in these experiments, an α/(α + β) ratio higher than that obtained in the normal state was assumed to reflect a drowsy state. A t-test comparing the normal and drowsy driving sections showed statistically significant (p < 0.05) drowsiness-related EEG data for six participants in the drowsiness experiments (Figure 5).
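A minimal sketch of how this indicator could be computed from a single EEG channel follows, assuming the 250 Hz sampling rate noted in Section 3.4, Welch band-power estimation, and the conventional 8–13 Hz alpha and 13–30 Hz beta bands; the authors' exact preprocessing is not specified, so this is illustrative only.

```python
import numpy as np
from scipy.signal import welch

def drowsiness_ratio(eeg, fs=250):
    """Compute the alpha / (alpha + beta) band-power ratio for one EEG channel."""
    # Power spectral density via Welch's method with 2 s segments.
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

    alpha = band_power(8, 13)   # alpha band, 8-13 Hz (common convention)
    beta = band_power(13, 30)   # beta band, 13-30 Hz (common convention)
    return alpha / (alpha + beta)
```

A ratio computed this way in the drowsy section would then be compared against the participant's normal-driving baseline, as described above.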
An EEG signal is affected by various factors, including stress, emotions, thoughts, and body movements. Furthermore, in the EEG equipment used in these experiments, the signal-measuring part was connected to the head using cables. Because the bodies (including the heads) of the participants moved continuously during the experiment, EEG signal losses occurred. Therefore, only the EEG data of 6 of the 38 participants were considered valid.

5. Proposed Advanced Alarm Method

This study was performed to develop an advanced alarm method that supplements those reported previously and used currently. The proposed method incorporates visual and auditory recognition, reaction time, and three gaze indicators deduced from the experimental results.
Visual and auditory recognition in the drowsy and distracted states were not significantly different. However, auditory recognition received a higher score in both states. This suggests that the auditory alarm was the more important factor for drivers engaged in NDRTs in an autonomous driving situation; that is, a visual alarm alone may not be sufficient for a driver to shift attention and successfully recognize the situation. To address this shortcoming, the addition of other visual alarm locations, apart from those used in this study (i.e., the HUD and CID alarms), should be considered. For example, if ambient light is used inside the vehicle (e.g., on the doors, steering wheel, console box, and cockpit), a higher recognition can be achieved.
A significant difference in the reaction times was observed between the drowsy state, wherein a relatively faster reaction was observed, and the distracted state. According to the survey results, in the drowsy state, most drivers were unable to focus on the road and were therefore startled and experienced a sense of danger when the alarm triggered. As a result, their reaction time was faster. Although a faster reaction time is preferred, the risk of mishandling by a startled driver should be considered as well. In the distracted state, the participants had no significant difficulty recognizing the traffic situation when the alarm triggered and therefore reacted to the alarm in a more leisurely manner. However, a small number of drivers took control of the vehicle too late because of overconfidence in their recognition and handling of the driving conditions. When the driver is in a drowsy state, intense alarms should be avoided to prevent the driver from being startled. Conversely, when the driver is in a distracted state, an alarm with an intense output should be used to alert the driver and prevent excessive psychological relaxation. Optimization can be achieved in such a way that, at an initial stage, a low-intensity sound is used to avoid startling the driver; subsequently, the alarm’s sound intensity is increased to urge the driver to retake control of the vehicle.
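The staged-intensity idea could be realized as in the sketch below, where the sound level ramps from a non-startling level toward the 115 dB emergency ceiling noted in Section 2.2. The starting level and ramp duration are assumptions for illustration, not values from the experiments.

```python
def alarm_volume_db(t_s, start_db=70.0, end_db=115.0, ramp_s=5.0):
    """Volume at t_s seconds after alarm onset: linear ramp, then hold.

    start_db: initial, non-startling level (assumed).
    end_db: emergency ceiling from Section 2.2 (115 dB).
    ramp_s: ramp duration (assumed; matches the >5 s alarm window).
    """
    if t_s >= ramp_s:
        return end_db
    return start_db + (end_db - start_db) * (t_s / ramp_s)
```

A ramp like this would start softly for a drowsy driver while still reaching an intensity that urges a distracted driver to retake control.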
The number of eyeblinks, changes in the gaze distance, and variations in the pupil diameter were commonly larger among the participants in the distracted state. All these three indicators were related to the cognitive load of the participants. Since the drivers engaged in NDRTs perceived the alarm while engrossed in these tasks, their cognitive load increased instantly. As a result, the alarm in the distracted state should be more intense and simpler compared to that in the drowsy state.
Based on these results, complementary design elements were derived for some of the existing alarm methods (Table 5). In the future, we intend to design and evaluate an alarm that incorporates a corresponding complementary design element.

6. Conclusions

In this study, we proposed and developed an advanced alarm method based on the drowsy and distracted states of a driver in an autonomous vehicle. Based on a scenario in which a driver experiences a TOR (alarm) in a control-takeover situation during autonomous driving, human-in-the-loop experiments were conducted in a driving simulation environment. The experiment was performed twice with each of the 38 participants, once in the drowsy state and once in the distracted state. Through the experiments, the driver’s visual and auditory alarm recognition, reaction time, eyeblink, gaze distance change, and pupil diameter change data were collected and analyzed. The drowsy state was verified through EEG measurements.
No statistical differences were found in the visual and auditory alarm recognition; however, the auditory alarm was recognized faster than the visual alarm. This indicates the importance of auditory notifications and the need for supplementary visual notifications. The reaction time showed a statistically significant difference, confirming that the reaction time in the drowsy state was faster. Although no statistically significant difference was found in the blink count, the blink count in the distracted state was higher. The gaze distance and pupil diameter showed statistically significant differences, with higher values in the distracted state. This implies that a higher cognitive load occurs in the distracted state than in the drowsy state.
Through this study, we identified that the visual and auditory vehicle alarm-sounding methods proposed in the previously reported studies produced different results, based on the driver’s state. This indicated that the vehicle alarm should be designed by considering the driver’s state. Therefore, we proposed an advanced alarm method by analyzing the differences in the reaction indicators between the different states of the driver.
Our future aim is to investigate various alarm-raising situations in an autonomous vehicle apart from the TOR, as well as to explore different visual and tactile alarm-raising methods. Furthermore, we intend to increase the number of participants (across various ages and genders) to perform an in-depth investigation of vehicle–driver interactions.

Author Contributions

Conceptualization, J.-H.H. and D.-Y.J.; methodology, J.-H.H.; formal analysis, J.-H.H.; resources, J.-H.H.; data curation, J.-H.H.; writing—review and editing, D.-Y.J.; visualization, J.-H.H.; writing—original draft preparation, J.-H.H.; validation, D.-Y.J.; supervision, D.-Y.J.; project administration, D.-Y.J.; funding acquisition, D.-Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Technology Innovation Program (10099996, Development of HVI technology for autonomous vehicle–driver status monitoring and situation detection) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Yonsei University (7001988-201907-HR-652-02).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, J.D. Driving attention: Cognitive engineering in designing attractions and distractions. Front. Eng. 2008, 34, 32–38.
  2. Regan, M.A.; Strayer, D.L. Towards an understanding of driver inattention: Taxonomy and theory. Ann. Adv. Automot. Med. 2014, 58, 5–14.
  3. Hwang, Y.S.; Kim, K.H.; Yoon, D.S.; Sohn, J.C. Human-vehicle interaction: Technology trends in drivers’ driving workload management. Electron. Telecommun. Trends 2014, 29, 1–8.
  4. National Highway Traffic Safety Administration. Traffic Safety Facts Research Note: Distracted Driving in Fatal Crashes 2017; Rep. DOT HS 812 700; National Highway Traffic Safety Administration: Washington, DC, USA, 2019. Available online: https://www.nhtsa.gov/press-releases/us-dot-announces-2017-roadway-fatalities-down (accessed on 9 November 2021).
  5. ERTRAC Task Force. Automated Driving Roadmap; European Road Transport Research Advisory Council: Brussels, Belgium, 2015.
  6. SAE On-Road Automated Driving Committee. SAE J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; Technical Report; SAE International: Warrendale, PA, USA, 2016.
  7. Bahram, M.; Aeberhard, M.; Wollherr, D. Please take over! An analysis and strategy for a driver take over request during autonomous driving. In IEEE Intelligent Vehicles Symposium (IV); IEEE: Seoul, Korea, 2015; pp. 913–919.
  8. Morales-Alvarez, W.; Sipele, O.; Léberon, R.; Tadjine, H.H.; Olaverri-Monreal, C. Automated driving: A literature review of the take over request in conditional automation. Electronics 2020, 9, 2087.
  9. Yun, H.; Lee, J.W.; Yang, H.D.; Yang, J.H. Experimental design for multimodal take-over request for automated driving. In Communications in Computer and Information Science, International Conference on Human-Computer Interaction; Springer: Cham, Switzerland, 2018; pp. 418–425.
  10. Naujoks, F.; Forster, Y.; Wiedemann, K.; Neukum, A. Improving usefulness of automated driving by lowering primary task interference through HMI design. J. Adv. Transport. 2017, 2017, 6105087.
  11. Zhang, J.; Suto, K.; Fujiwara, A. Effects of in-vehicle warning information on drivers’ decelerating and accelerating behaviors near an arch-shaped intersection. Accid. Anal. Prev. 2009, 41, 948–958.
  12. Campbell, J.L.; Carney, C.; Kantowitz, B.H. Human Factors Design Guidelines for Advanced Traveler Information Systems (ATIS) and Commercial Vehicle Operations (CVO); Rep. FHWA-RD-98-057; Federal Highway Administration: Washington, DC, USA, 1998.
  13. Campbell, J.L.; Brown, J.L.; Graving, J.S.; Richard, C.M.; Lichty, M.G.; Bacon, L.P.; Morgan, J.F.; Li, H.; Williams, D.N.; Sanquist, T. Human Factors Design Guidance for Level 2 and Level 3 Automated Driving Concepts; Rep. DOT HS 812 555; National Highway Traffic Safety Administration: Washington, DC, USA, 2018.
  14. Telpaz, A.; Rhindress, B.; Zelman, I.; Tsimhoni, O. Haptic seat for automated driving: Preparing the driver to take control effectively. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Nottingham, UK, 1–3 September 2015; pp. 23–30.
  15. Borojeni, S.S.; Chuang, L.; Heuten, W.; Boll, S. Assisting drivers with ambient take-over requests in highly automated driving. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 237–244.
  16. Royal, D. National Survey of Distracted and Drowsy Driving Attitudes and Behavior: 2002; Volume 1: Findings; Rep. DOT-HS-809-566; Office of Research and Traffic Records, National Highway Traffic Safety Administration: Washington, DC, USA, 2003.
  17. Kass, S.J.; Cole, K.S.; Stanny, C.J. Effects of distraction and experience on situation awareness and simulated driving. Transport. Res. Part F Traffic Psychol. Behav. 2007, 10, 321–329.
  18. Campbell, J.L.; Brown, J.L.; Graving, J.S.; Richard, C.M.; Lichty, M.G.; Sanquist, T.; Morgan, J. Human Factors Design Guidance for Driver-Vehicle Interfaces; Rep. DOT HS 812 360; National Highway Traffic Safety Administration: Washington, DC, USA, 2016.
  19. Campbell, J.L.; Richard, C.M.; Brown, J.L.; McCallum, M. Crash Warning System Interfaces: Human Factors Insights and Lessons Learned, Final Report; Rep. DOT HS 810 697; National Highway Traffic Safety Administration: Washington, DC, USA, 2007.
  20. Prinzel, L.J.; Risser, M. Head-Up Displays and Attention Capture; NASA/TM-2004-213000; 2004. Available online: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20040065771.pdf (accessed on 9 November 2021).
  21. ISO/TR 7239:1984. Development and Principles for Application of Public Information Symbols; International Organization for Standardization: Geneva, Switzerland, 1984.
  22. Gold, C.; Damböck, D.; Lorenz, L.; Bengler, K. ‘Take over!’ How long does it take to get the driver back into the loop? Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2013, 57, 1938–1942.
  23. Kim, H.J.; Yang, J.H. Takeover requests in simulated partially autonomous vehicles considering human factors. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 735–740.
  24. Gonçalves, J.; Olaverri-Monreal, C.; Bengler, K. Driver capability monitoring in highly automated driving: From state to capability monitoring. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015; pp. 2329–2334.
  25. Hills, B.L. Vision, visibility, and perception in driving. Perception 1980, 9, 183–216.
  26. Fogarty, C.; Stern, J.A. Eye movements and blinks: Their relationship to higher cognitive processes. Int. J. Psychophysiol. 1989, 8, 35–42.
  27. Victor, T.W.; Harbluk, J.L.; Engström, J.A. Sensitivity of eye-movement measures to in-vehicle task difficulty. Transport. Res. Part F Traffic Psychol. Behav. 2005, 8, 167–190.
  28. Sodhi, M.; Reimer, B.; Llamazares, I. Glance analysis of driver eye movements to evaluate distraction. Behav. Res. Meth. Instrum. Comput. 2002, 34, 529–538.
  29. Nguyen, T.; Ahn, S.; Jang, H.; Jun, S.C.; Kim, J.G. Utilization of a combined EEG/NIRS system to predict driver drowsiness. Sci. Rep. 2017, 7, 43933.
  30. Yin, Y.; Zhu, Y.; Xiong, S.; Zhang, J. Drowsiness detection from EEG spectrum analysis. In Informatics in Control, Automation and Robotics; Springer: Berlin/Heidelberg, Germany, 2011; Volume 133, pp. 753–759.
Figure 1. Example of the visual alarm design (HUD and CID).
Figure 2. Alarm system and experimental equipment (driving simulator, Biobrain BIOS-S24, and Tobii Pro Glasses 2).
Figure 3. Driving scenarios involving the driver being drowsy and distracted.
Figure 4. Boxplot of (a) visual and auditory recognition, (b) reaction time, (c) blink count, (d) gaze distance, and (e) changes in pupil diameter.
Figure 5. α/(α + β) power spectrum of 6 participants.
Table 1. Alarm method for drowsy drivers.

| Visual     | Icon                                    | Text                     | Auditory  | Voice                   | Beep      |
|------------|-----------------------------------------|--------------------------|-----------|-------------------------|-----------|
| Size (HUD) | >1 inch                                 | >0.5 inches              | Volume    | 90 dB                   | 115 dB    |
| Size (CID) | >2 inches                               | >1 inch                  | Rate      | 250 words/min           | 4 times/s |
| Flash      | 4 times/s                               | –                        | Frequency | –                       | 1500 Hz   |
| Location   | Within 5° of the driver’s field of view |                          | Method    | 1. Beep → 2. Voice      |           |
| Color      | Red                                     | –                        | Type      | Female voice            | –         |
| Method     | –                                       | Same phrase as the voice | Phrase    | Same phrase as the text | –         |
Table 2. Alarm method for distracted drivers.

| Visual     | Icon                                     | Text                     | Auditory  | Voice                   | Beep      |
|------------|------------------------------------------|--------------------------|-----------|-------------------------|-----------|
| Size (HUD) | >0.5 inch                                | >0.3 inches              | Volume    | 90 dB                   |           |
| Size (CID) | >1 inch                                  | >0.6 inch                | Rate      | 200 words/min           | 3 times/s |
| Flash      | 3 times/s                                | –                        | Frequency | –                       | 1000 Hz   |
| Location   | Within 15° of the driver’s field of view |                          | Method    | 1. Beep → 2. Voice      |           |
| Color      | Orange                                   | –                        | Type      | Female voice            | –         |
| Method     | –                                        | Same phrase as the voice | Phrase    | Same phrase as the text | –         |
Table 3. Summary of the dependent variables.

| Classification | Variable              | Description                                                                                              |
|----------------|-----------------------|----------------------------------------------------------------------------------------------------------|
| Qualitative    | Visual recognition    | The level of visually recognizing the TOR, evaluated on a five-point Likert scale: 1 = bad to 5 = good    |
|                | Auditory recognition  | The level of auditorily recognizing the TOR, evaluated on a five-point Likert scale: 1 = bad to 5 = good  |
| Quantitative   | Reaction time (s)     | Time from the point of TOR generation to the completion of control takeover by the driver                 |
|                | Blink count (count/s) | Number of blinks / time (s)                                                                               |
|                | Gaze distance (mm/s)  | Distance between gaze points (mm) / time (s)                                                              |
|                | Pupil diameter (mm/s) | Variation of pupil diameter (mm) / time (s)                                                               |
Table 4. Means and SDs of the dependent variables.

| Dependent Variable           | Drowsy State, Mean (SD) | Distracted State, Mean (SD) | p-Value (α = 0.05) |
|------------------------------|-------------------------|-----------------------------|--------------------|
| Visual recognition (score)   | 3.60 (1.02)             | 3.92 (1.19)                 | –                  |
| Auditory recognition (score) | 4.44 (0.82)             | 4.58 (0.64)                 | –                  |
| Reaction time (s)            | 4.15 (1.66)             | 5.63 (2.84)                 | 0.03               |
| Blink count (count/s)        | 0.25 (0.24)             | 0.35 (0.29)                 | 0.20               |
| Gaze distance (mm/s)         | 9.84 (3.13)             | 13.22 (7.44)                | 0.03               |
| Pupil diameter (mm/s)        | 0.0075 (0.0027)         | 0.0097 (0.0038)             | 0.02               |
Table 5. Complementary design elements for an advanced alarm.

| Design Element       | Method                                                                                                                                                        |
|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Location             | Visual alarms are also needed at locations other than the HUD and CID; consider using ambient light inside the vehicle (e.g., doors, steering wheel, console box, cockpit). |
| Flashing method      | A gradual increase in flashing speed and intensity should be considered.                                                                                       |
| Icon and text        | To limit the driver’s cognitive load, only one icon and one sentence should be used.                                                                           |
| Voice and beep sound | A gradual increase in voice and beep sound intensity should be considered.                                                                                     |