1.1 Driver-vehicle interaction
In automated vehicles, driver-vehicle interaction (DVI) is not limited to interface design but also governs information processing and transitions in dynamic, complex situations. The H-metaphor [20] is one proposed interpretation of DVI. Inspired by horse riding, it likens the driver to the rider and the automated vehicle to the horse. In this metaphor, the automated vehicle is assumed to interact appropriately with the environment, be predictable, exhibit situationally appropriate behavior, offer a multimodal interface, and assist the human. Although the H-metaphor is a useful simplification of DVI, it is limited to SAE Level 2 [45] and is difficult to generalize to all driving scenarios.
Marberger et al. [36] propose a holistic model of the transition process from automated to manual driving in SAE Level 3 [45] and divide it into several phases: an automated mode with an AD-compatible driver state, a takeover mode in which the driver state transitions, and a post-transition mode in which the driver intervenes and stabilizes control of the vehicle. The driver state transition denotes the reorientation of the driver from a non-driving related task (NDRT), or any other non-attentive state, to a wakeful, attentive state. Driver intervention [9] refers to the deactivation of the automated mode by the driver, which can be issued in different ways depending on the system design. The control stabilization interval is an additional time window the driver needs to regain driving precision and to raise control performance back to his or her average driving performance.
A general approach to DVI should cover all levels of automation and interaction and address situational and automation-related failures. Four of the main failures observed in the human-machine relationship [24] are loss of expertise as a consequence of assistance systems, complacency or overreliance on automation, trust and confidence built on user experience, and loss of adaptability to the environment caused by the human-out-of-the-loop phenomenon. Hoc [24] introduces human-machine cooperation, in which each agent (driver or vehicle) has a goal, can interfere with the other agent, and manages this interference by cooperating in planning and action. Four requirements for efficient cooperation are [64]: mutual predictability of driver and automated system, directability of actions, a shared situation representation with mutual intention, and calibrated reliance on automation to avoid over- and under-trust.
1.2 Driver model in the interaction concept
The collaboration of human and technology requires careful product design based on the psychological and physiological principles of the user. Since the advent of driver assistance systems and automated vehicles, DVI has become a focus of the design process. One aim of DVI is to keep the driver in the loop when necessary and to transfer the driving task step by step from the automated system to the human driver [19]. Flemisch et al. [19] provide general guidance for the design of the human-machine interface (HMI) so that the user forms a suitable mental model of the automated system, and they emphasize the necessity of verifying the driver's activity level before a task transition request. The driver state assessment component monitors the driver directly through cameras and indirectly by recording driver performance, and it detects driver inattention due to distraction and drowsiness [52]. Even though these two elements are crucial variables, identifying the driver state requires more aspects to cover the complexity of the human being. Three existing HMIs, all of which aim to increase the driver's mode awareness, are described below.
The first HMI is Continental's automated assistance in roadworks and congestion (ARC), which uses a visual modality in the instrument cluster and center console to inform the driver about the level of automation, and haptic feedback on the accelerator pedal to indicate when the current velocity exceeds the maximum speed. The second HMI is Volvo Technology's automatic queue assistance (AQuA), which has three levels of automation: manual driving, a longitudinal assistance system, and automated driving. AQuA is limited to 30 km h⁻¹ and indicates the level of automation and the extent to which the driver is supported. The third HMI is the temporary autopilot (TAP) [47, 48] of Volkswagen. TAP has three modes similar to AQuA, but it is designed for higher speeds of up to 130 km h⁻¹.
For the transition of driving tasks in SAE Level 3 automation, the driver state can be divided into three categories: sensory state, motor state, and cognitive state, which are evaluated under a specific arousal level and motivational condition of the driver [36]. Even though a driver model is not explicitly specified in that study, the assessment of the current driver state and of the target driver state are named as essentials for modeling the transition process. Furthermore, the concept of driver availability [36] is proposed as a temporal quantity indicating at each time step whether the driver has a sufficient time budget for taking over control. Driver availability is influenced by three main factors that affect the driver state. First, the NDRT that the driver chooses to perform during automated driving has an impact on the sensory state [41]. Depending on the modality of the activity, the driver's visual perception performance may change: an auditory task concentrates the driver's gaze on the middle of the road [63], whereas a visual task redirects the gaze from the driving scene to the NDRT. Second, driver characteristics such as experience [31], cognitive capacity [28], and risk tolerance [42] personalize each driver's intervention performance. Third, the way the takeover request (TOR) [22] is presented also affects driver performance: sensory latency, perceived urgency [43], and the time required to regain situation awareness depend on the TOR design. The transition process starts with automated driving (AD), in which the driver is in an AD-compatible driver state [36].
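The notion of driver availability as a temporal quantity can be sketched in a few lines: at each time step, the available time budget is compared against an estimated takeover time that depends on the three factors above. All function names, thresholds, and adjustment terms below are illustrative assumptions, not values from the cited studies.

```python
# Illustrative sketch of a driver-availability check (all numbers assumed).
from dataclasses import dataclass

@dataclass
class DriverState:
    ndrt_visual: bool       # driver performs a visually demanding NDRT
    experience_years: float # driver characteristic: driving experience
    tor_urgency: float      # 0 (low) .. 1 (high), determined by the TOR design

def estimated_takeover_time(state: DriverState) -> float:
    """Rough estimate of the time (s) the driver needs to re-enter the loop."""
    t = 4.0                                       # assumed baseline reorientation time
    if state.ndrt_visual:
        t += 2.0                                  # gaze must return from NDRT to road
    t -= min(state.experience_years, 10.0) * 0.1  # experienced drivers react faster
    t -= state.tor_urgency * 1.0                  # a more urgent TOR shortens latency
    return max(t, 1.0)

def driver_available(time_budget_s: float, state: DriverState) -> bool:
    """Driver availability as a temporal quantity: is the budget sufficient?"""
    return time_budget_s >= estimated_takeover_time(state)

print(driver_available(7.0, DriverState(True, 5.0, 0.5)))  # prints True
print(driver_available(3.0, DriverState(True, 0.0, 0.0)))  # prints False
```

The point of the sketch is only that availability is a binary comparison re-evaluated at every time step, not a fixed property of the driver.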
In the project "Personalized, adaptive cooperative systems for highly automated cars" (PAKoS), the collaboration between the human driver and the automated system is organized through driver monitoring, activity estimation, design of the HMI, and transition control [18]. The driver state is defined as the body pose of the driver [37], which is observed by RGB and depth cameras during all driving modes, from manual to automated driving. The recognition of driver activity includes information about driver alertness [67], which plays a crucial role in road safety. Activity detection can also increase driver comfort by adapting control signals such as music or light. However, the mental state of the driver cannot be fully detected by behavioral measurements. Communication between the human and the vehicle also benefits from the detection of driver gestures. Furthermore, the prediction of the driver's next action can prevent hazardous situations caused by driver errors. Therefore, the gathered camera data is processed with distinct algorithms to classify driver activity [8, 51, 58, 62, 65], and the results from interior 3D models and convolutional neural network-based models are compared. To integrate driver characteristics into the interaction process, a user profile with subprofiles [18] is introduced as the key part of a mobile phone application. The architecture of the user profile comprises three levels:
Persona, which is the personal information of the driver;
user needs, which explain the driver’s preferences;
product applications, which represent the user requirements and the manufacturer-dependent application parameters. To include specific configurations that the driver defines separately for specific situations, such as family trips, subprofiles are added to the mobile phone application as well. The transition of the driving task from the automated system to the human driver proceeds in two phases. The first phase is the preparation of the driver [17, 49, 50], in which the driver is informed about the intention of the automated vehicle in the second phase of the transition. This is realized by a haptic seat, visual aids on the head-up display (HUD), and auditory announcements [
18]. The second phase [34] supports the driver in taking over control of the vehicle. In this phase, a game theory approach [59] is utilized to realize collaborative driving based on haptic shared control. The interaction is modeled as a differential game between the human driver, the automated system, and the vehicle.
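The torque-blending idea behind haptic shared control can be illustrated with a minimal sketch: both agents act on the same steering wheel, and an authority parameter shifts control toward the driver during the hand-over. The proportional steering law, the gains, and the authority ramp below are illustrative assumptions, not the controller used in PAKoS.

```python
# Illustrative sketch of haptic shared control on the steering axis.
def automation_torque(heading_error: float, k_auto: float = 2.0) -> float:
    """Automation's steering torque from an assumed proportional law."""
    return -k_auto * heading_error

def shared_torque(driver_torque: float, heading_error: float,
                  authority: float) -> float:
    """Blend driver and automation torques on the shared steering wheel.

    authority in [0, 1]: 0 = full automation, 1 = full driver control.
    During the second transition phase, authority is ramped toward 1.
    """
    assert 0.0 <= authority <= 1.0
    t_auto = automation_torque(heading_error)
    return authority * driver_torque + (1.0 - authority) * t_auto

# Ramping authority toward the driver over an assumed three-step hand-over:
for step, alpha in enumerate([0.2, 0.6, 1.0]):
    tau = shared_torque(driver_torque=1.0, heading_error=0.5, authority=alpha)
    print(step, round(tau, 2))
```

In a differential-game formulation, each agent would instead minimize its own cost functional and the torques would emerge from the game's equilibrium; the fixed blending weight here only conveys the shared-control principle.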
Manstetten et al. [35] restrict the driver state to two variables, distraction and sleepiness. Distraction is measured by eye tracking and facial features. Sleepiness is assessed by measuring the drivers' PERCLOS [66], the percentage of time the eyelids are closed. By monitoring these quantified driver state variables, a driver model is defined that detects driver inattention through filtering, feature extraction, and distinct classification methods. In addition, the classifier receives the criticality of the driving situation from an environment model. Furthermore, the data that the HMI presents to the driver is a further input of the driver model for classifying the driver state. The detected driver state is then fed into the Attention and Activity Assistance (AAA) system [32]. Depending on the input signals, the AAA makes decisions, sends messages to the other components, and interacts with the driver. The AAA is able to detect distraction, prevent monotony, recommend breaks or route adjustments, and detect and prevent sleepiness. The present contribution gives a comprehensive DVI model for automated driving by equipping the feedback control structure [
10] with a driver model. The next section explains the fundamental aspects. Sect. 3 examines the structure of the feedback control for the DVI concept in detail. The proposed driver model is described in Sect. 4. Sect. 5 illustrates an experiment performed in a driving simulator; subsequently, the results obtained from the experiment are mapped to the proposed driver model to discuss the conformity of the model. Finally, Sect. 6 explains the limitations of the experiment and names possible next steps.