Open Access 10.09.2022

A Brain-Controlled Vehicle System Based on Steady State Visual Evoked Potentials

Authors: Zhao Zhang, Shuning Han, Huaihai Yi, Feng Duan, Fei Kang, Zhe Sun, Jordi Solé-Casals, Cesar F. Caiafa

Published in: Cognitive Computation | Issue 1/2023

Abstract

In this paper, we propose a human-vehicle cooperative driving system. The objectives of this research are twofold: (1) providing a feasible brain-controlled vehicle (BCV) mode and (2) providing a human-vehicle cooperative control mode. For the first aim, through a brain-computer interface (BCI), we analyse the EEG signal and obtain the driving intentions of the driver. For the second aim, the human-vehicle cooperative control is realised by combining the BCV with obstacle detection assistance. Considering the potential dangers of driving a real motor vehicle outdoors, an obstacle detection module is essential in the human-vehicle cooperative driving system: obstacle detection and emergency braking ensure the safety of the driver and the vehicle during driving. An EEG paradigm based on steady-state visual evoked potentials (SSVEP) is used in the BCI. Simulation and real vehicle driving experiment platforms were designed to verify the feasibility of the proposed human-vehicle cooperative driving system. Five subjects participated in the simulation experiment and the real vehicle driving experiment. The outdoor experimental results show that the average accuracy of intention recognition is 90.68 ± 2.96% on the real vehicle platform. In this paper, we verified the feasibility of the SSVEP-based BCV mode and realised the human-vehicle cooperative driving system.

Introduction

A brain-computer interface (BCI) can convert different activity patterns of electroencephalogram (EEG) signals into control commands to manipulate devices and machines, such as vehicles [1, 2]. A brain-controlled vehicle (BCV) is a vehicle controlled by the driver's mind through a BCI rather than by the driver's limbs: the driver's intentions are obtained by analysing their EEG signals and converted into control commands to drive the vehicle. BCVs provide a new, supplementary driving mode that can free the driver's limbs and enhance the driving experience. For people with physical disabilities, BCVs have the potential to help them recover their driving ability, thus broadening their scope of activities and improving their living standards. BCVs also provide valuable experience for other brain-controlled machines and promote the study of BCIs and intelligent machines [3].
Safety is a significant issue for brain-controlled vehicles. Considering the currently limited performance of BCIs and the potential dangers of driving a vehicle outdoors, it is necessary to integrate intelligent driving technologies into BCVs. Intelligent driving technologies include obstacle detection, object tracking and vehicle control, which aim to sense the driving environment, obtain road information and assist driving operations [4–6]. One manifestation of human-vehicle cooperative driving is combining the driver's intention control with intelligent assistant driving technology [7]. A BCV combined with intelligent assistant driving technology can significantly improve safety and driving performance.
In recent years, progress has been made in BCI-assisted driving, brain-controlled wheelchairs, brain control in simulated environments and related areas. Nguyen and Chung developed a method for identifying the driver's intentions for emergency brake control (EBC) in a simulated vehicle [8]. The algorithm combines EEG band power, auto-regressive model features and an NN classifier, and reached an accuracy of 91.00% in the simulated driving experiment; however, it was only used in simulation experiments, not in real vehicle experiments. Bi et al. [9] proposed a method of emergency situation detection that fuses an EEG-based emergency intention detection model of the driver with surrounding information. In the experiment, a set of sensors was embedded into the system to analyse the conditions of the environment. The driver's emergency intention detection system was implemented and achieved an accuracy of 94.89%, but this research was limited to emergency braking only. Jafarifarmand and Badamchizadeh applied a motor imagery-based BCI system to control a radio-control car along a specified route without crossing the path borders [10]. ICA-ANC, AR-CSP and SRSG-FasArt were applied for ocular artefact removal, feature extraction and classification, respectively, and the average classification accuracy was 90.43%; however, this method was not tested on a real vehicle. Li et al. proposed a fused system for vehicle driving decisions to control a simulated vehicle, which obtained visual data and hybrid EEG signals simultaneously; the hybrid EEG signals consist of SSVEP and MI [11]. wCCA, CSP and kNN were applied for EOG artefact removal, feature extraction and classification, respectively, and the success rate of the online experiment was 91.1%. Lu and Bi proposed an EEG-based longitudinal control system for brain-controlled vehicles, which combined a user interface, a BCI system and a longitudinal control module [3]. The method was tested with a simulated vehicle in a virtual scene in the laboratory: CSP was employed to augment the EEG signal SNR, PSD features were extracted from the SSVEP patterns, and an SVM classifier was used for classification. However, the accuracy of the results was not very robust across individual subjects. Khan et al. proposed a brain-actuated intelligent wheelchair with a network of sonar and vision-based sensors, which can be controlled by either SSVEP brain signals or a joystick [12]. The SSVEP signals were segmented into 2-s windows with 50% overlap, and features were extracted using canonical correlation analysis. For offline control, an accuracy of 96% was achieved; for online control, the accuracy decreased. Zhuang et al. established a BCI system using motor imagery EEG signals to control a simulated vehicle with a shared control strategy [13]. The strategy transformed online EEG classification results into control commands while avoiding obstacles detected by a single-line LiDAR. The PSD features were identified using ensemble, SVM and CNN classifiers, with an average classification accuracy of 91.75%; the disadvantage of this study is that the algorithm is time-consuming, which causes delays in real-time systems. Fan et al. combined the SSVEP pattern and alpha EEG waves to control a vehicle simulator for destination selection [14]. The PSD features were extracted and categorised by a binary LDA algorithm, and the average accuracy of the system was about 99% with an average selection time of about 26 s. Gohring et al. used a BCI connected to an autonomous car equipped with a variety of sensors to control steering and braking [15]. ERD/ERS patterns were extracted from the EEG signal; however, the reliability of the algorithm was still insufficient. Other recent research on brain-controlled vehicles, with special reference to terrestrial BCVs, can be found in Hekmatmanesh et al. [16]. Table 1 summarises the most important ones and shows a comparison with the state of the art.
In this paper, we propose a human-vehicle cooperative driving system combining the BCV mode with laser obstacle detection, whose feasibility is verified by simulation and real vehicle driving experiments. Our long-term goal is to develop a BCV that integrates intelligent driving technologies; such technologies constitute more of a complementary service than an alternative to physical driving, and BCV systems integrating them can assist drivers in operating a vehicle more safely and conveniently. In this paper, we take a step towards this goal by implementing a BCV integrated with obstacle detection technology. Specifically, we use the EEG signals of the driver to control the vehicle in an outdoor experimental environment.
The rest of this paper is organised as follows. “Materials and Methods” section describes the experimental vehicle, the strategy of the human-vehicle cooperative driving system, the analysis methods of SSVEP-based BCI and how the experiments are designed. “Results” section presents the experimental results. “Discussion” section discusses the results of the experiments. Finally, “Conclusions” section concludes this work.

Materials and Methods

This section elaborates on the materials and methods adopted for the brain-controlled vehicle system. The "The Experimental Vehicle" section describes the experimental vehicle. We then introduce the architecture of the human-vehicle cooperative driving system combining the BCV mode with intelligent assistant driving technology in the "Strategy of the Human-Vehicle Cooperative Driving System" section. How we acquire and process the EEG signals is fully detailed in the "SSVEP-Based BCI" section. In the "Laser Ranging Obstacle Detection" and "Communication System" sections, we describe the obstacle detection system on the BCV and the communication system between the computer processing terminal and the BCV, respectively. Finally, how the experiments were conducted is described in the "Experiments of Brain-Controlled Vehicle" section.

The Experimental Vehicle

The experimental vehicle has the appearance of a normal car and is equipped with an electronic brake switch. The laser ranging sensor is located at the front of the vehicle to collect distance data for obstacles ahead. The computer processing terminal receives the EEG signals from the BCI and the laser ranging data, and generates the final vehicle control commands after data processing. The communication module of the experimental vehicle is modified so that the vehicle can receive and execute the vehicle control commands sent by the computer processing terminal. The schematic diagram of the experimental vehicle is shown in Fig. 1.

Strategy of the Human-Vehicle Cooperative Driving System

The system structure of the human-vehicle cooperative driving system combining the BCV mode with intelligent driving assistance technology is illustrated in Fig. 2. The system can be described in five parts: the SSVEP-based BCI, the obstacle detection system, the computer processing terminal, the communication system and the intelligent vehicle. (1) The SSVEP-based BCI consists of the SSVEP visual stimulus sources presented on a computer screen, the EEG signal acquisition unit and the processing unit. (2) The obstacle detection system includes the laser ranging sensor and the ranging data processing unit. (3) The computer processing terminal integrates the EEG signal processing unit, the ranging data processing unit and the command transmission determination unit. (4) The communication system consists of the serial port, the signal converter and the high-speed controller area network (CAN) bus. (5) The intelligent vehicle, equipped with an electronic brake switch, has a modified communication module.
The SSVEP-based BCI recognises the driver's intention by analysing EEG signals. The obstacle detection system sends a braking signal if an obstacle in front of the vehicle is detected closer than a threshold. The communication system establishes a communication channel between the computer processing terminal and the experimental vehicle. Both the BCI and the obstacle detection system act as vehicle control command generation terminals. Control commands generated by the BCI and the obstacle detection system are not sent to the experimental vehicle until they have been judged by the command transmission determination unit, which only forwards valid commands. Moving commands are invalid if the obstacle detection system detects an obstacle too close in front of the vehicle.
If a moving signal is sent to the experimental vehicle, the electronic brake switch loosens, and the vehicle moves straight at a constant speed of 1.38 m/s. The electronic brake switch clamps to stop the vehicle if the experimental vehicle receives a braking signal. The electronic brake switch returns its status, namely, loose or clamped, to the computer terminal in real time.
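To make the arbitration rule described above concrete, the following Python sketch shows one way the command transmission determination unit could behave. It is not the authors' code; the function and constant names are hypothetical, and only the stated behaviour (brake override within the obstacle threshold, suppression of repeated commands) is taken from the text.

```python
# Hypothetical sketch of the command transmission determination unit.
MOVE, BRAKE = "move", "brake"
OBSTACLE_THRESHOLD_M = 3.0  # effective range of the laser obstacle detection (3 m)

def determine_command(bci_command, obstacle_distance_m, current_state):
    """Return the command to transmit to the vehicle, or None if nothing is sent."""
    # Obstacle detection overrides the BCI: brake if something is closer than the threshold.
    if obstacle_distance_m is not None and obstacle_distance_m < OBSTACLE_THRESHOLD_M:
        return BRAKE if current_state != BRAKE else None
    # Repeated commands that match the current vehicle state are invalid and not transmitted.
    if bci_command == current_state:
        return None
    return bci_command

# Example: the driver asks to move, but an obstacle sits 2.1 m ahead -> brake is sent instead.
print(determine_command(MOVE, obstacle_distance_m=2.1, current_state=MOVE))  # 'brake'
```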
In the following sections, we detail the three main parts of the human-vehicle cooperative driving system: (1) the SSVEP-based BCI; (2) the laser ranging obstacle detection system and (3) the communication system.

SSVEP-Based BCI

A BCI provides a direct communication channel between the human brain and a computer system. EEG signals commonly used in BCIs include SSVEP, P300 potentials and motor imagery [3, 13, 17]. In this paper, we use SSVEP signals in the BCI. When a subject's eyes are focused on a visual stimulus source flickering continuously at a constant frequency, an SSVEP signal containing the same frequency, or a multiple of it, can be measured in the subject's EEG, with the highest amplitude over the occipital lobe (visual cortex) [18–20]. SSVEPs are easy to evoke, and the resulting signals are stable and well suited to real-time control. Moreover, subjects do not need training before an SSVEP experiment [21, 22].
Nakanishi et al. proposed an SSVEP detection method using task-related component analysis (TRCA), with an accuracy of 89.83% and trial lengths of 1.2–1.5 s [23]. Kumar and Reddy proposed a subject-specific target detection framework, sum of squared correlations (SSCOR), to improve the performance of SSVEP detection; SSCOR outperformed TRCA in detection accuracy and information transfer rate [24]. However, both methods require individual training data to be acquired before online operation. Waytowich et al. used a compact convolutional neural network (CNN) to decode signals from a 12-class SSVEP dataset without user-specific calibration, requiring only raw EEG signals for automatic feature extraction; the mean accuracy across subjects was approximately 80% with a 4-s trial length in an offline experiment [18]. Podmore et al. applied a deep convolutional neural network (DCNN), PodNet, and achieved 86% and 77% inter-subject classification accuracy for data capture periods of 6 s and 2 s, respectively [25]. These two studies reported lower accuracy and longer trial times and did not include online experiments. Ravi et al. proposed a CNN-based classification method to enhance the detection accuracy of SSVEP in the presence of competing stimuli; the accuracy was 75.3% offline and 71.3% in an online simulation with a stimulus time of 6 s, which is not sufficient for vehicle control [26]. None of the above studies involve outdoor experiments.
In our work, the SSVEP-based BCI consists of the SSVEP visual stimulus sources presented on a computer screen, an EEG signal acquisition unit and a processing unit. We use two flickering frequencies, 8 Hz and 10 Hz, as SSVEP visual stimulus sources and a non-invasive BCI to record the SSVEP EEG signals. According to the results of the offline test with different analysis time lengths, we chose 3 s as the analysis time length of the SSVEP signals. We use the canonical correlation analysis (CCA) method to classify SSVEPs and the overlap time windows voting (OTWV) method to improve the classification accuracy; this pipeline is training-free and is used to control a vehicle outdoors. The driver's intentions (moving or braking) are extracted by analysing the frequency features of the SSVEP signals, and the BCI sends a moving or braking command to the vehicle according to the classification result. Since repeated moving or braking commands are invalid for controlling the vehicle, the driver does not need to focus continuously on the stimulus while the vehicle stays in the desired state.
To better introduce the SSVEP-based BCI, we split it into several parts: SSVEP signals, EEG signal pre-processing, EEG signal acquisition unit, CCA method, offline test with different analysis time lengths and OTWV method.

SSVEP Signals

SSVEP signals are oscillatory potentials elicited in the EEG in response to periodic light stimulation. They arise in the visual cortex when visual stimulation is applied, and a typical SSVEP response contains peaks at frequencies directly related to the stimulation frequency. Stimuli at different flickering frequencies evoke SSVEPs of different amplitudes: in general, the strongest, moderately strong and weak SSVEPs are observed for stimuli in the low-frequency (1–12 Hz), medium-frequency (12–30 Hz) and high-frequency (30–60 Hz) ranges, respectively [27, 28]. In this paper, to obtain the strongest SSVEPs, the visual stimuli are in the low-frequency range.
Since the first and second harmonics of the stimulus frequencies are used for classification in the CCA method, the first harmonic of one stimulus should differ from the second harmonic of the other stimulus [29]. Therefore, the SSVEP visual stimulus sources consist of two rectangular blocks with constant flickering frequencies of 10 Hz and 8 Hz, respectively. Both flashing rectangular blocks measure 5 cm × 5 cm and are displayed on a laptop screen. The vehicle is controlled to move straight by the EEG signals evoked by the 10-Hz SSVEP visual stimulus source, and to brake by those evoked by the 8-Hz source. Figure 3 shows the FFT frequency spectra of SSVEPs collected from \(O_{z}\) from a single subject over 3 s in response to 8 Hz (a) and 10 Hz (b) stimulation.
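The spectral signature just described can be checked with a few lines of NumPy. The sketch below is purely illustrative (it uses a synthetic 10-Hz signal rather than recorded data): it computes the single-sided amplitude spectrum of a 3-s epoch sampled at 256 Hz and reads off the amplitude at the stimulus frequencies and their second harmonics.

```python
import numpy as np

FS = 256  # sampling rate used in this work (Hz)

def amplitude_spectrum(epoch):
    """Single-sided amplitude spectrum of a 1-D EEG epoch (e.g. 3 s from Oz)."""
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    return freqs, spectrum

# Synthetic 10-Hz SSVEP-like signal with a weaker second harmonic plus noise.
t = np.arange(3 * FS) / FS
epoch = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(len(t))
freqs, spec = amplitude_spectrum(epoch)
for f in (8, 10, 16, 20):  # stimulus frequencies and their second harmonics
    print(f"{f:>2d} Hz amplitude: {spec[np.argmin(np.abs(freqs - f))]:.3f}")
```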

EEG Signal Acquisition Unit

The EEG signal acquisition equipment used is non-invasive. Compared with implanting a chip into the brain to enable intention control, non-invasive equipment does not cause harm to the human body. The g.USBamp device from g.tec medical engineering GmbH (Austria) was used as the bio-signal amplifier, which allows 16-channel bio-signal acquisition. The sampling frequency of the EEG signals was 256 Hz per channel. In SSVEP-based BCIs, channels over the occipital and parietal (visual cortex) areas are typically selected to record the SSVEPs. Subjects were asked to wear a special cap with fixed electrodes, and the SSVEP signals were collected from \(O_{z}\), \(O_{1}\), \(O_{2}\), \(PO_{z}\), \(PO_{3}\) and \(PO_{4}\), according to the international 10/20 system, as shown in Fig. 4. The channels at the centre of the visual cortex have higher amplitudes and therefore provide better features. Because the outdoor BCV experiment requires a strong and reliable visual stimulation response, these six electrodes were selected to achieve high and stable classification accuracy [18, 26, 30].
The ground electrode of g.USBamp was \(F_{PZ}\), positioned on the forehead, while the reference electrode was placed on the right earlobe [31]. The amplifier was directly connected to a PC by a USB cable through which the amplified and digitalised EEG signals were sent to the PC for further processing.

EEG Signal Pre-processing

Pre-processing is an important step to remove noisy components from the collected EEG signals and prepare them for further analysis; the filters used depend on the aims of the study. A 50-Hz notch filter and a Butterworth band-pass filter were used to remove the power line interference and high-frequency noise, the band-pass filter extracting EEG signals with frequencies between 5 and 60 Hz. A comparison between the raw EEG signals and the signals filtered with the Butterworth band-pass filter is shown in Fig. 5; in this figure, the EEG signals were collected from \(O_{z}\), \(O_{1}\), \(O_{2}\), \(PO_{z}\), \(PO_{3}\) and \(PO_{4}\). The EEG data were processed in 3-s segments for noise removal.
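A minimal SciPy sketch of this pre-processing chain is given below. The paper specifies only the filter types and the 5–60 Hz pass band; the filter order, the notch quality factor and the use of zero-phase filtering are assumptions made for illustration.

```python
from scipy.signal import butter, filtfilt, iirnotch

FS = 256  # Hz, sampling rate per channel

def preprocess(eeg, fs=FS):
    """Notch out 50 Hz mains interference, then band-pass 5-60 Hz.

    `eeg` is a (channels x samples) array, e.g. a 3-s segment from
    Oz, O1, O2, POz, PO3 and PO4. Order and Q factor are assumptions.
    """
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)   # 50-Hz notch
    eeg = filtfilt(b_notch, a_notch, eeg, axis=-1)        # zero-phase filtering
    b_bp, a_bp = butter(N=4, Wn=[5.0, 60.0], btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, eeg, axis=-1)
```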

Canonical Correlation Analysis Method

The most prominent features of SSVEPs lie in the frequency domain, so SSVEPs can be classified according to their frequency components. The CCA method classifies SSVEPs by comparing the correlation between the collected SSVEP signals and reference signals at each stimulus frequency. CCA is a statistical method that has been widely used to analyse relationships between two sets of variables in various fields [27, 32, 33]. The objective of CCA is to quantify the degree of correlation between two data sets by finding linear transformations of them that maximise the correlation coefficient; two data sets are considered more related if their canonical correlation coefficient is higher.
The principle of the CCA method is described as follows. Given two data sets \(X\) and \(Y\), CCA attempts to find a pair of vectors \(W_{x}\) and \(W_{y}\) that maximise the correlation between linear combinations of \(x\) and \(y\), where \(x\) and \(y\) are calculated as:
$$x = X^{T} W_{x}$$
(1)
$$y = Y^{T} W_{y}$$
(2)
In addition, \(x\) and \(y\) are known as canonical variates, which are uncorrelated in each data set and have zero mean and unit variance. \(W_{x}\) and \(W_{y}\) are the canonical coefficient vectors. \(\rho\) is the correlation coefficient of \(x\) and \(y\), and can be calculated as follows:
$$\begin{aligned} \rho &= \max \left( corr(x,y) \right) \\ &= \max \left( \frac{Cov[x,y]}{\sqrt{Var[x]\,Var[y]}} \right) \\ &= \max \left( \frac{E[x^{T} y]}{\sqrt{E[x^{T} x]\,E[y^{T} y]}} \right) \\ &= \max \left( \frac{E[W_{x}^{T} X Y^{T} W_{y}]}{\sqrt{E[W_{x}^{T} X X^{T} W_{x}]\,E[W_{y}^{T} Y Y^{T} W_{y}]}} \right) \end{aligned}$$
(3)
In Eq. (3), \(Var\), \(Cov\) and \(E\) represent the variance, the covariance and the expectation, respectively. The cross-correlation matrix of \(X\) and \(Y\) is described as:
$$C_{X,Y} = E(XY^{T} )$$
(4)
The autocorrelation matrices of \(X\) and \(Y\) are \(C_{X}\) and \(C_{Y}\), respectively, which are computed as:
$$C_{X} = E(XX^{T} )$$
(5)
$$C_{Y} = E(YY^{T} )$$
(6)
The collected SSVEP signals and a stimulus frequency are represented by two data sets, \(X\) and \(Y\), respectively. \(X\) and \(Y\) are used to calculate the CCA correlation coefficients. The CCA method can detect harmonic frequencies [29]. In this paper, SSVEP signals containing the same and twice (first and second harmonic) frequencies of the corresponding visual stimulus are analysed. The visual stimulus signals \(Y\) with frequency \(f\) are defined as follows:
$$Y = \left\{ {\begin{array}{*{20}c} {\sin (2\pi ft)} \\ {\cos (2\pi ft)} \\ {\sin (4\pi ft)} \\ {\cos (4\pi ft)} \\ \end{array} } \right\}$$
(7)
The flickering frequency of the visual stimulus that is more relevant to the collected SSVEP signals is considered as the classification result.
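This classification rule can be sketched with scikit-learn's CCA implementation, which is one possible choice; the paper does not specify an implementation, so the code below is an illustration of the procedure rather than the authors' software. The reference signals follow Eq. (7), and the predicted class is the stimulus frequency with the larger canonical correlation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256                    # sampling rate (Hz)
STIM_FREQS = [8.0, 10.0]    # brake / move-straight stimulus frequencies

def reference_signals(f, n_samples, fs=FS):
    """Eq. (7): sin/cos at the stimulus frequency and its second harmonic."""
    t = np.arange(n_samples) / fs
    return np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                            np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])

def classify_ssvep(eeg):
    """Return the stimulus frequency whose references correlate most with the
    multi-channel EEG segment (eeg: samples x channels)."""
    scores = []
    for f in STIM_FREQS:
        Y = reference_signals(f, eeg.shape[0])
        x_scores, y_scores = CCA(n_components=1).fit_transform(eeg, Y)
        scores.append(abs(np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]))
    return STIM_FREQS[int(np.argmax(scores))]
```

With 8 Hz and 10 Hz stimuli, the second harmonics (16 Hz and 20 Hz) do not coincide with the other stimulus frequency, which is why Eq. (7) can safely include them.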

Offline Test with Different Analysis Time Lengths

To analyse the relationship between the time length of signals and the classification accuracy, the offline test with different analysis time lengths was carried out. At the same time, the accuracy of SSVEP classification was verified by the offline test. Five healthy subjects participated in the experiments (three males and two females). All subjects participated in the offline test on a voluntary basis and letters of consent were obtained from all participants.
In the offline test, each subject was asked to focus alternately on the two SSVEP visual stimulus sources of 8 Hz and 10 Hz. The SSVEP signals of each subject were collected to analyse the relationship between the time length of signals and the classification accuracy. SSVEP signals of four types of duration (1 s, 2 s, 3 s and 4 s) were analysed using the CCA method, respectively. The offline test was repeated 10 times, and the results including the correlation coefficient and the classification accuracy are presented in the “Offline Test Result of Different Analysis Time Lengths” section. Based on these results, 3 s was chosen as the analysis time for SSVEP signals in BCV experiments to obtain high classification accuracy and fast response time.
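A sketch of this offline evaluation is shown below. It reuses the classify_ssvep() function from the CCA sketch above; the trial and label containers are hypothetical placeholders for the recorded data.

```python
import numpy as np

WINDOW_LENGTHS_S = [1, 2, 3, 4]

def offline_accuracy(trials, labels, fs=256):
    """Classification accuracy of the CCA detector for each analysis length.

    `trials` is a list of (samples x channels) SSVEP recordings and `labels`
    holds the attended stimulus frequency (8.0 or 10.0 Hz) for each trial.
    """
    results = {}
    for length in WINDOW_LENGTHS_S:
        n = int(length * fs)
        predictions = [classify_ssvep(trial[:n]) for trial in trials]
        results[length] = np.mean([p == y for p, y in zip(predictions, labels)])
    return results
```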

Overlap Time Windows Voting Method

The OTWV method was used to improve the classification accuracy of the SSVEP signals and the stability of the command output. The principle of the OTWV method is shown in Fig. 6. SSVEP signals are analysed using a 3-s data window with a 1-s offset, and each 3-s data window yields one result (one vote). The classification result with at least two of the three votes is taken as the final classification result. The OTWV method can improve the classification accuracy of the SSVEP signals without increasing the time length, as shown below.
Let \(p\) be the classification accuracy for a 3-s SSVEP signal in a single time window, and \(p^{\prime}\) the classification accuracy obtained with the OTWV method. For simplicity, we assume that the classification results for the three time windows are independent. There are two possible cases: (1) the signals in all three 3-s time windows belong to the same class, and (2) the signals in one of the 3-s time windows belong to a different class from the other two. The first case holds in most trials because the class of the signal usually does not change over consecutive windows. In the first case, \(p^{\prime}\) can be calculated as:
$$p^{\prime} = C_{3}^{2} p^{2} (1 - p) + p^{3}$$
(8)
where \(C_{3}^{2}\) denotes the number of combinations of 2 elements taken from 3. The first term on the right-hand side is the probability that exactly one of the three time windows is misclassified, and the second term is the probability that all three time windows are classified correctly. Showing that the OTWV method improves the accuracy therefore amounts to solving the following inequality:
$$C_{3}^{2} p^{2} (1 - p) + p^{3} > p$$
(9)
The inequality can be simplified to:
$$p(2p - 1)(p - 1) < 0$$
(10)
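As a quick numerical check of the first case (taking \(p = 0.8\), roughly the 3-s single-window accuracy observed in the offline test reported later), the voting accuracy becomes
$$p^{\prime} = 3 \times 0.8^{2} \times (1 - 0.8) + 0.8^{3} = 0.384 + 0.512 = 0.896 > 0.8$$
so the majority vote raises the accuracy whenever \(0.5 < p < 1\).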
In the second case, \(p^{\prime}\) can be calculated as:
$$p^{\prime} = p^{3} + p^{2} (1 - p) + C_{2}^{1} p(1 - p)^{2}$$
(11)
Analogously to the first case, the condition \(p^{\prime} > p\) simplifies to:
$$p(2p - 1)(p - 1) > 0$$
(12)
In the second case, if the classification accuracy \(p\) is larger than 0.5 and smaller than 1, the OTWV method reduces the classification accuracy of the SSVEP signals. However, in real experiments the first case holds most of the time. In the "Analysis of Overlap Time Windows Voting Method" section, we compare the classification accuracy of the SSVEP signals with and without the OTWV method; the results show that the OTWV method does indeed improve the SSVEP classification accuracy.
Theoretically, with the OTWV method a classification result is generated every 1 s during continuous signal processing (ignoring data processing and transmission time), whereas without it at least 3 s are needed to generate a classification result. The OTWV method therefore increases the rate at which classification results are generated. At the same time, it avoids frequent changes of the output control command and improves the stability of vehicle control in online experiments.
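The voting rule itself is small enough to sketch in a few lines of Python. Names and the buffering scheme are illustrative assumptions; only the 3-s window, the 1-s offset and the two-of-three majority come from the description above.

```python
from collections import Counter, deque

def otwv_classify(window_results):
    """Majority vote over the three most recent 3-s window classifications
    (windows advance by 1 s, so consecutive windows overlap by 2 s)."""
    label, count = Counter(window_results[-3:]).most_common(1)[0]
    return label if count >= 2 else None  # no decision without a 2-of-3 majority

# Usage sketch: feed one CCA result per second as new 3-s windows become available.
recent = deque(maxlen=3)
for second, result in enumerate([10, 10, 8, 10, 10]):  # detected stimulus frequencies (Hz)
    recent.append(result)
    if len(recent) == 3:
        print(f"t={second + 1}s -> vote: {otwv_classify(list(recent))}")
```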

Laser Ranging Obstacle Detection

Considering the potential dangers of driving a real motor vehicle outdoors, an obstacle detection module is essential on the BCV. Obstacle detection techniques can detect obstacles appearing around the vehicle and alert the driver to possible collisions [34, 35]. Therefore, a laser ranging obstacle detection system is integrated into the vehicle to improve the safety of the BCV, realising the human-vehicle cooperative driving system. The obstacle detection system includes the laser ranging sensor and the ranging data processing unit. The laser ranging sensor is located at the front of the vehicle to collect distance data for obstacles ahead and transfers the measured distance information to the ranging data processing unit. If an obstacle is detected too close in front of the vehicle, the obstacle detection system sends a braking signal to stop the vehicle and avoid a collision, keeping the vehicle in a safe state.
Millimetre wave radars, visual sensors and LiDAR sensors are all important sensors in the field of intelligent driving. Millimetre wave radars adapt well to different weather conditions; however, traffic scenario elements such as roads, buildings, vegetation, vehicles and pedestrians introduce noise interference, which degrades or even invalidates the radar's detection and measurement accuracy [36]. Visual sensors are used to detect roads, lane markings, obstacles and objects; however, they are easily affected by light changes, and their detection accuracy drops sharply under complex shadows or bad weather [37]. Therefore, visual sensors are usually combined with laser scanners to obtain high-accuracy information. LiDAR sensors are widely used to detect objects and obstacles with good range resolution and high accuracy. Generally, LiDAR sensors can be divided into 2D and 3D types. A 3D LiDAR sensor obtains much richer information about the surroundings, but its data are large and complex, requiring longer processing time than a 2D LiDAR, and the sensor itself is more expensive [38]. In this paper, considering the low cost and simple data processing, the UTM-30LX produced by HOKUYO is used as the 2D laser ranging sensor of the vehicle. The UTM-30LX is a compact, lightweight 2D LiDAR sensor with a 270° field of view and a range of up to 30 m. With enhanced internal filtering and ingress protection rating, this LiDAR device is less susceptible to ambient outdoor light [39]. The LiDAR sensor is mounted horizontally on the bonnet of the car. The effective measurement range of the laser ranging obstacle detection system is set to 3 m over a 90° sector in front of the vehicle, as illustrated in Fig. 7.
The collision avoidance behaviour acts as a full stop. It will be activated if the distance between the vehicle and the obstacle is detected to be less than the effective measurement distance. Then, the obstacle detection system triggers a braking signal to stop the vehicle. The collision avoidance behaviour can ensure the safety of the driver and the vehicle during driving, thus improving the performance of the BCV.
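For illustration, the distance check could look like the sketch below. The 3-m threshold and the 90° forward sector are taken from the text; the per-point angle representation is an assumption, and reading the UTM-30LX scan itself (e.g. through a driver library) is outside the scope of this sketch.

```python
RANGE_THRESHOLD_M = 3.0    # effective measurement range set in this work
SECTOR_HALF_ANGLE_DEG = 45.0  # 90-degree sector centred straight ahead

def obstacle_too_close(ranges_m, angles_deg):
    """Return True if any scan point within +/-45 degrees of straight ahead
    lies closer than 3 m. `ranges_m` and `angles_deg` describe one 2-D scan."""
    return any(
        0.0 < r < RANGE_THRESHOLD_M and abs(a) <= SECTOR_HALF_ANGLE_DEG
        for r, a in zip(ranges_m, angles_deg)
    )

# Example: a point 2.4 m away at 10 degrees triggers the braking signal.
print(obstacle_too_close([5.0, 2.4, 8.1], [-60.0, 10.0, 30.0]))  # True
```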

Communication System

Control commands generated by the BCI and the obstacle detection system are judged by the command transmission determination unit, and the communication system then sends the control command to the electronic brake switch to perform the BCV control. The communication system supports the communication between the computer processing terminal and the experimental vehicle. Controlling a real vehicle with a BCI combined with obstacle detection technologies requires fast signal transmission, so high-speed CAN communication is selected to transmit the vehicle control signals [40]. The communication system consists of three parts: the serial port, the signal converter and the high-speed CAN bus. The serial port is the first part of the communication system; through it, the BCI sends control commands to the signal converter. The serial port baud rate is 115,200 bps with an 8-bit data format, no parity bit and one stop bit. The signal converter is a signal conversion interface between the serial port and the high-speed CAN bus. Through the signal converter and the high-speed CAN bus, control commands are sent to the controlled component of the experimental vehicle, namely the electronic brake switch. Meanwhile, the electronic brake switch returns the status information of the vehicle to the computer terminal in real time via the communication system.
Table 2 shows the definition of the control protocol in terms of SSVEP frequencies, vehicle control commands and hexadecimal commands. The control protocol is defined according to the vehicle internal protocol. These defined hexadecimal commands are only used to control vehicle movement and braking.
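For illustration, sending one of the frames in Table 2 over the serial link could look like the following pySerial sketch. The port name is a placeholder; the baud rate, 8-N-1 framing and hexadecimal frames come from the text and Table 2, while everything downstream of the serial-to-CAN converter is handled by the vehicle.

```python
import serial  # pySerial; the serial-to-CAN converter is assumed to sit on this port

# Hexadecimal frames from Table 2 (move straight / brake).
COMMANDS = {
    "move":  bytes.fromhex("A5 5A 04 B4 B8 AA"),
    "brake": bytes.fromhex("A5 5A 04 B3 B7 AA"),
}

def send_command(port_name, command):
    """Write one control frame over the serial port at 115,200 bps, 8-N-1."""
    with serial.Serial(port_name, baudrate=115200, bytesize=serial.EIGHTBITS,
                       parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                       timeout=1.0) as port:
        port.write(COMMANDS[command])

# send_command("COM3", "move")  # the port name is a hypothetical example
```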

Experiments of Brain-Controlled Vehicle

In this study, we conducted two kinds of experiments on five subjects. One is the simulated BCV experiment to verify the feasibility of the SSVEP-based BCI system; the other one is the real vehicle controlling experiment to verify the new controlling mode in the outdoor environment.

Subjects

Five healthy subjects aged between 21 and 27 participated in the experiment on a voluntary basis, and letters of consent were obtained from all of them. Some subjects had taken part in other earlier BCI experiments. However, none of them had experience in controlling a real vehicle via the BCI prior to the experiment. In addition, none of the subjects had a history of brain or neurological disease.

Experiment Design and Procedures

Two experiments were performed: (1) the simulated BCV experiment and (2) the real vehicle controlling experiment. In the simulated vehicle controlling experiment, we verified the feasibility of the SSVEP-based BCI system for controlling a simulated vehicle in a virtual driving environment. In the real vehicle controlling experiment, we implemented human-vehicle cooperative driving by combining the BCV system with obstacle detection and verified the new control mode outdoors. The simulation experiment and the real vehicle driving experiment were performed on different days. Before the experiments, we gave instructions to the subjects so that they could operate correctly during the experiments.

Simulated Brain-Controlled Vehicle Experiment

The experiment was carried out in a virtual driving platform with the simulated vehicle based on open graphics library (OpenGL), as illustrated in Fig. 8. The simulated environment was run on the Windows 7 operating system.
The architecture of the simulated BCV driving experiment included two main parts, SSVEP-based BCI and virtual driving platform with simulated vehicle, as shown in Fig. 9. The SSVEP-based BCI module consisted of the SSVEP visual stimuli presented on a computer screen, SSVEP signal acquisition unit and SSVEP signal processing unit. The BCI sent generated control commands to the simulated vehicle via socket communication. After receiving a control command, the simulated vehicle performed the corresponding action.
The BCI analysed 3-s segments of the SSVEP signals and sent the generated control commands to the simulated vehicle every 3 s. If the subject was focusing on the 10-Hz or 8-Hz visual stimulus, the SSVEP signals were recognised as a moving command or a braking command, respectively. The simulated vehicle moved straight or braked after the corresponding command was sent to it.
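The paper only states that commands are sent to the simulator via socket communication; a minimal sketch under that assumption (hypothetical host, port and plain-text message format) is given below.

```python
import socket

SIM_HOST, SIM_PORT = "127.0.0.1", 9000   # hypothetical address of the simulator

def send_to_simulator(command):
    """Send one command string ('move' or 'brake') to the simulated vehicle.
    The actual message format of the OpenGL simulator is not published, so a
    plain UTF-8 string is used here purely for illustration."""
    with socket.create_connection((SIM_HOST, SIM_PORT), timeout=1.0) as sock:
        sock.sendall(command.encode("utf-8"))
```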
Subjects were asked to wear the EEG signal acquisition equipment and sit in front of the computer screen. The experiment was repeated four times for each subject. Each time, subjects were required to send ten commands in succession, comprising five moving commands and five braking commands; moving and braking commands were sent alternately to make the simulated vehicle move and brake. We measured the response time from the moment the driver started focusing on the stimulus to the moment the corresponding command was generated (the simulated vehicle starts or stops). The results of the simulation experiment, including the average response time and the accuracy, are shown in the "Result of Simulated Vehicle Controlling Experiment" section.

Real Vehicle Controlling Experiment

Through the simulation experiment, subjects became familiar with using the BCI and were prepared to control the experimental vehicle. In the real vehicle driving experiment, subjects controlled the real vehicle outdoors via the BCI combined with the laser obstacle detection. To realise human-vehicle cooperative driving, the outdoor environment was built on an empty site. The schematic diagram of the outdoor experimental environment is presented in Fig. 10. The vehicle is an automatic car of size 4.856 m × 1.926 m × 1.900 m, with seven-speed automatic transmission, electronic brake force distribution, antilock braking system, brake assist system, etc. Two flags were set on the roadside, 20 m apart. In addition, an obstacle was placed at the end of the experimental road. The size of the obstacle was about 70 cm × 50 cm × 150 cm (length × width × height). The obstacle detection range of the laser ranging obstacle detection system was set to 3 m, and the distance between the obstacle and the second flag was 5 m.
Each subject was required to complete the experiment five times. Subjects were asked to control the experimental vehicle to move off from the start position each time and to stop it when it arrived at the flag positions. If the obstacle was detected too close in front of the vehicle, the obstacle detection system sent a braking signal to stop the vehicle, at which point the run ended. The response time was measured from the moment the driver started to focus on the stimulus until the corresponding command was generated (a beep sounded when the command was sent). Subjects were asked to wear the EEG signal acquisition equipment, as shown in Fig. 11. The BCI analysed the SSVEP signals and generated a hexadecimal vehicle control command every 3 s. Control commands generated by the BCI and the obstacle detection system were sent to the experimental vehicle after being judged by the command transmission determination unit. Moving commands from the BCI were invalid if the obstacle detection system detected an obstacle too close in front of the vehicle. Repeated commands to move or brake were also invalid, so the driver only had to focus on the stimulus when the vehicle state needed to change.

Results

In this section, we first report the results of offline tests with different analysis durations. Next, we compare the classification accuracy using and not using the OTWV method. Finally, we present the results of the simulated and real vehicle control experiments and compare the differences between them.

Offline Test Result of Different Analysis Time Lengths

The results of the offline test with different analysis time lengths are shown in Table 3. It can be found that the correlation coefficient \(\rho\) increases and the classification accuracy improves with the increase of the time length of the SSVEP signals.
The analysis time of the SSVEP signals must be neither too long nor too short; otherwise, it reduces the performance of the BCI. To balance high classification accuracy with quick response time, we used 3 s of SSVEP signals to recognise the user's operation intentions in the BCV experiments, according to the results shown in Table 3.

Analysis of Overlap Time Windows Voting Method

The OTWV method can improve the classification accuracy of the SSVEP signals if the single-window classification accuracy is larger than 0.5 and smaller than 1. Table 3 shows that the classification accuracy of 3-s SSVEP signals is about 80%, which satisfies the condition for the OTWV method to be beneficial.
We carried out the offline tests with and without the OTWV method. We chose continuous SSVEP signals from the five subjects recorded while they were stimulated, and processed the data until the system, with or without the OTWV method, had output ten classification results. Table 4 compares the resulting classification accuracies and shows that the average classification accuracy of the SSVEP signals with the OTWV method is higher than without it.

Result of Simulated Vehicle Controlling Experiment

Table 5 shows the performance of the simulated BCV across all subjects in the simulation experiment: every subject repeated the experiment four times and sent ten commands in each trial, and the table lists the average response time and accuracy of each of the five subjects. The average response time of all subjects is 3.08 s with a standard deviation (SD) of 0.0084 s, and the average accuracy is 92.0% (SD 0.075%).

Results of Real Vehicle Controlling Experiment

Table 6 shows the performance of the brain-controlled real vehicle operated by the five subjects outdoors; each subject repeated the experiment five times, and the table lists the average response time and accuracy of the BCV outdoor driving for each subject. The obstacle detection system detected the obstacle and stopped the BCV safely, ensuring the subjects' safety in every run. All subjects completed the outdoor experiment safely according to the expected requirements, and no collisions occurred, verifying the feasibility of the BCV mode outdoors. The average response time of all subjects was 4.30 s (SD 0.11 s), and the average accuracy was 90.68% (SD 2.96%).

Comparison Between the Simulation and Real Vehicle Controlling Experiment

We compared the mean response time and mean accuracy of each subject in the two experiments to analyse the differences in BCI performance between the simulation experiment and the real vehicle control experiment. Figure 12 shows the comparison of BCI performance between the two experiments. As shown, the performance of the five subjects in the real vehicle controlling experiment was not as good as that in the simulation experiment. The simulation experiment had a higher mean accuracy, shorter mean response time and smaller standard deviation than the real vehicle control experiment.

Discussion

In this paper, we propose a human-vehicle cooperative driving system which combines the BCV mode with laser obstacle detection, and its feasibility is verified by simulation and real vehicle driving experiment. The architecture of the human-vehicle cooperative driving system consists of three main parts, which are the SSVEP-based BCI, the laser obstacle detection system and the communication system.
The BCI system and the laser ranging obstacle detection system generate hexadecimal vehicle control commands and send them to the vehicle via the communication system to control the vehicle moving and braking. The BCI system recognises the driver’s intentions and converts the driver’s SSVEP signals into vehicle control commands. To avoid collisions with obstacles and keep the driver and the vehicle safe, the obstacle detection system will send a braking command to stop the vehicle if an obstacle is detected to be too close in front of the vehicle.
The CCA method and the OTWV method are used to analyse the SSVEP signals of the driver. The OTWV method improves the classification accuracy and the rate at which results are generated, and at the same time avoids frequent changes of the output control command, improving the stability of vehicle control in online experiments. Although the EEG analysis behind a single command requires a relatively long window, the interval between successive output commands remains short.
Five healthy subjects participated in the experiments. In the simulation experiment, we verified the feasibility of using the SSVEP-based BCI system to control a simulated vehicle. The mean accuracy of all subjects was 92.00% (SD 0.075%) with a mean response time of 3.08 s (SD 0.0084 s). Note that some trials had a response time below 3 s; the reason is that the OTWV method increases the rate at which classification results are generated, as explained in the "Overlap Time Windows Voting Method" section.
In addition, we implemented the BCV system combined with obstacle detection and verified the feasibility of the control mode using human intentions to control a real vehicle in an outdoor experimental environment. All subjects safely completed the experiments in the outdoor according to the expected requirements. The mean accuracy, in that case, was 90.68% (SD 2.96%) with mean response time of 4.30 (SD 0.11) seconds.
The performance of the real vehicle control experiment was slightly lower than that of the simulation experiment. The simulation experiment had a higher mean accuracy, lower mean response time and lower standard deviation than the real vehicle control experiment.
One of the reasons may be that none of the participants had previous experience in controlling a real experimental vehicle via the BCI before the experiment. In addition, other uncontrollable factors, such as light intensity, noise and the smoothness of the road, affect the performance of SSVEP-based BCIs in outdoor driving environments [41–43]. More practice should improve the performance of controlling BCVs. Moreover, controlling a real vehicle in an outdoor environment may influence the psychological state of the driver, which may also affect BCI performance [11]. Furthermore, when driving an SSVEP-based BCV, the driver's attention is divided between focusing on the visual stimulus and monitoring the surrounding environment. In the future, it would be important to improve the intelligence of the vehicle and to try different, more comfortable BCI paradigms such as motor imagery.
Compared with other human-machine interfaces, such as touch screens, hand gesture recognition systems and speech recognition systems, BCIs can be used without body movement. In addition, analysing the EEG signals of drivers is currently the most direct and convenient way to obtain drivers' intentions. Therefore, the brain-controlled mode can provide valuable services not only for people with physical disabilities, but also for healthy people. For people with physical disabilities, BCVs have the potential to help them recover their driving ability, thus broadening their scope of activities and improving their living standards. For healthy people, BCVs provide a control mode driven directly by the brain rather than the limbs, freeing the driver's limbs and enhancing the driving experience. Moreover, BCVs also provide valuable experience for other brain-controlled machines and promote the study of BCIs and intelligent machines.
One limitation of the proposed framework is that stress and noise are the main factors affecting the accuracy and reliability of the BCV [43]. Another limitation is that the vehicle control commands converted from the EEG signals are binary switching values, which cannot support fine-grained control such as steering. Note also that our experiments were conducted in common weather conditions, such as sunny, cloudy and overcast days; they did not cover all weather conditions, such as rainy and snowy days, and bad weather may affect the performance of the BCV system in outdoor environments.
Further development of intelligent driving technologies can bring additional benefits to BCV research, and brain-controlled functions can be realised more effectively with the help of intelligent driving systems. BCV systems that integrate intelligent driving technologies can help drivers operate a vehicle more safely and comfortably.
The outdoor driving environment of the experiments was simple; in the future, we will try to drive the BCV in more complex environments. Since SSVEP-based BCVs are distracting for drivers, we will also try more comfortable BCI paradigms such as motor imagery. Current and future research in this field will further improve the intelligence and humanisation of the driving mode.

Conclusions

In this paper, we propose a new human-vehicle cooperative driving system which combines the BCV with obstacle detection technology, and its feasibility is verified by simulation and real vehicle driving experiments. The human-vehicle cooperative driving system consists of three main parts: the SSVEP-based BCI, the laser obstacle detection system and the communication system. Two experiments were carried out: (1) the simulated BCV experiment and (2) the real vehicle controlling experiment. In the simulated vehicle controlling experiment, we verified the feasibility of the SSVEP-based BCI system for controlling a simulated vehicle in a virtual driving environment. We then implemented the cooperative human-vehicle driving mode by combining the BCV system with obstacle detection, and safe driving of the BCV under this mode was verified in the real vehicle controlling experiment.
Table 1
Summary of BCV methods, together with a comparison with our contribution

| Authors/year | Methods | Average accuracy | Experiment condition | Limitations | Differences with our paper |
| --- | --- | --- | --- | --- | --- |
| Jafarifarmand et al. [10] | A motor imagery-based BCI system was applied to control a radio-control car completing a designed route without crossing the path borders. ICA-ANC, AR-CSP and SRSG-FasArt were applied for ocular artefact removal, feature extraction and classification, respectively. | 90.43% | A radio-control car | The experiment was unsatisfactory because of the large number of artefacts | Real vehicle experiments are carried out in our manuscript |
| Nguyen et al. [8] | A method was developed for identifying the driver's intentions for emergency brake control in a vehicle during simulated driving. The algorithm consists of EEG band power, auto-regressive model features and an NN classifier. | 91.00% | Simulation | The method was only used in simulation experiments, not in real vehicle experiments | Both simulation and real vehicle experiments are carried out in our manuscript |
| Lu et al. [3] | An algorithm based on a longitudinal control system was designed to control the speed of a simulated vehicle in a virtual scene in the laboratory. CSP was employed to augment the EEG signal SNR, then PSD features were extracted from the SSVEP patterns and classified using a traditional SVM classifier with an RBF kernel. | Average accuracy not reported; accuracy ranges from 49.70 to 100% | Simulated vehicle | The accuracy of the results had low robustness for individual subjects | The CCA algorithm analyses the correlation between SSVEP and stimulus frequency and has good robustness in our manuscript |
| Lu et al. [32] | The previous study was extended to three classes with the same identification classifier, namely changing lane, selecting path and following. | Average accuracy not reported; accuracy ranges from 44.20 to 100% | Simulation | The accuracy results showed the same high variation across subjects as the previous study | The algorithm used in our manuscript is robust |
| Zhuang et al. [13] | A BCI system was established using motor imagery EEG signals to control a model vehicle with a shared control strategy. The PSD features were identified using ensemble, SVM and CNN classifiers. | CNN = 87.17%, SVM = 83.25%, ensemble model = 91.75% | A model vehicle | The algorithm is time-consuming, which causes delay in real-time systems | Real vehicle experiments are carried out in our manuscript |
| Bi et al. [9] | A method of emergency situation detection was proposed by fusing driver EEG signals with surrounding information. RLDA was used as the classifier. | 93.89% | Real vehicle | The moving and driving function of the BCV was not realised; the BCI does not implement the driving control | The moving and driving function of the BCV is realised in our manuscript |
| Bi et al. [44] | A model based on the QN algorithm was designed for predicting the driver's intentions to navigate the vehicle forward and turn left and right. The QN was fed with SSVEP patterns, velocity, acceleration, road information and vehicle position to control the steering of a vehicle. | 88.77% | Simulated vehicle | The robustness of the model was not reliable; the algorithm would require more subjects to achieve a precise model | The CCA that we use does not require subjects for model training; we use a "training-free" classification algorithm |
| Gohring et al. [15] | To control steering and braking, ERD/ERS patterns were extracted from the EEG signal for obstacle-avoidance and normal driving scenarios. The camera and external sensors used in the study helped significantly in decreasing the evoked potential (EP) detection error rates. | 90.00% | Real vehicle | The reliability of the algorithm was still insufficient because of the use of a threshold classifier | The CCA that we use does not require subjects for model training; we use a "training-free" classification algorithm |
| Li et al. [11] | A fused system for vehicle driving decisions was proposed to control a simulated vehicle, which obtained visual data and hybrid EEG signals simultaneously; the hybrid EEG signals consist of steady-state visual evoked potentials (SSVEP) and motor imagery. wCCA, CSP and kNN were applied for EOG artefact removal, feature extraction and classification, respectively. | The average accuracy of the MI performance was 77.4% | A simulated vehicle | The method was only used in simulation experiments, not in real vehicle experiments | Both simulation and real vehicle experiments are carried out in our manuscript |
Table 2
Definition of control protocol

| SSVEP frequency | Vehicle control command | Hexadecimal command |
| --- | --- | --- |
| 10 Hz | Move straight | A5 5A 04 B4 B8 AA |
| 8 Hz | Brake | A5 5A 04 B3 B7 AA |
Table 3
The offline test result of different analysis time lengths

| Subject | Time length of SSVEP signal (s) | Average \(\rho\) (8 Hz) | Average accuracy (8 Hz) | Average \(\rho\) (10 Hz) | Average accuracy (10 Hz) |
| --- | --- | --- | --- | --- | --- |
| Subject A | 1 | 0.003 | 40% | 0.003 | 50% |
| Subject A | 2 | 0.021 | 70% | 0.022 | 70% |
| Subject A | 3 | 0.122 | 80% | 0.131 | 80% |
| Subject A | 4 | 0.225 | 80% | 0.231 | 90% |
| Subject B | 1 | 0.003 | 50% | 0.003 | 50% |
| Subject B | 2 | 0.019 | 60% | 0.021 | 70% |
| Subject B | 3 | 0.115 | 80% | 0.121 | 90% |
| Subject B | 4 | 0.212 | 90% | 0.196 | 90% |
| Subject C | 1 | 0.003 | 50% | 0.004 | 50% |
| Subject C | 2 | 0.015 | 70% | 0.019 | 60% |
| Subject C | 3 | 0.123 | 80% | 0.119 | 80% |
| Subject C | 4 | 0.223 | 90% | 0.215 | 90% |
| Subject D | 1 | 0.003 | 50% | 0.004 | 60% |
| Subject D | 2 | 0.016 | 60% | 0.021 | 70% |
| Subject D | 3 | 0.122 | 80% | 0.136 | 80% |
| Subject D | 4 | 0.186 | 90% | 0.232 | 100% |
| Subject E | 1 | 0.003 | 60% | 0.003 | 50% |
| Subject E | 2 | 0.013 | 70% | 0.018 | 80% |
| Subject E | 3 | 0.128 | 80% | 0.139 | 90% |
| Subject E | 4 | 0.226 | 90% | 0.251 | 100% |
Table 4
Classification accuracy of the 3-s SSVEP signals obtained by CCA with/without the OTWV method

| Subject | Accuracy without OTWV (8 Hz) | Accuracy with OTWV (8 Hz) | Accuracy without OTWV (10 Hz) | Accuracy with OTWV (10 Hz) |
| --- | --- | --- | --- | --- |
| A | 80% | 90% | 80% | 90% |
| B | 80% | 80% | 90% | 100% |
| C | 80% | 90% | 80% | 100% |
| D | 80% | 90% | 80% | 80% |
| E | 80% | 80% | 90% | 100% |
| Average | 80% | 86% | 84% | 94% |
Table 5
Performance of the brain-controlled simulated vehicle in the simulation experiment (trial columns give the average response time per trial, in seconds)

| Subject | Trial 1 (s) | Trial 2 (s) | Trial 3 (s) | Trial 4 (s) | Average time (s) | Accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| A | 3.2 | 2.7 | 3.3 | 3.1 | 3.075 | 92.5% |
| B | 2.7 | 2.9 | 3.5 | 3.3 | 3.1 | 87.5% |
| C | 2.6 | 3.3 | 3.0 | 2.8 | 2.925 | 95.0% |
| D | 3.1 | 3.4 | 3.0 | 3.2 | 3.175 | 92.5% |
| E | 3.1 | 2.8 | 3.5 | 3.0 | 3.1 | 92.5% |
| Average | | | | | 3.075 | 92.0% |
Table 6
Performance of the BCV in the outdoor experiment

| Subject | Metric | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A | Average response time of each trial (s) | 4.2 | 3.7 | 3.5 | 4.3 | 4.3 | 4.0 |
| A | Number of BCI commands | 5 | 5 | 7 | 5 | 5 | |
| A | Number of wrong BCI commands | 0 | 0 | 2 | 0 | 1 | |
| A | Number of braking commands | 1 | 1 | 1 | 1 | 1 | |
| A | Accuracy (%) | 100 | 100 | 71.4 | 100 | 80 | 90.28 |
| B | Average response time of each trial (s) | 4.6 | 4.7 | 4.7 | 5.1 | 3.9 | 4.6 |
| B | Number of BCI commands | 5 | 7 | 7 | 5 | 5 | |
| B | Number of wrong BCI commands | 0 | 2 | 2 | 0 | 0 | |
| B | Number of braking commands | 1 | 1 | 1 | 1 | 1 | |
| B | Accuracy (%) | 100 | 71.4 | 71.4 | 100 | 100 | 88.56 |
| C | Average response time of each trial (s) | 3.6 | 4.3 | 5.2 | 5.3 | 5.2 | 4.72 |
| C | Number of BCI commands | 6 | 6 | 5 | 5 | 5 | |
| C | Number of wrong BCI commands | 1 | 1 | 0 | 0 | 0 | |
| C | Number of braking commands | 1 | 1 | 1 | 1 | 1 | |
| C | Accuracy (%) | 83.3 | 83.3 | 100 | 100 | 100 | 93.32 |
| D | Average response time of each trial (s) | 4.3 | 5.1 | 3.6 | 4.3 | 3.7 | 4.2 |
| D | Number of BCI commands | 5 | 7 | 5 | 5 | 5 | |
| D | Number of wrong BCI commands | 1 | 2 | 0 | 0 | 0 | |
| D | Number of braking commands | 1 | 1 | 1 | 1 | 1 | |
| D | Accuracy (%) | 80 | 71.4 | 100 | 100 | 100 | 90.28 |
| E | Average response time of each trial (s) | 3.5 | 3.6 | 4.3 | 3.6 | 5.1 | 4.02 |
| E | Number of BCI commands | 5 | 6 | 5 | 7 | 5 | |
| E | Number of wrong BCI commands | 0 | 1 | 0 | 2 | 0 | |
| E | Number of braking commands | 1 | 1 | 1 | 1 | 1 | |
| E | Accuracy (%) | 100 | 83.3 | 100 | 71.4 | 100 | 90.94 |

Acknowledgements

We thank the participants for their willingness and predisposition to participate in the experiments. We also thank the reviewers for their comments and suggestions that greatly improved the manuscript. Finally, we thank Pau Solé-Vilaró for reviewing the grammar, style and syntax of this manuscript.

Declarations

Ethics Approval

This article does not contain any studies involving human participants and/or animals by any of the authors.
Informed consent was obtained from all individual participants.

Conflict of Interest

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Naresh B, Rambabu S, Basha DK. ARM controller and EEG based drowsiness tracking and controlling during driving. International Journal of Reconfigurable and Embedded Systems (IJRES). 2018;6:127–32.
2. Dev A, Rahman MA, Mamun N. Design of an EEG-based brain controlled wheelchair for quadriplegic patients. IEEE 3rd international conference for convergence in technology (I2CT), Pune, India, 6–8 April 2018, pp 1–5.
3. Lu Y, Bi L. EEG signals-based longitudinal control system for a brain-controlled vehicle. IEEE Trans Neural Syst Rehabil Eng. 2019;27(2):323–32.
4. Sparrow R, Howard M. When human beings are like drunk robots: driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies. 2017;80:206–15.
5. Wang L, Sun P, et al. Advanced driver-assistance system (ADAS) for intelligent transportation based on the recognition of traffic cones. Advances in Civil Engineering. 2020;4:1–8.
6. Al Smadi T, Al-Maitah M. Artificial intelligent technology for safe driver assistance system. Int J Comp Aided Eng Technol. 2019;13:183–191.
7. Kuramochi H, Utsumi A, et al. Effect of human-machine cooperation on driving comfort in highly automated steering maneuvers. In Proceedings of the 11th international conference on automotive user interfaces and interactive vehicular applications: adjunct proceedings (AutomotiveUI '19), Association for Computing Machinery, New York, NY, USA, 2019, pp 151–155.
8. Nguyen TH, Chung WY. Detection of driver braking intention using EEG signals during simulated driving. Sensors (Basel, Switzerland). 2019;19(13):2863.
9. Bi L, Wang H, Teng T, Guan C. A novel method of emergency situation detection for a brain-controlled vehicle by combining EEG signals with surrounding information. IEEE Trans Neural Syst Rehabil Eng. 2018;26(10):1926–34.
10. Jafarifarmand A, Badamchizadeh MA. EEG artifacts handling in a real practical brain–computer interface controlled vehicle. IEEE Trans Neural Syst Rehabil Eng. 2019;27(6):1200–8.
11. Li W, Duan F, et al. A human-vehicle collaborative simulated driving system based on hybrid brain–computer interfaces and computer vision. IEEE Transactions on Cognitive and Developmental Systems. 2018;10(3):810–22.
12. Khan J, Khan MU, et al. Robust multi-sensor fusion for the development of EEG controlled vehicle. IEEE Sensors J. 2020.
13. Zhuang J, Geng K, Yin G. Ensemble learning based brain-computer interface system for ground vehicle control. IEEE Trans Syst, Man, and Cybernetics: Systems. 2019.
14. Fan X, Bi L, et al. A brain–computer interface-based vehicle destination selection system using P300 and SSVEP signals. IEEE Trans Intell Transp Syst. 2015;15(1):274–83.
15. Göhring D, Latotzky D, Wang M, Rojas R. Semi-autonomous car control using brain computer interfaces. Intelligent Autonomous Systems. 2013:393–408.
16. Hekmatmanesh A, Nardelli PH, Handroos H. Review of the state-of-the-art of brain-controlled vehicles. IEEE Access. 2021;9:110173–110193.
17. Hadi MS, Esmaili P. Brain computer interface (BCI) for controlling path planning mobile robots: a review. 2019 3rd international symposium on multidisciplinary studies and innovative technologies (ISMSIT), Ankara, Turkey. 2019;1–4.
18. Waytowich N, Lawhern VJ, et al. Compact convolutional neural networks for classification of asynchronous steady-state visual evoked potentials. J Neural Eng. 2018;15:066031.1–066031.13.
19. Islam MR, Tanaka T, et al. Frequency recognition for SSVEP-based BCI with data adaptive reference signals. IEEE international conference on digital signal processing (DSP), Singapore. 2015;799–803.
20. Babu B, Chandrasekaran R, et al. A study on wavelet analysis of SSVEP signals. Int J Eng Technol. 2018;7(2):10–3.
21. Chen X, Wang Y, et al. Effects of stimulation frequency and stimulation waveform on steady-state visual evoked potentials using a computer monitor. J Neural Eng. 2019;16:066007.1–066007.13.
22. Kapgate D, et al. An optimized facial stimuli paradigm for hybrid SSVEP+P300 brain computer interface. J Neurosurg Sci. 2019.
23. Nakanishi M, Wang Y, et al. Enhancing detection of SSVEPs for a high-speed brain speller using task-related component analysis. IEEE Trans Biomed Eng. 2018;65(1):104–12.
24. Kumar GRK, Reddy MR. Designing a sum of squared correlations framework for enhancing SSVEP-based BCIs. IEEE Trans Neural Syst Rehabil Eng. 2019;27:2044–50.
25. Podmore JJ, Breckon TP, et al. On the relative contribution of deep convolutional neural networks for SSVEP-based bio-signal decoding in BCI speller applications. IEEE Trans Neural Syst Rehabil Eng. 2019;27(4):611–8.
26. Ravi A, Manuel J, et al. A convolutional neural network for enhancing the detection of SSVEP in the presence of competing stimuli. In Proceedings of the annual international conference of the IEEE Engineering in Medicine and Biology Society, Berlin, Germany. 2019;6323–6326.
27. Brogin JA, Faber J, Bueno DD. Enhanced use practices in SSVEP-based BCIs using an analytical approach of canonical correlation analysis. Biomed Signal Proc Control. 2020;55.
28. Floriano A, Carmona VL, Bastos-Filho T, et al. A study of SSVEP from below-the-hairline areas in low-, medium-, and high-frequency ranges. Res Biomed Eng. 2019;35(1):71–76.
29. Hakvoort G, Reuderink B, Obbink M. Comparison of PSDA and CCA detection methods in a SSVEP-based BCI-system. 2011.
30. Ma K, Wang S, et al. Electrode channel optimisation method for steady-state visual evoked potentials. J Eng. 2019;2019(23):8632–6.
31. Takehara D, et al. Development of an ERD measurement system using Emotiv Epoc. Trans Japanese Soc Med Biol Eng. 2017;55:556–559.
32. Li Y, Bin G, et al. Analysis of phase coding SSVEP based on canonical correlation analysis (CCA). 2011 5th international IEEE/EMBS conference on neural engineering (NER), Cancun, Mexico. 2011;368–371.
33. Al-Shargie F, Tang TB, Kiguchi M. Assessment of mental stress effects on prefrontal cortical activities using canonical correlation analysis: an fNIRS-EEG study. Biomed Opt Express. 2017;8:2583–2598.
34. Chen LW, Chou PC. BIG-CCA: beacon-less, infrastructure-less, and GPS-less cooperative collision avoidance based on vehicular sensor networks. IEEE Trans Syst, Man, and Cybernetics: Syst. 2016;46(11):1518–1528.
35. Fouad AM, Sharkawy RM, Onsy A. Fixed obstacle detection for autonomous vehicle. IEEE conference on power electronics and renewable energy (CPERE), Aswan City, Egypt. 2019;217–21.
36. Li X, Deng W, et al. Research on millimeter wave radar simulation model for intelligent vehicle. Int J Automot Technol. 2020;21(2):275–84.
37. Linegar C, Churchill W, Newman P. Made to measure: bespoke landmarks for 24-hour, all-weather localisation with a camera. In Proceedings of the 2016 IEEE international conference on robotics and automation, Stockholm, Sweden. 2016:787–794.
38. Pang C, Zhong X, et al. Adaptive obstacle detection for mobile robots in urban environments using downward-looking 2D LiDAR. Sensors. 2018;18(6):1749.
40. Umehara D, Shishido T. Ringing mitigation schemes for controller area network. IEEE vehicular networking conference (VNC), Taipei, Taiwan. 2018;1–8.
41. Bekdash M, Asirvadam VS, Kamel N. Visual evoked potentials response to different colors and intensities. International conference on biosignal analysis, processing and systems (ICBAPS), Kuala Lumpur, Malaysia. 2015;104–7.
42. Zerafa R, Camilleri T, Camilleri KP, Falzon O. The effect of distractors on SSVEP-based brain-computer interfaces. Biomed Phys Eng Express. 2019;5.
43. Hekmatmanesh A, et al. Biosignals in human factors research for heavy equipment operators: a review of available methods and their feasibility in laboratory and ambulatory studies. IEEE Access. 2021;9:97466–82.
44. Bi L, Lu Y, Fan X, Lian J, Liu Y. Queuing network modeling of driver EEG signals-based steering control. IEEE Trans Neural Syst Rehabil Eng. 2016;25(8):1117–24.
Metadata
Title: A Brain-Controlled Vehicle System Based on Steady State Visual Evoked Potentials
Authors: Zhao Zhang, Shuning Han, Huaihai Yi, Feng Duan, Fei Kang, Zhe Sun, Jordi Solé-Casals, Cesar F. Caiafa
Publication date: 10.09.2022
Publisher: Springer US
Published in: Cognitive Computation / Issue 1/2023
Print ISSN: 1866-9956
Electronic ISSN: 1866-9964
DOI: https://doi.org/10.1007/s12559-022-10051-1