Nissan’s Brain-to-Vehicle technology connects our brains with vehicles
Automobile manufacturer Nissan started the year by presenting the first Brain-to-Vehicle (or B2V) technology in Las Vegas at CES 2018. This is a major breakthrough for the sector: it connects the driver’s brain with the vehicle, radically changing the way we interact with cars. Instead of replacing the driver with an autopilot, the objective is to access the driver’s intentions between 0.2 and 0.8 seconds before they are executed. The driver’s actions are anticipated, achieving a customized, more comfortable and safer driving experience. The project is based on a brain-computer interface with considerable advances along three axes: wearable brain-sensing devices, brain activity signal processing to anticipate human movements, and new shared vehicle control strategies. Connecting our brains to vehicles to improve or adapt the driving style is, without a doubt, a subject that will be in the spotlight in the coming years.
Nissan’s Brain-to-Vehicle or B2V technology
It takes between 0.2 and 0.4 seconds from the moment our brain sends a movement order until our muscles execute it. This is the time required for the order to travel through the nervous system, from brain to muscle activation. Moreover, between 0.2 and 0.6 seconds before the brain even “gives” the order, it “prepares” the movement. Thus, it is possible to identify the driver’s intention between 0.4 and 1 second before the driver actually steps on the brake, so the car could start braking immediately. At 100 km/h this means saving up to 27 meters of braking distance, the difference between life and death in a frontal collision. Anticipating intended movement improves reaction time and can be extended to other actions such as turning or maneuvering, since these involve motor behavior of our arms and legs that the vehicle could anticipate.
The intention of the driver can be known almost one second before the driver steps on the brake pedal. The vehicle could start braking immediately, saving a significant braking distance.
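The braking-distance claim above is simple kinematics. A minimal sketch (the speed, the anticipation window and the ~27-meter figure come from the article; the helper function itself is just illustrative):

```python
# Back-of-the-envelope estimate of the distance saved by detecting the
# braking intention early: at constant speed, the car covers
# speed * anticipation_time meters during the anticipation window.

def distance_saved(speed_kmh: float, anticipation_s: float) -> float:
    """Distance in meters covered during the anticipation window."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * anticipation_s

print(round(distance_saved(100, 1.0), 1))  # → 27.8 (the article's ~27 m)
```

Even with the shorter 0.4-second anticipation bound, the saving at 100 km/h is still about 11 meters.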
An important aspect of integrating this information with the vehicle is that it is based on shared control, which means that the vehicle and driver share the driving, leading to a more enjoyable and safer drive. This represents a new concept, an alternative to autonomous driving, whose objective is to replace the driver with an automatic pilot or autonomous system.
Nissan’s Executive Vice President Daniele Schillaci describes his vision of the future: “Through Nissan Intelligent Mobility, we are moving people to a better world by delivering more autonomy, more electrification and more connectivity.” And Dr. Lucian Gheorghe, Senior Innovation Researcher at Nissan Intelligent Mobility, affirms that “the possible applications of this technology are incredible. This research will be a catalyst for Nissan’s innovation within our vehicles in the coming years.”
These principles of brain function and their relationship with driving are the basis of the Brain-to-Vehicle technology presented by Nissan at CES 2018, the world’s largest consumer electronics trade show. Nissan’s advance is the first milestone achieved in collaboration with Bitbrain, the Swiss Federal Institute of Technology and the Canadian National Research Council.
This project is supported by four basic research and development pillars:
A new wearable and wireless brain-sensing technology, worn by the driver, that measures brain activity through electroencephalography (EEG).
Real-time analysis of brain activity, capable of detecting the anticipation of the driver’s movement.
New shared control procedures between the vehicle and the driver that utilize brain information.
New integration systems and tests based on simulations and real vehicles.
Minimalist EEG technology optimized for the project
Along this line, Bitbrain has developed with Nissan an innovative wearable and minimalist EEG neurotechnology with dry sensors, optimized to measure brain activity related to movement. This brain-sensing EEG system: a) does not require electrolytic conductive substances to operate, b) is very comfortable and ergonomic, c) is designed to capture the natural behavior of the driver, and d) presents a more discreet design than any other existing EEG technology. It can be placed, on average, in less than two minutes and can operate continuously for up to eight hours, transmitting the driver’s brain activity to the vehicle via Bluetooth.
This project required the EEG system to present three key properties:
Acceptability by the end user: the equipment is comfortable to wear for long periods of time, presents likeable aesthetics and uses the minimum number of sensors required, to avoid overloading the equipment.
Insensitivity to motion artifacts: the driver’s natural movements produce noise in the EEG measurements. The EEG is wireless -- cables are a common source of noise -- and features innovative active shielding that reduces the noise produced by any non-neuronal signal.
Reliable measurement of the necessary brain processes: the EEG can measure movement-related cortical potentials (MRCPs) and motor event-related (de)synchronization (ERD/ERS). MRCPs are especially difficult to measure with good quality using dry EEG technology.
This video shows an example of the use of this neurotechnology in an intermediate prototype.
The development of this wearable EEG technology is a clear advance for this project and, more generally, for the application of these technologies outside lab settings. This technology captures the natural behavior of the driver and records EEG with unprecedented precision and reliability, which are both required to capture the motor and cognitive brain processes involved in driving a vehicle.
The new minimal EEG is a considerable advance towards reliable brain measurement devices that are adapted to daily life applications.
Brain-computer interface to anticipate movement
The brain-computer interface interprets signals from the driver’s brain activity. The B2V technology relies on the preparatory brain activity that precedes the execution of movement to anticipate the intentions of the user. This activity occurs mainly over the motor cortex and is identified through two neural EEG correlates: movement-related cortical potentials (MRCPs) and motor event-related (de)synchronization (ERD/ERS). A typical approach to observing these processes consists of aggregating several repetitions of the same movement and showing what is known as the Grand Average.
The following picture shows the shape of these two brain processes when a person starts to walk (initiating movement with the right leg), measured over the leg area of the motor cortex. Note that the zero on the horizontal axis corresponds to the order onset, and how the pre-motor potentials and oscillations precede the order (the brain’s preparation of movement). The objective of the brain-computer interface is to measure these brain processes and decode them as quickly as possible.
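The Grand Average mentioned above can be sketched in a few lines: averaging many movement-aligned EEG epochs cancels uncorrelated noise, revealing the slow MRCP drift that is invisible in a single trial. Everything below (sampling rate, amplitudes, noise level) is synthetic and illustrative, not real recordings:

```python
import numpy as np

# Illustrative Grand Average: many EEG epochs time-locked to movement
# onset are averaged sample-by-sample, so random noise cancels
# (shrinking by ~1/sqrt(n_trials)) and the slow movement-related
# cortical potential (MRCP) emerges. All numbers here are synthetic.

rng = np.random.default_rng(0)
fs = 250                          # sampling rate in Hz (assumed)
t = np.arange(-2 * fs, fs) / fs   # from -2 s to +1 s around onset

# Synthetic MRCP template: slow negative drift peaking at onset (t = 0)
template = -8.0 * np.exp(-(t ** 2) / (2 * 0.4 ** 2))  # microvolts

# 100 single trials = template + heavy noise (single-trial SNR is low)
trials = template + rng.normal(0.0, 20.0, size=(100, t.size))

grand_average = trials.mean(axis=0)

# The averaged trace tracks the template; a single trial does not
print(round(float(grand_average[np.argmin(np.abs(t))]), 1))
```

This also previews why single-trial decoding (discussed below) is much harder than averaging: the classifier never gets the benefit of a hundred repetitions.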
The next video shows an example of these brain processes in a different application context, with a cerebrovascular accident patient (see scientific publication). The zero is the movement order onset, the yellow line represents the activation of the arm muscle (EMG, electromyographic activity), and the green line is the decoding of the brain activity (obtained with the brain-computer interface). The values of the yellow and green lines indicate the probability of movement. Note how intention is decoded before the movement is produced.
The brain processes utilized in the Brain-to-Vehicle technology are movement-related cortical potentials (MRCPs) and motor event-related (de)synchronization (ERD/ERS).
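As a rough illustration of the second correlate, ERD is typically quantified as the percentage drop in mu-band (8-13 Hz) power before movement relative to a rest baseline. The signal below is synthetic, and the plain FFT periodogram is a simplification of the spectral estimation a real pipeline would use:

```python
import numpy as np

# Illustrative ERD computation: band power in the mu band (8-13 Hz)
# during movement preparation is compared against a rest baseline; a
# drop in power is a desynchronization (ERD). Synthetic data only.

rng = np.random.default_rng(1)
fs = 250
t = np.arange(fs) / fs            # 1-second windows

mu_rhythm = np.sin(2 * np.pi * 10 * t)  # 10 Hz mu rhythm

baseline = 3.0 * mu_rhythm + rng.normal(0.0, 1.0, fs)  # strong rhythm at rest
premove = 1.0 * mu_rhythm + rng.normal(0.0, 1.0, fs)   # attenuated before movement

def band_power(x: np.ndarray, fs: int, lo: float = 8.0, hi: float = 13.0) -> float:
    """Power in the [lo, hi] Hz band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return float(psd[(freqs >= lo) & (freqs <= hi)].sum())

erd = 100 * (band_power(premove, fs) - band_power(baseline, fs)) / band_power(baseline, fs)
print(round(erd))  # a strongly negative percentage = desynchronization
```

The sign convention matters: ERD is a power *decrease* (negative percentage), while the post-movement rebound (ERS) shows up as an increase.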
Adapting the driving system to the driver
Despite what the previous images might have suggested, real-time EEG decoding of the brain activity related to anticipation of movement is a very complex process.
Firstly, decoding works on the activity generated by a single movement, instead of on an average, and therefore the signal-to-noise ratio is lower (the EEG signal presents much more noise than what was shown in the images).
Secondly, there is no a priori knowledge of the moment when the driver is going to brake, so the driver’s intentions must be decoded continuously (since there is no specific time interval in which to look for the movement, detection precision is reduced).
Thirdly, each person’s brain is different and thus the aforementioned movement-related brain processes also have different EEG signatures (there are large inter- and intra-personal variations).
These problems are approached with movement-anticipation algorithms based on signal processing and machine learning techniques that require a specific calibration phase for each subject.
In this calibration phase, the user drives naturally while producing a set of movements similar to those we wish to decode. During the execution of these movements, brain signals and information on the driver’s movements are recorded. These EEG data are then used to calibrate and train the decoding system. Once the EEG-based brain-computer interface has been trained, it is possible to decode, in real time, the driver’s intention to move approximately 0.5 seconds (or even up to 1 second, depending on the person) before the actual movement.
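The calibrate-then-decode loop described above can be sketched as follows. This is a deliberate simplification with synthetic data, a single mean-amplitude feature and a nearest-mean classifier; it is not Nissan’s or Bitbrain’s actual pipeline, which the article does not detail:

```python
import numpy as np

# Sketch of the calibration-then-decode loop: labeled EEG windows
# ("pre-movement" vs "rest") recorded during calibration train a
# classifier; afterwards, a sliding window over the continuous signal
# yields a movement-intention decision in (simulated) real time.

rng = np.random.default_rng(42)
fs = 250                  # sampling rate in Hz (assumed)
win = fs // 2             # 0.5 s analysis window

def feature(window: np.ndarray) -> float:
    """Single illustrative feature: mean amplitude (MRCPs are slow drifts)."""
    return float(window.mean())

# --- Calibration phase: synthetic labeled windows ---------------------
rest = rng.normal(0.0, 5.0, size=(60, win))    # no movement preparation
pre = rng.normal(-6.0, 5.0, size=(60, win))    # negative MRCP-like drift

mu_rest = float(np.mean([feature(w) for w in rest]))
mu_pre = float(np.mean([feature(w) for w in pre]))
threshold = (mu_rest + mu_pre) / 2             # nearest-mean boundary

# --- Online phase: slide over a continuous synthetic recording --------
signal = np.concatenate([
    rng.normal(0.0, 5.0, 2 * fs),              # 2 s of rest
    rng.normal(-6.0, 5.0, fs),                 # 1 s of pre-movement drift
])

detections = []
for start in range(0, signal.size - win, fs // 10):  # hop every 0.1 s
    f = feature(signal[start:start + win])
    detections.append(f < threshold)           # True = intention detected

print(any(detections))  # → True (detected once the drift begins)
```

In a real system the feature extraction, classifier and per-subject calibration are far richer, but the structure -- record labeled examples, fit a decision rule, then evaluate it continuously on a sliding window -- is the same one the paragraph above describes.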
In the case of assisted driving, it is possible to use a simulator to train the brain-computer interface. Nissan, in fact, utilized a car simulator to train and evaluate the detection of movement anticipation. During CES, people who wished to test the Brain-to-Vehicle interface first had to calibrate the system. Participants followed the instructions of the simulator while their brain activity and movements, such as turning the steering wheel and braking, were recorded. When the system determined that training was satisfactory, the participant was ready to drive the simulator with direct assistance derived from their brain.
Nissan utilized a car simulator to train the machine learning techniques that detect the anticipation of movement.
Other brain-to-vehicle projects around the world
Nissan is not the only automotive manufacturer interested in the brain-to-vehicle concept and in using EEG systems in its vehicles. Ford, for example, in collaboration with King’s College London, has compared the attention and reaction times of professional racing drivers with those of everyday drivers, using EEG and virtual reality systems. The results show that, at high speeds, racing drivers are much better at ignoring distractions. Although for now Ford’s interest is to use these systems to improve the performance of its racing drivers, motorsport has always been a testbed for innovations before they reach the road.
While Ford wants to monitor and improve the capabilities of its drivers, Audi has been using EEG systems within the 25th Hour project to evaluate the experience of users in autonomous vehicles. Manufacturers know that, with the rapid changes the automotive industry is experiencing, time spent in vehicles is going to be very different from today. Using a futuristic simulation of an autonomous vehicle, Audi has studied millennials’ expectations for these vehicles, focusing initially on how to create an environment that favors productivity. EEG data show different levels of cognitive demand as a function of the stimuli presented by the vehicle (driver’s view, relaxation space, etc.).
In another project, developed by Bitbrain in collaboration with Ogilvy, a vehicle neuroconfigurator was created for the car manufacturer SEAT to adapt the vehicle’s characteristics to the driver’s temperament. The technology estimated the temperament of drivers in a process where videos were presented (stimuli) while their brain response was recorded by a minimalist wearable EEG. This technology was presented at the Paris Motor Show and was used by more than 8,000 people in less than 15 days. This was a milestone in the use of brain-computer interfaces, due to the intensity of use and because participants used it autonomously, without the aid of technical personnel.
Nowadays there are many innovations aimed at improving the interaction between humans and vehicles, and there is no doubt that the use of brain information opens a wide field for research and future applications. These include technologies for monitoring and evaluating the driver, supervising and adapting the driving style, and producing a driving experience that is even more exciting and enjoyable. Brain-to-vehicle technologies are already here, and the question is whether humans will relinquish control of their cars in the future or rather share it. What we definitely know is that we will move beyond purely manual driving very soon.