EEG Synchronization With Other Biosensors (EEG, ECG, EMG, eye tracking, etc.), and Software

12 Min.
Technical
By Andreu Oliver, PhD.
June 29, 2020

One of the first questions to answer when we approach a research methodology that involves recording with multiple devices is always: how do I synchronize the various signals? Because researchers work with many different systems and devices, either from multiple vendors or developed by the researchers themselves, this is always one of the first problems to solve. In this post we will explain why synchronization is so important and provide an overview of the different techniques that can be used to synchronize data recordings live.

Why do we need a good EEG synchronization with other devices when conducting a research study?

When conducting research with external stimuli or with multiple streams of input data, synchronization is seemingly trivial, yet it is one of the most important aspects of the experiment execution. Setting up a recording without ensuring correct synchronization can render the results of a study completely useless. We will always depend on a proper segmentation of the brain activity in order to establish correct causal or correlation effects. Deciding on the right approach and how to configure an experiment depends largely on the type of analysis we plan to carry out.

One of the best examples of the importance of proper synchronization is when working with EEG signals and Event-Related Potentials (ERPs).

An EEG ERP is the measured electrophysiological response of the human brain to an event. The event that evokes the potential can be internal (e.g. movement preparation) or external (e.g. auditory stimulus) and related to sensory-motor or cognitive processes.

Usually, ERPs are analyzed using grand averages of multiple responses to the event in order to remove noise and artifacts and find the common underlying pattern in the EEG signal. When we want to analyze an EEG ERP, we therefore need to align the data to the onset of a stimulus event. The ERP is obtained as an average of the so-called single trials of EEG data over a window based on the onset, e.g. from the onset to +600 ms. Any synchronization problem between the amplifiers and the stimuli presentation will impact the ERP timing window and can lead to a completely misguided result.

For example, in the image below, we present two different EEG ERP analyses on the same dataset where the only difference is that we have injected some jitter into the stimulus onset time (i.e. the onset is randomly moved to another EEG sample close to the real one). Note that in the example this effect introduces not only an inaccuracy in the fixed presentation offset, but also a variable latency between the recorded presentation of the stimuli and the actual onset.
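
To make this concrete, here is a minimal sketch in Python/NumPy, using synthetic data and assumed parameters (sampling rate, trial count, noise level, ERP shape), of how single trials are averaged into an ERP and how onset jitter smears and attenuates the averaged component.

import numpy as np

# Minimal sketch with synthetic data: all values below are assumptions chosen
# only for illustration.
fs = 256                                   # sampling rate in Hz
n_trials = 100
win = np.arange(0, int(0.6 * fs))          # analysis window: onset to +600 ms
t = win / fs

rng = np.random.default_rng(0)
erp_template = 5e-6 * np.sin(2 * np.pi * 3 * t) * np.exp(-t / 0.2)  # toy ERP shape

def average_erp(jitter_std_ms=0.0):
    """Average single trials aligned to (possibly jittered) onset markers."""
    trials = []
    for _ in range(n_trials):
        seg = rng.normal(0, 10e-6, size=win.size + fs)   # background EEG noise
        onset = fs // 2                                   # true onset sample
        seg[onset:onset + win.size] += erp_template       # embed the response
        # Marker jitter: the recorded onset deviates from the true onset.
        shift = int(np.clip(round(rng.normal(0, jitter_std_ms) * fs / 1000), -64, 64))
        trials.append(seg[onset + shift:onset + shift + win.size])
    return np.mean(trials, axis=0)

clean = average_erp(jitter_std_ms=0.0)     # sharp component survives averaging
jittered = average_erp(jitter_std_ms=50)   # component is smeared and attenuated
print(np.abs(clean).max(), np.abs(jittered).max())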

Graph 1: ERP response with and without jitter. Individual responses plus average result.

There are multiple aspects relevant to synchronization when executing an experiment, but the two main considerations when designing a synchronization method are: 1) the timing of stimuli presentation or event synchronization, and 2) the recording of multiple devices. The previous example focused on the first kind of synchronization. We are now going to dig a bit deeper into both aspects.

1. Stimuli presentation or event synchronization

To make sure that a time-sensitive analysis can be performed correctly, the first step to consider is the stimuli presentation. While this may seem like a straightforward process, many things happen from the moment the computer processor gives the order to present a stimulus, whether auditory, visual, or other.

The order from the processor needs to go through a series of steps to actually become the stimulus that the participant will perceive. These steps can involve the system memory, the graphics and/or audio card of the computer, and the screen where the stimuli are presented, among others. All of these steps require processing time, which often varies and must be managed for accurate results.

This presentation time accuracy requirement is the reason why very specialized stimuli presentation applications like ePrime exist. On their webpage, among other resources, you can find an in-depth explanation of the kind of delays that you can expect in the most common stimulation methods: visual and auditory.

Tobii Pro has also done timing-accuracy verifications of its software to make sure that stimuli delivery follows a proper timing precision. Results of the full test can be found here. In the graph below you can find one of the most meaningful results of the test, where the stimulus onset is represented on the time scale.

Graph 2: Stimulus-onset timestamp accuracy histogram of Setup 4 – Optimal performance requirements (Windows 10). The histogram shows the time distribution and number of stimuli (occurrences) per time interval. Taken from Timing Guide for Stimulus Display in Pro Lab.

Performing this type of analysis when we do not control the occurrence of the event of interest (e.g. in ecological or mobile settings) is more complicated. Since the events are not under our control, we need to resort to other synchronization methods. For instance, we can use the press of a key or a button by the participant or the experimenter as a time-locking mechanism (degrading synchronization due to jitter and introducing a certain delay).

Another alternative, useful for more ecological real-world scenarios, is to rely on the external recording and analysis of the behavior to code the onset of the event of interest in the dataset. Although these solutions are generally less precise, the analyses done in these situations are usually based on processing long periods of time instead of the occurrence of a precise event.

An example of this type of protocol and analysis is fatigue detection. Fatigue detection while performing certain tasks, e.g. during driving, is usually based on monitoring performance or on quantifying a certain state over the course of the task. Having a clear start and end point for the task is important, but their precision is not as critical as when working with time-sensitive events, since we will most likely treat the entire duration of the task as the unit of comparison. Note that, whenever possible, event analysis can still be a relevant tool in this setting, especially as an indicator of performance.

2. The recording of multiple devices: Between-device synchronization

When we work with ERPs, events can refer to a stimulus presentation, as in the example above, or to a contingency with some other recording. A good example of this is a study aiming to analyze the contingency between a certain gaze pattern recorded with an eye tracking system and a certain brain response recorded as an ERP with an EEG system. One instance of this is what is called an Eye-Fixation-Related Potential (EFRP), representing the EEG activity when a gaze event happens.

In this situation, the synchronization between devices is as important as, if not more important than, the synchronization with the stimuli onset. This is mainly because we will mark our event onset when the eye tracking system gives us a specific gaze position (i.e. when the participant looks at the target), not when the stimuli appeared on the screen.

Graph 3: Average result for the three experimental conditions. From Baccino, T., & Manunta, Y. (2005).

In this example from Baccino and Manunta (2005), focused on information processing, we can see an EFRP analysis that requires accurate synchronization between an eye tracking and an EEG system. They use the parafoveal-on-foveal paradigm to demonstrate the use of this combined analysis and try to understand the cognitive processes behind this effect. In order to do this, they present three pair-word conditions: a semantically associated target word, a semantically nonassociated target word, and a nonword. The onset of the analysis window is not the time of the stimuli presentation, but that of the first fixation in a designated area of interest. The EEG data on the graph represents those three conditions in the experiment during the first 200 ms of the fixation duration in that area.

Two factors condition the data analysis of EFRPs in terms of onset. First, the eye tracking data processing establishes the start of the fixation and determines the onset for the EEG analysis. Second, the synchronization between the device capturing the EEG data and the one recording gaze data. Both affect the exact position where we will mark the onset. While the fixation detection criteria that we use can be a topic for discussion, the correct synchronization of the signals is critical.
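
As a simple illustration of how these two factors come together, the sketch below (in Python, with hypothetical timestamps and an assumed, previously estimated clock offset) converts fixation onsets reported in the eye tracker's time base into EEG sample indices for EFRP epoching.

import numpy as np

# Illustrative sketch only: the sampling rate, clock offset, and fixation onsets
# below are hypothetical values.
eeg_fs = 500.0                      # EEG sampling rate in Hz
clock_offset_s = 1.237              # eye-tracker time minus EEG time, estimated
                                    # beforehand (e.g. via a shared sync pulse)

# Fixation onsets in seconds, expressed in the eye tracker's clock.
fixation_onsets_et = np.array([12.416, 15.902, 19.331])

# Map each onset into the EEG time base, then to the nearest EEG sample index.
fixation_onsets_eeg = fixation_onsets_et - clock_offset_s
onset_samples = np.round(fixation_onsets_eeg * eeg_fs).astype(int)
print(onset_samples)                # indices where each EFRP epoch starts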

How do we synchronize?

There are mainly two approaches to ensure proper synchronization between both devices and with the stimuli presentation: 1) hardware-based and 2) software-based.

You may then wonder which one is the most appropriate for your experiment.

The answer to this question will mainly depend on the characteristics of the experimental setting (mobility of the setup, flexibility, ergonomics, or the participant experience that we want to achieve), but also on the temporal accuracy required to analyze the data in a way that correctly captures the process we want to explore (ERPs, source localization, frequency analysis...). In experimental protocols where time accuracy is essential, the recommendation will always be, whenever possible, to use a hardware synchronization method.

1. Hardware synchronization with stimuli presentation:

The first and most reliable option we have consists of a photodiode connected to the recording device. A photodiode is a sensor that detects changes in light. If the sensor is placed in front of the screen, it can detect changes between black, grey, and white, as they represent changes in luminosity. This allows us to detect changes on the screen when they really happen, and to record this signal directly with the data. In other words, the stimulus presentation is detected as soon as it is actually displayed on the screen and visible to the participant, and this information is recorded simultaneously with, for example, the EEG data.

Graph 4: Photodiode signal representation.

Graph 5: Photodiode event transformation for EEGlab.

It is important to stress that the photodiode signal needs to be recorded directly on the EEG or biosignal amplifier. If the setup relies on a photodiode that is recorded separately from the signal that we are trying to synchronize, then we will also need to synchronize both systems carefully to avoid sources of error.
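
As an illustration of how such a channel is used afterwards, here is a short sketch (in Python, with a synthetic trace and an assumed threshold) that turns a photodiode channel recorded on the amplifier into discrete onset markers, similar to the event transformation shown in Graph 5.

import numpy as np

def photodiode_onsets(photo, fs, threshold=None):
    """Return sample indices and latencies where the photodiode goes from dark to bright."""
    if threshold is None:
        threshold = 0.5 * (photo.min() + photo.max())   # midpoint between black and white
    above = photo > threshold
    # Rising edges: the sample is above threshold while the previous one was not.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets, onsets / fs

# Synthetic example: a dark screen with two white flashes of 100 ms each.
fs = 1000
photo = np.zeros(5000)
photo[1000:1100] = 1.0
photo[3000:3100] = 1.0
idx, latencies = photodiode_onsets(photo, fs)
print(idx, latencies)    # -> [1000 3000] [1. 3.]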

TTL synchronization

Another hardware option is TTL synchronization. This system is based on a digital sync signal sent through a cable connection between the computer that is presenting the stimuli and the amplifier/computer that is recording the data. As this is a point-to-point connection, we minimize the interference due to protocols and/or interfaces, making the data transmission almost instantaneous (around 100 ns of delay).

However, this synchronization methodology does not account for the delays that can be introduced on the stimuli processing side. As mentioned, delays related to equipment resources such as processing power, system memory, or the graphics or audio card can still occur, and a TTL system may not take them into account properly.
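
For reference, a TTL trigger sent from the stimulation computer could look like the minimal sketch below, assuming a parallel-port trigger cable and PsychoPy on the stimulation side; the port address is an assumption and must match your hardware.

# Minimal sketch, assuming a parallel-port trigger cable and PsychoPy on the
# stimulation computer; the LPT address below is an assumption.
from psychopy import visual, parallel, core

port = parallel.ParallelPort(address=0x0378)   # address must match your hardware
win = visual.Window(fullscr=True)
stim = visual.TextStim(win, text='X')

stim.draw()
win.flip()            # the stimulus actually appears on this screen refresh
port.setData(1)       # raise the TTL line right after the flip returns
core.wait(0.005)      # keep the pulse short...
port.setData(0)       # ...and return to zero so the next trigger is detectable
core.wait(1.0)
win.close()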

Hardware synchronization between devices

When synchronizing multiple devices, we can use the same principles described for the TTL-based stimuli synchronization. The TTL can be used as an input in the form of a pulse between the two devices. This is usually performed using a “heartbeat” device inside one of the systems to be synchronized. The heartbeat system sends a TTL signal out of the amplifier every 8 s (the time may vary depending on the hardware manufacturer), allowing the researcher to align the two datasets during post-processing.
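
For that post-processing alignment, one simple approach, sketched below with hypothetical pulse timestamps, is to fit a linear mapping between the pulse times detected on each device, which absorbs both the constant offset and any clock drift.

import numpy as np

# Hypothetical timestamps (in seconds) at which the same heartbeat pulses were
# detected on each device; device B shows an offset plus a slight clock drift.
pulses_a = np.array([0.000, 8.000, 16.000, 24.000, 32.000])
pulses_b = np.array([3.512, 11.513, 19.515, 27.516, 35.518])

# Fit t_a = slope * t_b + intercept.
slope, intercept = np.polyfit(pulses_b, pulses_a, 1)

def to_device_a_time(t_b):
    """Convert a timestamp from device B's clock into device A's time base."""
    return slope * t_b + intercept

print(slope, intercept)          # slope close to 1.0, offset close to -3.51 s
print(to_device_a_time(20.0))    # an event at t = 20 s on B, expressed in A time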

It is important to note that the accuracy of all of these hardware synchronization options is limited by the hardware sampling rate: the accuracy of the synchronization will never be better than the sample period of the slowest device. For example, if we are recording at a sample rate of 256 Hz, our temporal accuracy on the signal recording, and on the trigger recording, will never be finer than a single sample. In this case, this means that it will never be finer than 3.91 ms, which corresponds to an estimated average error of ±1.95 ms.

Software Synchronization

Although hardware synchronization is always our recommendation, there are situations where having a tethered connection to a computer would limit the behavior we want to record, or where the tradeoff between the required time accuracy and the usability of the setup makes a cable connection unnecessary. For these experimental procedures, software synchronization options can be used.

Start-stop synchronization

The simplest way of synchronizing two systems is to start and stop them at the same time. This idea is, however, difficult to implement well, since desynchronization over a period of time can easily be on the order of seconds. Note that, even when sending the start command to both recording devices at the same time (which is itself difficult to achieve), systems may have different initialization times due to communication protocols, internal data processing, or other issues.

However, there are situations in which this may be acceptable and a reasonable option. Good examples are experiments where we record the spontaneous behavior of a participant during a free task (i.e. without a predefined train of stimuli) and we need to synchronize several devices that we are simultaneously recording. This can be a video feed (webcam, video recording…), an audio recording, a screen recording, etc.

This start-stop procedure will not yield the most accurate synchronization, but it will be enough for this kind of situation if our analysis uses metrics over long intervals (as a rule of thumb, one order of magnitude longer than the sync error). Usually, we will afterwards reference everything to one of the recorded input streams (e.g. the webcam) to mark the intervals that we want to study in the EEG data.

TCP/IP Synchronization

A refinement of the synchronization described above is to use software-generated events. This can be done using the common TCP/IP protocol. The idea is to create a TCP/IP connection between the two systems that need to be synchronized and use a predefined protocol to mark events. For instance, a TCP/IP connection between a stimulus presentation software and the recording software will allow us to send events from one to the other in order to mark the recorded signal.

TCP/IP is quite flexible in the type of configuration and messages that can be exchanged. Usually one of the applications works as a server and the others as clients. Once the connection is established, it is possible to send messages from the server to a client and vice versa through a specific IP address. Sometimes these messages are predefined by the software; in other cases, the researcher can configure which information should be logged. Using ePrime, for example, we can configure the software to send an event at every stimulus onset.
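
As an illustration, this kind of event marking could look like the minimal sketch below; the host, port, and message format are assumptions and depend on the recording software in use.

import socket
import time

# Assumed connection details of the recording computer; adjust to your setup.
RECORDER_HOST = "192.168.1.10"
RECORDER_PORT = 5000

def send_marker(sock, label):
    """Send one event marker together with the sender's local timestamp."""
    message = f"{label};{time.time():.6f}\n"
    sock.sendall(message.encode("utf-8"))

with socket.create_connection((RECORDER_HOST, RECORDER_PORT)) as sock:
    for trial in range(10):
        # ... present the stimulus here ...
        send_marker(sock, f"STIM_ONSET_{trial}")   # mark the onset in the recording
        time.sleep(1.0)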

The time accuracy of TCP/IP is acceptable, but it will not be as good as that of hardware synchronization. The TCP/IP protocol uses higher-level system resources than the TTL signal, which can introduce latencies; communication times suffer from network congestion, and the different software stacks can add other sources of delay. Moreover, these events are exchanged between applications (the server and the client) and are not recorded directly by the amplifier, as with hardware solutions.

Despite these limitations, this methodology is easier to set up and more widespread on the market, which means that many applications are already compatible with the protocol. Besides, in situations where we want a wireless recording with wearable devices, without a cable connection, but still want to present digital stimuli, this will always be a more precise solution than relying on start-stop synchronization.

Recommendations depending on the researcher's situation or needs

As discussed, the most adequate technique to synchronize your setup will always depend on two factors: the time accuracy needed for the task at hand and the requirements of the experimental setup.

In situations where accurate synchronization is vital, hardware solutions are the gold standard. Depending on the systems that you use and the stimuli characteristics, one can opt for TTL or the photodiode. However, this will always imply a compromise in the mobility and usability of the system. In stationary labs with a static recording area, this may not be a problem. In setups where some degree of freedom of movement is needed, or when the equipment needs to be portable, this can be an inconvenience.

Finally, in experimental situations where time is not a critical factor, or where mobility and usability are required, software synchronization provides a good solution. This will very likely result in some loss of accuracy in the registered time of the events, but it is an acceptable tradeoff for situations where otherwise we could not record data.

NOTE: There are other synchronization options meant for post-processing that have been either very briefly mentioned (event marking) or not mentioned at all (timestamp synchronization). Depending on your needs, those may be options you should also consider.

About the Author

Andreu Oliver, Ph.D. - Global Business Development Director at Bitbrain (LinkedIn)

Andreu Oliver holds a Psychology degree with a Cognitive Psychology specialization (2012) and an MSc in Marketing (2013). In 2017, he received his Ph.D. from the Department of Psychology of Communication and Change at the Universitat Autònoma de Barcelona. Since 2015 he has been working in the sector, first as a distributor for Tobii Pro in Spain and Portugal, and now as Global Business Development Director at Bitbrain.

Bibliography

  • Baccino, T., & Manunta, Y. (2005). Eye-Fixation-Related Potentials: Insight into Parafoveal Processing. Journal of Psychophysiology, 19(3), 204–215. doi:10.1027/0269-8803.19.3.204 
