Event triggering and data synchronization with mobile EEG

Introduction

Once your experiment is outlined and your equipment is ready, it is time to consider how to send triggers with mobile EEG and how to synchronize it with other devices and stimuli. We will start with the types of triggering and how you can mark different events using mobile EEG. The author of this blog post, Dr. Mladenovic, will share examples of different triggering and synchronization setups using the Smarting PRO mobile EEG.

*Introduction written by mbt team

General idea

Does your study involve capturing brain reactions to stimuli? Whatever type of stimuli you use in your experiment – images, videos, games, tasks, sounds etc. – you will soon realize that you need to keep track of when each stimulus is presented with respect to the EEG data, so that you can align the participant’s reactions to those stimuli in time. This is what people call triggering. You need a trigger, marker or event (different names for the same thing) to mark the moment each stimulus appeared, and to align (sync) it to the same moment in your EEG recording (or any other physiological recording). You also do not want your trigger to be late: if you press a button, you want it to mark the stream instantly, without delay, as in the figure below.

Fig 1. Example of manual triggering of a stimulus. In the first example (top), the marker/trigger is set correctly, without delay, capturing the correct EEG reaction. In the second example (bottom), it has a considerable delay, so the EEG reaction captured afterwards makes no sense.

1. Types of triggering

Various tools can capture a participant’s behavioral responses to stimuli (keyboard presses, mouse movements or clicks), as well as mark the start and end of a stimulus set by the experimenter. Anything that involves manual initiation of, or response to, events is called manual triggering. On the other hand, you can, for example, program the stimuli to appear every 3 seconds, or send a trigger each time the status of a stimulus changes; this is called automatic triggering. Both manual and automatic triggering can be performed via a cable or wirelessly.
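As a minimal sketch, automatic triggering every 3 seconds could look like the following using LSL’s Python bindings (pylsl). This assumes pylsl is installed; the stream name "AutoMarkers", the source id, and the marker labels are placeholders, not fixed conventions.

```python
import time

def marker_schedule(interval_s, n_markers):
    """Intended send times, in seconds from the start, for n evenly spaced markers."""
    return [i * interval_s for i in range(n_markers)]

def send_auto_markers(interval_s=3.0, n_markers=10):
    # pylsl is imported here so the pure helper above works without it
    from pylsl import StreamInfo, StreamOutlet
    # A marker stream: 1 channel, irregular rate (0), string samples
    info = StreamInfo('AutoMarkers', 'Markers', 1, 0, 'string', 'auto_demo_uid')
    outlet = StreamOutlet(info)
    start = time.time()
    for i, offset in enumerate(marker_schedule(interval_s, n_markers)):
        time.sleep(max(0.0, start + offset - time.time()))
        outlet.push_sample([f'stim_{i}'])  # marker enters the LSL stream immediately
```

Calling `send_auto_markers()` while a recorder is listening on the same network would place a marker in the stream every 3 seconds.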

1.1 Cable triggering (TTL) – precise but not mobile hardware triggering

The Smarting PRO mobile EEG device has an input in the form of an additional cable (like an audio jack) going into the EEG amplifier, called TTL, plugged into a USB port of a computer on the other side. A stimulus from the computer – visual, sound, a key/mouse press etc. – is sent via USB (converted into UART) to be “physically” incorporated into the EEG signals (this is hardware, cable or wire triggering). This is one way to ensure your stimuli appear as markers at the right place and without delay. TTL is supported by many apps for experiment design, like Neurobs Presentation, PsychoPy, Pro Lab etc.

Fig 2. TTL setup with Smarting PRO, where a 2.5 mm audio jack is plugged into the Smarting PRO amplifier on one side and into the USB port of a computer on the other.

Click here to see how to set up TTL in Presentation.

However, what happens if you want your participants to be mobile while watching (or listening to) stimuli, e.g. in a VR (Virtual Reality) environment, or if you want to design an outdoor study? For this type of study, you need a wireless, mobile solution. In such a complex setup, Smarting PRO provides a wireless connection with high bandwidth and almost no packet loss, thanks to Bluetooth 5.0.

1.2 Wireless triggering with Lab Streaming Layer – wireless synchronization protocol

LSL, or Lab Streaming Layer, allows triggering with sub-millisecond precision, which is especially important for rapid brain reactions such as event-related potentials (e.g. the P300), which arise and disappear within a few hundred milliseconds.

LSL is open source and supports an enormous number of applications – OpenViBE, Unity, Unreal, PsychoPy, NeuroKit2, Presentation, E-Prime 3.0 etc. – and devices: Smarting PRO, Pupil Labs (eye tracker), SR Research EyeLink, Microsoft Kinect, Shimmer ECG device, HTC Vive Pro etc. See the list of supported apps and hardware here.

Script triggering with LSL – examples

PsychoPy is mainly used to capture participants’ behavioral responses to stimuli, or to manually mark the start and end of a stimulus set by the experimenter. With LSL, whenever a key is pressed, the mouse is moved, or whatever behavioral activity is of interest occurs, you can instantly create an LSL marker and align it to your EEG stream (or any physiological data stream). Automatic triggering can also be implemented very easily.
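A minimal sketch of manual triggering in this spirit, assuming pylsl and PsychoPy are installed; the stream name "KeyMarkers", the source id, and the 30-second duration are arbitrary choices for illustration:

```python
def key_to_marker(key_name):
    """Turn a key name into a marker label (pure helper, independent of any library)."""
    return f'key_{key_name}'

def run_manual_triggering(duration_s=30.0):
    from pylsl import StreamInfo, StreamOutlet
    from psychopy import core, event, visual
    outlet = StreamOutlet(StreamInfo('KeyMarkers', 'Markers', 1, 0, 'string', 'kbd_uid'))
    win = visual.Window()  # PsychoPy needs a window to collect key events
    clock = core.Clock()
    while clock.getTime() < duration_s:
        for key in event.getKeys():  # keys pressed since the last call
            outlet.push_sample([key_to_marker(key)])  # instant marker, no delay
    win.close()
```

Each key press is pushed into the LSL marker stream the moment PsychoPy reports it, so the marker and the EEG sample it aligns with share the same LSL clock.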

Check here how to send LSL automatic triggers in PsychoPy and record with mbt Streamer.

Also look at quick set up for LSL in Neurobs Presentation.

Synchronize different apps and devices with LSL – Hyperscanning with mobile EEG

With LSL, as we mentioned, you can sync a lot of data from various devices in real time, but you can also easily synchronize multiple EEG devices, which is called hyperscanning. Smarting PRO is designed with hyperscanning studies in mind, and it uses LSL to synchronize all devices through its so-called Hyperscanning mode.
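As a rough sketch of the receiving side (assuming pylsl is installed; the participant labels are illustrative), a hyperscanning receiver can resolve every stream of type "EEG" on the network and pull samples whose timestamps already share one synchronized LSL clock:

```python
def label_participants(source_ids):
    """Assign a readable participant label to each discovered EEG stream."""
    return {sid: f'participant_{i + 1}' for i, sid in enumerate(sorted(source_ids))}

def hyperscan_once(timeout_s=5.0):
    from pylsl import StreamInlet, resolve_byprop
    # Find all EEG devices currently streaming on the network
    streams = resolve_byprop('type', 'EEG', timeout=timeout_s)
    inlets = {s.source_id(): StreamInlet(s) for s in streams}
    labels = label_participants(inlets.keys())
    for sid, inlet in inlets.items():
        sample, timestamp = inlet.pull_sample()
        print(labels[sid], timestamp)  # timestamps are comparable across devices
```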

Fig 3. A hyperscanning example of an experiment where dancers and a musician react to each other. In this setup, the dancers’ EEG signals are streamed via Bluetooth (one-directional communication) to the computer closer to them and converted into LSL streams. The musician’s EEG signals are streamed via Bluetooth to another computer closer to her (reducing Bluetooth packet loss over distance). Once converted from Bluetooth into LSL, these streams are streamed and synchronized (multicast communication) to all computers within the same network (WiFi), no matter the distance.

LSL uses standard Internet protocols to send and receive data, so you can synchronize streams from as many devices or apps as you like, as long as all devices are connected to the same local network (LAN). The great thing is that you do not need to write IP addresses; you just give your stream a name (the same name in the sending and receiving applications). This means your code is agnostic to the device or network it runs on (no need to change configuration files whenever a device or IP address changes).
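On the receiving side, discovery by name (no IP addresses) can be sketched like this with pylsl; "MyEEG" is a placeholder that must match the name used by the sending application:

```python
def pick_stream(names, wanted):
    """Return the index of the first stream whose name matches, or None."""
    for i, name in enumerate(names):
        if name == wanted:
            return i
    return None

def receive_by_name(wanted='MyEEG', timeout_s=5.0):
    from pylsl import StreamInlet, resolve_byprop
    streams = resolve_byprop('name', wanted, timeout=timeout_s)  # discovery by name only
    if not streams:
        raise RuntimeError(f'No stream named {wanted!r} found on this network')
    inlet = StreamInlet(streams[0])
    sample, timestamp = inlet.pull_sample()  # timestamp is on LSL's synchronized clock
    return sample, timestamp
```

The same code runs unchanged on any machine in the network, which is exactly the device- and IP-agnostic behavior described above.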

In the figure below, an LSL stream is sent from OpenViBE (a generated signal) to control a game object (Player) in Unity.

Fig 4. Sending EEG signals from OpenViBE to Unity using LSL. The same stream name is used in both applications.

TIP: Avoid leaving the default LSL stream name, as other researchers in the building (on the same network) might be running unrelated experiments with LSL and the same default name. In that case you might receive their streams, and they yours.

LSL – sync example

Sync LSL streams from a breathing belt (Ullo), mobile EEG, and Unity with the HTC Vive Pro VR headset.

Fig 5. Streaming continuous movements of the controllers/head with triggers from Unity, while streaming breathing (Alyzee) and EEG (Smarting PRO) as continuous signals.

LSL – all in one place

You can record everything in one .xdf file using, for instance, the mbt Streamer, which typically serves to receive the EEG streams via Bluetooth but also acts similarly to the LSL LabRecorder, in the sense that it can record all available LSL streams. It also has additional features: besides recording external LSL streams, it can record keyboard events directly from the same computer, or highly precise 3D head motion.

Fig 6. Recording all LSL streams and markers on mbt Streamer.

XDF files can be opened in several analysis environments, such as MATLAB, Julia, or Python (pyxdf; example below).
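For instance, a small sketch with pyxdf (assuming it is installed; "recording.xdf" is a placeholder path). Marker streams can be told apart from continuous signals by their nominal sampling rate of 0:

```python
def split_streams(streams):
    """Separate marker streams (nominal rate 0) from continuous signal streams."""
    markers = [s for s in streams if float(s['info']['nominal_srate'][0]) == 0.0]
    signals = [s for s in streams if float(s['info']['nominal_srate'][0]) > 0.0]
    return markers, signals

def load_recording(path='recording.xdf'):
    import pyxdf
    streams, header = pyxdf.load_xdf(path)  # every recorded LSL stream, one file
    markers, signals = split_streams(streams)
    for s in signals:
        # time_series is a samples-by-channels array; time_stamps holds the LSL timestamps
        print(s['info']['name'][0], s['time_series'].shape)
    return markers, signals
```

Because all streams were recorded against the same LSL clock, the `time_stamps` of the EEG, breathing, and marker streams can be compared directly during analysis.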

In short, with LSL you can send markers at every stimulus, synchronize continuous streams from as many devices and apps as you can imagine on one network, and capture them all in one single file. There is a strong community behind LSL, and lots of documentation available to get started.

Conclusion

As Dr. Mladenovic showed in this short blog post, there are many ways to send or receive triggers and to synchronize with other devices in order to build a multimodal experimental setup and capture all the physiological information of interest for your particular research goals. If you need any help with this, feel free to check our support page or contact form. Good luck with your research!

*Conclusion written by mbt team

Recommended reading