Are you setting up an experiment with a mobile EEG device, and do you want to send triggers via Presentation? You can do it completely wirelessly and with little effort. In this short blog, we describe how to send triggers using LSL and mbt Streamer.
You can simply enable sending LSL triggers from the General Settings of Presentation, to send an LSL stream called “Presentation” to any other app on the same computer, or to any other device on the network.
Fig 1. Check the box saying “Send event data to LSL”.
Fig 2. Open the stream for LSL in Presentation, under LSL Properties.
The LSL stream will be automatically detected by mbt Streamer and can be recorded, along with all other streams, into a single .xdf file, just as the LSL LabRecorder does. However, mbt Streamer can do much more: it can integrate keyboard presses as markers, include 3D head motion visualizations, etc.
Fig 3. Recording LSL in mbt Streamer. To avoid confusion: “kama” is the name of my computer, but you can imagine streams coming from many different computers with minimal delay (a few ms). I have also checked an option in mbt Streamer to record all keyboard presses on this computer as an LSL stream.
Then you can open the .xdf file with Python or Matlab and analyze the data.
Fig 4. Example code to open a .xdf file in Python.
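A minimal sketch of such a script, using pyxdf (the file name here is a placeholder):

import pyxdf

# Load all recorded streams from the .xdf file into a list of dicts.
streams, header = pyxdf.load_xdf('recording.xdf')

# Each stream carries its data and the LSL timestamps of every sample.
data = streams[0]['time_series']
timestamps = streams[0]['time_stamps']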
Conclusion
Let us know how it goes, and if you have any questions, feel free to reach out to us on our contact page.
Are you thinking of performing an experiment in VR? And do you wish to better understand the behavioral reactions and psychological states of your participants?
You could capture hand or head movements and breathing, along with EEG recordings. First, your EEG device needs to be mobile and thin enough to fit under the VR headset, e.g. the Smarting PRO mobile EEG. Then, you need to add triggers to your VR environment each time a certain stimulus appears, and align them with all the above-mentioned continuous streams (EEG, breathing, hand/head movements).
Fig 1. Smarting PRO with HTC Vive and the Alyzé breathing belt
Here is an example of how to stream the positions and rotations of both VR controllers (hands) and the head of an HTC Vive VR headset, as continuous LSL StreamOutlets from Unity. At the same time, triggers are sent from Unity each time an event of interest occurs. Meanwhile, breathing (Alyzé by Ullo) and EEG (Smarting PRO) are streamed as well. Finally, all streams are recorded with mbt Streamer, and all data is automatically synced by LSL.
Used here:
Windows 10, 64-bit, all on one computer; but you can use multiple computers, typically one for VR and another for physiological recordings (EEG and breathing), in which case all computers must be on the same network, e.g., local Wi-Fi.
Fig 2. Simple Sample scene in Unity, example from SteamVR
Then add LSL4Unity to your project via Window -> Package Manager -> Add package from disk…
Fig 3. Open the package.json file.
Then import all the existing Samples (all 3 of them).
Fig 4. Importing Samples from LSL4Unity
Streaming continuous signals
To track the position and rotation of your controllers and send them as LSL streams to LabRecorder or mbt Streamer, simply drag and drop “PoseOutlet.cs” onto both controllers as a new component in the Inspector.
Fig 5. Drag and drop the “PoseOutlet” script onto the controllers and camera
If you wish to track head movements as well, just do the same: drag and drop the same script, but this time onto the Camera.
Streaming Triggers
Let’s send a trigger every time the ball touches the ground as it bounces in the Simple Sample scene. To do so, simply drag and drop “SimpleOutletTriggerEvent.cs” onto the Sphere (the game object bouncing on the floor).
Fig 6. Drag and drop script to the Sphere game object
Then open the script; you can see the stream name, which you can rename if you like. Notice that we are sending strings as the marker type, with an irregular sampling rate (equal to zero):
StreamInfo streamInfo = new StreamInfo(StreamName, StreamType, 1, LSL.LSL.IRREGULAR_RATE,
    channel_format_t.cf_string, hash.ToString());
outlet = new StreamOutlet(streamInfo);
And to send LSL triggers each time the ball touches the ground, replace “void OnTriggerEnter(Collider other)” and “OnTriggerExit(Collider other)” with a collision handler along these lines:
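// A minimal sketch (not the exact sample code): Unity's physics engine calls
// OnCollisionEnter each time the Sphere collides with another object.
// The marker text below is an assumption, not the original string.
void OnCollisionEnter(Collision collision)
{
    // Push one string marker through the outlet created above.
    outlet.push_sample(new string[] { "ball_touched_ground" });
}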
To use the breathing belt and convert its Bluetooth Low Energy (BLE) data to LSL, run “stream_breathing_amp_multi.py -m <MAC address of your device>” in PowerShell or Command Prompt, as below:
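For example (the MAC address below is a placeholder; substitute your device's own):

python stream_breathing_amp_multi.py -m AA:BB:CC:DD:EE:FF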
Obviously, you need to run the script from its location (in my case it is in Downloads), and to find the right MAC address of your device (e.g., you can find it with a great app called nRF Connect).
Visualize in Real Time with openViBE
To visualize your streams in real time or perform some signal processing, you can use openViBE. Play your scene in Unity (leave it in Play mode) and open the openViBE Acquisition Server.
Fig 7. OpenViBE Driver Properties of the Acquisition Server
You can see 3 streams from Unity denoting the rotation and position of the 2 controllers and the camera. We cannot see our markers, as they are “string” and not “int32” (the only type accepted by the Acquisition Server).
Then, in openViBE Designer, create the simplest scenario (an Acquisition client and a Signal display) to view the streams in real time (below, only the right-hand controller is displayed):
Fig 8. Real-time visualization of controller positions and rotations, using openViBE Designer
Record all streams at once
And finally, let’s record it all in one place, in mbt Streamer, and you’re good to go!
Fig 9. Summary of a 6-second recording by mbt Streamer on the computer “kama”: 3 Unity streams, 1 Smarting PRO EEG stream, breathing signals, and triggers from Unity.
As you can see in Fig 9, there are 3 continuous Unity signal streams, 1 marker stream, a breathing belt stream (ullo_bb), and EEG. Once I stopped the recording after 6 seconds, we can see that the ball bounced 14 times, as it produced 14 events.
Conclusion
Let us know how it goes, and if you have any questions, feel free to reach out to us on our contact page.
If you wish to keep track of the time each stimulus appears in PsychoPy during your experiment, and align each of them with your EEG recording, you need to automatically send a trigger each time a stimulus appears. This is possible with LSL. In this blog, we explain how to precisely send triggers with PsychoPy and mobile EEG, using Smarting PRO for this specific example.
How To
Windows 10, 64-bit (we used Windows 10 in this example, but check whether it applies to other platforms)
In the Stroop task, words appear one after another in different colors (e.g., the word RED printed in one color, then in another); the participant should report the color of the word, ignoring the word itself.
Open or create your experiment in the Builder (the example from the tutorial above), then compile it to a Python script.
Fig 1. Compile the experiment to Python
Then open the Coder window and add the following at the end of the section called # --- Import packages ---:
from pylsl import StreamInfo, StreamOutlet

# Set up the LabStreamingLayer stream.
info = StreamInfo(name='PsychoPy_LSL', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string', source_id='psy_marker')
outlet = StreamOutlet(info)  # Broadcast the stream.
Of course, you need to install pylsl (pip install pylsl) to make this code work.
If you want to send continuous data, nominal_srate should change from 0 (an irregular sampling rate) to the actual regular rate. Here we are sending, as a “string”, each word the participant sees on screen. If you wish to send numbers instead, write “int32” or “float32” instead of “string”. LSL will automatically associate a timestamp with each marker’s appearance.
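For instance, a numeric variant of the marker stream above could look like this (a sketch; the stream name and source_id are placeholders):

# Hypothetical numeric marker stream: int32 event codes instead of strings.
info_num = StreamInfo(name='PsychoPy_LSL_num', type='Markers', channel_count=1,
                      nominal_srate=0, channel_format='int32', source_id='psy_marker_num')
outlet_num = StreamOutlet(info_num)
outlet_num.push_sample([42])  # one numeric event code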
Now, you want to send a marker whenever a word appears on the screen, and whenever the user responds with a keyboard press.
In the code you will find a section called # *response* updates; there, write:
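# A minimal sketch (not the original snippet): send the pressed key as a marker.
# 'response' is the keyboard component name, matching the section name above;
# place this where the response ends the routine, so it fires only once.
resp_mark = str(response.keys)
print("Response: [%s]" % resp_mark)  # console print for debugging
outlet.push_sample([resp_mark])  # Push event marker.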
I am printing to the PsychoPy console for debugging, but you can comment it out. If you are using only one screen, to see the console output during the experiment you must set fullscr=False instead of True in the section # --- Setup the Window ---.
In the section called # *text* updates, write:
# Send LSL marker: the word and its colour as a string.
mark = word + "_" + colour
print("Word_color: [%s]" % mark)
outlet.push_sample([mark])  # Push event marker.
word and colour are the names of my variables (columns in the .csv conditions file), so if you used other names, you should change them accordingly. I wanted to send not only the word appearing on the screen but also its respective color.
WARNING: Whenever you Compile to Python from the Builder, it will erase your code in the Coder, so don’t forget to save your Python code from the Coder.
At the same time, you may wish to record brain activity, e.g. to measure workload levels during the experiment, error-related potentials, or EEG coherence sensitive to congruent and incongruent word-color pairs.
Open mbt Streamer and run the PsychoPy experiment from the Coder (as the Builder did not integrate your new code). Once you press Run, you can refresh the LSL streams in mbt Streamer and see your marker stream, as below.
Then simply press Record in mbt Streamer to save all markers and EEG into one .xdf file with automatically synced streams, and you’re good to go!
Fig 2. After pressing Run in the Coder and Stream in mbt Streamer, you can see 2 LSL streams in mbt Streamer: one coming from PsychoPy, and the other from the Smarting PRO EEG; all received on 1 computer (called “kama”).
Conclusion
Let us know how it goes, and if you have any questions, feel free to reach out to us on our contact page.
If you are planning an EEG experiment that requires offline recording (for example, recording EEG in the woods with no internet connection, so the EEG is stored on an SD card), good news: you are in the right place! In this blog, we walk you through setting up TTL triggering in Neurobs Presentation, using cable triggering and the Smarting PRO mobile EEG.
Smarting PRO supports a 1-bit TTL input: a cable plugged into the amplifier on one side (a 2.5 mm audio jack) and into a USB port of the computer on the other. Below you will find an example of setting up TTL in Neurobs Presentation.
How To
Windows 10, 64-bit (we used Windows in this example, but check whether it applies to other platforms)
You should already have an experiment scenario made in Neurobs Presentation, or you can use this simple one (mbt sound example). Note that the experiment file (.exp) stores the path of the computer it was saved on, as well as the ports it used. To solve this, just save it as a new experiment with Save As. But before saving, make sure you have changed the port settings in Presentation, as follows.
Fig 1. TTL setup (left), Windows Device Manager (right).
Then go to Presentation -> Settings and add an OUTPUT port, because you are sending triggers from the computer to the EEG amplifier. Note that this applies to any other output (cable) using a serial port, not only TTL.
Fig 2. Adding the output port in Presentation.
Fig 3. Filling in the values for TTL.
After you have filled in the values as in the figure above, just click Close.
Finally, if you want to use the demo examples from Presentation, like the well-known N-back task, simply add the port as described earlier and Save As a new experiment file (this updates the .exp file with the correct port and path).
TTL markers are “physically” stamped onto the EEG data with minimal delay and can be recorded directly onto the amplifier’s SD card together with the EEG. Additionally, both can be sent back to the computer (mbt Streamer) via Bluetooth. In that case, open mbt Streamer and connect it to receive the EEG streams on your computer (capturing Bluetooth packets). You can also see the TTL triggers marking the EEG signals in real time in mbt Streamer. The TTL protocol also facilitates the integration of embedded systems; e.g., it could send events from an Arduino board (using serial communication).
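As a rough illustration of script-driven serial triggering (not the Presentation setup above): a heavily hedged Python sketch using pyserial, assuming the TTL cable enumerates as a COM port and that writing a byte produces the pulse; the port name and baud rate are placeholders, so check your hardware documentation.

import serial

# Open the COM port the TTL cable enumerates as (see Windows Device Manager, Fig 1).
port = serial.Serial('COM3', baudrate=115200)
port.write(b'\x01')  # send one byte to toggle the 1-bit TTL line (assumed mapping)
port.close()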
Fig 4. To test whether TTL is working, simply look at the EEG signals in real time in mbt Streamer and send Test triggers from Presentation.
Fig 5. Every time you press Send (simulating a stimulus), you can see, in real time, a vertical line appear in the EEG signals within mbt Streamer, with no delay. Do not mind the EEG signals; the cap is not even placed on a subject’s head.
Fig 6. Remember to click ON for TTL markers in mbt Streamer if you wish to record them as LSL markers as well. In that case, you will have the EEG streams, plus the TTL triggers as LSL.
Fig 7. The path of TTL triggers and EEG: (1) triggers are sent from Neurobs Presentation for each stimulus via the TTL cable to the Smarting PRO amplifier, instantly marking the EEG streams “physically”; (2) the EEG streams (with triggers) are sent via Bluetooth back to the computer, where (3) they are converted into LSL streams and sent over the network using the multicast LSL protocol, to be received by any other computer on the same network.
This way you can see how TTL markers, once physically integrated into the EEG, are streamed along with the EEG signals via Bluetooth and converted into LSL streams.
Conclusion
Let us know how it goes, and if you have any questions, feel free to reach out to us on our contact page.
Once your experiment is outlined and your equipment ready, it is time to consider the possibilities for sending triggers with mobile EEG and the ways to synchronize it with other devices/stimuli. We will start with the types of triggering and how you can mark different events using mobile EEG. The author of this blog post, Dr. Mladenovic, will share examples of different triggering and synchronization setups using the Smarting PRO mobile EEG.
*Introduction written by mbt team
General idea
Does your study involve capturing brain reactions to stimuli? Whatever type of stimuli you use in your experiment (images, videos, games, tasks, sounds, etc.), you will soon realize that you need to keep track of when each stimulus is presented relative to the EEG data, so that you can align the participant’s reactions to those stimuli in time. This is what people call triggering. You need a trigger, marker, or event (different names for the same thing) to mark the moment each stimulus appeared, and to align (sync) it with the same moment in your EEG recording (or any other physiological recording). You also do not want your trigger to be late: e.g., if you press a button, you want it to mark the stream instantly, without delay, as in the figure below.
Fig 1. Example of manual triggering of a stimulus. In the first example (top), the marker/trigger is set correctly, without delay, capturing the correct EEG reaction. In the second example (bottom), it has a considerable delay, so the EEG reaction captured afterwards makes no sense.
1. Types of triggering
Various tools can capture a participant’s behavioral responses to stimuli (keyboard presses, mouse movements or clicks), as well as mark the start and end of a stimulus, set by the experimenter. Whatever involves manual initiation of, or response to, events is called manual triggering. On the other hand, you can, for example, program the stimuli to appear every 3 seconds, or whenever the status of a stimulus changes; this is called automatic triggering. Both manual and automatic triggering can be performed via a cable or wirelessly.
1.1 Cable triggering (TTL) – precise but not mobile hardware triggering
The Smarting PRO mobile EEG device has an input in the form of an additional cable going into the EEG amplifier (like an audio jack), called TTL, plugged into a USB port of a computer on the other side. Basically, a stimulus from the computer, whether a visual/sound event or a key/mouse press, is sent via USB (converted into UART) to be “physically” incorporated into the EEG signals (this is hardware, cable, or wired triggering). This is one way to ensure your stimuli appear as markers in the right place and without delay. TTL is supported by many experiment-design applications, like Neurobs Presentation, PsychoPy, Pro Lab, etc.
Fig 2. TTL setup with Smarting PRO: a 2.5 mm audio jack is plugged into the Smarting PRO amplifier on one side, and into the USB port of a computer on the other.
However, what happens if you want your participants to be mobile while watching (or listening to) stimuli, e.g., in a VR (virtual reality) environment, or you want to design an outdoor study? For this type of study, you need a wireless, mobile solution. For such complex setups, Smarting PRO provides a wireless connection that keeps high bandwidth while losing almost no packets, thanks to Bluetooth 5.0.
LSL, or Lab Streaming Layer, allows triggering with sub-millisecond precision, which is especially important for rapid brain responses such as event-related potentials like the P300, which arise and disappear within a few hundred milliseconds.
LSL is open source, and it supports an enormous number of applications (OpenViBE, Unity, Unreal, PsychoPy, NeuroKit2, Presentation, E-Prime 3.0, etc.) and devices (Smarting PRO, Pupil Labs eye trackers, SR Research EyeLink, Microsoft Kinect, the Shimmer ECG device, HTC Vive PRO, etc.). See the list of supported apps and hardware here.
Script triggering with LSL – examples
PsychoPy is mainly used to capture participants’ behavioral responses to stimuli, or to manually mark the start and end of a stimulus, set by the experimenter. With LSL, whenever a keyboard key is pressed, the mouse is moved, or whatever behavioral activity is of interest occurs, you can instantly create an LSL marker and align it with your EEG stream (or any physiological data stream). Automatic triggering can also be implemented very easily.
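A minimal sketch of such script triggering with pylsl (the stream name, source_id, and marker text are placeholders):

from pylsl import StreamInfo, StreamOutlet

# Create a marker stream: one string channel, irregular rate.
info = StreamInfo(name='Behavior_Markers', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string', source_id='behav01')
outlet = StreamOutlet(info)

outlet.push_sample(['space_pressed'])  # push one marker; LSL timestamps it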
Synchronize different apps and devices with LSL – Hyperscanning with mobile EEG
With LSL, as we mentioned, you can sync a lot of data from various devices in real time; but it also allows easy synchronization between multiple EEG devices, called hyperscanning. Smarting PRO is designed with hyperscanning studies in mind, and it uses LSL to synchronize all devices in its so-called Hyperscanning mode.
Fig 3. A hyperscanning example of an experiment where dancers and a musician react to each other. In this setup, the dancers’ EEG signals are streamed via Bluetooth (one-directional communication) to the computer closer to them, and converted into LSL streams. Meanwhile, the musician’s EEG signals are streamed via Bluetooth to another computer closer to her (reducing Bluetooth packet losses with distance). Once converted from Bluetooth into LSL, these streams are streamed and synchronized (multicast communication) to all computers within the same network (Wi-Fi), no matter the distance.
LSL uses standard Internet protocols to send and receive data, so you can synchronize streams from as many devices or apps as you like, as long as all devices are connected to the same local network (LAN). The great thing is that you do not need to write IP addresses; you just give your stream a name (you use the same name in the sending and receiving applications). This means your code is agnostic to the device or network it runs on (no need to change configuration files whenever a device or IP address changes).
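On the receiving side, a minimal pylsl sketch resolving a stream by its name rather than by IP address (“MyStream” is a placeholder and must match the name given on the sending side):

from pylsl import StreamInlet, resolve_byprop

# Find the stream by the name used by the sender.
streams = resolve_byprop('name', 'MyStream', timeout=5.0)
inlet = StreamInlet(streams[0])
sample, timestamp = inlet.pull_sample()  # one sample with its LSL timestamp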
In the Figure below, an LSL stream is sent from openViBE (generated signal) to control a game object (Player) in Unity.
Fig 4. Sending EEG signals from openViBE to Unity using LSL. The same name of the stream is used in both applications.
TIP2: Avoid leaving the default name for your LSL stream, as other researchers in the building (using the same network) might be performing unrelated experiments using LSL with the same default name. In such cases, you might receive their streams, and they yours.
Fig 5. Streaming continuous movements of the controllers/head with triggers from Unity, while streaming breathing (Alyzé) and EEG (Smarting PRO) as continuous signals.
LSL – all in one place
You can record it all into one .xdf file, using for instance mbt Streamer, which typically serves to receive the EEG streams via Bluetooth, but also acts much like the LSL LabRecorder in that it can record all available LSL streams. It also has additional features: besides recording external LSL streams, it can record keyboard events directly from the same computer, or very precise 3D head motion.
Fig 6. Recording all LSL streams and markers on mbt Streamer.
XDF files can be opened in several analysis environments such as Matlab, Julia, or Python (pyxdf; example below).
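For instance, a minimal pyxdf sketch (the file name is a placeholder):

import pyxdf

# Load the recording and list every stream with its number of samples.
streams, header = pyxdf.load_xdf('recording.xdf')
for stream in streams:
    name = stream['info']['name'][0]
    print(name, len(stream['time_stamps']))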
In short, with LSL you can send markers at every stimulus, synchronize continuous streams from as many devices and apps as you can imagine over one network, and capture them all in one single file. There is a strong community behind LSL, and lots of documentation is available to get started.
Conclusion
As Dr. Mladenovic showed in this short blog post, there are many ways to send or receive triggers and to synchronize with other devices in order to build a multimodal experimental setup and obtain all the psychological information of interest for your particular research goals. If you need any help with this, feel free to check our support page or contact form. Good luck with your research!
*Conclusion written by mbt team