This year, we celebrate 10 years of mbt. On this occasion, our CEO, Ivan Gligorijevic, has written some reflections on the first decade.
From the idea to the leading mobile EEG company.
When you have an idea, your inner child expects support and praise. It is often easy to give that kind of support to yourself, especially over a glass of beer with a creative friend. But often (almost always), supporting an idea asks for more than a pat on the back. It asks you to be a part of it, promote it, bear all the risks and maybe – if all things work out and all assumptions prove true – harvest its fruits. This is probably as close as it gets to describing the beginnings of mbt.
I often wonder where the self-confidence came from – that mobile EEG would be “the thing”, that people would accept it, that we would obtain the grant to start it, that we would have the knowledge, strength and will to follow it through.
And even this August 21st, a full decade after mbt was born, it seemed almost surreal to think about celebrating 10 years of mbt – because our mission kept us too busy to zoom out and think about it.
For us, from the first moment, mobile EEG was not just about a new product and the opportunities that come from a new research paradigm. We dreamed, and still dream, of bringing brain recordings to real life, because of our strong belief that we could all benefit from knowing how our brains work in real life. We believe that tracking our brains the way we track our hearts today would help us unlock the hidden potential of the human brain and the world we live in.
To get faster to where EEG should be, the first platform we used for recording brain data was not the computer but the mobile phone. This way, we sent our message across: we are waiting for a day – not too far away – when brain data can be used like any other, in any situation. Of course, to make this dream come true, there were many steps, and the timing never seemed right. Regardless, we worked in two directions: making any study setup possible and pushing EEG towards everyday use.
Ten years on, I can proudly say that we have pushed the boundaries of what can be done in research setups – solving many challenges and making it possible to observe the human brain in the only relevant conditions, without superficial (over)simplifications: low- and high-density EEG, recordings while moving, artifact rejection, hyperscanning (social studies). We have made several generations of mobile EEG devices, which converged to our Smarting PRO series – mobile EEG amplifiers that are sturdy, reliable and powerful enough that it is hard to imagine something they cannot do.
While still working with the niche of pioneers who believed that mobile EEG is possible (and the way to go in research), we also kept pushing for its adoption in real life. Ever since the introduction of behind-the-ear recordings in 2015, we have been thinking about how to speed it up. This finally brought Smartfones, a consumer-looking form factor that you can easily imagine in both free time and work environments. Successful experiments proving that production can be optimized, reducing errors and risk, prompted us to produce a brain-reading industrial cap prototype to further examine possible use cases for the coming Industry 5.0.
But one thing that comes first (intentionally put last to emphasize its importance) is the people who did all of this. For all this time, mbt has been a home – a place where you see people with both high moral and professional standards, people you can rely on and enjoy spending time with. This is what keeps the vision alive on rainy days and what makes you want to jump out of bed on Monday mornings.
We are more than eager to continue on our mission: supporting our partners uncompromisingly, drawing inspiration from them, and continuously moving the boundaries of neuroscience. I can’t wait to see what the next 10 years will bring.
In case you have not had a chance to see our PRO X launch video yet, you can find it here.
Keep up with our work and the latest mbt news by following us on social media:
EEG in everyday life: As scientists and researchers, we have asked ourselves – what happens in our brains when we play darts or table football? How does it actually work, and why can we sometimes not react as fast as we want? We investigated these interesting activities using BESA software together with mBrainTrain, a manufacturer and pioneer of fully portable, mobile EEG devices, and came to some interesting results.
Who are you and what is your professional background?
I am a Multi-Modal Scientist and Test Manager at BESA. But most importantly, I am a researcher. My career path took a few interesting turns. I studied at the Warsaw University of Technology in the Electronics Faculty. For my master’s thesis in engineering, I established the first simultaneous EEG-fMRI pipeline in Poland, which was pioneering work back then. I went on to the World Hearing Center in Warsaw, where I continued using this technique in research and preclinical applications, e.g. for evaluating brain correlates of dyslexia treatment, which led to a Ph.D. at the Medical University of Warsaw. After this, I worked for GE as Eastern European Clinical Lead, and finally I ended up at BESA. Here I can apply all the skills I have: multimodal data processing knowledge, software development, a research attitude, and a customer-oriented approach. All in all, I always wanted to know more about how the brain works and to deliver tools that help others with a similar goal.
How did you get the idea to study brain activity outside the medical lab?
From time to time in daily life we come across situations that we simply don’t understand. For example, have you ever played table soccer? There is a certain ball speed (between fast and slow) that you cannot react to as a goalie. Why? What is going on? Moreover, how can we actually play table soccer at all? It is complicated when you think about it – hand-eye coordination, reflex decision making… and we still consider this entertainment? Darts – another case of hand-eye coordination – is there a mechanism at play similar to table soccer, or a totally different one? Or another example: why, when we leave a warm room and go out into the cold, do we initially not feel the cold, then feel it quite strongly, but at some point get used to it? All those questions puzzle me constantly, and I could not leave them unexplained.
How did the collaboration between BESA and mbt come about?
We met Ivan (mbt CEO) and his colleagues at the OHBM conference in Rome in 2019. We started talking, and it turned out that we share a mutual passion for studying the brain, and that our products can work together. The data quality of their mobile EEG was extraordinary, matching lab quality. Ivan was also impressed with how efficiently and easily we can perform data analysis and source estimation using BESA. I grabbed the example data from mbt (checkerboard visual cortex stimulation) recorded using a 24-channel mobile EEG (Smarting MOBI), and we easily managed to show activation in the visual cortex. We both started thinking that this is a new era for brain study. Why not start doing EEG in everyday life to shed light on these tiny questions that we have but never have time to investigate?
How do the products of mbt and BESA fit together? How do they complement each other?
That is quite easy, yet not so obvious. For many years, EEG was performed in the lab due to technical difficulties: we needed electrically shielded rooms and quite heavy hardware. Over time, the number of EEG channels increased, hardware size decreased, and advanced numerical methods of data analysis bloomed. It is generally believed that for proper source analysis an EEG study must be performed in the lab, so that the quality is assured and the number of channels is sufficient. For a long time, that was indeed only possible in the lab. Therefore, when mobile solutions for EEG established themselves, no one considered that the known advanced data analysis methods could be used there. On the contrary, the emerging mobile EEG community started with basic data analysis, embracing its limitations. However, over the years, the gap between mobile and lab solutions began to vanish. Recently, mbt introduced a 32-channel mobile EEG system (Smarting PRO) whose data is perfectly suited to the advanced data analysis methods delivered by BESA, for example source localization and connectivity analysis.
What are the first findings/results?
So far, we have performed two EEG recordings in the BESA office: during a table soccer session and while playing darts. Our first approach was to analyze the darts session, and we found a very interesting network related to the reaction to the throw. The visual cortex and motor cortex send information back to the ventrolateral area of the middle frontal gyrus in the left hemisphere. This brain region, according to the Brainnetome atlas (1), is responsible for action execution and motor learning as well as visual perception and spatial cognition! It is quite an extraordinary and really encouraging result to start with, don’t you think? We will now check whether table soccer triggers the same or a different network, using the BESA Connectivity software. More info in the figures below.
Figure 1: The brain network connectivity pattern triggered after throwing the dart
Figure 2: The main node of the network activation triggered by the dart throw – ventrolateral area 6 in the left hemisphere (Brainnetome A6vl_L)
Figure 4: Darts session: The participant plays darts while his EEG is being recorded; the camera of the recording mobile phone is used as an additional stream, so video is recorded simultaneously with the EEG.
In this short blog post, we have shown how an interesting daily activity such as darts can be observed, recorded and analyzed using pioneering research tools. We at BESA, along with mbt, think that many of these everyday setups could now be replicated and could possibly uncover some amazing findings outside of the restricted laboratory setting. We are looking forward to hearing about more and more everyday use cases in which pioneering researchers step out of their labs into the real world – and we will continue to support them with our hardware and software.
(1) Fan, Lingzhong, Hai Li, Junjie Zhuo, Yu Zhang, Jiaojian Wang, Liangfu Chen, Zhengyi Yang, et al. 2016. “The Human Brainnetome Atlas: A New Brain Atlas Based on Connectional Architecture.” Cerebral Cortex 26 (8): 3508–26.
Are you setting up an experiment with a mobile EEG device and want to send triggers via Presentation? You can do it completely wirelessly and without much effort. In this short blog, we describe how to send triggers using LSL and mbt Streamer.
You can simply enable sending LSL triggers from the General Settings of Presentation to send an LSL stream called “Presentation” to any other app on the same computer, or to any other device on the network.
Fig 1. Check the box saying “Send event data to LSL”.
Fig 2. You need to open the stream for LSL in Presentation, under LSL Properties.
The LSL stream will be automatically detected in mbt Streamer and can be recorded, together with everything else, into one .xdf file, just as the LSL LabRecorder does. However, mbt Streamer can do much more: it can integrate keyboard presses as markers, include 3D head motion visualizations, etc.
Fig 3. Recording LSL in mbt Streamer. To avoid confusion: “kama” is the name of my computer, but you can imagine streams coming from many different computers with minimal delay (a few ms). I have also checked an option in mbt Streamer to record all keyboard presses on the computer as LSL markers.
Then you can open the .xdf file with Python or Matlab and analyze the data.
Fig 4. Example code to open a .xdf file in Python.
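For example, with the pyxdf package (pip install pyxdf), a minimal sketch – the file name here is a placeholder – could look like this:

import pyxdf

# Load the .xdf file recorded by mbt Streamer (file name is a placeholder).
streams, header = pyxdf.load_xdf("my_recording.xdf")

# Each stream is a dict holding its metadata, samples and LSL timestamps.
for stream in streams:
    name = stream["info"]["name"][0]
    print(name, len(stream["time_series"]), "samples")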
Conclusion
Let us know how it goes and if you have any questions feel free to reach out to us on our contact page.
Have you thought of performing an experiment in VR? And do you wish to better understand the behavioral reactions as well as the psychological states of your participants?
You could capture hand or head movements and breathing, along with EEG recordings. First, your EEG device needs to be mobile and thin enough to fit under the VR headset, e.g. Smarting PRO mobile EEG. Then, you need to add triggers to your VR environment each time a certain stimulus appears, and align them to all the above-mentioned continuous streams (EEG, breathing, hand/head movements).
Fig 1. Smarting Pro with HTC Vive and Alyzé breathing belt
Here is an example of how to stream the positions and rotations of both VR controllers (hands) and the head of an HTC Vive VR headset as continuous LSL StreamOutlets from Unity. At the same time, triggers are sent from Unity each time an event of interest occurs. In parallel, breathing (using the Alyzé belt by Ullo) and EEG (using Smarting PRO) are streamed as well. Finally, all the streams are recorded using mbt Streamer. All data is automatically synced by LSL.
Used here:
Windows 10, 64-bit, all on one computer; but you can use multiple computers, typically one for VR and another for physiological signal recordings (EEG and breathing), in which case all computers must share the same network, e.g. a local wifi.
Fig 2. Simple Sample scene in Unity, example from SteamVR
Then add LSL4Unity to your project via Window -> Package Manager -> Add package from disk…
Fig 3. Open the package.json file.
Then import all the existing Samples (all 3 of them).
Fig 4. Importing Samples from LSL4Unity
Streaming continuous signals
To track the position and rotation of your controllers and send them as LSL streams to LabRecorder or mbt Streamer, simply drag and drop “PoseOutlet.cs” onto both controllers as a new Component in the Inspector.
Fig 5. Drag and drop the “PoseOutlet” script onto the controllers and the camera
If you wish to track head movements as well, do the same: drag and drop the same script, but this time onto the Camera.
Streaming Triggers
Let’s send a trigger every time the ball touches the ground as it bounces in the Simple Sample scene. To do so, simply drag and drop “SimpleOutletTriggerEvent.cs” onto the Sphere (the game object bouncing on the floor).
Fig 6. Drag and drop script to the Sphere game object
Then open the script; you can see the stream name, which you can rename if you like.
Notice that we are sending strings as the Marker type, with an irregular sampling rate (equal to zero):
// Marker stream: one channel of strings, sent at an irregular rate.
StreamInfo streamInfo = new StreamInfo(StreamName, StreamType, 1, LSL.LSL.IRREGULAR_RATE,
                                       channel_format_t.cf_string, hash.ToString());
outlet = new StreamOutlet(streamInfo);
And to send LSL triggers each time the ball touches the ground, replace “void OnTriggerEnter(Collider other)” and “void OnTriggerExit(Collider other)” with a collision handler, as in the sketch below.
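A minimal sketch (an assumption based on the scene: a bouncing ball produces physical collision events, so OnCollisionEnter fires each time it hits the floor; the marker string is arbitrary):

// Push an LSL marker on every physical collision, i.e. each bounce.
// "outlet" is the StreamOutlet created above; the marker text is arbitrary.
void OnCollisionEnter(Collision collision)
{
    outlet.push_sample(new string[] { "BallBounce" });
}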
To use the breathing belt and convert its Bluetooth Low Energy (BLE) data to LSL, run “python stream_breathing_amp_multi.py -m <MAC address of your device>” in PowerShell or Command Prompt, as below:
Naturally, you need to run the script from its location (in my case it is in Downloads) and find the right MAC address of your device (e.g., you can find it with a great app called nRF Connect).
Visualize in Real Time with OpenViBE
To visualize your streams in real time or perform some signal processing, you can use OpenViBE. Play your scene in Unity (leave it in Play mode) and open the OpenViBE Acquisition Server.
Fig 7. OpenViBE Driver Properties of the Acquisition Server
You can see 3 streams from Unity denoting the rotation and position of the 2 controllers and the camera. We cannot see our markers, as they are “string” and not “int32” (the only type accepted by the Acquisition Server).
Then, in the OpenViBE Designer, create the simplest scenario (Acquisition client and Signal display) to view the streams in real time (below, only the right-hand controller is displayed):
Fig 8. Real-time visualization of controller positions and rotations, using the OpenViBE Designer
Record all streams at once
And finally, let’s record it all in one place, in mbt Streamer, and you’re good to go!
Fig 9. Summary of a 6-second recording in mbt Streamer on the computer (kama): 3 Unity streams, 1 Smarting PRO EEG stream, breathing signals, and triggers from Unity.
As you can see in Fig 9, there are 3 Unity continuous signal streams, 1 marker stream, a breathing belt (ullo_bb) and EEG. After stopping the 6-second recording, we can see that the ball bounced 14 times, as it produced 14 events.
Conclusion
Let us know how it goes and if you have any questions feel free to reach out to us on our contact page.
If you wish to keep track of the time each stimulus appears in PsychoPy during your experiment, and align each of them to your EEG recording, you need to automatically send a trigger each time a stimulus appears. This is possible with LSL. In this blog, we explain how to precisely send triggers with PsychoPy and mobile EEG, using Smarting PRO for this specific example.
How To
Windows 10, 64-bit (we used Windows 10 in this example, but check whether the steps apply to other platforms)
In the Stroop task, words appear one after the other in different colors (e.g., the word RED printed in blue ink); the participant should name the color in which the word is printed, ignoring the word itself.
Open or create your experiment in the Builder (example from the tutorial above), then compile it to a Python script.
Fig 1. Compile the experiment to Python
Then open the Coder window and add, at the end of the section called # — Import packages —:
from pylsl import StreamInfo, StreamOutlet
# Set up LabStreamingLayer stream.
info = StreamInfo(name='PsychoPy_LSL', type='Markers', channel_count=1, nominal_srate=0, channel_format='string', source_id='psy_marker')
outlet = StreamOutlet(info) # Broadcast the stream.
Of course, you need to install pylsl (pip install pylsl) for this LSL code to work.
If you want to send continuous data, the nominal_srate should be changed from 0 (irregular sampling rate) to the actual regular value. We are sending each word the participant sees on screen as a “string” of letters. If in some case you wish to send numbers instead, write “int32” or “float32” instead of “string”. LSL will automatically associate a timestamp with each marker’s appearance.
Now, you want to send a marker whenever a word appears on the screen, and whenever the user responds with a keyboard press.
In the code you will find a section called # *response* updates; there, write the following:
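A minimal sketch (_response_allKeys is the list that the Builder typically generates for a keyboard component named “response”; adapt the variable names to your compiled script):

# Send the participant's last keypress as an LSL marker.
if _response_allKeys:
    resp_marker = _response_allKeys[-1].name  # name of the last key pressed
    print("Response: [%s]" % resp_marker)
    outlet.push_sample([resp_marker])  # Push event marker.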
I am printing to the PsychoPy console for debugging, but you can comment that line out. If you are using only one screen, to see the console output during the experiment you must change fullscr=True to fullscr=False in the section # — Setup the Window —.
In the section called # *text* updates, write:
# Send LSL marker: the word and its colour, as a string
mark = word + "_" + colour
print("Word_color: [%s]" % mark)
outlet.push_sample([mark])  # Push event marker.
word and colour are the names of my variables (columns in the .csv conditions file), so if you used other names, change them accordingly. I wanted to send not only the word appearing on the screen but also its color.
WARNING: Whenever you compile to Python from the Builder, it will overwrite your code in the Coder, so don’t forget to save your Python code from the Coder.
At the same time, you may wish to record brain activity, e.g. to measure workload levels during the experiment, error-related potentials, or EEG coherence sensitive to congruent and incongruent word-color pairs.
Open mbt Streamer and run the PsychoPy experiment from the Coder (the Builder did not integrate your new code). Once you press Run, you can refresh the LSL streams in mbt Streamer and see your marker stream, as below.
Then simply press Record in mbt Streamer to save all the markers and EEG into one .xdf file with automatically synced streams, and you’re good to go!
Fig 2. After pressing Run in the Coder and Stream in mbt Streamer, you can see 2 LSL streams in mbt Streamer: one coming from PsychoPy and the other from the Smarting PRO EEG, both received on one computer (called “kama”).
Conclusion
Let us know how it goes and if you have any questions feel free to reach out to us on our contact page.
Are you up for an EEG experiment that requires offline recording – for example, recording EEG in the woods where there is no internet connection, so the EEG goes onto an SD card? Good news – you are in the right place! In this blog, we walk you through setting up TTL in Neurobs Presentation, using cable triggering and Smarting PRO mobile EEG.
Smarting PRO supports a 1-bit TTL input. It is a cable that plugs into the amplifier on one side (as a 2.5 mm audio jack) and into a USB port of a computer on the other. Below you will find an example of setting up TTL in Neurobs Presentation.
How To
Windows 10, 64-bit (we used Windows in this example, but check whether the steps apply to other platforms)
You should have an experiment scenario already made in Neurobs Presentation, or you can use this simple one (mbt sound example). Note that the experiment file (.exp) keeps the path of the computer it was saved on, and the ports it used. To solve this, just save it as a new experiment with Save As in Presentation. But before saving, make sure you have changed the Port Settings in Presentation, as follows.
Fig 1. TTL setup (left), Windows Device Manager (right).
Then go to Presentation -> Settings and add an OUTPUT port, because you are sending triggers from the computer to the EEG amplifier. Note that this applies to any other output (cable) using a serial port, not only TTL.
Fig 2. Adding the output port in Presentation.
Fig 3. Filling in the values for TTL.
After you have filled in the values as in the figure above, just click Close.
Finally, if you want to use the demo examples from Presentation, like the well-known N-back task, simply add the port as indicated earlier and Save As a new experiment file (this will update the .exp file with the correct port and path).
TTL triggers are “physically” marked onto the EEG data with minimal delay and can be recorded directly onto the SD card of the amplifier together with the EEG. Additionally, both can be sent together back to the computer (mbt Streamer) via Bluetooth. In that case, open and connect mbt Streamer to receive the EEG streams on your computer (capturing the Bluetooth packets). You can also watch the TTL triggers marking the EEG signals in real time in mbt Streamer. The TTL protocol also facilitates the integration of embedded systems, e.g. events could be sent from an Arduino board (using serial communication).
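Since the TTL cable appears on the computer as a serial (COM) port, any program that can open that port can send a trigger. Here is a minimal sketch in Python with pyserial; the port name and baud rate are assumptions, so check your actual values in the Windows Device Manager (see Fig 1 above):

import serial  # pip install pyserial

# Port name and baud rate are placeholders; use the values from Device Manager.
ser = serial.Serial("COM3", baudrate=115200)
ser.write(b"\x01")  # sending a byte pulses the 1-bit TTL trigger line
ser.close()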
Fig 4. To test whether TTL is working, simply look at the EEG signals in real time in mbt Streamer and send test triggers from Presentation.
Fig 5. Every time you press Send (simulating a stimulus), you can see a vertical line appearing in real time in the EEG signals within mbt Streamer, with no delay. Do not mind the EEG signals; the cap is not even placed on a subject’s head.
Fig 6. Remember to click ON for TTL markers in mbt Streamer if you wish to record them as LSL markers as well. In that case, you will have the EEG streams and the TTL triggers as LSL.
Fig 7. The path of TTL triggers and EEG: (1) triggers are sent from Neurobs Presentation for each stimulus via the TTL cable to the Smarting PRO amplifier, instantly marking the EEG streams “physically”; (2) the EEG streams (with triggers) are streamed via Bluetooth back to the computer, where (3) they are converted into LSL streams and sent to the network via the multicast LSL protocol, to be received by any other computer on the same network.
This way, you can see how TTL markers, once physically integrated into the EEG, are streamed along with the EEG signals over Bluetooth and converted into LSL streams.
Conclusion
Let us know how it goes and if you have any questions feel free to reach out to us on our contact page.
Once your experiment is outlined and your equipment ready, it is time to consider the possibilities for sending triggers with mobile EEG and ways to synchronize it with other devices/stimuli. We will start with the types of triggering and how you can mark different events using mobile EEG. The author of this blog post, Dr. Mladenovic, will share examples of different triggering and synchronization setups using Smarting PRO mobile EEG.
*Introduction written by mbt team
General idea
Does your study involve capturing brain reactions to stimuli? Independent of the type of stimuli you are using in your experiment – images, videos, games, tasks, sounds etc. – you will soon realize that you need to keep track of the timing of stimulus presentation with respect to the EEG data, so that you can align participants’ reactions to those stimuli in time. This is what people call triggering. You need a trigger, marker or event (different names for the same thing) to mark the moment each stimulus appeared, and to align (sync) it to the same moment in your EEG recording (or any other physiological recording). Also, you do not want your trigger to be late; e.g. if you press a button, you want it to mark the stream instantly, without delay, as in the figure below.
Fig 1. Example of manual triggering of a stimulus. In the first example (top), the marker/trigger is set correctly, without delay, capturing the correct EEG reaction. In the second example (bottom), it has a considerable delay, so the EEG reaction captured afterwards makes no sense.
1. Types of triggering
Various tools can capture a participant’s behavioral responses to stimuli (keyboard presses, mouse movements or clicks), as well as mark the start and end of a stimulus set by the experimenter. Whenever events are initiated or responded to manually, this is called manual triggering. On the other hand, you can, for example, program stimuli to appear every 3 seconds, or whenever the status of a stimulus changes; this is called automatic triggering (see the sketch below). Manual and automatic triggering can be performed via a cable or wirelessly.
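As a minimal sketch of automatic triggering (using pylsl; the stream name and the 3-second interval are just examples):

import time
from pylsl import StreamInfo, StreamOutlet

# Marker stream with irregular rate (0), since markers come at arbitrary times.
info = StreamInfo(name="AutoMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string", source_id="auto_demo")
outlet = StreamOutlet(info)

for i in range(10):
    outlet.push_sample(["stimulus_%d" % i])  # one marker every 3 seconds
    time.sleep(3)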
1.1 Cable triggering (TTL) – precise but not mobile hardware triggering
The Smarting PRO mobile EEG device has an input in the form of an additional cable going into the EEG amplifier (like an audio jack), called TTL, plugged into a USB port of a computer on the other side. Basically, a stimulus from the computer – visual, sound, a key/mouse press etc. – is sent via USB (converted into UART) to be “physically” incorporated into the EEG signals (this is hardware, cable or wire triggering). This is one way to make sure your stimuli appear as markers at the right place and without delay. TTL is supported by many apps for experiment design, like Neurobs Presentation, PsychoPy, Pro Lab etc.
Fig 2. TTL setup with Smarting PRO, where a 2.5 mm audio jack is plugged into the Smarting PRO amplifier on one side and into the USB port of a computer on the other.
However, what happens if you want your participants to be mobile while watching (or listening to) stimuli, e.g. in a virtual reality (VR) environment, or if you want to design an outdoor study? For this type of study, you need a wireless, mobile solution. In such complex setups, Smarting PRO provides a wireless connection with high bandwidth and almost no packet loss, thanks to Bluetooth 5.0.
LSL, or Lab Streaming Layer, allows triggering at sub-millisecond precision, which is especially important for rapid brain reactions such as event-related potentials like the P300, which arise and disappear within a few hundred milliseconds.
LSL is open source, and it supports an enormous number of applications – OpenViBE, Unity, Unreal, PsychoPy, NeuroKit2, Presentation, E-Prime 3.0 etc. – and devices: Smarting PRO, Pupil Labs (eye tracker), SR Research EyeLink, Microsoft Kinect, the Shimmer ECG device, HTC Vive Pro etc. See the list of supported apps and hardware here.
Script triggering with LSL – examples
PsychoPy is mainly used to capture participants’ behavioral responses to stimuli, or to manually mark the start and end of a stimulus set by the experimenter. With LSL, whenever a keyboard key is pressed, the mouse is moved, or whatever behavioral activity of interest occurs, you can instantly create an LSL marker and align it to your EEG stream (or any physiological data stream). Automatic triggering can also be implemented very easily.
Synchronize different apps and devices with LSL – Hyperscanning with mobile EEG
With LSL, as mentioned, you can sync a lot of data from various devices in real time, but it also allows easy synchronization between multiple EEG devices, called hyperscanning. Smarting PRO is designed with hyperscanning studies in mind, and it uses LSL to synchronize all devices in its so-called Hyperscanning mode.
Fig 3. A hyperscanning example of an experiment where dancers and a musician react to each other. In this setup, the dancers’ EEG signals are streamed via Bluetooth (one-directional communication) to the computer closer to them and converted into LSL streams. The musician’s EEG signals are streamed via Bluetooth to another computer closer to her (reducing Bluetooth packet losses over distance). Once converted from Bluetooth into LSL, these streams are streamed and synchronized (multicast communication) onto all computers within the same network (wifi), no matter the distance.
LSL uses standard Internet protocols to send and receive data, so you can synchronize streams from as many devices or apps as you like, as long as all devices are connected to the same local network. The great thing is that you do not need to write IP addresses; you just give your stream a name (the same name in the sending and receiving applications). This means that your code is agnostic to the device or network it is running on (no need to change configuration files whenever you change a device and IP addresses).
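On the receiving side, for example, a minimal pylsl sketch would be the following (“Unity_Triggers” is a placeholder name; the sender must use the same one):

from pylsl import StreamInlet, resolve_byprop

# Find the stream by its name on the local network; no IP addresses needed.
found = resolve_byprop("name", "Unity_Triggers", timeout=5)
inlet = StreamInlet(found[0])

sample, timestamp = inlet.pull_sample()  # blocks until a sample arrives
print(timestamp, sample)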
In the figure below, an LSL stream is sent from OpenViBE (a generated signal) to control a game object (Player) in Unity.
Fig 4. Sending EEG signals from OpenViBE to Unity using LSL. The same stream name is used in both applications.
TIP 2: Avoid leaving the default name of the LSL stream, as other researchers in the building (using the same network) might be performing unrelated experiments using LSL with the same default name. In such cases, you might receive their streams, and they yours.
Fig 5. Streaming continuous movements of the controllers/head with triggers from Unity, while streaming breathing (Alyzé) and EEG (Smarting PRO) as continuous signals.
LSL – all in one place
You can record it all into one .xdf file using, for instance, mbt Streamer, which typically serves to receive the EEG streams via Bluetooth but also acts similarly to the LSL LabRecorder, in the sense that it can record all available LSL streams. It also has additional features: besides recording external LSL streams, it can record keyboard events directly from the same computer, or really precise 3D head motion.
Fig 6. Recording all LSL streams and markers on mbt Streamer.
XDF files can be opened with several analysis software environments such as Matlab, Julia, or Python (pyxdf, example below).
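A minimal sketch with pyxdf (the file name is a placeholder), separating the marker stream from the continuous ones:

import pyxdf

streams, header = pyxdf.load_xdf("session.xdf")

for stream in streams:
    stype = stream["info"]["type"][0]
    if stype == "Markers":
        # Marker streams: one string per event, each with an LSL timestamp.
        for ts, marker in zip(stream["time_stamps"], stream["time_series"]):
            print("%.3f s: %s" % (ts, marker[0]))
    else:
        # Continuous streams (EEG, breathing, motion): samples x channels array.
        print(stype, stream["time_series"].shape)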
In short, with LSL you can send markers at every stimulus, synchronize continuous streams from as many devices and apps as you can imagine over one network, and capture them all in one single file. There is a strong community behind LSL and lots of documentation available to get started.
Conclusion
As Dr. Mladenovic showed in this short blog post, there are many ways to send or receive triggers and to synchronize with other devices in order to build a multimodal experiment setup and capture all the physiological information of interest for your particular research goals. If you need any help with this, feel free to check our support page or contact form. Good luck with your research!
*Conclusion written by mbt team