Set Up Precise Sound Stimulation with PsychoPy and pylsl | mBraintrain

How to Set Up Precise Sound Stimulation with PsychoPy and pylsl

July 17, 2020

I was looking for a ready-made solution to generate high-precision sound stimuli in PsychoPy and send the triggers through lab streaming layer (LSL), but I wasn’t able to find anything ready to run. This short post explains what you need to do to implement it yourself.

Before you start, of course, you have to set up your environment. To do this, install pylsl, psychopy, and psychtoolbox. All of them are available through pip install. Once you do this, you are ready to go.

The first thing to do in order to secure low latency for sound is to choose the ptb library instead of sounddevice (which is the default one). To do this, import prefs from PsychoPy and change the prefs before you continue:

from psychopy import prefs
# change the pref library to PTB and set the latency mode to high precision
prefs.hardware['audioLib'] = 'PTB'
prefs.hardware['audioLatencyMode'] = 3

The audioLatencyMode can be set to an int from 1 to 4, where 1 means audio latency is not important and 3 is the high-precision mode. There is also the critical mode (4), which is basically the same as mode 3, with the difference that the script will raise an error in case the system is not able to secure high precision.

After the preferences have been changed, you may continue to import necessary libraries (if this is done before the preferences are changed, the change will have no effect):

#import other necessary libraries
import psychtoolbox as ptb
from psychopy import core, event, sound
from pylsl import StreamInfo, StreamOutlet

Open the LSL outlet

The next thing to do is to set up the LSL outlet, which is pretty straightforward:

# Set up LabStreamingLayer stream outlet
info = StreamInfo(name='sound_example_stream', type='Markers', channel_count=1,
                  channel_format='int32', source_id='sound_example_stream')
outlet = StreamOutlet(info)  # Broadcast the stream.

and to load the sound file (in case you want to use an existing sound file):

beep = sound.Sound('beep_sound.wav')

Set up your low latency

What follows is how to make sure the trigger marker reaches high precision. To do this, the sound needs to be pre-scheduled. If we just tell the sound to play, we have no control over when exactly the sound card will actually play it. As the sound card needs a couple of hundred milliseconds to prepare the sound, we need to allow enough time for it (e.g. 500 ms). To do that, we compute the timestamp that is 500 ms from now:

# Calculate the timestamp 500 ms from now to allow enough time
# for the sound card to prepare the stimulus
sample_stamp = ptb.GetSecs() + 0.5

And then schedule the sound 500ms from now:

beep.play(when=sample_stamp)

What is left is just to push the trigger together with the timestamp through the LSL outlet:

markers = {'sound': [1]}
outlet.push_sample(markers['sound'], sample_stamp)

In the end, we should also allow some time for the system to recover before we play the next sound.

Results

I tested this with the SMARTING mobi EEG device, using our Delay/Jitter box (DJ box). SMARTING mobi is connected to the output of the laptop's sound card through the DJ box, and sends the recorded data via Bluetooth back to a computer, where the recorded data is synchronised with the sound trigger (see the picture of the setup below). The result is a 2 ms delay with jitter below 1 ms (see the figure below).

The test setup: the small white box on the right is the DJ box, which converts an audio (or light) stimulus into an electrical signal suitable for the SMARTING mobi device. The DJ box is connected to a computer via an audio cable. The audio signal is transferred via the DJ box to SMARTING mobi and streamed via Bluetooth back to the computer, where it is recorded together with the triggers into an .xdf file, so that we can test the latency between the triggers and the audio.
The figure shows the alignment of 70 trials of a sound file played from Python using the script described in this test. The triggers are sent from Python via LSL and are recorded in our SMARTING Streamer.

I hope this script sorts this out for you. If you have trouble setting it up in your environment, feel free to write a comment. In case you need the script, you can download it from here. For any other questions, let me know.
