OpenSesame is probably the greatest software for creating experiments for psychology, neuroscience, etc. It has a graphical interface for the most useful components, such as playing a sound file (Sampler) or creating your own tone (Synth). However, the way these components are implemented (using PyGame) may not provide good temporal accuracy (latency), meaning the sound can be delayed, which by itself might not be that bad. What can be very bad (for things like ERPs, etc.) is potentially poor temporal precision (jitter), meaning that the delay may vary from trial to trial. You should read about these matters here.
In this post I want to show how I managed to significantly improve the timing of my auditory stimuli using inline code, PyAudio and the ASIO API, even with an onboard sound card in Windows 7.
How to decrease audio latency and jitter?
Buying a good sound card can of course help, but good drivers and code can help much more. PST, the makers of E-Prime, have a very nice article on this: they measured the auditory latencies (the “mean” column) and jitters (the “StdDev” column) for various hardware and software configurations. They found that onboard cards may be good enough, but one must not use the default DirectSound API; use CoreAudio/WASAPI or ASIO instead. I have two PCs in the lab, and for one of them I bought the ASUS Xonar DX card, which supports ASIO. However, with its vendor-provided ASIO driver the performance was even worse than before with the onboard card and sound presentation via the OpenSesame GUI components: the sound was delayed by about 300 ms! (I will get to how I measured this later.) Using ASIO4ALL, the generic ASIO driver, gave much better results. In fact, I have so far gotten the best results on the other PC, using just the onboard sound card and ASIO4ALL. In E-Prime’s E-Studio you can simply select which sound API to use. But how does one do this in OpenSesame?
Sebastiaan writes: “if you require very accurate temporal precision when presenting auditory stimuli you may want to write an inline_script that plays back sound using a different module, such as PyAudio.” This is the first step.
PyAudio with ASIO etc. support in OpenSesame
However, the standard build of PyAudio for Windows includes support for neither ASIO nor WASAPI. To get it, you have to build PyAudio yourself (ughh) or download it from someone who has done so. I have found only one such build, here. The file you probably need is “PyAudio-0.2.9-cp27-none-win32.whl”. You can then install it in various ways; perhaps the easiest is to copy it to the OpenSesame folder and, from this folder in an admin command prompt, run the following command (OpenSesame should be closed! The --upgrade parameter allows overwriting a preexisting PyAudio):
python -m pip install PyAudio-0.2.9-cp27-none-win32.whl --upgrade
Next, we need drivers for our sound card (as the article about E-Prime says, there can be a difference between vendor-provided and generic drivers) and for the sound device host API(s) (some are part of your operating system, some come from the sound card vendor, but I suggest trying ASIO4ALL in any case).
Listing audio devices and APIs in OpenSesame
Now we get to writing scripts in OpenSesame! First we need to find which host APIs and sound devices are available in PyAudio. You can list them using this script:
import pyaudio

p = pyaudio.PyAudio()

print "\n****APIs****"
for ai in range(0, p.get_host_api_count()):
    print str(ai) + ": "
    print p.get_host_api_info_by_index(ai)

print "\n****Devices****"
for di in range(0, p.get_device_count()):
    print str(di) + ": "
    print p.get_device_info_by_index(di)
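If you already know which host API you want (say, ASIO via ASIO4ALL), you can also filter this listing programmatically. Below is a minimal sketch, assuming the API is reported under the name "ASIO" (check the listing output for the exact names on your system); the printed index can later be passed as output_device_index when opening a stream:

import pyaudio

# A minimal sketch: look up the default output device of one host API.
# The name "ASIO" is an assumption - check the listing output for the
# exact API names on your system (e.g. "Windows WASAPI", "ASIO").
p = pyaudio.PyAudio()
for ai in range(0, p.get_host_api_count()):
    api = p.get_host_api_info_by_index(ai)
    if api['name'] == 'ASIO':
        # this device index can be passed as output_device_index to p.open()
        print "ASIO default output device index:", api['defaultOutputDevice']
p.terminate()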
Testing audio devices and APIs in OpenSesame 🙂
E-Prime has a neat utility for this, the SoundTester. It lists the available APIs and devices and allows you to test them with different buffer sizes. I have written a script which accomplishes something similar: for each device it tries to play a wav file (a replacement for the Sampler GUI element) and a custom-generated sine wave tone (a replacement for the Synth GUI element), and asks you whether you heard the sounds. I am quite new to Python and OpenSesame, so please feel free to suggest improvements.
from __future__ import division  # avoid integer division problems in Python 2
import numpy as np
import pyaudio
import time
import wave

PLAYSAMPLER = True   # do we want to play the wav file?
PLAYSYNTH = True     # do we want to play the generated sine tone?
soundbuffer = 512    # best for me, feel free to experiment
chunk = soundbuffer  # hopefully a good approach

# Specify some test .wav file - like for Sampler
wavfile = "path/to/your/file.wav"

# Generate a custom sine wave tone - like Synth
# Parts of the code are from Stack Overflow and from synth.py
bitrate = 44100
frequency = 880
length = 0.1        # in seconds
attack = 5/1000.0   # time (in s) after which the sound is at full volume
decay = 10/1000.0   # attenuate the tone at the end to prevent audible clicks
numberofframes = int(bitrate * length)
wavedata = ''
t = np.linspace(0, length, numberofframes)
signal = np.sin(2*np.pi*frequency*t)  # compute the sine shape of the sound

# Create an attenuation envelope for the sound
e = np.ones(numberofframes)
if attack > 0:
    attack = int(attack*bitrate)
    e[:attack] *= np.linspace(0, 1, attack)
if decay > 0:
    decay = int(decay*bitrate)
    e[-decay:] *= np.linspace(1, 0, decay)
esignal = signal * e

# Now the signal is a vector of float values between -1 and +1.
# This should make it possible to play it in paInt16 format,
# but it does not work for me..
#intsignal = esignal*32767
#wavedata = intsignal.astype(np.int16)

# This produces paUInt8
intsignal = (esignal * 127) + 128
for x in xrange(numberofframes):
    wavedata = wavedata + chr(int(intsignal[x]))
# I also don't know how to create a stereo sound :-)

# We will later respond via keyboard whether the sounds played or not
# (keyboard is available directly in OpenSesame inline scripts)
my_keyboard = keyboard(timeout=None)

# Instantiate PyAudio
p = pyaudio.PyAudio()

print "****BEGIN SOUND TEST LOOP OVER ALL DEVICES****\n"

for di in range(0, p.get_device_count()):
    print str(di) + ": "
    print p.get_device_info_by_index(di)
    try:
        # Play the wav file
        if PLAYSAMPLER:
            # open the file
            wf = wave.open(wavfile, 'rb')
            print "Wav file opened.."
            # open the stream
            stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                            channels=wf.getnchannels(),
                            rate=wf.getframerate(),
                            output=True,
                            frames_per_buffer=soundbuffer,
                            output_device_index=di)
            print "Stream for wav open.."
            # read the data and play the stream
            data = wf.readframes(chunk)
            while len(data) > 0:
                stream.write(data)
                data = wf.readframes(chunk)
            print "Wav stream data written.."
            stream.stop_stream()  # stop the stream
            stream.close()
            print "Wav stream closed.."
            time.sleep(1)  # wait a bit before playing the second sound
        # Play the synth sine wave
        if PLAYSYNTH:
            stream = p.open(format=pyaudio.paUInt8,
                            channels=1,
                            rate=bitrate,
                            output=True,
                            frames_per_buffer=soundbuffer,
                            output_device_index=di)
            print "Stream for synth open.."
            stream.write(wavedata)  # play the tone, perhaps also as chunked?
            print "Stream synth data written.."
            stream.stop_stream()
            stream.close()
            print "Stream synth closed.."
            time.sleep(1)
    except:
        # This does not catch all errors: e.g. if the buffer is too small,
        # PyAudio can crash rather than giving the error below, which might
        # actually be good, so that you learn that this is what happened :)
        print "Error with device " + str(di)
        # If the error happened after opening the stream, close it;
        # perhaps there is a better way to test for this :-)
        if ('stream' in locals()) or ('stream' in globals()):
            stream.stop_stream()
            stream.close()
        # Maybe we want to just exit the program after an error?
        #p.terminate()
        #raise SystemExit
    # Report whether the sounds were played. You can include this message
    # in the experiment instead of the console (e.g. in a sketchpad before
    # this script) and then read what happened in the console (debug window).
    print "Played all (a), wav only (w), tone only (t), some distortion (d), nothing (n)?"
    key, timestamp = my_keyboard.get_key()
    print str(key)

# Close PyAudio
p.terminate()
print "*END OF PYAUDIO TEST LOOP*"
Playing sound using PyAudio in OpenSesame
The code above also shows how to produce sounds in your experiment using PyAudio; you can additionally have a look at the PyAudio documentation. I have discovered that there can be problems with playing the custom-generated sound on some APIs/devices: e.g., ASIO4ALL does not like 1-channel sound, and I don't know how to generate it as 2-channel. The WASAPI API also apparently has its own problems with the format of the generated sound. So I have switched to using wav files for everything, because I just need several tones and do not need to manipulate dozens of frequencies etc. as factors in the experiment. You can generate and download 1-channel tones e.g. here, and then edit them in Audacity to duplicate the channel and make them stereo. Because I need to play some wav files anyway, this also allows me to have all sounds in the same format and hence open the stream just once, play all the sounds (adding delays and response collection between them), and close the stream, minimizing possible latencies.
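To make the open-once idea concrete, here is a minimal sketch, assuming two hypothetical wav files (tone1.wav, tone2.wav) that share the same sample width, channel count and rate; it preloads both into memory, opens a single stream, and reuses it for all playback:

import pyaudio
import time
import wave

soundbuffer = 512
wavfiles = ["tone1.wav", "tone2.wav"]  # hypothetical file names

# Preload all sounds into memory so no disk I/O happens during trials;
# all files are assumed to share the same sample width, channels and rate
sounds = []
for fn in wavfiles:
    wf = wave.open(fn, 'rb')
    width, channels, rate = wf.getsampwidth(), wf.getnchannels(), wf.getframerate()
    sounds.append(wf.readframes(wf.getnframes()))
    wf.close()

p = pyaudio.PyAudio()
# Because the formats are identical, the stream is opened just once
# (you can also pass output_device_index here, as found above)
stream = p.open(format=p.get_format_from_width(width),
                channels=channels,
                rate=rate,
                output=True,
                frames_per_buffer=soundbuffer)

stream.write(sounds[0])  # first sound
time.sleep(0.5)          # delay; response collection could go here
stream.write(sounds[1])  # second sound

stream.stop_stream()
stream.close()
p.terminate()

Preloading avoids disk I/O during the trial, and reusing one stream avoids the open/close overhead on every sound.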
Aaand.. actually testing the accuracy and precision of sound presentation in OpenSesame!
So how did I test the timing of the sound presentation without the fancy equipment the E-Prime team had at their disposal? Subjectively, but with an objective approach!:-) I have an experiment with two events (currently two short sounds) separated by a random delay in the range of 100-1000 ms; on each trial I have to type a numeric estimate, in ms, of how long I think the interval was. Because I have been doing this experiment for some time, I am quite skilled:-) Of course I don't think I can estimate a time interval with millisecond precision, but I do a few dozen trials and then calculate several measures (a small numpy sketch for computing them follows the list):
- Accuracy, the mean difference between the estimated and “actual” intervals, quantifies how much I under- or overestimate the intervals. Because I believe my subjective abilities are fairly constant, I can compare this number between tests of various software and hardware configurations as a measure of relative latency (relative, because my own subjective latency is included). With the onboard card, ASIO4ALL and a buffer of 512, I get on average +48 ms, compared to 58 ms for the GUI components, but that is of course a very rough measure. In fact, after each trial I get feedback about what the actual interval was, so that I can learn and adapt, which could bias the measure (I would learn to write 500 even for an actual 800). Subjectively, 1000 ms takes a lot longer in the GUI approach than in the ASIO4ALL approach. Maybe I should use a stopwatch;-)
- Precision, the standard deviation of the differences between the estimated and actual intervals, serves as a measure of relative jitter (again relative, because my subjective imprecision in estimating the intervals is included). I get 87 ms vs. 112 ms. For me this is the most important measure, provided that the last one is reasonable:
- The correlation coefficient between the estimated and actual intervals measures how well I can do the task in general. I get 0.95 vs. 0.92.
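For completeness, here is a minimal sketch of how the three measures can be computed with numpy, using hypothetical example data for the estimated and actual intervals:

import numpy as np

# Hypothetical example data: estimated and actual intervals in ms
estimated = np.array([450, 320, 780, 610, 290])
actual    = np.array([400, 300, 750, 520, 260])

diff = estimated - actual
print "Accuracy (mean difference, ~relative latency):", np.mean(diff)
print "Precision (SD of differences, ~relative jitter):", np.std(diff, ddof=1)
print "Correlation (estimated vs. actual):", np.corrcoef(estimated, actual)[0, 1]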
This approach has problems, but it was a lot of fun for me:-)
EDIT: I also used a stopwatch..:-)
It took me a few weeks to figure all this out, so I hope this was a bit useful for you; please let me know in the comments if you have any suggestions, improvements or questions.. Hopefully, one day, OpenSesame will have GUI support for all this, because that's what makes OpenSesame so powerful and accessible!