Stealing our own show – NOW IN HIGH DENSITY

Whoever has watched one of the TechCrunch Disrupt events knows – never do a live demo! On the other hand, live demos speak louder than a thousand words, and they are tempting. And those who have been following our work over the years know that we can resist anything but temptation. But enough of being vague.

We planned our newest addition to the PRO line – Smarting PRO X – to launch in just a few days, on October 18th, and to be the crown of our X-year celebration. What is PRO X, and why did we decide it should be THE product for this occasion?

PRO X is a fully mobile 64-channel EEG system that brings convenience to the complexity of high-density brain studies. It was a challenge, as truly mobile high-density EEG had long been considered the missing piece of human brain research – and certainly a missing piece for us, the mobile EEG company.

The work on a new device is not only a hard engineering process – it also means putting yourself in the “users’ shoes” and trying to map the entire experience, from setting the experiment up, to sorting the data after you are done, and of course not forgetting the always “fun” experience of washing the cap. Recreating this user journey is also a fun part, a perk of the job, someone could say… What comes next is slightly less fun – going from making things work for the first time to making them work every time – testing, retesting, and trying to step back so as not to miss anything (while you always miss something). It’s a lot of work, but our clients expect nothing less from us.

Anyway, the long digression does have a point – it was June (so not October🙂) – and there we were, in La Jolla, California, at the MoBI conference. We had a prototype of the PRO X device with us (a working prototype, that is), which Pavle – our Head of Engineering – instructed me NOT TO show to anyone outside of our team. But, disregarding the memories of the aforementioned TechCrunch Disrupt public failures, and with an optimism stemming from no rational basis, I decided to go for – what else – a live demo. Ignorance can be bliss.

But stay with me – this is not a sad story – quite the opposite. Smarting PRO X, even months before reaching its final shape, made a startling appearance – proving that high-density mobile EEG can work reliably in a room full of people without any signs of interference, showcase great data quality, and be mounted extremely fast, as witnessed by the people in the room.

Picture 1: Not such a bad experience (proven by a later testimonial)
Picture 2: Averaged ERP response recorded using Smarting PRO X, checkerboard visual task
Picture 3: ERP image, O1 electrode response

In a way, Smarting PRO X stole its own show.

But not really. What we have prepared carries a lot of hidden perks, so brace yourselves for the upcoming show and the first (official) public appearance on October 18th.

I guess I could say that we are extremely proud of our creation – we have pushed the boundaries not only of neuroscience, but also of our own knowledge. Of course, the true reward comes only after such a tool is put to use – and I can’t wait to read the studies enabled by Smarting PRO X.

Combining EEG and HD-tES

Often, we hear that pioneering research labs are looking into ways of combining EEG and HD-tES. While there can be certain challenges on the mechanical and electrical compatibility side, we propose a solution that makes this integration straightforward and user-friendly.


Soterix Medical stimulation and mBrainTrain wireless EEG systems can be combined for sleep and awake experiments to acquire research-grade EEG data while providing optimized HD-tES.

Picture 1. Soterix Medical HD holders inserted into EEG cap

Soterix Medical HD Holders can be inserted at any location inside the mBrainTrain EEG cap. Custom Hybrid holders are also available that allow stimulation and recording from the same location.

Two fundamental issues underpin the rational integration of tDCS and EEG: mechanical compatibility and electrical compatibility.

1. Electrical Compatibility with HD-tES:

Electrical compatibility with optimized stimulators and a rational experimental design is important. It is not prudent to assume that simple high-pass filtering of the EEG signal will remove confounds from the recording: DC artifacts may be non-stationary due to unknown electrode impedance changes during stimulation, and the active electronics of an EEG amplifier can induce non-linear distortion. For this reason, Soterix Medical biomedical engineers have developed protocols to implement and validate EEG+tDCS compatibility. Note that conventional experimental control arms, such as sham stimulation or a control montage, will not in themselves control for EEG+tDCS distortion, precisely because the stimulation has been changed (e.g. only the active arm will experience the artifact). Soterix Medical’s and mBrainTrain’s unmatched technical support is ready to guide you through a straightforward validation process.

Picture 2. Noise Comparison between SMI device and Other Commercial Devices

2. Mechanical Compatibility with HD-tES:

Soterix Medical offers a simple and unique solution to tDCS+EEG through the use of High-Definition (HD) tDCS electrodes. Compact in size and gel-based, with validated tolerability, HD electrodes can be inserted between EEG electrodes without compromising mechanical compatibility.

HD electrodes are compatible with any Soterix Medical tDCS/tES or HD-tDCS/HD-tES stimulator. Moreover, Soterix Medical provides HD electrode holders for integration with all mBrainTrain EEG systems.

Picture 3. Soterix Medical HD holders inserted into EEG cap

Additional Resources

Motion artifact removal using Artifact Subspace Reconstruction

What’s the challenge?

EEG signal amplitudes are very small (on the order of microvolts) and the recorded signal can be corrupted by artifacts coming from literally anywhere. This is especially challenging when subjects are moving, as movement elicits a number of different artifact sources that are all stronger than the EEG signal itself. These artifacts come from the neck, jaw, or teeth clenching, or even just from micro-movements of the EEG electrodes with respect to the scalp.

Now, these artifacts are extremely complex to detect, and even more complex to remove from the EEG signal when we want to obtain clean EEG. Why? Well, for several reasons:

  1. The frequency band of the artifacts overlaps with the useful EEG;
  2. The signals may be uncorrelated in the electrode space, and therefore cannot be easily removed by principal or independent component analysis or similar tools;
  3. Both EEG and the artifacts are non-stationary.

To deal with these artifacts, mbt implemented a real-time cleaning algorithm based on Artifact Subspace Reconstruction (ASR) in SMARTING PRO. The ASR algorithm was developed by scientists from the University of California San Diego and is outlined further in the text. The algorithm takes chunks of data and assesses whether the data is artifact-free or not. Only when it detects artifacts does it go into the procedure of cleaning the data. The corrupted EEG data above, when cleaned by the algorithm, are shown in red in the figures below.

The algorithm works in two steps — Calibration and Processing — and I will take a few paragraphs below to explain those steps. In the meantime, it is useful to say that an implementation of the algorithm is freely available in EEGLAB, and we encourage you to try it out on your data.

Now, let’s take a deeper dive into what the algorithm actually does.

General Idea

The purpose of this article is not to be entirely mathematically correct, but to present the concept, and introduce the idea behind this algorithm.

The algorithm relies on the assumption that if M_r is the PCA mixing matrix of the “clean” or “calibration” signal X_r,

M_r = PCA(X_r),

then the signal window X at time t, X_t, can be represented in the space of M_r, such that

X_t = M_r S_t,

where S_t are the latent sources. Given that the PCA of X_t is

PCA(X_t) = V_t D_t V_t^T,

the latent clean components (or sources) S_t^(clean) can be derived as

S_t^(clean) = (M_r^(-1))_trunc X_t,

where trunc denotes the truncated demixing matrix, from which the artifact components have been removed.

The entire workflow of ASR (as taken from Chang et al.) is depicted in the figure below


 

Calibration

The first step of the algorithm is calibration. In the calibration step, the algorithm “learns” the “data principal components space” (in practice, it computes a covariance matrix) and uses the learned components in the further steps to assess whether a chunk of data is clean or corrupted by artifacts.

The first part of calibration, however, is to preserve the EEG-specific content (like the alpha rhythm) so that it never gets removed. This is achieved by subtracting the mean of the data and pushing the data through an IIR filter that removes alpha before the PCA is computed.

In the next step, we compute the mixing matrix of the filtered data X_f:

M_r = PCA(X_f).

Then we simply compute the root-mean-square (RMS) of the principal components and set the calibration threshold at

Γ = μ + kσ,

where Γ is the threshold matrix, μ and σ are the mean and standard deviation of the principal components’ RMS values, and k is the parameter of the algorithm that regulates the threshold above which a principal component is considered an artifact. In typical artifact-cleaning applications, this parameter is usually set between 5 and 7.
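
To make the calibration step concrete, below is a minimal NumPy sketch of the idea. This is not the SMARTING PRO or EEGLAB implementation; in particular, the function name, the window length, and the use of plain consecutive windows for the RMS statistics are simplifying assumptions:

import numpy as np

def asr_calibrate(X, win=125, k=5.0):
    """Minimal sketch of ASR calibration (simplified).

    X   : (n_channels, n_samples) clean calibration EEG, already
          filtered to protect EEG-specific content (see text above)
    win : window length in samples for the RMS statistics (assumed)
    k   : threshold parameter, usually set between 5 and 7
    """
    X = X - X.mean(axis=1, keepdims=True)     # subtract the mean
    C = np.cov(X)                             # channel covariance matrix
    _, V_r = np.linalg.eigh(C)                # principal component directions
    S = V_r.T @ X                             # component activations
    n_win = S.shape[1] // win                 # RMS of each component per window
    rms = np.sqrt((S[:, :n_win * win]
                   .reshape(S.shape[0], n_win, win) ** 2).mean(axis=2))
    mu, sigma = rms.mean(axis=1), rms.std(axis=1)
    gamma = mu + k * sigma                    # per-component threshold, Γ = μ + kσ
    return V_r, gamma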

Processing

After the algorithm has been calibrated (usually 30 seconds to 2 minutes of data are needed for calibration), we can start with the processing step. The processing step takes short chunks of data (e.g. 64 samples), checks whether each chunk contains artifact components (based on the calibration) and, if so, removes the artifact principal components and substitutes them with the latent components from calibration.

To do this, the processing step first calculates the principal components of the windowed input signal X_t:

PCA(X_t) = V_t D_t V_t^T.


For each window, we compute the component activations S_t = V_t^T X_t and check whether the j-th principal component, with variance D_t(j,j), exceeds the rejection threshold Γ projected from the calibration space V_r onto the window space V_t.

If it does, we replace the activation of that component with zero (i.e., truncate it),

S_t^(trunc)(j) = 0,

and reconstruct the “clean” window as

X_t^(clean) = V_t S_t^(trunc).

This processing step can obviously be performed iteratively, which is how real-time data cleaning is achieved.
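
Below is a matching sketch of the processing step, under the same simplifications as the calibration sketch above. Note that the full ASR substitutes the removed subspace with components reconstructed from the calibration data, while this toy version simply zeroes the offending components:

def asr_process(X_t, V_r, gamma):
    """Minimal sketch of the ASR processing step for one window (simplified).

    X_t   : (n_channels, n_window_samples) incoming chunk of EEG
    V_r   : calibration PCA matrix from asr_calibrate
    gamma : per-component rejection thresholds
    """
    X_t = X_t - X_t.mean(axis=1, keepdims=True)
    _, V_t = np.linalg.eigh(np.cov(X_t))      # PCA of the current window
    S_t = V_t.T @ X_t                         # component activations
    rms = np.sqrt((S_t ** 2).mean(axis=1))
    gamma_t = np.abs(V_t.T @ V_r) @ gamma     # thresholds projected onto V_t
    S_t[rms > gamma_t, :] = 0.0               # truncate artifact components
    return V_t @ S_t                          # reconstruct the "clean" window

In a real-time setting, asr_process is simply called on consecutive short chunks (e.g. 64 samples), exactly as described above.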

Wrap up

The Artifact Subspace Reconstruction (ASR) algorithm successfully cleans artifacts from EEG data. It is especially useful in non-stationary conditions, as it can handle artifacts coming from motion-related sources (such as muscles or electrode micro-movements).

To view the demonstration of real-time artifact cleaning with SMARTING PRO, check out the following video.

In the meantime, read more about the algorithm and check its performance in a number of artifact-removal comparisons with other algorithms in the resources below.

Credits and references

• Mullen, T. R., Kothe, C. A., Chi, Y. M., Ojeda, A., Kerth, T., Makeig, S., … & Cauwenberghs, G. (2015). Real-time neuroimaging and cognitive monitoring using wearable dry EEG. IEEE Transactions on Biomedical Engineering, 62(11), 2553–2567.

• Chang, C. Y., Hsu, S. H., Pion-Tonachini, L., & Jung, T. P. (2019). Evaluation of Artifact Subspace Reconstruction for Automatic Artifact Components Removal in Multi-channel EEG Recordings. IEEE Transactions on Biomedical Engineering.

• Dereymaeker, A., Pillay, K., Vervisch, J., Van Huffel, S., Naulaers, G., Jansen, K., & De Vos, M. (2017). An automated quiet sleep detection approach in preterm infants as a gateway to assess brain maturation. International Journal of Neural Systems, 27(06), 1750023.

mbt – memoirs of our brand life

They say every company writes its memoirs. And here are the first few chapters of ours…

Do not worry, it is not really a memoir, it is just a few minutes’ read.

Before we travel through time to 2012, we would like to set your expectations right — the last changes we’ve made are more evolutionary than radical, and yes, that is the image that caught your eye just below this paragraph. We believe you do recognize some elements 😊

Now, let’s stick to the form.

CHAPTER ONE — BIRTH & EARLY LIFE:

The birth of our brand happened in 2012, and our first logo was, well…. our first logo. As a company that had just emerged in a very demanding market, we simply wanted people to know what we were actually about (the brain, obviously). As a short note for our family and friends — we still don’t read thoughts.

We picked the company name (again quite self-explanatory) and picked a color.

Mission accomplished.

Good enough to get going.

CHAPTER 2 — TEENAGE YEARS AND COLLEGE:

Like every teenager, we knew we were changing, and we wanted to find our recognizable style while staying true to the child within. We wanted to be distinctive and beautiful, loved and recognized wherever we appeared (read: ‘we needed a simple design easily applicable online and offline’).

The company developed further and our presence kept increasing, both at events and online, which took us to the next checkpoint. We wanted to be simpler, more approachable. And, ok, we wanted our branding to be more than just stating the obvious.

At that time, it was all about our flagship product — SMARTING, and we intuitively perceived it as the physical representation of our brand identity.

Maybe not obvious at first sight — but yes, it was still a brain. Not a simple illustration of a brain as in Chapter 1, but a story about the brain that reflects our company vision.

In the meantime, SMARTING had become a recognized product, present all over the world in over a hundred laboratories. And not only that — it opened the door for the development of new products and services — SMARTING sleep, SMARTFONES…. But the success of SMARTING was such that people increasingly started confusing the company name with the name of the product. Visual diversity, hundreds of business cards, and leaflets simply were not enough.

We had to admit it to ourselves — we had an identity crisis.

CHAPTER 3: EARLY MATURITY

Ok, the previous four years were difficult, struggling with who we were and what we were heading to become. And we believe it has happened to all of us — you had always thought your life would be about that one thing you are famous for. Then life happens and imposes new roles upon you, and you realize that yes, it will be that one thing you are great at, but there are also new skills you are developing that are all important parts of who you are.

This is exactly what happened to us. As a team committed to bringing innovation, we needed a strong mother brand (mbt) that would allow that innovation to happen on different fronts (new products and services). So, we had to adjust — by keeping what makes us who we are and making our corporate brand a branded house for all the new sub-brands that will emerge on our great mission.

Here we are… The year 2020, strange as it turned out to be, happened to be the year of our new branding. This time, it is not simply a corporate and product logo; this time it’s a branded house.

It is simpler and more defined, reflecting the spirit of our company — fresh, cheerful, and dynamic. We believe that in the years to come it will facilitate company scaling while keeping us in your memory.

We’ll stop here — there are many more chapters to come, but we are ready to embrace them and change again when it becomes necessary. Now, we leave you with the new look, so you won’t be surprised when the next shipment arrives looking different.

Don’t worry, we are still the same group of people, only with a more consistent presence, and hopefully, a more memorable one.

To be continued…

Hope to meet soon,

Your mbt team

Neural Implant podcast: How mBrainTrain is producing a mobile EEG device

NARRATOR: Welcome to the Neural Implant podcast, where we talk with the people behind the current events and breakthroughs in brain implants, in understandable ways, helping bring together the various fields involved in neuroprosthetics. Here is your host, Ladan Jiracek.

 

HOST: Hello everyone, and welcome to the Neural Implant podcast. Today’s guest is Ivan. He is from Serbia, and he is the CEO and founder of mBrainTrain, which makes a wearable EEG built into an over-the-ear headphone, kind of like studio headphones. Basically, they make this for research, and they use special electrode sites. So this goes along with some of the episodes we’ve been having recently about wearables and EEG. It’s not implantable technology necessarily, but I think it is important. He talks about this in the beginning: it is something we have to watch out for, because a lot of this technology is more accessible, whereas the implantable technology is really only for patients in the hospital, or will be for some time. So, anyway, hopefully you enjoy this. Thank you.

 

HOST: You’re from Serbia, which explains the name. And you are the CEO and co-founder of mBrainTrain, which makes a headphone-style, EEG-enabled research headphone, or a headphone that people could use in the consumer space. Do you want to explain this a little bit? Why would somebody need an EEG over-the-head headphone?

IVAN: The question has several layers of complexity. You mostly cover the invasive ways to record brain activity, and these invasive ways are not really suited for everyone; the people who are, in my opinion, going to be the first users of such solutions are people who have some sort of medical condition.

If we go back to, let’s say, everyday people, what options do we have to record their brain activity? This brings us to the topic of EEG – electroencephalography. This technology is used in research and in medical institutions, but why wouldn’t it be used in the everyday life of humans?

Well, it offers a glimpse into how your brain works in everyday conditions, which you do not get from any other technology that exists around us. So, let’s just take the example of your mobile phone. Somebody can see and track your activities: your time to work, your time at work, at the gym, outside, your speed, number of steps… But all of this technology cannot really tell you how you feel or how to optimize your mental health, and consequently you cannot tune your mental activity to give you the best results. That’s the short introduction on why: we are all living in a society where we want to achieve more and be more productive, but this often leads us to very stressful situations, where the distress actually comes out of this ambition, let’s call it that, and is the very thing that prevents us from achieving more.

There is no one-size-fits-all solution to that problem. If there was, we would all, you know, just read a book by Michael Jordan or Novak Djoković on how to be our best self, how to be a winner, or how to be in your Zen state. There has never been a time in history with more material on how to be productive and how to achieve more, and yet the results keep decreasing. The reason for this is that every individual is different, and you have to tune your daily activities and your efforts by yourself, not in some universal way. EEG is one of the rare things that can help you achieve that.

It has been known from the literature for a couple of decades that, with more or less success, you can extract mental workload, or focus, or even quantify stress from the EEG. This brings me to a very interesting point, because if you read the book by Daniel Kahneman, Thinking, Fast and Slow, there are two parts of us as individuals: the experiencing self and the remembering self. In hindsight, we tend to give wrong answers about what really happened to us. If I ask you, for instance, whether you were happy last week, something that you currently feel may really bias the answer in some direction. The only way to, you know, get around this is to actually measure how you feel, how you act, and how productive you are at the time when it happens.

HOST: And so what kind of aims do you hope to achieve? I mean, do you want to describe the device? You sent me a nice article and I see it in front of me. It is an over-the-ear headphone that you’d wear to, like, listen to music, and it has some kind of electrodes coming out. At the top of the head it has three, and then around the ears it has three. And then you guys talked about semi-wet electrodes, because of course for EEG you need to have kind of a saline solution to have good contact with the skin. What does that actually mean, semi-wet, and what is the reasoning behind the design of the device?

IVAN: Let’s start with the design of the device. I mentioned that we have a goal to measure your brain activity. Now, traditionally, with noninvasive EEG technology, that has been achieved by wearing a cap with electrodes over your head. And, as you might guess, this is not the most practical thing to do in everyday situations, because of course there is, and would be, a sort of social stigma around it, and the end would not justify the means. Using a headphone is on the other end of the spectrum. You’re very used to seeing people wearing headphones, and this is what we realized early on and said, okay, it would be great if something that is already accepted, like headphones, could be used to also measure your brain activity in real time and enable you to use it in real-life situations. So that was the motivation to go with headphones. As for the electrodes: we are a research-oriented company, meaning that we have clients from over 30 countries, and almost all the notable scientific institutions have at least some of our equipment in their labs. So we started from these strict research requirements for data quality. And that indeed brings me to the type of electrodes and the saline contacts you have to have for this research-grade signal. However, it is not always going to stay this way. We are working on technology that will soon enable us to bring it closer to you or me. Indeed, there are electrodes there, on top of the headphone.

So, as you said, a couple of contacts there, a couple of contacts around the ears, and these actually allow you to extract a part of the EEG which would otherwise be recorded on the entire head. This gives you a very good insight into what goes on in the brain. And it’s very new. It was just launched last year, but it’s gaining immense popularity with some strong research groups. So, as you mentioned, I wrote a blog post about it. I hope to have something more to report soon.

HOST: Yeah, it’s very interesting. I mean, this reminds me a lot of a company called Halo Neuroscience. They have kind of a similar product: over-the-ear headphones with kind of like little spikes, and they call it a brain stimulator that helps you develop muscle memory faster. It’s more for athletes and everything like that, but I also heard that they stopped selling it, or they went under. How is yours different, and how would you prevent, I guess, your company from failing?

IVAN: Interesting question. There are many, many people trying something along similar lines. Let’s start with the conceptual difference. You have the active system, in the sense that you have some sort of active stimulation of the brain, and you have this passive brain-computer interface approach that we are fond of. Now, I’m not an expert in brain stimulation with either constant or alternating current (tDCS or tACS), but what I tend to do in cases where I don’t know enough about the topic is talk to people who are in those fields. And what I found interesting is that none of the top researchers who conduct research in electrical brain stimulation actually use it on themselves, and that brings me to believe that this is not a really well-confirmed scientific field, to put it like that. I’m not saying that this is not working or will never work; I just strongly believe that there is more to it than just bringing it to use, and it is very premature. In other words, you do not control whether you are doing something bad to yourself using some solutions along these lines, or whether you are actually doing any good, so there need to be more studies actually confirming the positive effects and the overall strategy of use of these devices.

Now, contrary to that, there is using EEG in a passive way, as we put it. What does that mean? It can mean more than one thing, but let’s simplify it and say we have headphones, and let’s say you have several types of music or auditory input you want to play to the brain depending on the current brain state, to achieve a certain goal. In my opinion, that really carries no risk to the individual, because you’re not modifying any of our biological inputs. So that’s the main difference, but I am happy to see that there are more and more people trying to get into this field, and I believe this is a good thing.

HOST: Yeah. It’s pretty exciting stuff. Anyways. You are in Serbia. What is the advantage of being in Serbia and how has that affected your guys’ growth?

IVAN: Well, Serbia is my home place. Before that, I had been doing a Ph.D. in biomedical signal processing in Belgium. So, the start of mBrainTrain coincides with me coming back to my hometown. And, to be honest, when we started, I didn’t even know we were a startup. I’m sure that sounds really crazy, and if I had been in the United States, this would definitely not have been the case. I just knew I was driven by the desire to create, first, a mobile EEG. We spent some minutes talking about it privately, but this was sort of a driving force.

The country and the market here were not really startup friendly, with little access to VC funding or even angel investments. There is a lot of bootstrapping you have to do. We had luck: we got one government grant early on, in 2014, and this really helped us immensely.

HOST: Sorry, is that a grant from Serbia? Or a grant from the EU?

IVAN: It is a grant from Serbia. I think the particular fund is called the Innovation Fund, set up by the Serbian government. It is not part of the official EU funding scheme or Horizon 2020; let’s say it is more of a government fund. But anyway, we had to bootstrap a bit, and there were quite a few difficult moments, because there was no way to get some bridge money when that fund started running out, about a year after it started. It forced us to think clearly and to really be able to put out an MVP soon. Our first device was really like an MVP. The software was so basic that you simply had one button: record. And you observed the impedances, i.e. the quality of the contacts from the electrodes. But that sort of MVP approach enabled us, later on, to tackle some very important issues, like the quality of the Bluetooth wireless connection…

It sounds very trivial, but believe me, it is not. You have many Bluetooth devices, and many of them are constantly pairing and doing all sorts of stuff. So if you transfer EEG over Bluetooth, which we do, and we did, it is such a high-throughput connection that it has to work perfectly. And of course, it doesn’t work perfectly out of the box. We relied heavily on it, and because we could not do a lot of re-engineering along the way due to a lack of funds, we were forced to really deep-dive into some of these topics and sort them out. Consequently, we have great devices nowadays that work, in my opinion, in a very, very reliable fashion compared to some others.

And Serbia, the rest of it. Well, part of me felt at home, part of me felt disconnected. The startup community was just coming into being, in a way. Fast forward to 2020, and things are way better. Now you have a community, you have meetups, you have all sorts of things. I think we are still short on financing, but there are funds. There are also a few in the surrounding countries, and there are possibilities. I think the situation is way better, and we are locally regarded as one of the pioneers.

HOST: So what are your short-term and long-term plans?

IVAN: I mentioned our vision to bring EEG to everyday people, and another angle of looking at it is as a sort of new type of interface, if you think of it like that. Imagine what kind of interface you have with the rest of the world. You have your language, right? You have your computer and you have your mobile phone. This sort of interface that I am foreseeing, and that we believe in, is based on true mental states, including emotions, that could be used to communicate. To achieve that, there is a long way to go yet. The signals that we capture are in the microvolt range. What that means is that everything around you (simple movements, changes in sound, temperature, lighting) can disturb those signals. And there is a lot of technology behind the scenes that needs to be developed; luckily, we are developing it very fast: the electronics, the algorithms, the materials like electrodes, and so forth. So once you get your personal EEG device, you shouldn’t have to spend your time adjusting to it, because that is not our idea. The idea is that this new type of device doesn’t take time from you but, just the opposite, helps you regain some of your time and life back through optimization of your life and work. And, let’s not forget it, this should be fun, this should be a fun experience.

So we are getting there; every year brings us closer, and every quarter there is a small milestone that we cover. We are going to remain in the research space because we believe in a very scientific approach; we believe that the top labs we collaborate with should also be confirming that the technology behind what we propose is sound.

And we are going there one step at a time. I’m guessing we are going to see one or two more research headphones. And then, who knows, maybe you get to wear one of our devices yourself.

HOST: Hopefully. Ivan, this has been excellent. Thank you so much. Is there anything that we didn’t talk about that you wanted to mention?

IVAN: Difficult question; it’s past midnight here, so it’s hard to remember. But it’s been a pleasure for me as well. I’ll keep track of your interesting guests and yourself. So let’s keep in touch, and maybe we catch up with some exciting news soon, too.

HOST: Excellent. Looking forward to it. Thank you so much.

NARRATOR: Thank you. Hope you enjoyed the show and were able to learn something new, bringing together different fields in novel ways. Until next time on the Neural Implant podcast. For more interesting podcasts on neuroprosthetics, brain-machine interfaces, and brain implants, please visit neuralimplantpodcast.com

 

Video – EEG at home – Epilepsy diagnostic helper of the future?

Epilepsy is a prevalent neurological disorder in which brain activity becomes abnormal, often leading to seizures or unusual and abnormal behaviour. It is present across the globe, with causes that often remain undetermined. More than 1% of the world population is estimated to suffer from this disorder, which is not specific to race, ethnicity or geography. That makes more than 65 million people globally. Moreover, about 1 in 26 people will develop epilepsy at some point during their lifetime. Epilepsy can begin at any age and can seriously affect people’s lives on multiple levels, from social relations to work-related activities.

The gold standard for diagnosing epilepsy is the use of combined EEG (electroencephalography) and video data. Between 20 and 50% of people who undergo this diagnostic procedure are found to have psychogenic seizures: visually indistinguishable and often falsely attributed to epilepsy. However, due to its cost and the required personnel and specialized medical institutions, this technique is often not available to many in need. Some studies show that annual per-person, epilepsy-specific costs range from one thousand to up to twenty thousand dollars. These costs tend to be higher for uncontrolled or treatment-refractory epilepsy.

A gap exists between employing medical equipment (costly and with a long certification period) and the oral interview that the neurologist uses to guide their judgement (which could be misleading). We discovered there is a lot of room for improvement, namely by providing assistive tools to support the neurologist’s judgement, tools that need not go all the way to regular medical equipment.

We discovered that technology came to the rescue once again. The previously employed technique of using Android phones as a platform for recording EEG, which provides extreme “spatial” flexibility, can be extended by using the camera system on the same device.

Given that the video and EEG are synchronized, which is ensured by mbt software, this becomes a very powerful and affordable “home laboratory”. It becomes even more important during a pandemic, when hospital visits are difficult and risky, and just leaving home can further endanger the health of those already suffering from epilepsy. Such a solution also greatly increases the availability of EEG services, especially for patients in low- and middle-income countries, and could contribute significantly to the fight against epilepsy on a global level.

What we describe is a still patent-pending technique and, accordingly, still in the research domain. What we can say is that we believe there is a great future for similar technologies, and we hope that these assistive tools will soon impact our lives in very positive ways. Taking video and EEG from the laboratory to the home environment opens up the great potential of continuous screening, which could lead to the development of a system able to detect or predict seizures. This would be highly beneficial to anyone at risk of epileptic seizures, especially in professions that require high levels of concentration, such as drivers or doctors. Finally, this scenario could lead to diagnosing epilepsy at the earliest possible stage. Once detected, epilepsy could perhaps be postponed, or maybe fully prevented.

Although the future of home treatment of epilepsy with the help of video and EEG looks great, we should not forget some other benefits that we give up every time we leave home. Actually, it is very hard not to see them. Staying at home, in the company of your loved ones and in the space that suits you best, cannot be compared to visiting medical institutions. How about, instead of going to the hospital and risking further harm, staying home with your family and friends, having a physician on a Zoom call with your data in front of their eyes in close to real time, and getting high-quality diagnostics?

To get even closer to the possibilities provided by synchronized video recording and EEG at home, we will turn our attention to one current case. It’s about little Leon, a four-year-old child affected by a rare genetic disorder. He has been diagnosed with Syngap Syndrome and is the youngest known among around six hundred cases worldwide.

There are many symptoms of the disease, including severe mental impairment, epilepsy, autism, severe sleep impairment, muscular hypertonia (limb muscles) resulting in motor problems, impaired speech development, impaired digestion and eating, and learning disabilities.

Leon and his parents have long since become heroes, yet this situation requires modern techniques and methods of treatment that can be crucial in the fight against such a vicious disease. Their main goal is to find a cure for a currently incurable genetic disorder! This incredible dream is becoming more and more real day by day thanks to the great support received by Sandra, Florian, Leon and Marie in cooperation with AIT Vienna, University of Edinburgh and other institutions. If you would like to help this great cause visit http://leonandfriends.org/english/ for more information.

  

Q&A: Doing Multi-human EEG

Host: Ivan Gligorijevic, CEO, mBrainTrain
Panelists: Suzanne Dikker (New York University),
Karl Philipp Flosch (University of Konstanz),
Martin Imhof (University of Konstanz)

 

———————————————————————————————————-

Intro

Humans act and interact all the time, every second. The interactions between one human brain and another have always been a focus of neuroscience and have gained immense importance in the scientific realm in the past few decades. The reason for the increasing interest in this methodology among researchers is the advantage of real-time monitoring of multiple brains during social interactions, and of intersubject correlations.

Advancements in EEG, from lab-centric set-ups to mobile EEG for social and dynamic studies, have led many researchers to take up new experiments and projects on social and multi-human EEG.

Introduction

Ivan Gligorijevic, CEO of mBrainTrain, hosted a discussion on Multi-human EEG with panelists:

Suzanne Dikker from New York University,

Karl Philipp Flosch and Martin Imhof from the University of Konstanz, who have expertise from working on several EEG projects, especially multi-human studies.

Ivan Gligorijevic:  

Setting up an EEG experiment anywhere, inside or outside a laboratory, is quite challenging. Imagine setting up multi-human EEG experiments running simultaneously! It’s undoubtedly a tough job.

Why would anyone be willing to torture themselves in such complex settings and situations?

Let’s start from the beginning…

Why is there a need to use EEG in social settings?

Martin Imhof:

There are many reasons to use EEG in a social setting. Take the example of my lab: my research deals with measuring how people process different (health) messages and videos. In our setting, we measure people sequentially (one person at a time). These people are mostly college students, and they are not always the participants of interest (i.e. the proper target group). Having them as participants in every research experiment is not a good choice. So, the practical advice would be to go out, find your real target group, and measure as much as you can from it. This real-world applicability is a very good feature of multi-human EEG.

Karl Philipp:

From my point of view, the most exciting variable in all psychological processes is time. We know that in our interactions, a lot of things happen very intuitively and quickly. For example, humans form an impression of a person in less than a second. People act and react at different time scales. This itself can be measured: we can ask people (in a social setting) to think of an argument they had with their partner some time or moments ago. We give questionnaires to these subjects and measure the cognitive processes at the in-between time scales, from the occurrence of the event to when we ask them the questions.

However, it is still uncommon to measure things as they unfold online, i.e., when the event is happening. But now, with EEG and its fantastic time resolution, we have the ability to measure within social settings while the event is happening. It is quite interesting to have these very small time scales before and after the events, which were not considered so far because there were no technical possibilities for this. For this reason, multi-human EEG is quite fantastic.

Suzanne Dikker:

I completely agree with what Karl-Philipp said. One major advantage of going not just into social settings but also into more real-world settings is bringing the equipment to the people, to reach populations that don’t typically get studied in our undergraduate laboratories.

As Karl-Philipp pointed out, there is a real difficulty in answering some research questions within a laboratory environment. It is much more comfortable (and seems to be a better solution for studying people) to just bring the equipment to people’s homes or other dynamic social environments. This is undoubtedly a significant advantage of today’s tech developments.

And last, we now have the opportunity to test whether the processes and mechanisms that we’ve addressed within the laboratory setting (that we’ve been trying to model with decades of research and important findings) apply in a real-world dynamic social context. The hypotheses of how we process social stimuli and how we behave in social settings can now be tested. So this is another advantage.

Why do a multi-human EEG? What can we find out from these studies?

Martin Imhof:

There are several dimensions to this, and there are both pros and cons to doing multi-human EEG. In a social setting, we open ourselves up to more influences by having an uncontrollable environment. Yet it holds real wealth: as we already know, neural synchrony, when we measure it (in a group or sequentially), depends on the stimulus as well as on the psychological variables of the perceivers (for example, whether they attend to it or have a specific engagement). The dynamic interactions in a social situation really influence the measurement of neural synchrony.

Suzanne Dikker:

Let me summarize my studies on Collaborative Learning in the classroom (Relationship between synchrony and learning: http://www.sciencedirect.com/science/article/pii/S0960982217304116) for more insights on real-time monitoring and measuring of social interactions.

We partnered with several New York City high schools, where we collaborated with the teacher and took over their entire classroom (in agreement with the teacher, of course) for the full school year. Initially, we gave all the kids an introduction into what Psychology and Neuroscience are, and also raised awareness of the research design that we intended to apply in our study with the use of EEG. We also explained how we would measure the interactions between the whole class and the teacher using EEG while they were engaging in their regular classroom in real time. They were learning their standard biology content while we were recording their EEG. We ensured the use of different teaching styles in different classes, measured the social closeness between the kids and a number of personality traits in later studies, and also assessed their learning. We had a number of factors that predicted the extent to which their brain activity synchronized.

With all these efforts, we were able to predict things like classroom social dynamics, how close kids felt to each other, and their classroom engagement: how a kid gets engaged in a given activity, and a lot more. Attending to a stimulus was a strong predictor of synchrony; we also found that some personality traits predicted the extent to which activity became synchronized between kids and the teacher. For example, kids who were more “groupie” (i.e., liked being in social settings more) were more in sync with their classmates.

Question from the audience (prof. Aina Puce):

How many people would you need to help when you set up 12 people for hyperscanning?

Suzanne Dikker:

Take the same example of my study. In the beginning, we created a sort of student-teacher-scientist partnership; in other words, we involved everyone throughout the school year:

  • At the beginning of the year, we taught the kids how to set up the equipment and also taught them about EEG and the signal
  • We asked them if there were any questions they felt were important for this study, so we could incorporate them: we created ownership of the project for them
  • We had a 50-minute period for each class, which included about 10-15 minutes to set up the equipment. They were trained and motivated to help each other achieve a perfect setup.

The data collection lasted about five or six months. This educational, inclusive model appeared to be very important; we found this out when we tried to address follow-up questions in the next study, at another school. The kids there were much less motivated to participate. When we asked them to look at the wall for two minutes, they would look at their phones, and some kids fell asleep during the class. So, this educational model makes a stark difference.

We have also discovered, through other real-world research attempts (like in museums) as well as in lab studies, that it is super important to have your participants motivated. Motivation is crucial not only for participants, but also for the research assistants who are helping to get the project going.

Where to get started with multi-human EEG or hyperscanning? Do you have your entire pipeline in place, or do you manage on the go?

Karl Philipp:

My role is to build up the entire pipeline of data recording and processing for multi-human EEG in our lab, from scratch. Obviously, it’s quite challenging: there are no all-around, carefree solutions for this technique, as it is very new and emerging, and the community is small. The Smarting mobile EEG solutions are absolutely pioneering; however, it is impossible to make a solution for all needs and settings. It’s a great strength to have these new EEG designs for real-world settings. In most settings, routines don’t work, and continuous learning is required (testing out the possibilities, new designs, data analysis, and so on). We need to tailor the techniques and knowledge to the specific needs of our experimental setting and methods of analysis. We need to make the entire pipeline as good as possible and decrease the time between data recording and finding the first results, while continually improving and validating our setting.

Suzanne Dikker:

From my experience, I think it depends; we have used several protocols and several different devices. The same device might work well in one study but not so well in another. For example, we were lucky that in our classroom study the set-up worked well in terms of Bluetooth range, but it didn’t work at another school.

Later, we also found out that the number of devices we could record on a single computer was limited to 12, which was also the number of EEG devices we had. There were many moving parts; for example, there were certain protocols that weren’t incorporated on my computer, like Lab Streaming Layer (which is very important for hyperscanning studies).

Generally, you can always face different and completely new issues when you are applying a different setting.

A question from the audience (Francisco Parada): Are you building from scratch? Some are busy building on existing software, like the EEGLAB mobile app or others.

Karl Philipp:

For recording, we are using the Smarting software. For the analyses, we try to be as independent as possible from toolboxes, because they are designed for classical EEG studies… We are now working on a method called intersubject correlation (from the Parra Lab in New York), and it is not implemented in any of the toolboxes, so we need to build our own analysis pipeline without packages.

Question from the audience: in social settings, how do you sync events with specific triggers in non-controlled settings?

Karl Philipp:

It depends on your design. You can implement triggering through the already mentioned Lab Streaming Layer protocol. This is an event layer which is recorded by the Smarting Streamer into the same file. It depends on your design whether you want to press the triggers yourself (when it’s slow enough), or, if you are showing videos in an experiment on a big screen, you can also use software like Neurobs Presentation to send triggers via the LSL protocol over the LAN to the recording PC.

This blog post is the transcript of interesting questions from the mbt talks webinar. Watch the complete webinar here: http://www.youtube.com/watch?v=2eC-V9wDsjs&

How to Set Up Precise Sound Stimulation with PsychoPy and pylsl

I was looking for a ready-made solution to generate high-precision sound stimuli in PsychoPy and send the triggers through lab streaming layer (LSL), but I wasn’t able to find anything ready to run. This short post explains what you need to do to implement it yourself.

Before you start, of course, you have to set up your environment. To do this, install pylsl, psychopy and psychtoolbox. All of them are available through pip install. Once you do this, you are ready to go.

The first thing to do in order to secure low latency for sound is to choose the ptb library instead of sounddevice (which is the default one). To do this, import prefs from PsychoPy and change the prefs before you continue:

from psychopy import prefs
# change the pref library to PTB and set the latency mode to high precision
prefs.hardware['audioLib'] = 'PTB'
prefs.hardware['audioLatencyMode'] = 3

The audioLatencyMode can be set to an int from 1 to 4, where 1 means audio latency is not important and 3 means high-precision mode. There is also the critical mode (4), which is basically the same as mode 3, with the difference that the script will break in case the system is not able to secure high precision.

After the preferences have been changed, you may continue to import necessary libraries (if this is done before the preferences are changed, the change will have no effect):

#import other necessary libraries
import psychtoolbox as ptb
from psychopy import core, event, sound
from pylsl import StreamInfo, StreamOutlet

Open the LSL outlet

The next thing to do is to set up the LSL outlet, which is pretty straightforward:

# Set up LabStreamingLayer stream outlet
info = StreamInfo(name='sound_example_stream', type='Markers', channel_count=1,
                  channel_format='int32', source_id='sound_example_stream')
outlet = StreamOutlet(info)  # Broadcast the stream.

and to load the sound file (in case you want to use the existing sound file):

beep = sound.Sound('beep_sound.wav')

Set up your low latency

What follows is how to actually make sure the trigger marker reaches high precision. To do this, the sound needs to be pre-scheduled. If we just tell the sound to play, we have no control over when exactly the sound card will actually play it. As the sound card needs a few hundred milliseconds to prepare the sound, we need to allow enough time for it to be played (e.g. 500 ms). To do that, we need to compute the timestamp that is 500 ms from now:

#Calculate the time stamp 500ms from now to allow enough time for the sound card to prepare the stimulus
sample_stamp = ptb.GetSecs() + 0.5

And then schedule the sound 500ms from now:

beep.play(when=sample_stamp)

What is left is just to push the trigger together with the timestamp through the LSL outlet:

markers = {'sound': [1]}
outlet.push_sample(markers['sound'], sample_stamp)

In the end, we should also allow some time for the system to recover before we play the next sound.
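
Putting the pieces together, the trial loop could look like the following sketch. The number of trials and the length of the recovery pause are assumptions for illustration; everything else reuses the objects defined above:

n_trials = 70  # assumed number of trials

for trial in range(n_trials):
    # schedule the sound 500 ms ahead so the sound card can prepare it
    sample_stamp = ptb.GetSecs() + 0.5
    beep.play(when=sample_stamp)
    # push the trigger with the same timestamp through the LSL outlet
    outlet.push_sample(markers['sound'], sample_stamp)
    # allow the system to recover before the next sound
    core.wait(1.5)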

Results

I tested this with the SMARTING mobi EEG device, using our Delay/Jitter box (DJ box). SMARTING mobi is connected to the output of the laptop’s sound card through the DJ box and sends the recorded data via Bluetooth back to the computer, where the recorded data is synchronized with the sound trigger (see the picture of the setup below). The result: I get a 2 ms delay with jitter below 1 ms (see the figure below).

The test setup — The small white box on the right is the DJ box, which converts an audio (or light) stimulus into an electrical signal suitable for the SMARTING mobi device. The DJ box is connected to the computer via an audio cable. The audio signal is transferred via the DJ box to SMARTING mobi and streamed via Bluetooth back to the computer, where the audio signal is recorded together with the triggers into an .xdf file, so that we can test the latency between the triggers and the audio.

The figure shows the alignment of 70 trials of a sound file played from Python using the script described in this test. The triggers are sent from Python via LSL and are recorded on our SMARTING Streamer

I hope this script sorts this out for you. If you have trouble setting it up in your environment, feel free to write a comment. In case you need the script, you can download it from here. Any other questions, let me know.

Preparing For The Cybathlon BCI Race

The Cybathlon 2020 competition is approaching fast and since we, the CyberTUM Cerebro student team from Munich, are a new team that has just been founded, there is a lot to do. Luckily, back in spring, when we were looking for suitable EEG systems and talked to different manufacturers, one of them, the rapidly growing EEG company mBrainTrain from Belgrade in Serbia, decided to support us by arranging a portable EEG device we could use.

Having worked with consumer EEG systems before, we knew that we needed a research-grade system with good signal quality. While we did not necessarily need a mobile system, i.e. a system to which we could connect via Bluetooth instead of through a bunch of cables, we thought that it would be useful for being more flexible. A mobile EEG system, such as mBrainTrain’s Smarting, would make it much easier for us to go to our pilot’s home for recording as we wouldn’t need to go through the hassle of unplugging and re-plugging a lot of cables before and after every recording session. A dry electrode system would have made the out-of-the-lab recording even more convenient, but we feared that it would come with too much of a drawback in terms of signal quality. Besides, the Smarting device from mBrainTrain had been recommended to us by one of our professors.

Setting up the system was easier than expected. After plugging in the connector from all the electrodes into the Smarting device and connecting it via Bluetooth with the computer, one can use a convenient GUI, provided by mBrainTrain, in order to visualize the impedances of the electrodes. As shown in the picture below, this real-time feedback of the impedance values indicates which electrodes are fine and which need more gel.

 

Nicolas setting up the Smarting system on Mert’s head. As you can see on the left, the prefrontal electrodes already have a good impedance (shown in green).

How to correctly apply the gel between the scalp and the electrodes wasn’t so easy at first, but after a while, we became pretty good at it. As is common practice, we used the wooden end of cotton swabs to push aside the hair underneath the electrodes and then pressed the gel out of the syringe in a circular movement right underneath the electrodes. The cotton end of the swabs was helpful to further spread the gel to the right position.

Once the impedances are sufficiently low, the streaming over LSL can start. Even though Bluetooth is notorious for not being reliable whenever there are obstacles between the sender and the receiver, we managed to keep the percentage of lost packets (i.e. blocks of EEG samples) pretty low most of the time. During a support session over Skype, Pavle Mijovic, an engineer from mBrainTrain, gave us several useful tips, such as lowering the sampling frequency to 250 Hz whenever we want to optimize for Bluetooth stability. He also suggested getting an extension cable for the dongle, because in our current setup the recording PC is positioned under a table that might block the signal. Furthermore, for getting a stable Bluetooth connection it was important to disable the Windows firewall.

As a tool for processing the incoming EEG data stream, we chose to use NeuroPype, a platform for real-time brain-computer interfacing and signal processing. One of the reasons for this was that the software suite includes an open-source graphical pipeline designer (coded in Python), which is quite intuitive to use. A great surprise for us was how easy it is to connect the Smarting signal acquisition software to NeuroPype. Essentially, we only had to change the name of the lab streaming layer (LSL) node to the one predefined by Smarting and then we already were able to receive the signals in real-time.
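
For a quick look at the same LSL stream outside of NeuroPype, a few lines of pylsl are enough. This is just a sketch; the stream name below is an assumption, so substitute whatever name the Smarting software actually announces on the network:

from pylsl import StreamInlet, resolve_byprop

# find the EEG stream broadcast by the acquisition software
streams = resolve_byprop('name', 'SMARTING', timeout=10.0)
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()  # one multi-channel EEG sample
    print(timestamp, sample[:4])             # print the first four channels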

 

Aleks, Matthijs, Ashish, Eesha and Svea working on the SSVEP system (see below for a description of the system)

 
 

Jin, Xingying, Florian and Ashish working on connecting NeuroPype to the Cybathlon 2020 BCI Game via our own UDP node.

For showing that our whole pipeline works, we designed a quick experiment based on the steady-state visual evoked potentials (SSVEPs), where one of our team members was very patient, suffering through looking at a flickering checkerboard pattern over and over again (let’s hope he wasn’t dreaming about it afterward!). You can see the results in the following video (read below for detailed explanations):

The experiment consisted of a subject focusing on a flickering stimulus whose frequency was switched between 10 Hz and 15 Hz by another person, depending on the desired kart direction. It is known that when a subject focuses on a simple visual stimulus with a constant frequency (e.g. a flickering checkerboard), the power spectrum of the occipital EEG shows significantly higher values at that frequency and its higher harmonics than in the no-stimulus case. In other words, if you look at a stimulus flickering at 10 Hz, the neurons in your visual cortex start firing synchronously at that same frequency. This interesting effect was first described in [1], and in most cases it does not depend on the volitional state of the subject (the subject only needs to passively focus on the stimulus), which means that it is not an active/intentional BCI paradigm.
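To make this frequency-tagging idea concrete, here is a self-contained sketch of how such a spectral peak can be detected with Welch's method; a synthetic 10 Hz oscillation buried in noise stands in for real occipital EEG:

```python
# Sketch of SSVEP peak detection with Welch's method. A synthetic 10 Hz
# oscillation buried in noise stands in for real occipital EEG; with a
# real recording, x would be the signal from O1 or O2.
import numpy as np
from scipy.signal import welch

fs = 250                          # sampling rate in Hz, as in our setup
t = np.arange(0, 4.0, 1.0 / fs)   # 4 seconds of data
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 2.0 * rng.standard_normal(t.size)

# Welch's method with 1 s segments gives a 1 Hz frequency resolution
freqs, psd = welch(x, fs=fs, nperseg=fs)

peak = freqs[np.argmax(psd)]
print(f"Strongest spectral peak at {peak:.1f} Hz")  # expected: 10.0 Hz
```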

Based on this central idea of the SSVEP paradigm, we designed a model in NeuroPype that computes the power at 10 Hz and at 15 Hz and compares their normalized values (we only use the signals from the O1 and O2 electrodes for this). If the normalized 10 Hz power is significantly higher than the normalized 15 Hz power, we conclude that the subject is looking at the 10 Hz visual stimulus and send the "turn left" command to the game (a slightly adapted version of the Unity Kart Racing tutorial). If the 15 Hz power is significantly higher, we send "turn right", and if there is no significant difference, the vehicle just drives straight ahead.
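Our actual model lives in NeuroPype's graphical pipeline, but the decision rule is conceptually equivalent to the following sketch. The band_power helper, the ratio threshold, the command strings, and the UDP address are all illustrative placeholders, not the exact values from our pipeline:

```python
# Conceptual re-implementation of our decision rule outside NeuroPype.
# band_power(), the 1.5 ratio threshold, the command strings, and the
# address 127.0.0.1:5005 are all illustrative placeholders; the real
# values depend on what the UDP listener on the Unity side expects.
import socket
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f0, half_width=0.5):
    """Average PSD in a narrow band around f0 (in Hz)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[mask].mean()

def decide(o1, o2, fs=250, ratio=1.5):
    """Compare 10 Hz vs. 15 Hz power on the averaged O1/O2 signal."""
    x = (np.asarray(o1) + np.asarray(o2)) / 2
    p10 = band_power(x, fs, 10.0)
    p15 = band_power(x, fs, 15.0)
    if p10 > ratio * p15:
        return "LEFT"      # subject looks at the 10 Hz stimulus
    if p15 > ratio * p10:
        return "RIGHT"     # subject looks at the 15 Hz stimulus
    return "STRAIGHT"      # no significant difference

# Send the resulting command to the game over UDP
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
command = decide(o1=np.random.randn(1000), o2=np.random.randn(1000))
sock.sendto(command.encode(), ("127.0.0.1", 5005))
```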

Like any other paradigm, SSVEP has its limitations. One of them, relevant for us, was that since we needed two control signals, there had to be two distinguishable power bands, which means the stimulus frequencies could not be integer multiples of one another (e.g. 10 Hz and 20 Hz would clash, since 20 Hz is already the second harmonic of the 10 Hz response). Secondly, higher-frequency stimuli were harder to detect, because the signal's power decreases with frequency. Most importantly, however, SSVEP is not intentional, which is required for the Cybathlon BCI competition and which we believe is hugely important for giving BCI users a sense of agency. For testing our pipeline, from signal recording with Smarting, through signal processing with NeuroPype, to control in a Unity game environment, SSVEP was a great first prototype paradigm, but the real challenge comes next, when we try to replace it with motor imagery control.


The checkerboard flickers at a frequency of 10 Hz, which can be nicely observed as a peak in the frequency spectrum of the EEG from Matthijs' occipital electrodes. The peak at 50 Hz is powerline noise.

References:

[1] D. Regan, Human Brain Electrophysiology: Evoked Potentials and Evoked Magnetic Fields in Science and Medicine, Elsevier, New York, NY, USA, 1989.

Aleksandar Levic and Nicolas Berberich for the CyberTUM Cerebro Team

Photos: Nicolas Berberich

Our Website: www.cybertum.org

Next EEG — new human interface

It was the beginning of 2014 and I was in Hallstatt, Austria, at the BNCI Horizon 2020 Retreat, an event aimed at discussing the future of BCIs (brain-computer interfaces) with over 60 experts from the field. Among the hot topics was the state of brainwave reading by means of electroencephalography (EEG). The special thing about this event was that I had a "secret agenda": without announcing it, I had brought with me the just-assembled, first working small Smarting mobile EEG device AND an Android phone with the (first beta version of the) application able to display signals in real time! It may not sound impressive, but believe me, it was. No one else had such a capability (!), and it is still rather unique today. While mounting the system I had an unexpected observer, a professor from the UK… her words still resonate in my mind: "that is not EEG"… Yes, the demo failed. The unit had been soldered manually and a loose connector was damaged on the way. But this was minor (although very embarrassing at the time); we had it fixed soon, many successful demos followed, and, more importantly, a lot of great scientific output came out of it.

With me then (also as a subject in the demo attempt) was my friend and advisor prof. Maarten De Vos from Oxford University. He also co-authored arguably the first paper showing the proof of concept for truly mobile, event-related potential (ERP) ready EEG (How about taking a low-cost, small, and wireless EEG for a walk?).

An immense amount of work preceded the device I showcased (also assisted by our partners from EasyCap GmbH), and I was a bit edgy and annoyed when he asked me, "So ok, now we have mobile EEG on the phone, so what's next?" I don't know exactly how I put it, but it was along the lines of "hold on for a second, we are not yet done here, this is a demonstrator, we will need more time to make this work and give it to other people" (which we did in the years that followed).

But the essence is the following: Maarten did ask the correct question, as we were all looking to the future. Milestone 1 was crossed though — we brought scientific excellence outside of the lab. It was now possible to do some really neat stuff (tracking athletes while they perform, demonstrating home-based stroke rehab, optimizing workplaces for mental fitness, and many others).

Milestone 2 was not planned upfront but became self-evident once Smarting (and other mobile EEG devices) came into use. You are wondering what it was, right? 🙂 Ok, EEG is mobile and we can do all this neat stuff, but hey, how do you feel with it on your head, knowing that the people around you are looking? (check the picture below).

People didn't really like three things: 1) wearing a cap, 2) having to wash their hair, and 3) other people looking at them strangely. We knew what we had to deal with, but we had an additional constraint: whatever we did, we had to ensure that we still recorded (good) EEG.

The Oldenburg University group run by prof. Stefan Debener (also our scientific advisor) decided to use Smarting, with its high mobility and mobile phone compatibility, to try out a new EEG modality. Instead of the classical 10–20 montage used in cap setups, they developed, with the help of the Dutch company TMSi, printed around-the-ear electrodes termed cEEGrids. These still needed a tiny amount of gel, but had their recording contacts in places without hair. This aimed to solve two of the problems: an easy-to-apply system and, more importantly, unobtrusive recordings. With cEEGrids (which are still in their infancy) and a headband to hold the amplifier, people could walk out of the lab and be less noticeable on the street.

Picture shows the mbt-made version of the cEEGrids

cEEGrids also enabled some really cool research — with many ongoing studies.

Milestone 2 was therefore almost reached. Almost. Subjects still had something strange on their head and residual gel present… It was also not so easy to mount, as testified by some users.

Milestone 3 was again in sight. We needed a scientific system that could be mounted in a seamless way, without hair washing, AND that looked more natural.

This was a tough one to reach. I have a whole story about this one, but I will keep it for a later time. Let’s just say it was a life-changing experience, with lots of mistakes, and it took a few years, tears, pain and who knows what. It was worth it though.

So, what is Milestone 3? Let's make a bullet list where each item starts with the expectation and moves towards the solution.

· We wanted to avoid hair washing.

What every researcher thinks of at this moment is — dry electrodes. Sometimes they are referred to as the wet dream of every EEG researcher (also confirmed by our recent Twitter poll). Again, long story short, this was out of reach, at least if one wanted a valid, research-grade EEG. The solution came with the so-called semi-dry electrodes: a sponge-based system that relies on saline solution (salty water). Special credits here go to the Greentek company, our collaborator and distributor for China. Together, we first made a Smarting-compatible cap that also didn't call for hair washing.

Smarting with semi-dry electrodes 

· We wanted unobtrusive looks.

So, how do we measure EEG and not get noticed? cEEGrids were the first step, but still not the final solution. How do we conceal the EEG? By a nice cap design? Nope. Didn't work.

The only plausible solution was to fit it into something that people ALREADY wear and that comes naturally. What do we wear on the head that is not strange? The list was not too large:

1) Headband

2) Winter cap

3) Glasses

4) Headphones

A headband was too sporty and out of fashion. A winter cap could only be worn during winter. Glasses were too tight, with little space for electrodes. Which leaves us with headphones.

Is there a natural connection between EEG and headphones, or did we just go with the only possible option?

Yes, it turns out there is. Music and the brain are deeply connected. Music can drive our mental states, and a lot of EEG research is oriented towards music and audition in general.

· Easy mounting.

I am guessing you are already bored of "it's a tough one" statements, so I will skip them. On the more technical side, EEG signals are in the µV range. That is tiny. Every single movement, adherence imperfection, or touch, virtually anything that causes micromovements, makes EEG signals go wild. This is mechanical noise, which is, to say the least, very dangerous for recordings. You need to ensure stable contact and proper pressure, just enough but not too much (no pain or discomfort after longer use). Besides, human heads are vastly different, and unless you wish to make 1000 super-expensive versions of the headphones, one for each head, you really have to be smart and careful with the geometry.
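To give a feeling for the scale: a common, simple way to flag such mechanical artifacts in practice is amplitude thresholding, since genuine scalp EEG rarely exceeds roughly 100 µV. A rough sketch, with the threshold as an illustrative convention rather than a rule:

```python
# Rough sketch: flagging likely motion/mechanical artifacts by amplitude.
# Genuine scalp EEG is mostly tens of microvolts; samples far beyond
# ~100 uV usually mean the electrode moved or the cable was touched.
# The threshold is an illustrative convention, not a universal rule.
import numpy as np

def flag_artifacts(eeg_uv: np.ndarray, threshold_uv: float = 100.0) -> np.ndarray:
    """Return a boolean mask marking samples that exceed the threshold."""
    return np.abs(eeg_uv) > threshold_uv

# Example: clean 10 uV background with one simulated electrode "pop"
rng = np.random.default_rng(1)
signal = 10.0 * rng.standard_normal(1000)
signal[500:510] += 500.0  # a brief mechanical artifact

mask = flag_artifacts(signal)
print(f"{mask.sum()} of {signal.size} samples flagged as artifact")
```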

But we did it. All of the above. And we have just released our EEG-enabled research headphones — Smartfones.

Smartfones by mbt

Is this the end of the road? By no means. Is it super-exciting? Oh yes! So much so, that I am going to get one pair for myself, taking huge advantage of being one of the mbt founders and getting a huge personal discount 🙂

What, then, is the end game?

You might have noticed that the headphones I just described still carry the "research" prefix or suffix. In reality, that means they will have sponges soaked in saline, inserted manually into each electrode cup. For each recording. Great for research, but not for everyday people.

But imagine if we all had our EEG screened during our normal day. We could get first-hand information on how our brain reacts during normal everyday activities. Oftentimes we can only remember past events and the feelings related to them, but we cannot KNOW what they were at the time of the event. We can often RECALL some day as having been productive, or ourselves as having been stressed out, and so forth. But is this the true feeling, or are we just being super-subjective, as we are? If we had our brains screened regularly, we could also detect early biomarkers of upcoming medical conditions. We could have home-based epilepsy screening. We could have a better gaming experience, one that considers our mental states when adjusting the game difficulty. We could have a smart music recommender. And the list goes on.

Milestone 4 is just this. Making EEG as seamless as drinking water, as sitting at a work desk, as answering a phone call. Getting rid of the sponges, the salty water, and any other consideration: we don't have to worry about a piece of hardware; it takes care of us.

If you think this is far-fetched, just go back a few years, when the only way to measure EEG was to sit in an electrically isolated room (a Faraday cage), not moving, looking straight ahead, surrounded by wires and cumbersome devices. And of course, all the while "feeling relaxed", as many scientific papers describe their subjects in the above conditions.

Yes, there is work to do. But I am confident it is within reach. And hey, hard work has never been more interesting and motivating.