August 4, 2020
Host: Ivan Gligorijevic, CEO, mBrainTrain
Panelists: Suzanne Dikker (New York University),
Karl-Philipp Flosch (University of Konstanz),
Martin Imhof (University of Konstanz)
Humans act and interact constantly, every second of every day. The interactions between one human brain and another have long been a focus of neuroscience and have gained immense importance in the past few decades. For many researchers, the appeal of this methodology lies in the ability to monitor multiple brains in real time during social interactions and to compute inter-subject correlations.
Advancements in EEG, from lab-centric setups to mobile EEG suited for social and dynamic studies, have led many researchers to take up new experiments and projects on social and multi-human EEG.
Ivan Gligorijevic, CEO of mBrainTrain, hosted a discussion on multi-human EEG with panelists:
Suzanne Dikker from New York University, and
Karl-Philipp Flosch and Martin Imhof from the University of Konstanz, who have expertise from working on several EEG projects, especially multi-human studies.
Setting up an EEG experiment anywhere, inside or outside a laboratory, is quite challenging. Imagine setting up EEG experiments on multiple people simultaneously! It’s undoubtedly a tough job.
Why would anyone be willing to torture themselves in such complex settings and situations?
Let’s start from the beginning…
Why is there a need to use EEG in social settings?
There are many reasons to use EEG in a social setting. Taking my lab as an example: my research deals with measuring how people process different (health) messages and videos. In our setting, we measure people sequentially (one person at a time). These people are mostly college students, and they are not always the participants of interest (i.e., the proper target group). Having them as participants in every research experiment is not a good choice. So, the practical advice would be to go out, find your real target group, and measure as much as you can from it. This real-world applicability is a major strength of multi-human EEG.
From my point of view, the most exciting variable in all psychological processes is time. We know that in our interactions, a lot of things happen very intuitively and quickly. For example, humans form an impression of a person in less than a second. People act and react at different time scales. This itself can be measured: we can ask people (in a social setting) to think of an argument they had with their partner some moments ago. We give these subjects questionnaires and measure the cognitive processes at the in-between time scales, from the occurrence of the event to when we ask the questions.
However, it is still uncommon to measure things as they unfold online, i.e., while the event is happening. But now, with EEG and its fantastic time resolution, we have the ability to measure within social settings while the event is happening. It is quite interesting to capture these very small time scales before and after events, which were not considered so far because the technical possibilities did not exist. For this reason, multi-human EEG is quite fantastic.
I completely agree with what Karl-Philipp said. One major advantage of going not just into social settings but also into more real-world settings is bringing the equipment to the people in order to reach populations. Target populations often don’t get studied in our undergraduate laboratories.
As Karl-Philipp pointed out, there is a real difficulty in answering some research questions within a laboratory environment. It is much more comfortable (and seems to be a better way of studying people) to just bring the equipment to people’s homes or other dynamic social environments. This is undoubtedly a significant advantage of today’s tech developments.
And last, we now have the opportunity to test whether the processes and mechanisms that we’ve addressed within the laboratory setting (and that we’ve been trying to model through decades of research and important findings) apply in a real-world dynamic social context. Hypotheses about how we process social stimuli and how we behave in social settings can now be tested. So this is another advantage.
Why do a multi-human EEG? What can we find out from these studies?
There are several dimensions to this, and there are both pros and cons to doing multi-human EEG. In a social setting, we open ourselves up to more influences because the environment is less controllable. Yet therein lies the real wealth: as we already know, neural synchrony, whether we measure it in a group or sequentially, depends on the stimulus as well as on psychological variables of the perceivers (for example, whether they attend to it or have a specific engagement with it). The dynamic interactions in a social situation really influence the measurement of neural synchrony.
Let me summarize my studies on collaborative learning in the classroom (relationship between synchrony and learning: https://www.sciencedirect.com/science/article/pii/S0960982217304116) for more insights into real-time monitoring and measurement of social interactions.
We partnered with several New York City high schools, where we collaborated with the teacher and took over their entire classroom (in agreement with the teacher, of course) for the full school year. Initially, we gave all the kids an introduction to what psychology and neuroscience are, and also raised awareness of the research design we intended to apply in our study with the use of EEG. We also explained how we would measure the interactions between the whole class and the teacher using EEG while they were engaged in their regular classroom activities in real time. They were learning their standard biology content while we were recording their EEG. We made sure to use different teaching styles in different classes, and in later studies we measured the social closeness between the kids and a number of personality traits; we also assessed their learning. We had a number of factors that predicted the extent to which their brain activity synchronized.
With all these efforts, we were able to predict things like classroom social dynamics: how close kids felt to each other, their classroom engagement, how engaged a kid gets in a given activity, and a lot more. Attending to a stimulus was a strong predictor of synchrony; we also found that some personality traits predicted the extent to which activity became synchronized between kids and the teacher. For example, kids who were more group-oriented (i.e., liked being in social settings more) were more in sync with their classmates.
Question from the audience (Prof. Aina Puce):
How many people would you need to help when you set up 12 people for hyperscanning?
Taking the same example of my study: in the beginning, we created a sort of student–teacher–scientist partnership; in other words, we involved the whole school throughout the year.
The data collection lasted five or six months. This educational, inclusive model appeared to be very important; we found this out when we tried to address follow-up questions in the next study at another school. There, the kids were much less motivated to participate. When we asked them to look at the wall for two minutes, they would look at their phones, and some kids fell asleep during the class. So, this educational model makes a stark difference.
We have also discovered through other real-world research attempts (like in museums), as well as in lab studies, that it is super important to have your participants motivated. Motivation is crucial not only for the participants but also for the research assistants who are helping to get the project going.
Where do you get started with multi-human EEG or hyperscanning? Do you have your entire pipeline in place, or do you manage on the go?
My role is to build up the entire pipeline of data recording and processing for multi-human EEG in our lab, from scratch. Obviously, it’s quite challenging: there are no all-around, carefree solutions for this technique, as it is very new and emerging, and the community is small. The Smarting mobile EEG solutions are absolutely pioneering; however, it is impossible to make one solution for all needs and settings. Having these new EEG designs for real-world settings is a great strength. In most settings, routines don’t work, and continuous learning is required (testing out possibilities, new designs, data analysis, and so on). We need to tailor the techniques and knowledge to the specific needs of our experimental setting and methods of analysis. We need to make the entire pipeline as good as possible and also to decrease the time between data recording and finding the first results, while continually improving and validating our setting.
From my experience, I think that it depends; we have used several protocols and several different devices. The same device might work well in one study but not so well in another. For example, we were lucky that in our classroom study the setup worked well in terms of Bluetooth range, but it didn’t work for another school.
Later, we also found out that the number of devices we could record on a single computer was limited to 12, which was also the number of EEG devices we had. There were many moving parts; for example, there were certain protocols that weren’t set up on my computer, like the Lab Streaming Layer (which is very important for hyperscanning studies).
Generally, you can always face different and completely new issues when you apply a different setting.
A question from the audience (Francisco Parada): Are you building from scratch? Some are busy building on existing software like EEGLAB’s mobile extensions or others.
For recording, we are using the Smarting software. For the analyses, we try to be as independent as possible from toolboxes, because they are designed for classical EEG studies. We are now working with a method named inter-subject correlation (from the Parra Lab in New York), and it is not implemented in any of the toolboxes, so we need to build our own analysis pipeline without packages.
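The core quantity behind inter-subject correlation can be illustrated in simplified form. The full Parra Lab method extracts maximally correlated components across subjects, but a minimal sketch, assuming two subjects’ preprocessed EEG as (channels × samples) NumPy arrays, is just a per-channel Pearson correlation averaged over channels. The function name `pairwise_isc` is illustrative, not from any toolbox:

```python
import numpy as np

def pairwise_isc(eeg_a, eeg_b):
    """Simplified inter-subject correlation between two subjects.

    eeg_a, eeg_b: arrays of shape (n_channels, n_samples), time-aligned.
    Returns the Pearson correlation per channel, averaged over channels.
    """
    # Remove each channel's mean over time
    a = eeg_a - eeg_a.mean(axis=1, keepdims=True)
    b = eeg_b - eeg_b.mean(axis=1, keepdims=True)
    # Per-channel covariance and normalization terms
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    # Average the per-channel correlations into one ISC value
    return float(np.mean(num / den))
```

In practice one would compute this on correlated components rather than raw channels, and assess significance against surrogate (time-shifted) data, but the sketch captures the idea: how similarly two brains respond over the same time course.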
Question from the audience: In social settings, how do you synchronize the recordings with specific triggers in non-controlled settings?
It depends on your design. You can implement triggering through the already-mentioned Lab Streaming Layer (LSL) protocol. This is an event layer that is recorded by the Smarting Streamer into the same file. Depending on your design, you can either send your triggers manually (when the pace is slow enough), or, if you are showing videos in an experiment on a big screen, you can use software like Neurobs Presentation to send triggers via the LSL protocol over the LAN to the recording PC.
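As a rough sketch of that triggering route, assuming the pylsl package (the Python binding for LSL) is available on the stimulation PC: a one-channel string marker stream is opened and triggers are pushed into it, and the recording software on the LAN (e.g., the Smarting Streamer) resolves and records that stream alongside the EEG. The stream name and source ID below are made-up examples.

```python
def marker_stream_params():
    # LSL convention for event markers: one string channel, irregular rate
    # (nominal_srate=0 means samples arrive whenever a trigger is sent).
    # Name and source_id here are illustrative placeholders.
    return dict(name='ExperimentMarkers', type='Markers',
                channel_count=1, nominal_srate=0,
                channel_format='string', source_id='exp_markers_01')

try:
    from pylsl import StreamInfo, StreamOutlet  # optional dependency
    outlet = StreamOutlet(StreamInfo(**marker_stream_params()))
    # Send a trigger, e.g., fired by the stimulation software at video onset.
    outlet.push_sample(['video_onset'])
except Exception:
    # pylsl (or the liblsl library) is not installed; the parameters
    # above still document how the marker stream would be configured.
    outlet = None
```

Timestamps are attached by LSL itself, so markers sent this way can be aligned with each participant’s EEG stream during analysis.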
This blog post presents the transcript and interesting questions from the mbt talks webinar. Watch the complete webinar here: https://www.youtube.com/watch?v=2eC-V9wDsjs&