When we see someone smile from ear to ear, most of us can't help but smile along. Emotions are infectious, and under some circumstances this can be problematic: think of stampedes, riots, bullying, or depression. A computer model of how emotion spreads in groups could help with prevention and training, but such a model needs accurate data to be validated and tuned.
We therefore conduct a series of experiments in a small-group setting where people play a quiz. Participants compete in two teams, and when they hear that they have won or lost, they tend to show a predictable emotion. By varying which of the other participants they see on screen, we gather data on how emotions spread.
At the same time, we are exploring automated techniques to recognize the emotions and gaze of the participants, because we expect that the human tendency to mimic the facial expressions of others plays an important role in how emotions spread in groups. This study is conducted by Nienke Noort (MSc student Computer Science) and Wafaa Aljbawi (MSc student Artificial Intelligence), and supervised by Erik van Haeringen and Charlotte Gerritsen.
Application development, technical support, and equipment are all provided by the Network Institute Tech Labs.
Project 1: Evaluating ML models for emotion recognition in a video call environment.
Methods traditionally used in empirical research on emotions include manual annotation by observers, self-report via questionnaires, and biometrics such as heart rate. Each of these methods has downsides that can limit the application and scope of emotion research, especially in social groups. With the rise of machine learning, various models for emotion recognition have been proposed that could potentially overcome some of these limitations. One type of model focuses on recognizing emotion from video footage. However, it is currently unclear how well these models perform compared to manual annotation in real-world applications, where they have to deal with variation in camera angles, lighting conditions, and noise.
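To give a sense of this class of models, the sketch below runs an emotion classifier frame by frame over a recorded video. It uses the open-source DeepFace library purely as one example of such a model; the file name, sampling rate, and output format are assumptions for illustration, not part of the study setup.

```python
# Sketch: per-frame emotion recognition on a recorded video, using the
# open-source DeepFace library as one example of such a model.
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture("participant_webcam.mp4")  # hypothetical recording
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if fps is unknown
frame_idx, labels = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Sample one frame per second to keep the analysis tractable.
    if frame_idx % int(fps) == 0:
        try:
            result = DeepFace.analyze(frame, actions=["emotion"],
                                      enforce_detection=True)
            labels.append((frame_idx / fps, result[0]["dominant_emotion"]))
        except ValueError:
            # No face detected (e.g. participant looked away): record a gap.
            labels.append((frame_idx / fps, None))
    frame_idx += 1

cap.release()
print(labels[:10])  # (timestamp in seconds, predicted emotion) pairs
```

In a real-world video call, the detection failures and misclassifications that this loop surfaces (occlusion, off-angle faces, poor lighting) are exactly the conditions mentioned above.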
In this project we evaluate state-of-the-art ML models for emotion recognition in a video call environment. How reliable are their ratings? When do they go wrong, and how could that be mitigated? Together with another student, you will conduct an experiment in which people play a quiz via Zoom. Familiarity with the technical side (machine learning) and the ability to work with it independently are requirements for this project.
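One common way to quantify how reliable the ratings are is to compute inter-rater agreement between the model's labels and a human annotator's labels, for instance with Cohen's kappa. The sketch below does this on hypothetical placeholder data; the label sequences are invented for illustration.

```python
# Sketch: agreement between model output and manual annotation via Cohen's
# kappa. The two label sequences are hypothetical placeholders standing in
# for time-aligned, per-frame emotion labels.
from sklearn.metrics import cohen_kappa_score

manual = ["happy", "happy", "neutral", "sad", "neutral", "happy"]
model  = ["happy", "neutral", "neutral", "sad", "neutral", "happy"]

kappa = cohen_kappa_score(manual, model)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```

Unlike raw accuracy, kappa corrects for agreement expected by chance, which matters when one emotion (e.g. neutral) dominates the recordings.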
The Tech Labs created a networked application that handles the entire flow of the study, controlled from a central computer. The setup included Tobii Spark eye trackers, OBS to capture video of both the screen and the webcam, and Zoom as the face-to-face communication platform.
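To illustrate the gaze-capture side of such a setup, the snippet below subscribes to a gaze stream using Tobii's Python SDK (tobii_research). The recording duration and print-based output are assumptions made for this sketch; the Tech Labs application embeds this kind of stream in a larger networked flow.

```python
# Sketch: streaming gaze data from a Tobii eye tracker with the official
# tobii_research SDK. Duration and console output are assumptions; the actual
# lab application records this as part of a centrally controlled study flow.
import time
import tobii_research as tr

def on_gaze(gaze_data):
    # Gaze points arrive as normalized display coordinates (0..1, top-left origin).
    left = gaze_data["left_gaze_point_on_display_area"]
    right = gaze_data["right_gaze_point_on_display_area"]
    print(f"left={left} right={right}")

trackers = tr.find_all_eyetrackers()
if not trackers:
    raise RuntimeError("No Tobii eye tracker found")
tracker = trackers[0]

tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, on_gaze, as_dictionary=True)
time.sleep(5)  # record for five seconds (assumed duration)
tracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, on_gaze)
```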
We used both lab spaces at the Tech Labs to physically separate the two teams and avoid unwanted contact.
Thanks to this automation, the test leaders only had to make sure the application was started; they could then focus on running the quiz via Zoom's chat function.