Emotion contagion in groups via virtual representatives
Emotions are known to be infectious when people meet in person. Through interaction, the members of a group tend to converge in the emotions they express, resulting in the display of collective emotions. As more and more of our lives play out online, with video calls, social media, chatbots and perhaps in the future a metaverse, there is a growing need to understand the emotional development of online groups. In two studies we examine the ability of virtual humans to spread emotions.
In the first study, people play a quiz in two teams in an online environment. They can talk with each other, but instead of seeing each other on screen, each participant is visually represented by an avatar (a virtual human) that can mimic their facial expressions in real time. By enabling or disabling this facial mimicry, we can examine whether facial expressions of emotion conveyed via avatars contribute to the formation of collective emotions in groups.
To create this Virtual Zoom, the Network Institute’s Tech Labs built a custom networked application that interfaced with a Tobii Spark eye tracker to record where each participant was looking. A webcam was used to extract facial expressions with the FACSvatar software, which is based on Ekman’s Facial Action Coding System (FACS). These coded expressions were then used in our application to map each participant’s facial expression onto their avatar, and dedicated lip-sync software animated the avatar’s mouth live from the participant’s speech.
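The core of that pipeline is the mapping from coded action units to the avatar’s face. The Python sketch below illustrates the idea, assuming action-unit intensities arrive as OpenFace-style keys ("AU01_r", etc., scaled 0 to 5), the format FACSvatar works with; the Avatar class and the blend-shape names are hypothetical stand-ins for the engine-side API, not the project’s actual code.

# Minimal sketch: mapping FACS action-unit intensities to avatar blend shapes.
# Assumes OpenFace-style AU keys with intensities on a 0-5 scale; the Avatar
# class and blend-shape names are hypothetical stand-ins for the engine API.

AU_TO_BLENDSHAPE = {
    "AU01_r": "browInnerUp",  # inner brow raiser
    "AU04_r": "browDown",     # brow lowerer
    "AU06_r": "cheekRaise",   # cheek raiser
    "AU12_r": "mouthSmile",   # lip corner puller
    "AU15_r": "mouthFrown",   # lip corner depressor
}

class Avatar:
    """Stand-in for the engine-side avatar; stores blend-shape weights 0-1."""
    def __init__(self):
        self.weights = {}

    def get_weight(self, shape):
        return self.weights.get(shape, 0.0)

    def set_weight(self, shape, value):
        self.weights[shape] = max(0.0, min(1.0, value))

def apply_expression(avatar, au_frame, smoothing=0.5):
    """Map one frame of AU intensities onto blend-shape weights."""
    for au, shape in AU_TO_BLENDSHAPE.items():
        target = min(au_frame.get(au, 0.0) / 5.0, 1.0)  # rescale 0-5 to 0-1
        current = avatar.get_weight(shape)
        # Exponential smoothing damps frame-to-frame webcam-tracking jitter.
        avatar.set_weight(shape, current + smoothing * (target - current))

# Example frame: a smile (AU6 + AU12) at moderate intensity.
avatar = Avatar()
apply_expression(avatar, {"AU06_r": 2.5, "AU12_r": 3.0})
print(avatar.weights)  # after one frame: cheekRaise 0.25, mouthSmile 0.3, rest 0.0

In the real application this step runs once per webcam frame, so the smoothing factor trades responsiveness of the avatar against jitter from the facial tracker.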
The second study takes a first step towards exploring whether a VR environment could be used to train security personnel to work with emotional crowds, without the risks and costs of learning this in a real crowd. Participants take on the role of a football steward, tasked with observing virtual spectators on the stands of a stadium. The virtual spectators watch the game and react emotionally to match events. To learn whether the virtual humans affect the participants emotionally, and may therefore prepare them for a real crowd, the participants’ heart rate and eye movements are analysed.
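As an illustration of that analysis, the sketch below computes the average heart-rate change time-locked to match events, a common way to quantify emotional arousal. It assumes a heart-rate series sampled at 1 Hz and event timestamps in seconds; the function, its parameters and the window lengths are illustrative assumptions, not the study’s actual analysis.

# Sketch: event-locked heart-rate analysis around match events.
import numpy as np

def event_locked_hr(hr, event_times, fs=1.0, pre_s=5, post_s=15):
    """Mean heart-rate deviation around events, baseline-corrected per epoch."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for t in event_times:
        i = int(t * fs)
        if i - pre < 0 or i + post > len(hr):
            continue  # skip events too close to the recording edges
        epoch = np.asarray(hr[i - pre:i + post], dtype=float)
        epochs.append(epoch - epoch[:pre].mean())  # subtract pre-event baseline
    return np.mean(epochs, axis=0)  # mean bpm change, from -pre_s to +post_s

# Example: simulated arousal after a goal at t=30 s in a 60 s recording.
hr = np.full(60, 70.0)
hr[30:40] += 8
print(event_locked_hr(hr, event_times=[30], fs=1.0))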
This custom virtual reality application was also built by the Tech Labs. Because rendering large numbers of avatars is a serious performance problem even on high-end gaming computers, only the front 200 or so avatars were created in high resolution with full facial expressions; the remaining 1000+ avatars use a special low-resolution mesh-animation version. All avatars had emotional animations and facial expressions available, which were triggered randomly at moments synchronized with a specially made audio track. Because of the extreme computational load, this virtual environment cannot be presented to participants live. We therefore created 360-degree look-around videos that are displayed inside a virtual reality headset, enabling participants to look around in the video while it plays.
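The level-of-detail split could be expressed roughly as in the sketch below, which gives the high-resolution rig with facial blend shapes to the avatars nearest the camera and the cheap pre-baked mesh animation to everyone else. The function and the seat layout are illustrative assumptions, not the actual implementation.

# Sketch: level-of-detail assignment for a large virtual crowd.
import math

def assign_lod(avatar_positions, camera_pos, high_res_budget=200):
    """Give the high-res rig to the avatars nearest the camera, low-res to the rest."""
    nearest = sorted(range(len(avatar_positions)),
                     key=lambda i: math.dist(avatar_positions[i], camera_pos))
    lod = ["low"] * len(avatar_positions)
    for i in nearest[:high_res_budget]:
        lod[i] = "high"  # full facial blend shapes enabled
    return lod

# Example: 1200 seats in a 40-row grid, camera at the front of the stands.
positions = [(x, 0.0, row * 1.5) for row in range(40) for x in range(30)]
lods = assign_lod(positions, camera_pos=(15.0, 1.7, -5.0))
print(lods.count("high"), "high-res,", lods.count("low"), "low-res")

Even with a split like this the scene was too heavy to render in real time, which is why the final stimulus is a pre-rendered 360-degree video rather than a live scene.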
Researchers: Erik van Haeringen & Charlotte Gerritsen (Computer Science)
Development: Marco Otte (Network Institute)