Intelligent Virtual Agents (IVAs), interactive human-like characters, have become widely used for numerous applications, ranging from healthcare decision support [2] to communication training. In such applications, IVAs play various roles in which they interact with users, for instance as an instructor or teammate [4]. Interestingly, in the vast majority of these cases, IVAs are friendly and supportive. In contrast, the area of IVAs with a ‘negative’ attitude towards users (i.e., ‘virtual bad guys’) has been largely under-researched.
However, ‘virtual bad guys’ are a highly interesting object of study for at least two reasons:
1) Several prominent figures have recently expressed concern that autonomous systems might evolve to a point where they threaten human beings. Controlled studies can provide a better understanding of how humans would react to such threatening AI systems.
2) The concept of virtual bad guys opens up a range of useful applications, including virtual training of aggression de-escalation skills (e.g., for security personnel), Virtual Reality exposure therapy, and anti-bullying education.