What Makes a Good Teacher? Modeling Inter-Individual Differences in Humans Who Teach Agents

Artificial agents (including chatbots, robots, and game-playing agents, among others) can leverage interactions with human experts to acquire new skills in a flexible way. A variety of existing algorithms allow human experts to interactively teach agents optimal or near-optimal policies in dynamic tasks. Depending on what the algorithm supports, the teaching signal can be one or a combination of evaluative feedback, corrections, demonstrations, advice, etc. (Li et al., 2019; Celemin et al., 2019; Ravichandar et al., 2020).
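
To make the evaluative-feedback case concrete: one common approach is for the agent to regress an estimate of the human's reinforcement signal and act greedily on it. The following is a minimal sketch of that idea, assuming scalar feedback on state-action pairs; the class and parameter names are illustrative and not taken from the cited surveys.

```python
import random
from collections import defaultdict

class EvaluativeFeedbackAgent:
    """Minimal sketch of learning from evaluative human feedback.

    The agent maintains an estimate H(s, a) of the human's reinforcement
    signal and acts greedily with respect to it. All names here are
    illustrative assumptions, not a specific published algorithm.
    """

    def __init__(self, actions, lr=0.1):
        self.actions = actions
        self.lr = lr  # learning rate for feedback updates
        self.H = defaultdict(float)  # estimated human feedback per (state, action)

    def act(self, state):
        # Act greedily w.r.t. the learned feedback model, breaking ties randomly.
        best = max(self.H[(state, a)] for a in self.actions)
        candidates = [a for a in self.actions if self.H[(state, a)] == best]
        return random.choice(candidates)

    def give_feedback(self, state, action, feedback):
        # Move the estimate toward the human's scalar feedback,
        # e.g., +1 for "good" and -1 for "bad".
        key = (state, action)
        self.H[key] += self.lr * (feedback - self.H[key])
```

Corrections, demonstrations, and advice would each require a different update rule, but the same basic loop of agent action followed by human input applies.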

Existing research on human-interactive agent learning has focused on designing efficient agent learners, but has largely ignored factors pertaining to the human teachers that can have a direct or indirect impact on the outcomes of the agent's learning process. We propose that modeling inter-individual differences in teaching signals should go hand in hand with designing efficient algorithms on the agent side, as it could help agents explicitly reason about, adapt to, and possibly influence different types of teachers.

This project aims to identify how features of an individual's teaching signal, including the type, timing, accuracy, and frequency of human input, affect both the performance of the agent and the teaching experience of the human. Through a series of studies involving human participants, we propose to investigate teaching-signal variability in interactions between human teachers and state-of-the-art human-interactive machine learning algorithms on typical reinforcement learning benchmark tasks. The output of this research will be a collection of models capturing inter-individual differences between teachers that can explain different learning outcomes on the agent side. Such models may unlock new possibilities for designing learning agents that are more efficient, more flexible, and more human-centered.
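
As one way such differences might be captured, a teacher model could parameterize the signal features named above. The sketch below assumes a simple probabilistic teacher with accuracy, frequency, and delay parameters; these names and the model itself are illustrative assumptions, not the project's actual models.

```python
import random
from dataclasses import dataclass

@dataclass
class TeacherProfile:
    """Hypothetical parametric model of one teacher's signal characteristics."""
    accuracy: float   # probability that feedback matches the true action quality
    frequency: float  # probability the teacher responds at all on a given step
    delay: int        # feedback latency in steps (not used in this minimal sketch)

def simulate_feedback(profile, action_was_good, rng=random):
    """Return +1/-1 feedback, or None if the teacher stays silent this step."""
    if rng.random() > profile.frequency:
        return None  # infrequent teachers often give no feedback
    signal = 1 if action_was_good else -1
    correct = rng.random() < profile.accuracy
    return signal if correct else -signal

# Two hypothetical teacher types with different signal characteristics:
diligent = TeacherProfile(accuracy=0.95, frequency=0.8, delay=1)
sparse = TeacherProfile(accuracy=0.90, frequency=0.2, delay=3)
```

An agent equipped with such a model could, for instance, weight feedback from a low-accuracy teacher less heavily, or explore more when a sparse teacher stays silent.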

Researchers:

Students: