The use of automatic face analysis is spreading rapidly in our society. Technologies such as facial recognition are primarily used for security and law enforcement purposes, but they are now becoming popular in other areas, such as recruitment, education, and the analysis of facial expressions. However, facial recognition systems are consistently built on a binary gender construct and almost never take into account individuals who identify as non-binary. As a consequence, these human-machine interfaces reinforce existing prejudices against these communities. By raising essential questions about the conditions under which digitalization creates knowledge and identities, the hypothesis underlying the research project is that facial recognition systems lack non-binary databases.
In response to this phenomenon, the goal of the project is to lay the groundwork for a critical study of the relationships among gender, identity, and face recognition technologies. The aim is twofold: i) enhancing the visibility of non-binary identities, as the representation of minority communities is the foundation of fair AI systems; ii) studying the dichotomy between self-perceived gender and machine-classified gender through an analysis of current face recognition algorithms.
In more detail, the project is structured as follows: i) visual archive analysis: morphosyntactic analysis of facial images from historical archives (e.g., IHLIA) and from personal archives of individuals in non-binary communities; ii) prolegomena for a future gender-fair dataset: construction of a dataset by cataloging patterns that contribute to gender self-perception; iii) analysis of facial recognition algorithms: an interdisciplinary study of the patterns contributing to self-perceived gender and to machine-classified gender.
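The dichotomy studied in step iii) can be illustrated with a minimal sketch: a binary classifier has no way to output a non-binary label, so every non-binary individual is structurally misclassified. The record fields, labels, and function name below are illustrative assumptions, not part of the project's actual pipeline.

```python
# Hypothetical sketch: comparing self-identified gender labels with the
# output of a binary gender classifier. All data here is illustrative.

# Each record pairs a self-identified label with a classifier's binary output.
records = [
    {"self_id": "non-binary", "machine": "female"},
    {"self_id": "non-binary", "machine": "male"},
    {"self_id": "female", "machine": "female"},
    {"self_id": "male", "machine": "male"},
    {"self_id": "non-binary", "machine": "female"},
]

def misgendering_rate(records):
    """Fraction of records whose machine label differs from self-identification.

    A binary classifier can never output 'non-binary', so every non-binary
    record is necessarily a mismatch -- the structural bias under study.
    """
    mismatches = sum(1 for r in records if r["machine"] != r["self_id"])
    return mismatches / len(records)

print(misgendering_rate(records))  # 0.6: all three non-binary records mismatch
```

Even a classifier that is perfectly accurate on binary-identifying individuals produces a nonzero mismatch rate here, which is why dataset construction in step ii) must catalog identities outside the binary.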
Researchers:
Students: