Threat and risk analysis is often performed in organizations to identify and mitigate security risks early on in the software development life cycle. Despite efforts to automate parts of the analysis [1-2], in practice teams of experts still manually analyze large architectural diagrams and discuss potential threats. Though the analysis is generally perceived as beneficial by practitioners, its quality is difficult to improve. Because it is executed manually, the quality depends heavily on the human analysts in the room. Collective domain knowledge and skills are certainly important factors, but evidence of groupthink [3] in similar analysis sessions suggests that knowledge is not always contributed (equally) by all participants. Previous experiments [4-7] with students and experts have focused on measuring the quantitative performance of teams in terms of analysis outcomes, but have neglected the human factors that come into play.
To date, no study has measured the effects of gender diversity (or diversity in general) on threat and risk analysis of IT systems. Yet these very IT systems rule our lives. Are risks perceived differently (or similarly) by male and female analysts?
RQ: What is the effect of gender diversity (and other diversity parameters) on threat and risk analysis?
Although it is not yet clear how gender diversity affects threat and risk analysis discussions, some scholars argue we are facing a diversity crisis [8]. Conversely, studies have demonstrated how gender diversity, once effectively incorporated, can be beneficial for decision making and progress [9-12].
We propose to scientifically explore the role of diversity in threat and risk analysis by: (i) leveraging an existing dataset [4] and piloting a study with master students, (ii) co-designing a diversity intervention tailored to the context of threat and risk analysis, and (iii) conducting experimental validations of the intervention in an academic setting.
Researchers:
- Katja Tuma
- Romy van der Lee
Students:
- Ella Josephine MacLaughlin
- Sarah Mei Kwakkelaar
In bioinformatics, specialized annotation datasets are often very small, which impedes the development of successful machine learning models for the associated prediction tasks, especially within deep learning. To overcome the challenge of limited data, knowledge transfer strategies such as transfer learning and multi-task learning can be pursued.
One prediction task that stands to benefit from knowledge transfer is epitope prediction. An epitope comprises the distinct amino acids of a protein involved in the binding of an antibody. Characterizing an antibody's binding site on its respective antigen is crucial for the efficient use of antibodies in diagnostics and biomedical research, as well as for a deeper understanding of the immune response. Although several machine learning models for epitope prediction have previously been developed, their performance is not yet highly accurate and their results are not reliable; epitope prediction thus remains a major unsolved problem in bioinformatics. It has been shown that epitope prediction can benefit from the inclusion of related annotation data: adding general protein-protein interaction (PPI) sites was effective in improving our Serendip-CE epitope predictor. Additionally, preliminary results demonstrated that multi-task learning is very effective for protein-protein interface prediction. As the binding of an antibody to a protein's epitope can be considered a specific form of PPI, we anticipate that epitope prediction will benefit strongly from knowledge transfer from this area.
In this project, we will systematically investigate which knowledge transfer approaches are most effective for epitope prediction. We will compare multi-task learning against (i) the classic transfer learning approach, in which pre-trained weights are simply used as the starting state, and (ii) an approach in which a pretrained network is used via regularization, penalizing any deviation from the pretrained network.
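As a rough illustration of the regularization-based variant (ii), a minimal PyTorch sketch could look as follows; the architecture, hyperparameters, and data below are placeholders, not the actual Serendip-CE setup or the pretrained PPI network.

```python
# A minimal sketch (assumed toy model and data) of regularization-based transfer:
# the fine-tuned network is penalized for deviating from a frozen copy of the
# pretrained weights, in the spirit of L2-SP regularization.
import copy
import torch
import torch.nn as nn

# Hypothetical per-residue classifier standing in for the pretrained PPI-interface network.
pretrained = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
model = copy.deepcopy(pretrained)                       # start from the pretrained weights
reference = [p.detach().clone() for p in pretrained.parameters()]

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_sp = 0.01                                        # strength of the deviation penalty (assumed)

def training_step(features, labels):
    """One update on epitope data, penalizing drift from the pretrained weights."""
    optimizer.zero_grad()
    logits = model(features).squeeze(-1)
    loss = criterion(logits, labels)
    penalty = sum(((p - p0) ** 2).sum() for p, p0 in zip(model.parameters(), reference))
    (loss + lambda_sp * penalty).backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for residue features and epitope labels.
x = torch.randn(32, 64)
y = torch.randint(0, 2, (32,)).float()
print(training_step(x, y))
```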
Researchers:
- Ilaria Tiddi
- Sanne Abeln
Students:
- Nichita Utiu
- Daniyal Selani
This project examines team performance in science meetings in order to explore how translational, interdisciplinary teams succeed. We propose a mixed-methods approach that (1) combines a meeting interaction network analysis based on graph theory ('meetomes') with an analysis of how team members actually interact; and (2) relates the findings from these analyses to post-meeting questionnaires filled out by all meeting participants.
This project draws on a unique dataset collected within the multidisciplinary Clinical Neuroscience section of the department of Anatomy and Neurosciences (ANW). Between 2017 and 2019, all weekly lab meetings were recorded (n=70), and post-meeting questionnaires were filled out by all participants after most meetings (n=48). Questionnaire items measured meeting satisfaction, group processes, individual perceptions of taking charge, and the meeting's translational value. The goal of these recordings and questionnaires was to better understand the group dynamics that may foster translational and interdisciplinary collaboration between team members. As a first step towards understanding these dynamics, we quantified interaction patterns between team members using network analysis, by creating 'meetomes'.
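To give a rough impression of the meetome idea, the sketch below (with made-up participants and speaking turns, not the actual ANW data) builds an interaction graph and computes node-level measures such as local clustering and betweenness centrality, the latter serving as one possible indication of 'hub' roles.

```python
# A toy sketch of a 'meetome': participants are nodes and speaking-turn exchanges are
# weighted edges. Participants and turns are made up; the real analysis uses the
# recorded lab-meeting data.
import networkx as nx

# Each tuple records that one participant addressed (or responded to) another.
turns = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "D"), ("D", "A"), ("B", "C")]

G = nx.Graph()
for speaker, addressee in turns:
    if G.has_edge(speaker, addressee):
        G[speaker][addressee]["weight"] += 1
    else:
        G.add_edge(speaker, addressee, weight=1)

# Local clustering captures how embedded a member is in tightly knit subgroups;
# betweenness centrality flags members who connect otherwise separate subgroups.
clustering = nx.clustering(G, weight="weight")
betweenness = nx.betweenness_centrality(G)

for member in sorted(G.nodes):
    print(f"{member}: clustering={clustering[member]:.2f}, betweenness={betweenness[member]:.2f}")
```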
Preliminary results show that (1) interaction patterns exhibit variable network topologies; (2) post-meeting assessments vary considerably; and (3) the two are related, such that team members rate a meeting as contributing more to their on-the-job effectiveness when they were part of many locally clustered interactions. As another example, we found that team members who are generally more inclined to take charge (i.e., who are more proactive) more often play a 'hub' role in connecting subgroups within the team.
What is missing, however, is an in-depth analysis of the thematic content of each meeting, and how multidisciplinary information is exchanged in situ. This is why we propose to combine expertise from both partners involved in this project.
Researchers:
- Joyce Lamerichs
- Linda Douw
Students:
- Aniek Antvelink
- Marloes Bet
Scholars agree that cultural changes in early modern Europe (c. 1500-1800) were both accompanied and precipitated by an information revolution. The use of printed media filtered down into local chronicles: hand-written narratives, usually produced by middle-class authors, that recorded events and phenomena the authors considered important (local politics, upheavals, climate, prices, crime, deaths, but also supra-local and international news). During the seventeenth and eighteenth centuries, these chronicles came to include news and topics deriving from a greater variety of information sources and, presumably, from an increasingly global information network. The question this project aims to answer is how the geographical scope of the chroniclers' access to information and news changed in the period 1500-1850. What places are mentioned by these authors in their accounts? How did the ratios between local, supra-local or regional, and international news develop over time?
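Purely as an illustration of the kind of quantification envisaged, the toy sketch below (with a hypothetical gazetteer and invented chronicle entries, not the project's corpus) counts local, regional, and international place mentions per period and derives their ratios.

```python
# A toy sketch of counting the share of local, regional, and international place
# mentions per period. The gazetteer and chronicle entries are invented examples.
from collections import Counter

# Hypothetical gazetteer mapping place names to a geographical scope category.
GAZETTEER = {
    "Haarlem": "local",
    "Amsterdam": "regional",
    "Utrecht": "regional",
    "Paris": "international",
    "Batavia": "international",
}

# Toy chronicle entries: (year, text).
entries = [
    (1620, "Heavy storm in Haarlem; grain prices rose in Amsterdam."),
    (1710, "News arrived from Paris and from Batavia about the war."),
]

counts = {}  # period -> Counter of scope categories
for year, text in entries:
    period = (year // 50) * 50  # group entries into 50-year periods
    scope_counts = counts.setdefault(period, Counter())
    for place, scope in GAZETTEER.items():
        scope_counts[scope] += text.count(place)

for period in sorted(counts):
    total = sum(counts[period].values())
    ratios = {scope: n / total for scope, n in counts[period].items()}
    print(period, ratios)
```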
Researchers:
- Erika Kuijpers
- Ronald Siebes
Students:
- Tinka de Haan
- Myrthe Buckens
The proper wording is key to effectively conveying a message. A news article that is not understandable for its target audience is useless, and online comments that are not phrased constructively easily lead to a toxic discussion culture. Human readers can intuitively judge the adequacy of a text by weighing content aspects, such as the relevance of the topic, against stylistic aspects, such as lexical and syntactic complexity.
Neural models are able to label text adequacy consistently, but they are not able to explain their decisions. The transformer architecture underlying most state-of-the-art models makes it almost impossible for users to understand how information is processed and evaluated. This is problematic when a human professional makes decisions based on the model outcome (e.g., as a gatekeeper for information). As a remedy, interpretability methods using, for example, attention patterns (Vig, 2019), gradient-based saliency (Li et al., 2016), subset erasure (de Cao et al., 2020), surrogate models (Ribeiro et al., 2016), or influence functions (Koh and Liang, 2016) are being developed to provide post-hoc explanations of model computations. Their applicability to language input, and in particular to longer texts, is currently an open research question.
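As an illustration of one of these methods, a minimal sketch of gradient-based saliency for a transformer classifier might look as follows; the model and input sentence are placeholders, not the adequacy classifier studied in this project.

```python
# A minimal sketch of gradient-based saliency (in the spirit of Li et al., 2016):
# the gradient norm of the predicted-class logit with respect to each token
# embedding is used as a per-token importance score. Model and text are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "The article explains the policy change in clear, simple terms."
enc = tokenizer(text, return_tensors="pt")

# Embed the tokens manually so gradients with respect to the embeddings can be kept.
embeddings = model.get_input_embeddings()(enc["input_ids"])
embeddings.retain_grad()
outputs = model(inputs_embeds=embeddings, attention_mask=enc["attention_mask"])
predicted_class = outputs.logits.argmax(dim=-1).item()

# Backpropagate the predicted-class logit; the per-token gradient norm is the saliency.
outputs.logits[0, predicted_class].backward()
saliency = embeddings.grad.norm(dim=-1).squeeze(0)

for token, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), saliency.tolist()):
    print(f"{token:>12s}  {score:.4f}")
```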
With our project, we aim to contribute to the growing field of model interpretability for responsible and reliable AI and to better understand the linguistic factors underlying text adequacy classification. This project is novel in the following ways:
- Most interpretability metrics can only be used to examine token-level phenomena on small input snippets for tasks such as sentiment analysis. We explore how interpretability metrics can be adapted to capture inter-sentential relations in longer texts.
- Computational linguists have developed a rich inventory of semantic and stylistic methods to represent the interplay of different factors in text adequacy. We analyze to what extent neural models account for knowledge similar to that captured by linguistically motivated models when determining text adequacy, and how this is reflected by interpretability metrics.
Researchers:
- Lisa Beinborn
- Florian Kunneman
Students:
- Bruna Aguiar Guedes
- Charlotte Pouw
Policy compromises have become increasingly contested. A case in point is the stalemate in the recent Dutch coalition formation, where the 'right block' of the conservative liberals (VVD) and the Christian democrats (CDA) do not want to bargain with the 'left block' of the social democrats (PvdA) and the greens (GroenLinks). Whilst key for vital democracies, political compromises present politicians with a double-edged sword: citizens support the democratic ideals of compromise, yet often oppose concrete compromises on political issues. Moreover, recent increases in polarization and electoral fragmentation, as well as an ever more dispersed media landscape, have brought about a permanent campaigning stage: politicians make more promises and use more persuasive, as well as negative and uncivil, rhetoric on a daily basis. The use of such language seems to imperil the acceptance of compromise. We therefore investigate the role of such campaign rhetoric in people's willingness to accept compromises.
In this project, we address this challenge using Virtual Reality (VR). In particular, we aim to develop a VR application in which participants are confronted with Intelligent Virtual Agents (IVAs) that act as politicians in a campaign. Using modern AI algorithms, these IVAs will be able to engage in dialogues with users using different rhetorical styles. From a Political Communication Science perspective, this will allow us to experimentally establish, in a controlled environment, when negative and uncivil rhetoric affects people's willingness to accept compromises (RQ1). From an AI perspective, this will enable us to develop novel algorithms to endow IVAs with more intelligent behaviour, in particular the ability to adopt different rhetorical styles (RQ2). The project will build upon a 2019 NIAA project (entitled In touch with VR) aimed at developing IVAs that produce a feeling of social co-presence. Specific AI techniques that will be used are dialogue management and natural language processing.
Researchers:
- Charlotte Gerritsen
- Mariken van der Velden
Students:
- Bibi Kok
- Rik Timmer
Artificial agents (including chatbots, robots, game-playing agents, …) can make use of interactions with human experts to acquire new skills in a flexible way. A variety of available algorithms allow human experts to interactively teach agents optimal or near-optimal policies in dynamic tasks. Depending on what the algorithm allows, the teaching signal can be one or a combination of evaluative feedback, corrections, demonstrations, advice, etc. (Li et al., 2019; Celemin et al., 2019; Ravichandar et al., 2020).
Download Project Report
Existing research on human-interactive agent learning has focused on designing efficient agent learners, but has largely ignored factors pertaining to the human teachers that can have a direct or indirect impact on the outcomes of the agent's learning process. We propose that modeling inter-individual differences in teaching signals should go hand in hand with designing efficient algorithms on the agent side, as it could help agents explicitly reason about, adapt to, and possibly influence different types of teachers.
This project is aimed at identifying how features of an individual’s teaching signal, including the type, timing, accuracy, or frequency of human input, can affect the performance of the agent as well as the teaching experience of the human. Through a series of studies involving human participants, we propose to investigate teaching signal variability in interactions between human teachers and state-of-the-art human-interactive machine learning algorithms, in typical reinforcement learning benchmark tasks. The output of this research will be a collection of models capturing inter-individual differences between teachers that can explain different learning outcomes on the agent side. Such models may unlock new possibilities for designing learning agents that are more efficient, more flexible, and more human-centered.
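As a very simplified illustration (not the project's actual experimental setup or algorithms), the sketch below pairs a TAMER-style learner, which builds a tabular model of human reinforcement, with a simulated teacher whose feedback frequency and accuracy can be varied to mimic inter-individual differences in teaching signals.

```python
# A toy sketch of learning from human evaluative feedback. The environment, teacher
# model, and parameter values are illustrative assumptions only.
import random

N_STATES, N_ACTIONS = 5, 2
OPTIMAL_ACTION = {s: s % N_ACTIONS for s in range(N_STATES)}  # assumed ground truth

def simulated_teacher(state, action, accuracy=0.9, frequency=0.7):
    """Return +1/-1 evaluative feedback, or None when the teacher stays silent."""
    if random.random() > frequency:
        return None  # the teacher gives no feedback this step
    correct = action == OPTIMAL_ACTION[state]
    if random.random() > accuracy:
        correct = not correct  # noisy teachers occasionally give wrong feedback
    return 1.0 if correct else -1.0

# Tabular estimate H(s, a) of the human reinforcement signal (TAMER-style).
H = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, epsilon = 0.2, 0.1

for episode in range(200):
    state = random.randrange(N_STATES)
    # epsilon-greedy action selection on the learned human-reward model
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: H[state][a])
    feedback = simulated_teacher(state, action)
    if feedback is not None:
        H[state][action] += alpha * (feedback - H[state][action])

learned_fraction = sum(
    max(range(N_ACTIONS), key=lambda a: H[s][a]) == OPTIMAL_ACTION[s]
    for s in range(N_STATES)
) / N_STATES
print(f"Fraction of states where the greedy action matches the teacher's optimum: {learned_fraction:.2f}")
```

Varying the `accuracy` and `frequency` arguments of the simulated teacher is one simple way to probe how such features of the teaching signal affect the agent's learning outcome.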
Researchers:
- Kim Baraka
- Daniel Preciado Vanegas
Students:
- Mehul Verma
- Raj Bhalwankar