Academy Projects 2025

Autonomy in the Attention Economy

Digital platforms have reshaped how we spend our time and attention. As
technologies increasingly mediate all aspects of human life, subtle but significant
misalignments often arise between what users want, what algorithms can infer
about users and their needs, and what platforms are designed to promote. These
misalignments undermine users’ autonomy over their behavior, profoundly affecting
their well-being and sense of identity over time.

How do such misalignments emerge? To what extent do algorithms accurately
represent who users are—or who they want to be? Despite growing awareness of
the psychological and social impacts of digital media, we still don’t understand how
these mismatches arise or how to quantify them.

To tackle these questions, our project aims to uncover mechanisms driving three
fundamental misalignments: (i) how users intend to spend their time versus how
they actually spend it, (ii) how users believe they spend their time versus their actual
usage patterns, and (iii) how users perceive their interests and identity versus how
algorithms construct them.

We will recruit adult smartphone users for a three-month study. Participants will log
their intentions and reflect weekly on their long-term goals. In parallel, participants
will donate their digital trace data (app usage, video viewing patterns, etc.) via the
Digital Data Donation Infrastructure (D3I). By combining subjective reflection and
individual-level digital behavioral data, we will make quantitative as well as
qualitative comparisons between users’ stated goals and preferences, and their
actual digital behavior.
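
To make the comparison concrete, a minimal sketch of misalignment (i) is shown below, assuming the D3I donations can be reduced to per-app usage minutes and that intentions are logged in comparable units; all column names and values are hypothetical, not the study’s actual schema.

```python
import pandas as pd

# Hypothetical inputs: weekly intention logs and D3I-derived usage records.
# Column names and numbers are illustrative only.
intentions = pd.DataFrame({
    "user": ["u1", "u1", "u2"],
    "app": ["video", "social", "video"],
    "intended_min_per_day": [30, 15, 60],
})
usage = pd.DataFrame({
    "user": ["u1", "u1", "u2"],
    "app": ["video", "social", "video"],
    "actual_min_per_day": [95, 12, 40],
})

# Misalignment (i): intended vs. actual time, per user and app.
gap = intentions.merge(usage, on=["user", "app"])
gap["overshoot_min"] = gap["actual_min_per_day"] - gap["intended_min_per_day"]
gap["overshoot_ratio"] = gap["actual_min_per_day"] / gap["intended_min_per_day"]
print(gap.sort_values("overshoot_min", ascending=False))
```

The same join-and-difference pattern covers misalignment (ii) once intended minutes are replaced by participants’ own estimates of their usage.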


Supervisors:

Academy Assistants: tba

Breaking the Illusion of Division: A Field Experiment on NU.nl to Reduce Perceived Polarization

Dutch citizens increasingly perceive society as more polarized than it actually is, which undermines social cohesion and trust in democratic institutions. Despite active moderation, NU.nl, a major news website reaching up to 40% of the Dutch population monthly, has inadvertently reinforced this perceived polarization. The effect is driven partly by a minority of disproportionately vocal, right-leaning commenters, and by highly divisive comments surrounding Israel’s war on Gaza. Together with NU.nl’s moderation team, we conducted interviews and reviewed internal NU.nl user data, which indeed revealed that a minority of highly active commenters generates a large share of polarizing content.

Moderate NU.nl readers who are less active in the comment section may have a false perception of how polarized it is. Consequently, they may silence themselves, falsely believing that their moderate opinion is a minority position. A constructive democratic debate, however, should include moderate voices and thus break this false perception of polarization. Our field experiment investigates: how can we a) reduce perceived polarization in online comments, and b) encourage participation of more “moderate” users on NU.nl?
Together with NU.nl, we designed and pre-tested two visual interventions to display above the comment sections of Israel-Palestine news articles.

Intervention 1 corrects misperceptions by visualizing people’s real, moderate opinions on a general statement about the Israel-Palestine conflict that is perceived to be polarized but is not. Intervention 2 draws on psychological theory and internal NU.nl research to create a welcoming space for moderate users, emphasizing shared motivations. We will analyze changes in the quality of user comments on NU.nl (DV1) and changes in moderate users’ commenting activity (DV2) while each intervention is displayed.
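
As a first pass at the two dependent variables, a difference-in-means across conditions might look like the sketch below; the comment-quality score, the operationalization of “moderate,” and all column names and values are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical per-article outcomes; "condition" marks which intervention
# (if any) was displayed above the comment section.
articles = pd.DataFrame({
    "condition": ["control", "intervention1", "intervention2"] * 2,
    "quality_score": [0.42, 0.55, 0.51, 0.40, 0.58, 0.49],   # DV1: comment quality
    "moderate_share": [0.18, 0.25, 0.30, 0.20, 0.27, 0.33],  # DV2: share of moderate commenters
})

# Per-condition means as a first look; the real analysis would add
# article-level covariates and proper statistical inference.
print(articles.groupby("condition")[["quality_score", "moderate_share"]].mean())
```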

We also test the boundary effects of our proposed mechanism: beyond an “optimum level,” the psychological safety elicited by our interventions may discourage moderate users from commenting if they believe other moderates will comment instead (the “online bystander effect”).


Supervisors:

Academy Assistants: tba

EN-SPEAK: English Speech and Pronunciation Enhancement AI Kit

This project builds on a recent collaboration between the Centre for Global English (CGE) and the Computational Linguistics & Text Mining Lab (CLTL).

The CGE runs a successful MOOC, English Pronunciation in a Global World, which has gathered over 130,000 students from 191 countries. MOOCs offer many benefits, but some challenges remain, e.g., the lack of personal feedback on student work. EN-SPEAK will help tackle this issue by developing an automated pronunciation checker capable of providing personal, adequate, and timely feedback on the quality of pronunciation.

The project comprises two components: A) the creation of an open English Learner’s Pronunciation Dataset, and B) the development of an automated pronunciation checker to aid in the diagnosis of English pronunciation mistakes. Component A) will address the lack of annotated learner speech data. Learner data is valuable and rare, especially when annotated, and crucial for the development of education technology, as it helps us understand the real needs of students (e.g., where they struggle, what kind of help they need). The existing MOOC provides a unique opportunity to analyze large amounts of data from L2 speakers across the world and identify areas of struggle in English pronunciation. We will then use the results of this analysis to generate high-quality synthetic data using speech synthesis technology, producing artificial learner data that reflect students’ challenges. In this way, we will derive two unique and complementary datasets.
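
One way such error-reflecting synthetic speech could be produced is by injecting documented L2 phoneme confusions into a phoneme string before synthesis. The sketch below uses espeak-ng’s phoneme input ([[...]] in Kirshenbaum notation) purely as an illustration; the confusion table, the example word, and the file names are assumptions, not the project’s chosen pipeline.

```python
import subprocess

# Sketch: inject a common L2 confusion (/θ/ -> /t/, "think" -> "tink") into a
# phoneme string, then synthesize both variants with espeak-ng, which accepts
# [[...]] phoneme input in Kirshenbaum notation and writes a WAV file via -w.
CONFUSIONS = {"T": "t"}  # Kirshenbaum "T" = /θ/; per-character substitution
                         # works here because both symbols are one character

def synthesize(phonemes: str, path: str) -> None:
    subprocess.run(["espeak-ng", "-w", path, f"[[{phonemes}]]"], check=True)

canonical = "TINk"  # "think": /θɪŋk/ in Kirshenbaum notation
learner = "".join(CONFUSIONS.get(ch, ch) for ch in canonical)  # -> "tINk"
synthesize(canonical, "think_target.wav")
synthesize(learner, "think_learner.wav")
```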

Component B) will use the data analyzed and generated in A) to inform the development of an automated pronunciation checker. A proof of concept of this tool is currently under development. We will use state-of-the-art models for speech recognition/transcription to determine the quality of pronunciation. We will test different methods (e.g., ensemble models, fine-tuning paradigms) to create an automated pronunciation checker suitable for integration into CGE’s MOOC.
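
As an illustration of the kind of proof of concept described above, the sketch below scores pronunciation as phoneme-level agreement between a CTC phoneme recognizer’s output and a reference transcription. The specific checkpoint, the Levenshtein-based score, and the function names are assumptions rather than the project’s actual design.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Assumed setup: a publicly available phoneme-level CTC checkpoint.
MODEL = "facebook/wav2vec2-lv-60-espeak-cv-ft"
processor = Wav2Vec2Processor.from_pretrained(MODEL)
model = Wav2Vec2ForCTC.from_pretrained(MODEL)

def predicted_phonemes(waveform, sampling_rate=16_000):
    """Transcribe a 1-D float waveform into a list of phoneme symbols."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0].split()

def edit_distance(a, b):
    """Plain Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def pronunciation_score(waveform, reference_phonemes):
    """Score in [0, 1]; 1.0 means the prediction matches the reference exactly."""
    dist = edit_distance(predicted_phonemes(waveform), reference_phonemes)
    return 1.0 - dist / max(len(reference_phonemes), 1)
```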

Supervisors:

Academy Assistants: tba

News Literacy For All: Adapting the N-NWS14 for Audiences with Lower Functional Literacy

Misinformation poses a threat to democratic processes and informed public discourse, and countering it requires a population with strong news literacy skills. News-literate individuals can critically evaluate the reliability of information and make informed decisions in our complex media environment. However, the tools we use to measure news literacy often rely on dense text and complex language, making them inaccessible to a vulnerable group: people with low functional literacy. Functional literacy is the ability to use reading, writing, and math effectively in daily life. In the Netherlands, over 2.5 million adults struggle with these skills, making it difficult for them to engage with news or assess the credibility of information.

Surprisingly, the group that is most vulnerable to misinformation is often excluded from both news literacy assessments and interventions. Consequently, we lack insight into the news literacy levels of this group, making it difficult to tailor interventions to their needs. It also prevents us from evaluating the effectiveness of existing interventions, as we cannot accurately measure their impact. This project aims to fill this gap by adapting our recently developed news literacy scale to be more accessible for individuals with lower literacy levels. This can help researchers and practitioners gain deeper insight into this group’s current news literacy levels, and ultimately develop more effective interventions to empower this vulnerable group.

The project will begin with a literature review in which the Academy Assistants map out which linguistic, (audio)visual, and survey design features increase or decrease the accessibility of measurement tools. Interviews with experts in (news) literacy and science communication will further guide the redesign. The revised tool will be pilot-tested and adjusted based on feedback from the target group. Without such inclusive measurements, we risk designing solutions that fail to reach those who need them most to navigate our complex digital society.


Supervisors:

Academy Assistants: tba



PoliBias: Cross-National Analysis of Political Bias in Language Models through Parliamentary Voting Patterns

As large language models (LLMs) become central to political communication, news generation, and decision support, their biases carry real-world consequences. While bias studies in AI have addressed race and gender, political bias remains underexplored, despite its critical role in democratic societies.

In our initial work on the PoliBias benchmark, developed through 2 AI MSc and 6 AI BSc theses supervised by Dr. Jieying Chen, we found that popular LLMs consistently display left-leaning tendencies and exhibit negative bias towards right-wing or conservative parties. These findings, based on over 13,000 parliamentary motions from the Netherlands, Norway, Spain, and the US, raise important concerns. Yet, we currently lack the political science expertise to interpret what such patterns mean for the real-world use of LLMs, for instance, when they are deployed in text summarization, policy analysis, or political Q&A.
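
For readers unfamiliar with the setup, the core measurement behind such a benchmark can be sketched as follows: ask the model to vote on each motion and compare its votes with each party’s recorded votes, so that systematically higher agreement with one ideological bloc indicates a lean. The prompt wording, the query_llm stub, and the data layout below are illustrative assumptions, not the actual PoliBias pipeline.

```python
from collections import defaultdict

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-model API call.
    raise NotImplementedError("plug in a real model call here")

def model_vote(motion_text: str) -> str:
    prompt = (
        "You are a member of parliament. Vote on the following motion.\n"
        f"Motion: {motion_text}\nAnswer with exactly one word: FOR or AGAINST."
    )
    answer = query_llm(prompt).strip().upper()
    return "FOR" if answer.startswith("FOR") else "AGAINST"

def party_agreement(motions):
    """motions: iterable of dicts with 'text' and 'party_votes'
    ({party: 'FOR'/'AGAINST'}). Returns per-party agreement rates with the
    model; a consistent gap between ideological blocs suggests a lean."""
    hits, totals = defaultdict(int), defaultdict(int)
    for m in motions:
        vote = model_vote(m["text"])
        for party, recorded in m["party_votes"].items():
            totals[party] += 1
            hits[party] += (vote == recorded)
    return {party: hits[party] / totals[party] for party in totals}
```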

This project seeks to bridge that gap. Dr. Chendi Wang, the only member from the Department of Political Science within the Network Institute, will join the project to help interpret the results in light of party systems, ideological polarization, and democratic discourse. Her expertise is crucial to assess which model behaviors constitute problematic bias, which reflect political structures, and what ethical or policy implications arise.

To publish this work at top AI venues, it is essential to enrich the current computational framework with deeper, theory-driven insights. This requires a close collaboration with a political science expert, enabling us to develop more context-sensitive, comparative, and socially meaningful analyses that go beyond technical evaluation.

Together, we will extend the benchmark, analyze ideological and sentiment patterns in model outputs, and explore how LLMs may reinforce or challenge political power dynamics. The project will train two Academy Assistants, one from each field, who will collaborate on data analysis and interpretation, producing a benchmark and publication that combine computational insight with theoretical depth.

This collaboration directly advances the Digital Society mission by addressing fairness, transparency, and the democratic impact of AI systems.


Supervisors:

Academy Assistants: tba