Last summer, Dr. Jieying Chen from the Department of Computer Science and Dr. Chendi Wang from the Department of Political Science received funding through the Network Institute Academy Assistants programme to support two research assistants working on an interdisciplinary project on political bias detection in LLMs.
The project has led to two full-paper acceptances at the AI for Social Good track of the International Joint Conference on Artificial Intelligence (IJCAI), one of the leading international conferences in Artificial Intelligence.
The accepted papers are:
* Directional Hallucinations: Ideological Drift in News-Grounded LLM Question Answering
Authors: Chendi Wang, Liam Cunningham, Tom Yishay, Jieying Chen
* White-Hat Testing for the Ballot Box: A Framework for Election AI Auditing
Authors: Chendi Wang, Jieying Chen
The research investigates political bias, hallucination behaviour, and democratic risks in large language models, with a particular focus on elections and politically sensitive AI applications. By combining methods from computer science and political science, the project contributes to the development of more transparent, fair, and trustworthy AI systems.
This work directly advances the Network Institute’s Digital Society mission by addressing the societal impact of AI technologies on democratic processes, public information, and responsible AI governance.
We congratulate Chendi Wang, Jieying Chen, Liam Cunningham, and Tom Yishay on their upcoming publications and look forward to reading them – stay tuned to find them here.