Currently thinking about
- How do AI systems generate political output (whether prompted by propagandists or by voters themselves)?
- Will AI-mediated information improve the epistemic environment, or will pandering and optimization dampen potential benefits?
Approach to research
Across projects, my aim is to diagnose trade-offs clearly and provide evidence that helps institutions, platforms, and the public make better choices.
Main Munich projects
Postdoc at TUM
As part of the Digital Governance research group chaired by Yannis Theocharis, I helped design large cross-national surveys on toxic content, free-speech trade-offs, and content-moderation preferences. I co-wrote questionnaires, managed fieldwork, and ran pre-registered experiments. A through-line of this work is mapping what ordinary users actually want platforms to do: when they tolerate incivility, when they draw the line at intolerance, and when threats trigger removal preferences.
Digital governance → Global survey
Over 13,000 respondents across 10 countries: people value free expression, but majorities favor removing incitement to violence and worry about platform power. Many also feel that online hate and incivility have become unavoidable.
2025 Report → APSR article (2024)
Users' demand for moderation is limited overall; threats reliably trigger support for action, and target identity matters (e.g., threats are judged more harshly than insults, and there is more tolerance when the target is high-status, such as a billionaire).
Paper →
Social Media and Politics
How digital platforms shape political discourse, from influence operations to filter bubbles.