Centrum Wiskunde & Informatica (CWI) has a vacancy in the Distributed & Interactive Systems (DIS) research group for a

Postdoc on the subject of Trustworthy Human-AI Interaction for Media and Democracy (m/f/x)

We are looking for a talented postdoc who is interested in trustworthy and transparent human-AI interaction, in the context of news media and journalism.
A 2-year, fixed-term, full-time Postdoc position is available in the Distributed & Interactive Systems (DIS) research group at CWI. The project is a collaboration between CWI and the AI, Media, and Democracy (AIMD) lab in Amsterdam. You will work under the supervision of Dr. Abdallah El Ali and Prof.dr. Pablo Cesar, and you will be embedded within the AIMD lab.
Job description

The AI, Media and Democracy (AIMD) ELSA Lab's aim is to set up an experimental space and testbed for novel ways of applying AI in the media. Journalists, media professionals, designers, citizens, researchers, and public and societal partners bring in their challenges in using AI in the media. The lab investigates how AI-driven applications in the media change the public sphere, civic engagement, and economic competition, and explores novel ways of applying AI in the media.

The Distributed & Interactive Systems (DIS) research group at CWI combines data science with a strong human-centric, empirical approach to understanding the experience of users. This enables us to design and develop next-generation intelligent and empathic systems. We base our results on realistic testing grounds and data sets, and embrace areas such as ubiquitous and affective computing, human-centered multimedia systems, and human-AI interaction.
Encounters with AI-generated content can impact the human experience of algorithms, and more broadly the psychology of human-AI interaction. In particular, AI system disclosures can influence users' perceptions of media content. There is growing concern that, as generative AI becomes more widely used, manipulated content could easily spread false information. A key recent effort to mitigate these harms and risks is the European AI Act provision "Transparency Obligations for Providers and Users of Certain AI Systems and GPAI Models", which seeks to address the issue of AI system transparency.
The research scope broadly addresses the effective, trustworthy, and transparent communication of AI system disclosures. We aim to account for ethical and legal considerations, design and human-factors perspectives, as well as policy recommendations. As such, this role may involve engaging relevant stakeholders where necessary, including media organizations, policy makers, and AI researchers and practitioners. The initial focus is on the end-user (media consumer) perspective, and at later stages on the perspective of media organizations and the generative AI media production process itself. For this postdoc, we are specifically interested in how trust in AI systems can be fostered by creating better user interfaces and/or understanding human-AI system interactions at a cognitive, behavioral, and physiological level. By establishing user-centric designs for transparent AI disclosures, we can take steps toward ensuring a well-functioning democratic society.
We expect the postdoc researcher to be embedded within the AI, Media, and Democracy lab, which is located in Amsterdam’s city center.