Postdoc: Gesture Generation in Face-to-Face Dialogue
Academic fields: Engineering; Behaviour and society; Language and culture
Job types: Postdoc; Research, development, innovation; IT
Education level: Doctorate
Weekly hours: 32 hours per week
We are looking for a postdoctoral researcher with experience in generative AI, multimodal representation learning, and modelling face-to-face dialogue for our NWO-funded project “Grounded Gesture Generation in Context: Object- and Interaction-Aware Generative AI Models of Language Use”. The preferred start date is 1 February 2026, but this is negotiable.
Face-to-face conversation is the primary setting for human language, where meaning and coordination arise from the interplay between speech, prosody, gesture, facial expression, and gaze. Virtual humans are now prevalent in social media, education, and healthcare; however, their non-verbal behaviour, especially gesture, still lags behind the state of the art. This project tackles that gap by building generative AI models that produce context-aware, grounded gestures: responsive to objects and to interlocutors, and aligned with speech and visual signals for richer, more natural interaction. In this project, you will carry out research as described in the project proposal.
To do so, you will have full access to motion-capture and virtual-reality labs, 3D animation tools, and GPU-based high-performance computing at the MPI. You will also be embedded in a rich theoretical and computational environment supported by the Multimodal Language Department.
Requirements
Essentials
Desirable
What We Offer You
A challenging position in a scientifically engaged organisation. At the MPI, you contribute to fundamental research. In return for your efforts, we offer you:
Application Procedure
Do you recognise yourself in the job profile? Then we look forward to receiving your application. The deadline to submit your application is Friday, 14 November 2025, 23:59 (Amsterdam time). You can apply online via the apply button. Applications should include the following information:
About the Project
This project will be advised by Dr. Esam Ghaleb, a research scientist in the Multimodal Language Department working in the areas of computer vision, machine learning, and behaviour and dialogue modelling. If you have questions about the position that you wish to discuss before applying, please contact Esam Ghaleb: esam.ghaleb@mpi.nl.
The Employer
About Our Institute
The Max Planck Institute (MPI) for Psycholinguistics is a world-leading research institute devoted to interdisciplinary studies of the science of language and communication, with departments on the genetics, psychology, development, neurobiology, and multimodality of these fundamental human abilities. We investigate how children and adults acquire their language(s), how speaking and listening happen in real time, how the brain processes language, how the human genome contributes to building a language-ready brain, how multiple modalities (speech, gesture, and sign) shape language and its use across diverse languages, and how language is related to cognition and culture and shaped by evolution.
We are part of the Max Planck Society, an independent non-governmental association of German-funded research institutes dedicated to fundamental research in the natural sciences, life sciences, social sciences, and the humanities.
The Max Planck Society is an equal opportunities employer. We recognize the positive value of diversity and inclusion, promote equity, and challenge discrimination. We aim to provide a working environment with room for differences, where everyone feels a sense of belonging. Therefore, we welcome applications from all suitably qualified candidates.
Our institute is situated on the campus of Radboud University and has close collaborative links with the Donders Institute for Brain, Cognition and Behaviour and the Centre for Language Studies at Radboud University. We also work closely with other child development researchers as part of the Baby & Child Research Center. Staff and students at the MPI have access to state-of-the-art research and training facilities.
About the Multimodal Language Department
The Multimodal Language Department aims to understand the cognitive and social foundations of the human ability for language and its evolution, focusing on its multimodal aspects and crosslinguistic diversity. Research at the department combines multiple methods, including corpus and computational linguistics, psycho- and neurolinguistics, machine learning, AI, and virtual reality, and involves populations ranging from speakers of signed and spoken languages to younger and older participants from typical and atypical populations. The department provides training opportunities in state-of-the-art multimodal language analysis (such as motion capture and automatic speech recognition) as well as in neuropsychological and psychological methods related to multimodal language, together with frequent research and public engagement meetings and support from an excellent team of researchers in linguistics and psycholinguistics.