Are you interested in performing high-impact interdisciplinary research in Artificial Intelligence and its alignment with humans and society? The University of Amsterdam has recently started a flagship project on Human-Aligned Video AI (HAVA). The HAVA Lab will address fundamental questions about what defines human alignment with video AI, how to make this computable, and what determines its societal acceptance.
Video AI holds the promise to explore what is unreachable, monitor what is imperceivable, and protect what is most valuable. New species have become identifiable in our deep oceans, the visually impaired benefit from automated speech transcriptions of visual scenery, and elderly caregivers may be supported with an extra pair of eyes, to name just three of many applications. This is no longer wishful thinking. Broad uptake of video AI for science, business, and wellbeing is on the horizon, thanks to a decade of phenomenal progress in deep learning. However, the same video AI is also responsible for self-driving cars crashing into pedestrians, deep fakes that make us believe misinformation, and mass-surveillance systems that monitor our behaviour. The research community’s over-concentration on recognition accuracy has neglected the human alignment needed for societal acceptance. The HAVA Lab is an interdisciplinary lab that will study how to make the much-needed digital transformation towards human-aligned video AI.
The HAVA Lab will host 7 PhD candidates working together with researchers from all 7 faculties of the university, spanning video AI and its alignment with human cognition, ethics, and law, as well as its embedding in medical domains, public safety, and business. The lab has 9 supervisors in total, covering all 7 faculties for maximum interdisciplinarity. Depending on the specific topic, the PhD candidates also have a strong link to the working environment and faculty of their respective supervisors. The HAVA Lab has been given a unique central location at the library, an ideal hub for interdisciplinary collaborations. The PI of the lab is prof. dr. Cees Snoek.
The PhD position on human-aligned video AI for public safety will be supervised by prof. dr. Marie Rosenkrantz Lindegaard and prof. dr. Cees Snoek.

What are you going to do?
For this position, you will research video AI in a human-aligned manner in the context of public safety. Video AI has the potential to detect unsafe behavior and situations from camera recordings and provide unique insights, for example statistics on crimes that currently go undetected, unregistered, and unreported, or on incidents of self-policing and helping. However, specialist knowledge about behavior during subtle incidents of such interactions is needed to deal with human biases present in existing data. This requires new video-AI algorithms that recognize subtle and fine-grained behavior without perpetuating unwanted biases. Our ideal PhD candidate has a background in artificial intelligence and an affinity with the social and behavioural sciences.

Tasks and responsibilities
Your tasks will be to:
- Perform novel research towards video AI and its human-alignment in society;
- Actively collaborate within the interdisciplinary HAVA Lab;
- Present research results at international conferences and journals;
- Actively share your research with the public as well as in the social domain, in accordance with UvA guidelines;
- Assist in teaching activities such as lab assistance and student supervision;
- Complete a PhD thesis within the appointed duration of four years.