Are you interested in performing high-impact interdisciplinary research in Artificial Intelligence and its alignment with humans and society? The University of Amsterdam has recently started a flagship project on Human-Aligned Video AI (HAVA). The HAVA Lab will address fundamental questions about what defines human alignment with video AI, how to make this computable, and what determines its societal acceptance.
Video AI holds the promise to explore what is unreachable, to monitor what is imperceptible, and to protect what is most valuable. New species have become identifiable in our deep oceans, the visually impaired benefit from automated speech descriptions of visual scenery, and caregivers for the elderly may be supported with an extra pair of eyes, to name just three of the many application examples. This is no longer wishful thinking. Broad uptake of video AI for science, for business, and for wellbeing is on the horizon, thanks to a decade of phenomenal progress in deep learning. However, the same video AI is also responsible for self-driving cars crashing into pedestrians, deepfakes that make us believe misinformation, and mass-surveillance systems that monitor our behaviour. The research community’s over-concentration on recognition accuracy has neglected the human alignment needed for societal acceptance. The HAVA Lab is an interdisciplinary lab that will study how to make the much-needed digital transformation towards human-aligned video AI.
The HAVA Lab will host 7 PhD candidates working together with researchers from all 7 faculties of the university, spanning video AI and its alignment with human cognition, ethics, and law, to its embedding in medical domains, public safety, and business. The lab has 9 supervisors in total, covering all 7 faculties for maximum interdisciplinarity. Depending on the specific topic, the PhD students will also have a strong link to the working environment and faculty of their respective supervisors. The HAVA Lab has been given a unique central location at the library, an ideal hub for interdisciplinary collaboration. The PI of the lab is prof. dr. Cees Snoek.
The PhD position on alignment between video AI and ethics will be supervised by prof. dr. Tobias Blanke and prof. dr. Cees Snoek, both from the University of Amsterdam.

What are you going to do?
A core part of the HAVA Lab is its commitment to ethical AI within the field of video AI. This project is looking for a PhD student to work together with us to investigate the ethical choices made during the design, training, and deployment of video AI. Embedded in the lab, you will systematically record these choices to understand where genuine moral choices lie, how to represent them in video AI algorithms, and how to develop new standards for labelling and machine learning that include them. Our ideal PhD candidate has a social science and humanities background with technical AI expertise.

Tasks and responsibilities
Your tasks will be to:
- Perform novel research on video AI and its human alignment in society.
- Actively collaborate within the interdisciplinary HAVA Lab.
- Present research results at international conferences and publish in international journals.
- Be active in sharing your research in the public and societal domain, in accordance with UvA Guidelines.
- Assist in teaching activities such as lab assistance and student supervision.
- Pursue and complete a PhD thesis within the appointed duration of four years.