We are seeking a PhD candidate for the project 'InDeep: Interpreting Deep Learning Models for Text and Sound: Methods and Applications' (NWO NWA 1292.19.399). This ambitious project aims to develop, apply, and fine-tune techniques that make modern deep learning models for text, speech, and music more transparent.
In this PhD position you can combine insights from audio analysis and deep learning to help musicians play what they love. If you are excited about doing this kind of research in an interdisciplinary environment with smart and friendly colleagues and a strong industrial collaboration, then you may want to join us.
The project is funded by an NWO Dutch Research Agenda grant to a consortium led by Dr. Willem Zuidema of the Institute for Logic, Language, and Computation (ILLC) at the University of Amsterdam (UvA). The consortium also includes Tilburg University, the University of Groningen, Radboud University, and Vrije Universiteit Amsterdam as academic partners and KPN, AIGent, TNO, Textkernel, Chordify, GlobalTextware, Deloitte, and Floodtags as industrial partners.
What are you going to do
Can we deduce how hard a song is to play purely from the audio? Is the difficulty scale absolute, or should it be individual? This project aims to use unsupervised or semi-supervised learning methods to develop a measure of a song's 'playability': the level of musical expertise necessary to play along with a given song on a given instrument. Initially, the goal is to develop a tool that aspiring musicians can use to choose songs on the
Chordify platform that are on the right difficulty level for them, as well as a tool that Chordify can use to evaluate the capabilities of its users. As the project advances, we hope to expand the research to help music teachers choose appropriate songs for learning and examination.
The technological focus will be on state-of-the-art Explainable AI (XAI) techniques for deep-learning models. The position is embedded in a larger consortium that includes experts in applying deep learning to natural language processing and speech recognition. The consortium is particularly interested in the potential of attribution methods to build a bridge between deep-learning models and the more musicologically inspired models of musical structure used in music information retrieval (MIR).
The PhD candidate will have a four-person supervision team comprising Prof. Henkjan Honing (music cognition), Dr. Willem Zuidema (NLP, interpretability, cognitive modelling), Dr. John Ashley Burgoyne (music information retrieval), and Dr. Jonathan Driedger (director of research at Chordify).
Tasks and responsibilities:
- Independently carrying out research, including writing and publishing three to four peer-reviewed articles.
- Submitting a PhD thesis within the period of appointment.
- Participating in the PhD programme of the ILLC.
- Participating in and contributing to the organisation of research activities and events at the ILLC, such as workshops and colloquia.
- Making a small contribution to the ILLC's educational mission by working as a teaching assistant for courses in your area of expertise and by assisting with the supervision of student research projects.
- Regularly presenting research results at international workshops and conferences, and publishing them in conference proceedings and journals.