We are seeking a PhD candidate for the project 'InDeep: Interpreting Deep Learning Models for Text and Sound: Methods and Applications' (NWO NWA 1292.19.399). This ambitious project aims to develop, apply, and fine-tune techniques to make modern deep learning models for text, speech, and music more transparent.
Would you like a PhD position where you do fundamental research, with immediate applicability, in Explainable AI? Where you work on evaluating and extending attribution methods for 'opening the black box' of deep learning models for text and audio processing? If you are excited about doing this kind of research in an interdisciplinary environment with smart and friendly colleagues and strong industrial collaboration, then you may want to join us.
The PhD candidate will have a supervision team consisting of Dr Willem Zuidema (NLP, interpretability, cognitive modelling) and at least one other senior member of the InDeep consortium.
The project is funded by an NWO Dutch Research Agenda grant to a consortium led by Dr Willem Zuidema of the Institute for Logic, Language, and Computation (ILLC) at the University of Amsterdam (UvA). The consortium also includes Tilburg University, the University of Groningen, Radboud University, and Vrije Universiteit Amsterdam as academic partners, and KPN, AIGent, TNO, Textkernel, Chordify, GlobalTextware, Deloitte, and Floodtags as industrial partners.

What are you going to do?
The goal of the PhD project is to develop and fine-tune data-driven interpretability methods for deep learning models of language, speech, and music. We start with the subclass of data-driven methods known as 'attribution methods', including those based on computing gradients, such as LRP, and those based on computing Shapley values, such as SHAP and Contextual Decomposition. Two key insights guide the work: (1) simplifications or approximations are always required when the model is nonlinear, and (2) no single method is best for all usages. Rather, the details of the application determine which of the necessary simplifications are justified.
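To give a flavour of the Shapley-value family of attribution methods mentioned above, here is a minimal, self-contained sketch (not project code, and deliberately brute-force): it computes exact Shapley values for a toy model by enumerating all feature coalitions, replacing 'absent' features with a baseline value. Real tools such as SHAP approximate this exponential sum for large models; the model and feature names here are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions by enumerating all feature coalitions.

    f        : model mapping a full feature vector to a score
    x        : the input to explain
    baseline : reference values substituted for 'absent' features

    Exponential in the number of features, so only feasible for tiny
    inputs; methods like SHAP approximate this sum in practice.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Standard Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                present = set(subset)
                with_i = [x[j] if j in present or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in present else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy 'model': a linear scorer over three word features (illustrative only).
weights = [0.5, -1.0, 2.0]
model = lambda v: sum(w * a for w, a in zip(weights, v))

attributions = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# For a linear model with a zero baseline, feature i's Shapley value is w_i * x_i.
```

For nonlinear models the interaction terms no longer cancel, which is exactly where the simplifications discussed above come in: different methods distribute those interactions differently, and which distribution is justified depends on the application.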
The ultimate goal of the project is to provide a comparison of the usefulness of different methods across many applications, and deeper insights into why certain methods work best for certain applications. We also aim to extend the range of data-driven interpretability methods, by combining and extending existing methods and by moving such methods beyond attributing responsibility to individual words (syllables, notes, chords). In particular, we work towards identifying more complex structure in the data that jointly leads to a particular classification or decision. Our overview of attribution methods, and their dos and don'ts, will also play a key role in our Industry Outreach & Education programme.

Tasks and responsibilities:
- Independently carrying out research, including writing and publishing three to four peer-reviewed articles.
- Submitting a PhD thesis within the period of appointment.
- Participating in the PhD programme of the ILLC.
- Participating in and contributing to the organisation of research activities and events at the ILLC, such as workshops and colloquia.
- Making a small contribution to the ILLC's educational mission by working as a teaching assistant for courses in your area of expertise and by assisting with the supervision of student research projects.
- Regularly presenting research results at international workshops and conferences, and publishing them in conference proceedings and journals.