Are you interested in improving the interpretability, robustness and safety of AI by integrating causal reasoning? The Causality team in the AMLab group at the University of Amsterdam is looking for 2 PhD students to work on this topic in the NWO VIDI-sponsored project CANES (CAusal NEuro-Symbolic approach to integrating perception and abstract reasoning), led by prof. Sara Magliacane.
Join us!
AI is a vital part of many real-world systems, but several concerns remain about its lack of interpretability, robustness to changes, and adherence to safety regulations. The urgency and severity of these issues have spurred the proposal of the EU AI Act, aimed at ensuring the safe development of AI and avoiding its misuse. These issues might be exacerbated by the lack of formal guarantees when explaining the behavior of AI systems in terms of human-interpretable, high-level concepts. While many approaches in explainable AI and neuro-symbolic methods focus on similar issues, they are prone to learning concepts that do not match their intended meaning, especially when different concepts are correlated with each other.
CANES will address these issues by developing a theoretically principled framework that enables safe, interpretable, and robust AI by integrating perception and reasoning through causality. We will develop methods to learn Causally Grounded Concepts from unstructured data, i.e., concepts with theoretical guarantees such as error bounds and sample complexity, in challenging settings with continuous- and discrete-valued concepts (e.g., symbols in neuro-symbolic methods), correlations between concepts, and where we do not have (all) labels for the concepts, but only a weak supervision signal (e.g., labels of a downstream task) and background knowledge about the concepts. We also envision practical applications of this framework in cross-species translation (transferring findings from animal studies to humans) in drug discovery, dynamical systems for long-horizon time series forecasting, and verifiably safe reinforcement learning.
While both PhD positions are part of the same project, each has an independent research direction, allowing for a level of autonomy. The first project (“Discrete concepts”) will focus on developing strong theoretical guarantees beyond the current work on independent continuous concepts, to also allow for dependent and discrete concepts. The second project (“Weak supervision”) will instead focus on providing similar guarantees by integrating background knowledge (e.g., logical constraints) and weak supervision (e.g., labels from downstream tasks).
This is what you will do
The goal is to develop methods to learn concepts from unstructured data with theoretical guarantees, such as error bounds and sample complexity, in challenging settings with continuous- and discrete-valued concepts (e.g., symbols in neuro-symbolic methods) and only weak supervision signals (e.g., labels of a downstream task or symbolic constraints).
You will perform machine learning research, developing a framework for learning interpretable and robust concepts with theoretical guarantees based on causal reasoning. You are able to work within a team, while also proactively tackling research challenges with input and guidance from your advisors.
You will be able to choose between the two projects described above (“Discrete concepts” or “Weak supervision”), each of which allows you a level of autonomy, while still ideally contributing to the same framework.
As part of the project, you will ideally also perform a 6-month to 1-year research visit to the Saarland Informatics Campus in Saarbrücken, Germany, where prof. Magliacane leads the newly founded Causal Machine Learning group.
Tasks and responsibilities:
- invent, evaluate and describe novel algorithms for learning concepts with theoretical guarantees from unstructured data;
- publish and present research results at international conferences and workshops, and in journals;
- pursue and complete a PhD thesis within the appointed duration of four years;
- assist in teaching activities, such as teaching labs and tutorials or supervising bachelor and master students.
What we ask of you
Your experience and profile
- MSc in artificial intelligence, statistics, computer science or a related field;
- Strong background in machine learning and/or statistics;
- Preferably, prior knowledge of or experience with causality, explainable AI and/or neuro-symbolic approaches.
This is what we offer you
A temporary contract for 38 hours per week for the duration of 4 years (the initial contract will be for a period of 18 months and, after a satisfactory evaluation, it will be extended for a total duration of 4 years). The preferred starting date is October 2026. This should lead to a dissertation (PhD thesis). We will draft an educational plan that includes attendance of courses and (international) meetings. We also expect you to assist in teaching undergraduate and master students.
The gross monthly salary, based on 38 hours per week and dependent on relevant experience, ranges from € 3,059 to € 3,881 (scale P). This does not include the 8% holiday allowance and the 8.3% year-end allowance. The UFO profile Promovendus is applicable. A favourable tax agreement, the ‘30% ruling’, may apply to non-Dutch applicants. The Collective Labour Agreement of Universities of the Netherlands is applicable.
Curious about our extensive secondary benefits package? You can read more about it here.
You will work in this team
The mission of the Informatics Institute (IvI) is to perform curiosity-driven and use-inspired fundamental research in Computer Science. The main research themes are Artificial Intelligence, Computational Science, and Systems and Network Engineering. Our research involves complex information systems at large, with a focus on collaborative, data-driven, computational and intelligent systems, all with a strong interactive component.
You will be part of the Amsterdam Machine Learning Lab (AMLab). AMLab conducts research in machine learning, artificial intelligence, and their applications to large-scale data domains in science and industry. This includes the development of deep generative models, methods for approximate inference, probabilistic programming, Bayesian deep learning, causal inference, reinforcement learning, graph neural networks, and geometric deep learning. In particular, you will be part of the Causality team under the supervision of prof. Sara Magliacane.
If you feel the profile fits you, and you are interested in the job, we look forward to receiving your application. You can apply online via the button below. We accept applications until and including 20 April 2026.
Applications should include the following information (all files besides your CV should be submitted in a single PDF file):
- a detailed CV including the months (not just years) when referring to your education and work experience;
- a letter of motivation, including a clear indication of your fit for this position and your preference for one of the two topics (discrete concepts or weak supervision);
- a writing sample in English, e.g., (a draft of) your master thesis or a paper;
- a complete record of your Bachelor's and Master's courses, including grades and an explanation of the grading system;
- the names and email addresses of two references who can provide letters of recommendation (please do not include any reference letters in your application). We will contact the references only in the later stages of the process.
A knowledge security check can be part of the selection procedure (for details: national knowledge security guidelines).
Do you have any questions, or do you require additional information? Please contact: