The Uncertainty in Artificial Intelligence (UAI) group is a new and quickly growing group embedded in the Data and AI (DAI) cluster at the Eindhoven University of Technology. In the DAI cluster, we aim to develop the foundations of AI for the present and the future. This includes the design of new AI methods and the development of AI algorithms and tools with a view to expanding the reach of AI and its generalization abilities. In particular, we study foundational issues of robustness, safety, trust, reliability, tractability, scalability, interpretability, and explainability of AI.
The UAI group is looking for a highly motivated and skilled PhD candidate to work in an area of Machine Learning as suggested below. The concrete research direction will be determined together with the successful candidate. Potential topics include, but are not limited to:
- (Deep) Reinforcement Learning
- (Contextual) Multi-Armed Bandits
- Counterfactual Learning
- Fairness-aware Learning
- Fairness in Reinforcement Learning
- Off-policy Reinforcement Learning
- Modeling Bias in Machine Learning
- Decision Making under Uncertainty
- AutoML
- Explainable AI
- Recommender Systems
We encourage you to express your preferences among the above topics (if applicable) in your research statement.