Do you want to understand how and why humans interact with an AI to accelerate systematic reviews? If so, apply for this PhD position! This position is part of the bigger project “Transparent and Reproducible AI-aided Systematic Reviewing for the Social Sciences (TRASS)”.
Your job

The rapidly evolving field of AI offers promising solutions to the literature-screening challenge through machine learning models, such as active learning, and, very recently, large language models (LLMs). However, many of these AI-driven solutions emerge from tech companies that publish new and, hopefully, better models at an unprecedented rate. Advancements in AI outpace meticulous scientific evaluation, leaving many methods unrefined and unproven. The challenge is twofold: keeping pace with these relentless innovations while collaboratively forging a comprehensive understanding of their implications.
In the evolving landscape of AI-aided systematic tools (like ASReview), we need to explore the nuanced complexities and potential biases that arise when human discernment is coupled with the predictive capabilities of AI, aiming to foster a synergistic relationship that enhances the efficacy and reliability of the review process.
Therefore, in the subproject for which we are looking for a PhD candidate, we will investigate questions such as:
- Which biases - such as position bias or authority bias - do humans display when reviewing abstracts?
- How does knowledge about these biases generalize to other contexts, such as research and academic work more broadly?
- Should we replace human screeners with an AI (as predicted by OpenAI), or should we keep a human in the loop?
- How do (or should) humans interact with AI-aided screening models, particularly in light of the potential biases inherent in AI-assisted processes?
- How can we reduce biases by humans and AI?
You will be responsible for conducting several high-quality experiments in which humans interact with an AI to address such questions. Answering these questions is crucial for fostering a collaborative synergy between humans and AI models and for spotlighting and reducing potential biases, making the research process more transparent and equitable. Moreover, the project seeks to delineate the roles and responsibilities of human researchers in an AI-assisted landscape, paving the way for a harmonious and productive human-AI collaboration in academic research.