Are you passionate about finding out how well AI-based security techniques really work?
Then the Software Engineering and Technology (SET) cluster at TU/e has a vacancy for a PhD project on automating threat assessment with AI and empirically evaluating it as a complex system with social aspects.
Threat analysis and risk assessment are routinely performed in organizations to identify and mitigate security risks to software. In a bid to scale such assessments, experts are turning to multi-agent systems based on large language models (LLMs). In this new context, social aspects also come into play, such as the influence of ethical values or cognitive biases on the perception of security risks. The goal of this project is to improve these techniques by finding more reliable ways of evaluating and validating them. As a PhD student on this project, you will:
- Build a prototype LLM pipeline for threat assessment.
- Design and conduct experiments to evaluate such pipelines.
- Design and conduct surveys and interviews with practitioners and academic researchers.
- Collect and statistically analyze quantitative and qualitative data.
- Write scientific publications, attend international conferences and workshops to present your own work, and stay up to date with the latest research findings.
Throughout the project you will gain a solid understanding of design-level security risks as well as code-level vulnerabilities and attacks. This bird's-eye perspective is invaluable to industry. By investigating this topic, you will have the opportunity to engage with researchers and practitioners from different disciplines, and a chance to impact the state of practice.
A distinguishing aspect of this project is its interdisciplinary flavor: security & AI, risk analysis, and empirical software engineering. The quest for novel evaluation methods will be the narrative backbone of your research.
The position is located within the Social Software Engineering Group, where you will be supervised by Dr. Katja Tuma and Prof. Dr. Alexander Serebrenik. You will join the team as a PhD-TA, which means up to 25% of your time will be dedicated to teaching. Specifically, you will contribute to a Master's-level course on threat assessment, support Bachelor's-level Computer Science courses, and supervise Bachelor's and Master's students.