PhD on an Auditing Framework and Tool for Robust and Reliable AI Software Development and Operations (DevOps) (0.8 - 1.0 FTE)
The Jheronimus Academy of Data Science (JADS) in Den Bosch is proud to launch three large Robust AI labs together with:
- Deloitte (Auditing for Responsible AI Software Systems) - 5 PhDs
- DPG Media (Responsible Media Lab) - 5 PhDs, together with the University of Amsterdam (UvA)
- ILUSTRE (Innovation Lab for Utilities on Sustainable Technology and Renewable Energy) - 5 PhDs
JADS is seeking enthusiastic colleagues for these PhD positions. We operationalize our ambitions around AI by explicitly aligning our research agenda on Robust AI with the United Nations' Sustainable Development Goals.
The project is funded in a public-private partnership by NWO/NLAIC and the private partners. This position is part of the Deloitte project.

Short Description
The next generation of enterprise applications is quickly becoming AI-enabled, providing novel functionalities with unprecedented levels of automation and intelligence. As we recover, reopen, and rebuild, it is time to rethink the importance of trust. At no time has it been more tested or valued in leaders and each other. Trust is the basis for connection. Trust is all-encompassing: physical, emotional, digital, financial, and ethical. A nice-to-have is now a must-have; a principle is now a catalyst; a value is now invaluable.
Are you an enthusiastic and ambitious researcher with a completed master's degree in a field related to machine learning (Computer Science, AI, Data Science) or in Electrical Engineering with an affinity for AI and deep learning? Does the idea of working on real-world problems with industry partners excite you? Are you passionate about applying trustworthy AI methods to the next generation of auditing processes, which are increasingly AI-enabled and data-driven? And are you interested in delivering new tools to ascertain the robustness and reliability of the next generation of AI software?
We are recruiting a PhD candidate who will develop and validate novel concepts, methods, and tools for monitoring, auditing, and fostering the robustness and reliability of AI software systems, and trial them with industrial partners who work with Deloitte.

Job Description
This vacancy falls under the auspices of the JADE lab, the data/AI engineering and governance research unit of JADS, and Deloitte. In particular, this position is associated with JADE's ROBUST program on Auditing for Responsible AI Software Systems (SAFE-GUARD), which is financed under the NWO LTP funding scheme with Deloitte as the key industry partner.
While the overall objective of SAFE-GUARD is the auditing of AI software, it can be refined into the following, more elaborate goal: "Explore, develop, and validate novel auditing theories, tools, and methodologies able to monitor and audit whether AI applications adhere to requirements of fairness (no bias), explainability and transparency (easy to explain), robustness and reliability (delivering the same results under various execution environments), respect for privacy (compliance with the GDPR), and safety and security (no vulnerabilities)."
The deep industrial involvement of Deloitte will balance rigor with relevance, ascertain fit with societal requirements and trends, and ensure validation through industrial case studies.

Scientific Challenge
Like DevOps, MLOps adopts a continuous integration and continuous testing cycle to produce and deploy production-ready micro-releases and new versions of AI software. This implies a culture shift among data analysts, data engineers, deployment and system engineers, and domain experts, with improved dependency management (and thus transparency) between model development, training, validation, and deployment. In addition, MLOps requires sophisticated policies based on metrics and telemetry, such as performance indicators (e.g., F1 and accuracy scores) and software quality measures.
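Such a metric-based policy can be sketched as a simple release gate: a new model version is only promoted if its evaluation metrics clear agreed thresholds. The function names, metric values, and thresholds below are illustrative assumptions, not part of the SAFE-GUARD project itself.

```python
# Minimal sketch of a metric-based release gate for an MLOps pipeline.
# All names, metric values, and thresholds here are illustrative.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Compute the F1 score from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def release_gate(metrics: dict, thresholds: dict) -> bool:
    """Approve a new model version only if every metric meets its threshold."""
    return all(metrics.get(name, 0.0) >= limit for name, limit in thresholds.items())

# Example: a candidate model evaluated on a held-out set.
candidate = {"f1": f1_score(tp=90, fp=10, fn=10), "accuracy": 0.93}
gate = {"f1": 0.85, "accuracy": 0.90}
print(release_gate(candidate, gate))  # True: both metrics clear their thresholds
```

In a real pipeline the same check would run automatically in CI, blocking deployment of versions whose telemetry falls below the agreed bar.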
At the same time, AI models are trained under the assumption that the data used is representative of future data, i.e., that input data is independently and identically distributed. This implies that random and noisy corruptions of the input data can undermine robustness. Hence the need for an instrument able to maintain the robustness of AI models as new data arrives and to produce new versions with the same level of reliability.
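One simple way to probe this kind of robustness is to compare a model's predictions on clean inputs against the same inputs with random noise injected, and measure how often the predictions agree. The toy threshold "model" and the noise levels below are assumptions chosen purely for illustration.

```python
# Minimal sketch of a robustness probe: measure how often predictions on
# noise-corrupted inputs agree with predictions on the clean inputs.
# The toy model and noise levels are illustrative assumptions.
import random

def model(x: float) -> int:
    """Toy classifier: predicts 1 if the input exceeds a fixed threshold."""
    return 1 if x > 0.5 else 0

def robustness_score(inputs, noise_level: float, trials: int = 100) -> float:
    """Fraction of noisy predictions that agree with the clean prediction."""
    rng = random.Random(42)  # fixed seed so the audit is reproducible
    agreements = 0
    total = 0
    for x in inputs:
        clean = model(x)
        for _ in range(trials):
            noisy = x + rng.uniform(-noise_level, noise_level)
            agreements += (model(noisy) == clean)
            total += 1
    return agreements / total

inputs = [0.1, 0.3, 0.7, 0.9]
print(robustness_score(inputs, noise_level=0.05))  # 1.0: stable under small noise
print(robustness_score(inputs, noise_level=0.5) < 1.0)  # True: large corruptions flip labels
```

The same idea scales up in practice: an auditing tool can rerun such perturbation checks on every new data batch or model version and flag releases whose agreement score drops below a reliability target.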