AI is being widely adopted across most application domains, including safety- and mission-critical systems. This raises new challenges: such systems must undergo certification processes to prove that they will not harm users, bystanders, the environment, or other systems, nor lead to undesirable, unforeseen, or dangerous situations. Safety- and mission-critical systems must therefore meet a combination of safety, domain-specific, and high-performance requirements to execute complex, data-hungry AI applications in a provably safe manner. This notably includes the requirement that the system's performance be predictable and analyzable, so that worst-case guarantees can be provided as evidence during certification.
A major roadblock to achieving time-predictable high-performance computing is the complexity of the modern execution platforms designed to meet the computational needs of AI applications. On such platforms, the need to reduce power consumption while maximizing peak performance results in a high degree of resource sharing (caches, DRAM, buses, I/O), which causes unpredictable interference and thus prevents guaranteeing that an AI application's timing requirements will be met once deployed. This is an unprecedented challenge from a safety and real-time perspective.

We are looking for a PhD candidate to work on developing a predictable execution platform for AI-oriented computing, one that achieves greater control over the system's predictability and enables its analyzability (thereby providing properties required for its certifiability) without degrading performance.
The project will work towards:
- developing an execution layer (acting as middleware) that exploits the capabilities of modern computing platforms to enforce predictable behavior and to efficiently utilize the available computing power by dynamically orchestrating access to shared resources;
- developing modeling and timing-analysis tools to produce evidence usable as safety arguments during certification.

The candidate will join the Interconnected Resource-aware Intelligent Systems cluster at TU/e and the Chair of Cyber-Physical Systems in Production Engineering.