With the rise of deep learning
(DL), our world braces for Artificial Intelligence (AI) in every edge device, creating an urgent need for Edge-AI processing
hardware. Unlike existing solutions, this hardware needs to support high-throughput, reliable, and secure AI processing at ultra-low power (ULP), combined with a very short time to market.
With its strong legacy in edge solutions and open processing platforms, the EU is ideally positioned to become the leader in this edge-AI market. However, several roadblocks keep the EU from assuming this leadership role: edge processors need to become 100x more energy efficient; their complexity demands automated design with a 10x design-time reduction; they must be secure and reliable to gain acceptance; and they should be flexible and powerful enough to support the rapidly evolving DL domain.
CONVOLVE addresses these roadblocks in Edge-AI. To that end, it will take a holistic approach with innovations at all levels of the design stack, including:
- On-edge continuous learning for improved accuracy, self-healing, and reliable adaptation to non-stationary environments
- Rethinking DL models through dynamic neural networks, event-based execution, and sparsity
- Transparent compilers supporting automated code optimizations and domain-specific languages
- Fast compositional design of System-on-Chips (SoC)
- Digital accelerators for dynamic artificial neural networks (ANNs) and spiking neural networks (SNNs)
- ULP memristive circuits for computation-in-memory
- Holistic integration in SoCs supporting secure execution with real-time guarantees
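Several of the innovations above, event-based execution and sparsity in particular, come down to doing work only when an input is nonzero. The following is a purely illustrative Python sketch (not CONVOLVE code; the function and data are hypothetical) of how a sparsity-aware compute kernel skips redundant multiply-accumulates:

```python
def sparse_matvec(weights, activations):
    """Event-based style matrix-vector product: only nonzero
    activations trigger work, mimicking how sparsity-aware
    accelerators skip multiply-accumulates (MACs) for zeros."""
    out = [0.0] * len(weights)
    macs = 0
    for j, a in enumerate(activations):
        if a == 0.0:                # zero activation: no "event", skip column
            continue
        for i, row in enumerate(weights):
            out[i] += row[j] * a
            macs += 1
    return out, macs

# A ReLU layer leaves many activations at exactly zero.
W = [[1.0, -2.0, 0.5, 3.0],
     [0.0,  1.0, -1.0, 2.0]]
x = [0.0, 2.0, 0.0, 1.0]            # post-ReLU-style sparse input

y, macs = sparse_matvec(W, x)
# y == [-1.0, 4.0]; macs == 4, versus 8 for a dense implementation
```

The energy win on real hardware comes from gating both the arithmetic and the corresponding memory accesses on these "events", which is what dedicated ULP accelerators exploit.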
The CONVOLVE consortium includes some of Europe's strongest research groups and industries, covering the whole design stack and value chain. In a community effort, we will demonstrate Edge-AI computing in real-life vision and audio domains. By combining these innovative ULP and fast design solutions, CONVOLVE will, for the first time, enable reliable, smart, and energy-efficient edge-AI devices at a rapid time-to-market and low cost, and as such opens the road for EU leadership in edge processing.

Candidates
We are seeking highly skilled and motivated candidates to tackle any of the following four research areas:

PhD1: Ultra-low power CGRA for Dynamic ANNs and SNNs:
Research and develop near-memory computing engines based on Coarse-Grained Reconfigurable Architectures (CGRAs) using a flexible memory fabric for dynamic neural networks. These designs need to be equipped with self-healing mechanisms to (partly) recover from failures, enhancing system-level reliability. The accelerators may also offer knobs to exploit near-threshold and approximate computing for extremely energy-efficient operation.

PhD2: Design flow for SNNs and ANNs implemented in a compiler:
Research and develop a high-quality compiler backend for CGRA targets supporting SNNs and ANNs. Compared to existing solutions, energy efficiency needs to be improved by exploiting SIMD, the memory hierarchy, data reuse, sparsity, etc.

PhD3: Compositional performance analysis and architecture Design Space Exploration (DSE):
Research and develop an infrastructure to model energy and latency at the SoC level, including the SoC-level memory hierarchy and processing host, and to integrate the different accelerator component models. To support the rapid evaluations needed for DSE, analytical models need to be pursued. The development of compositional models will moreover enable run-time performance assessment of an application when the platform configuration changes due to a failing platform component.

PhD4: Composable and secure SoC accelerator platform:
Research and develop novel composable and real-time design techniques to realize an ultra-low-power, real-time Trusted Execution Environment (TEE) for an SoC platform consisting of RISC-V cores with several accelerators. Security features that protect against physical attacks need to be integrated into the SoC platform while the applications' ultra-low-power and real-time requirements are maintained. The platform should allow easy integration of Post-Quantum Cryptography accelerators and Compute-In-Memory (CIM) based hardware accelerators.
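To give a flavour of the analytical energy and latency modelling that PhD3 calls for, a roofline-style estimate is about the simplest possible starting point: latency is bounded by either compute throughput or memory bandwidth, and energy is a weighted sum of operations and data movement. The Python sketch below is only an illustration; the design points, workload, and per-operation cost constants are made-up assumptions, not CONVOLVE figures:

```python
def analytical_model(ops, bytes_moved, peak_ops_s, mem_bw_Bps, e_op_J, e_byte_J):
    """Roofline-style analytical estimate for one accelerator:
    latency is compute- or memory-bound, whichever dominates;
    energy sums compute and data-movement costs."""
    latency = max(ops / peak_ops_s, bytes_moved / mem_bw_Bps)
    energy = ops * e_op_J + bytes_moved * e_byte_J
    return latency, energy

# Two hypothetical design points (all numbers are assumptions):
designs = {
    "wide_simd":   dict(peak_ops_s=512e9, mem_bw_Bps=25e9),
    "near_memory": dict(peak_ops_s=128e9, mem_bw_Bps=100e9),
}
layer = dict(ops=2e9, bytes_moved=50e6)        # one DL layer's workload

for name, hw in designs.items():
    lat, en = analytical_model(layer["ops"], layer["bytes_moved"],
                               hw["peak_ops_s"], hw["mem_bw_Bps"],
                               e_op_J=1e-12, e_byte_J=10e-12)
    print(f"{name}: {lat * 1e3:.2f} ms, {en * 1e3:.3f} mJ")
```

Because such closed-form models evaluate in microseconds, a DSE loop can sweep thousands of SoC configurations, and composing one model per accelerator also permits re-evaluating an application at run time when a platform component fails, as the PhD3 description envisions.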