PhD position on developing novel event-based algorithms for the fusion of FMCW radar and event-camera data for low-latency perception and control. You will explore the design space of low-latency control systems, mapping your solutions to FPGA-based neuromorphic hardware and building demonstrators in collaboration with industrial partners.
Autonomous systems, such as collaborative robots or drones, must perceive and react to their environment in milliseconds. Traditional perception pipelines process data in "frames" (snapshots in time), which introduces unavoidable latency and high data redundancy. Furthermore, fusing distinct modalities, such as the spatial depth, speed, and direction of moving objects from FMCW radar and the high temporal resolution of Dynamic Vision Sensors (DVS/event cameras), remains a complex computational challenge.
As a PhD candidate at the Neuromorphic Edge Computing Systems (NECS) Lab, you will develop "event-based" algorithms that fuse these two sensory worlds. You will move beyond simple classification and target real-time perception and control tasks.
Your core research responsibilities will include:
- Algorithm Design & Sensor Fusion: You will research novel Spiking Neural Network (SNN) and event-based architectures that can fuse sparse radar data (e.g., Range-Doppler maps or binary encoded signals) with the asynchronous stream of events from DVS cameras. You will explore different fusion strategies (early vs. late fusion) to maximize information extraction while minimizing latency.
- Design Space Exploration: You will not just build one model; you will explore the trade-off space between latency, accuracy, and energy efficiency, investigating how techniques such as sparsity, quantization, and pruning affect system performance on edge neuromorphic platforms.
- Hardware Mapping (FPGA & Neuromorphic): You will work closely with hardware designers to map your algorithms onto the NECS lab's custom neuromorphic platforms and FPGA accelerators. You will ensure your algorithms are "hardware-friendly", optimizing for memory constraints and in-memory computing architectures.
- Demonstrators: You will validate your full-stack solution by building compelling demonstrators (e.g., closed-loop low-latency control systems and/or smart lighting systems) in collaboration with our project partners, such as Demcon and Signify.