- Are you inspired by the prospect of shaping the future of autonomous driving?
- Are you fascinated by the latest developments in self-supervised learning and generative models?
- Are you excited to work on perception tasks for safety-critical systems using the next-generation automotive radar?
- Then apply for the PhD position on Self-supervised Deep Learning for Automotive 4D Imaging Radar!
Autonomous driving is a key application of artificial intelligence in general, and of vehicle perception in particular. Contemporary autonomous driving systems rely on various sensors, such as cameras, radar, and LiDAR. Recent developments in automotive radar technology have led to the emergence of a new class of sensors: 4D imaging radar. This technology can be a key enabler for Level 4 and Level 5 autonomy thanks to its additional vertical (elevation) information, high measurement density, and robustness.
Exploiting this new sensor class requires novel deep-learning methods that can process imaging radar data on resource-constrained devices and perform standard automotive perception tasks, such as object detection and collision prediction. Moreover, due to the real-life complexity of such tasks in urban conditions, and the many edge cases that can be encountered, it is practically impossible to gather all-encompassing training data.
This PhD project investigates self-supervised and multimodal deep-learning methods for vehicle perception tasks, such as collision risk prediction, using imaging radar data. The aim is to improve the generalization of these prediction models to rarely occurring scenarios. To this end, the work will involve not only the mathematical and technical details of deep learning, but also an understanding of signal processing systems and the working principles of 4D imaging radar technology.
More specifically, research tasks will include:
- Reviewing relevant literature on deep-learning architectures for signal processing, radar-based and computer-vision-based automotive perception, and domain invariance/resilience.
- Designing methods for domain-invariant deep representations of radar-specific embeddings.
- Developing learning strategies that can leverage auxiliary simulated data for real-life applications.
- Collaborating on ongoing research projects that aim to implement radar-based perception methods for next-generation ADAS and autonomous driving.
The ideal candidate combines technical expertise in deep learning and automotive sensing. You are strongly interested in applying next-generation radar technology and robust perception to the safety of road users. In addition, you have solid programming experience and are passionate about artificial intelligence, computer science, and autonomous driving.
The candidate will be integrated into the Mobile Perception Systems (MPS) lab within a newly forming Automated Vehicle Test Facility (AVTF). They will be a member of the LTP ROBUST consortium funded by NWO, and of the EAISI institute at TU/e.