Join the Data Revolution at TU/e: Where Data Meets Discovery!

As the data team for researchers, we empower our community with tailored and cost-effective data solutions for pioneering innovations. As part of the Product Area Research (PAR), our mission is to equip our university's brightest minds with the tools and technology they need to navigate the entire research data & AI lifecycle. We are building an advanced data ecosystem that's more than just infrastructure; it's a launchpad for groundbreaking investigations. Technology evolves fast, and so do we. While continuously reinventing ourselves, we take pride in putting strong foundations first.
Together with our research community across TU/e, we explore and validate new ways of working with data. This ranges from testing and integrating techniques such as synthetic data to developing the smart campus data ecosystem, where we provide the infrastructure building blocks that turn real campus data into shared foundations for advanced analytics and data-driven decision-making at TU/e. That is why we are looking for a Senior Full-Stack Data Engineer to join us on this journey.
Your role

As our senior full-stack data engineer, you will serve as the primary collaborator to the Tech Lead within a young, ambitious team of three developers and a Product Owner. We are currently scaling our footprint within the TU/e organization, transitioning from foundational builds to a mature, high-output engineering culture. If your ambition is to experiment with, design, and implement such data solutions, we want to hear from you!
What you will do

You will work in lockstep with the Tech Lead to shape our technical roadmap, acting as a vital sparring partner. We are looking for a disciplined engineer who can navigate the "big picture" while ensuring our infrastructure remains rock-solid; someone who, on their own initiative, turns complex ideas into production-worthy reality.
- Full-Stack Development: Architect and build internal data products, developing both the robust Python/Spark backends and the intuitive frontend interfaces that bring data to life.
- Rapid Prototyping: Act as the team's "bridge" from concept to reality, utilizing "Vibe Coding" and AI-assisted workflows to quickly prototype frontends that validate ideas before full-scale production.
- User-centric data delivery: Transform complex datasets into actionable tools, ensuring that the "last mile" of data delivery (the UI/UX) is as high-quality as the pipeline behind it.
- Collaborative architecture: Design and maintain scalable data pipelines using Apache Spark and Databricks, ensuring data integrity across the entire lifecycle.
- Infrastructure as code: Take full ownership of the deployment lifecycle, from frontend assets to backend infrastructure, using Databricks Asset Bundles (DABs), Terraform, and GitLab CI/CD.
- Building intelligent ecosystems: Implement event-driven ingestion patterns and explore the integration of agents, LLMs, and the Model Context Protocol (MCP) to enable smart, full-stack research applications.
- Secure and connected infrastructure: Apply your networking knowledge (VNets, subnets, firewalls) to ensure applications are secure, performant, and well-integrated within the TU/e landscape.