Robot Learning & Perception Engineer at 7th Unit
ABOUT 7TH UNIT
7th Unit is building a humanoid robot platform for labor markets in Europe and Asia. We have signed LOIs with industrial operators and a clear 12-month milestone: a working robot operating inside a paying customer's facility.
We are not building toward a demo. We are building toward a deployed robot. This is one of four founding engineering roles. The team you join defines the architecture. The decisions made in the first 90 days will be in production in month ten.
THE ROLE
You own how the robot learns and sees. That means the full pipeline: demonstration capture from real customer environments, imitation learning and fine-tuning on deployment data, vision-language model integration for task understanding, and the fleet learning architecture that transfers knowledge across every new deployment.
Our core methodology is our differentiator: we train directly from workers performing their existing tasks in the real deployment environment, not from simulation or controlled lab conditions. Every customer site becomes a training data source. Every deployment improves every subsequent one. You are building the system that makes that flywheel work.
WHAT YOU WILL OWN
Demonstration capture methodology: design the sensor and data capture pipeline for recording human task demonstrations in live industrial environments
Imitation learning pipeline: build and deploy the fine-tuning system that adapts foundation model priors to specific deployment tasks
Perception stack: RGB-D vision plus task-specific models, designed for onboard inference in environments with unreliable connectivity
Fleet learning architecture: cloud-connected knowledge transfer so every deployment improves the platform globally
First 90 days deliverable: a robot that can reliably complete the pick-and-place workflow defined in our manufacturing LOI, trained from demonstrations captured in the customer environment
WHAT WE ARE LOOKING FOR
3+ years building and deploying robot learning or perception systems on real physical hardware — not just simulation
Hands-on experience with imitation learning, behavior cloning, sim-to-real transfer, or VLA/foundation model fine-tuning for manipulation
Background at a company where learning systems shipped and operated in real environments: Figure AI, Sanctuary AI, Apptronik, Boston Dynamics, NVIDIA Isaac, DeepMind Robotics, CMU Robotics Institute, MIT CSAIL, ETH Zurich RSL, or equivalent
Comfortable with the embodiment gap problem — you understand why demonstration data from humans does not transfer cleanly to robot execution and you have opinions on how to close it
Experience designing learning pipelines that improve continuously from real-world operational feedback, not just offline training datasets
COMPENSATION & STRUCTURE
Salary: $130,000+ (post-round close, market rate for your location)
Equity: Meaningful founding engineer equity, negotiated directly
Structure: Founding engineer. You own the intelligence architecture. The decisions you make in the first 90 days define how this platform learns for the next decade.
Location: Remote during the pre-deployment phase. Relocation to the team hub as the build phase begins.
WHY THIS, WHY NOW
The gap between what is achievable in a lab and what is deployable in a real environment has never been smaller. We have the customer environments, the operator relationships, and the deployment infrastructure. We need the learning system that makes it work reliably in those environments.
If you have been building robot learning systems inside a large company and want to own the architecture — not contribute to someone else's — this is the role.