AI/ML Engineer (Forward Deployed Engineer) at Hexo

We’re building ‘Emily’ - an Agentic Machine Learning Engineer. It’s an AI that builds AI. The first step on the path to building Artificial Superintelligence (ASI) is to teach an agent how to train and improve ML models, build AI pipelines, AI agents, and more. If AI can learn how a good ML researcher thinks and works, it will learn to improve itself indefinitely. We hire only A+ players. If you are one and want to work with other A+ players, this is the place to work.

Job Description Summary

Be the technical face of the company in front of customers - working directly with frontier AI labs, enterprise ML teams, and research orgs to deliver high-impact projects powered by Hexo’s autonomous ML platform. Act as the bridge between customer ML teams and Hexo’s core product - designing, building, and deploying real-world solutions using Hexo’s Agents and self-improving AI pipelines. This role is ideal for engineers who thrive at the intersection of deep ML expertise, customer interaction, and hands-on delivery.

You will embed with customers where delivery is urgent and ambiguity is the default, mapping their problems, structuring delivery, and shipping fast. You will scope, sequence, and build full-stack solutions that create measurable value, drive clarity across internal and external teams, identify reusable patterns, and share field signal that influences the roadmap.

Responsibilities

- Work closely with customer teams (researchers, ML engineers, and infra leads) to design and implement custom ML solutions on the Hexo platform.
- Deploy and orchestrate autonomous ML workflows for use cases such as model training, fine-tuning, evaluation, and optimization.
- Build integration scaffolds, SDK extensions, and APIs that adapt Hexo’s platform to unique customer needs.
- Lead customer onboarding - from technical setup and data ingestion to end-to-end pipeline execution.
- Collaborate with Hexo’s internal product and research teams to refine the platform based on real-world feedback.
- Diagnose and resolve ML infrastructure or training bottlenecks; optimize for speed, cost, and reliability.
- Deliver demos, technical workshops, and proof-of-concept deployments that showcase platform capabilities.
- Make trade-offs between scope, speed, and quality; adjust plans to protect delivery.
- Serve as a trusted technical advisor - helping customers accelerate their AI roadmap through autonomous model development.
- Codify working patterns into tools, playbooks, or building blocks that others can use.

Requirements

- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- 5+ years of hands-on experience in ML engineering, applied ML, or AI infrastructure roles.
- Strong programming skills in Python and proficiency with modern ML frameworks.
- Experience deploying ML systems in real-world, production-grade environments.
- Excellent communication and stakeholder management skills; comfortable interfacing directly with CTOs, researchers, and engineering teams.
- Ability to scope and deliver technical projects end-to-end with minimal oversight.
- Solid program and project management expertise, with ownership of planning, tracking, and delivery governance.
- Familiarity with cloud and distributed compute environments (AWS/GCP/Azure, Ray, Kubernetes).
- Deep curiosity about emerging AI technologies and a proactive drive to stay ahead of innovation trends.
- Ability to thrive in multi-project, high-context environments with frequent context switching.
Bonus: Experience in forward-deployed or customer-facing technical roles (solutions engineering, applied research, ML consulting, etc.).

Why Join Us?

- Work on the bleeding edge of AI.
- Fast-track your entrepreneurial journey.
- Unmatched ownership, exposure, and impact.
- Collaborate with the founder and core team.
- Competitive compensation and stock options.