
Machine Learning Engineer - Multi-Modality Foundation Model at Zoox

Zoox · Foster City, United States of America · Hybrid

$189,000.00  -  $258,000.00

Apply Now

Description

The Perception team is pioneering the development of a multi-modality foundation model to drive the next generation of autonomous system intelligence. As a Multi-modality Foundation Model Engineer, you will focus on building highly efficient, production-ready multi-modality models. We are looking for experts who have hands-on experience building multi-modality foundation models—whether that involves AV-centric modalities (Vision, LiDAR, Radar) or broader domains (Vision, Language, Text, Audio). You will design, train, and deploy these models using Knowledge Distillation (KD) to transfer capabilities from large-scale proprietary teacher models to efficient student models capable of real-time, on-vehicle inference.

In this role, you will:

  • Build, pre-train, and evaluate large-scale multi-modality foundation models from the ground up, successfully aligning diverse data streams (e.g., Vision, LiDAR, Radar, Language, Audio).

  • Define and execute the ML roadmap for deploying these multi-modality representations to the vehicle.

  • Architect and implement Knowledge Distillation pipelines to compress large-capacity multi-modal teacher models into highly efficient, production-ready student models.

  • Build high-quality training and evaluation datasets, applying advanced data-centric techniques to maximize cross-modal representation learning and student model convergence.

  • Collaborate with downstream perception teams to integrate and validate the performance, robustness, and latency of your models in on-board production systems.
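The teacher-to-student Knowledge Distillation mentioned in the responsibilities above is commonly implemented as a temperature-scaled soft-label loss. The sketch below is a generic, minimal illustration of that idea in NumPy, not Zoox's actual pipeline; the temperature value and array shapes are assumptions chosen for clarity:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KD loss: KL(teacher || student) at temperature T.

    The T**2 factor keeps gradient magnitudes comparable across
    temperatures, following the standard soft-target formulation.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Example: the loss is zero when the student matches the teacher exactly
teacher = np.array([[2.0, 1.0, 0.1]])
student = np.array([[0.5, 1.5, 0.2]])
mismatch = distillation_loss(student, teacher)
```

In practice this term is typically blended with a hard-label cross-entropy on ground-truth data, and the logits come from the large teacher running offline while the student trains toward on-vehicle latency budgets.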

Qualifications:

  • MS or PhD in Computer Science, Machine Learning, or a related technical field with demonstrated professional experience.

  • Deep, proven expertise in building and training large-scale multi-modality foundation models (e.g., Vision-Language Models (VLMs), Vision-Audio-Text, or Vision-LiDAR-Radar architectures).

  • Strong understanding of cross-modal alignment, multi-modal attention mechanisms, and large-scale pre-training techniques.

  • Proven experience in Knowledge Distillation (KD), model compression, and training highly efficient student models for production environments.

  • Proficiency in ML frameworks (e.g., PyTorch) and experience building large-scale ML training and evaluation pipelines.
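The cross-modal alignment expertise listed above is often exercised through a symmetric contrastive (InfoNCE / CLIP-style) objective over paired embeddings, e.g. camera and LiDAR features from the same scene. A minimal NumPy sketch, where the temperature, embedding dimensions, and modality names are assumptions for illustration:

```python
import numpy as np

def normalize(x):
    """Project embeddings onto the unit sphere (cosine similarity space)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def contrastive_alignment_loss(cam_emb, lidar_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    Row i of each matrix comes from the same scene, so the matched
    pairs sit on the diagonal of the similarity matrix.
    """
    logits = normalize(cam_emb) @ normalize(lidar_emb).T / temperature
    n = logits.shape[0]

    def diag_cross_entropy(l):
        l = l - l.max(axis=-1, keepdims=True)          # stability
        logp = l - np.log(np.exp(l).sum(axis=-1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the camera->LiDAR and LiDAR->camera directions
    return 0.5 * (diag_cross_entropy(logits) + diag_cross_entropy(logits.T))

rng = np.random.default_rng(0)
cam = rng.normal(size=(4, 8))
lidar = rng.normal(size=(4, 8))
unaligned = contrastive_alignment_loss(cam, lidar)
aligned = contrastive_alignment_loss(cam, cam)  # perfectly paired case
```

The aligned case drives the loss toward zero because the diagonal dominates the similarity matrix; training pulls matched cross-modal pairs together while pushing mismatched ones apart.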

Bonus Qualifications:

  • Experience in the Autonomous Driving or robotics industry.

  • Experience with model deployment, optimization, and hardware constraints (e.g., C++ for inference, TensorRT, quantization, pruning).

  • Publications in top-tier conferences (CVPR, ICCV, NeurIPS, ICLR, ACL) related to multi-modality foundation models, cross-modal learning, or model compression.

Additional Information

    About Zoox
    Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team.


    Accommodations
    If you need an accommodation to participate in the application or interview process, please reach out to [email protected] or your assigned recruiter.

    A Final Note:
    You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.
