Senior Machine Learning Engineer – Perception / End-to-End at Xpeng Motors
Xpeng Motors · Santa Clara, CA, United States · Hybrid
- Office in Santa Clara, CA
- Research and develop cutting-edge deep learning algorithms for a unified, end-to-end onboard model that integrates perception, prediction, and planning, replacing traditional modular pipelines (a minimal illustrative sketch follows this list).
- Research and develop Vision-Language-Action (VLA) models to enable context-aware, multimodal decision-making, allowing the model to understand visual, textual, and action-based cues for enhanced driving intelligence.
- Address real-world challenges by enhancing online mapping, occupancy grid, and 3D detection models, applying deep expertise in perception systems and strong problem-solving skills to analyze and resolve production-level corner cases.
- Design and optimize highly efficient neural network architectures, ensuring they achieve low-latency, real-time execution on the vehicle’s high-performance computing platform, balancing accuracy, efficiency, and robustness.
- Develop and scale an offline machine learning (ML) infrastructure to support rapid adaptation, large-scale training, and continuous self-improvement of end-to-end models, leveraging self-supervised learning, imitation learning, and reinforcement learning.
- Deliver production-quality onboard software, working closely with sensor fusion, mapping, and perception teams to build the industry’s most intelligent and adaptive autonomous driving system.
- Leverage massive real-world datasets collected from our autonomous fleet, integrating multi-modal sensor data to train and refine state-of-the-art end-to-end driving models.
- Design, conduct, and analyze large-scale experiments, including sim-to-real transfer, closed-loop evaluation, and real-world testing, to rigorously benchmark model performance and generalization.
- Collaborate with system software engineers to deploy high-performance deep learning models on embedded automotive hardware, ensuring real-world robustness and reliability under diverse driving conditions.
- Work cross-functionally with AI researchers, computer vision experts, and autonomous driving engineers to push the frontier of end-to-end learning, leveraging advances in transformer-based architectures, diffusion models, and reinforcement learning to redefine the future of autonomous mobility.
- MS- or PhD-level education in Engineering or Computer Science with a focus on Deep Learning, Artificial Intelligence, or a related field, or equivalent experience.
- Strong experience in applied deep learning, including model architecture design, model training, data mining, and data analytics.
- 1-3+ years of experience working with DL frameworks such as PyTorch and TensorFlow.
- Strong Python programming experience with software design skills.
- Solid understanding of data structures, algorithms, code optimization, and large-scale data processing.
- Excellent problem-solving skills.
- Hands-on experience in developing DL-based planning engines for autonomous driving.
- Experience in applying CNN/RNN/GNN, attention models, or time-series analysis to real-world problems.
- Experience in other ML/DL applications, e.g., reinforcement learning.
- Experience with DL model deployment and optimization tools such as ONNX and TensorRT (see the hedged sketch below).
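As a small, hedged illustration of the deployment tooling named in the last item, the sketch below exports a PyTorch model to ONNX and runs a quick CPU latency check with ONNX Runtime. The model, input shape, and file name are placeholder assumptions; an onboard pipeline would typically go on to compile the ONNX file into a TensorRT engine targeting the vehicle's accelerator.

```python
# Hedged sketch: export a PyTorch model to ONNX and time inference with ONNX Runtime.
# The model, input shape, and file name are placeholders; a production flow would
# typically build a TensorRT engine from the resulting ONNX file.
import time

import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Placeholder network standing in for an onboard perception/planning model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)

# Quick CPU latency check; on-vehicle deployment would target GPU/accelerator providers.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
session.run(None, {"image": x})  # warm-up run
start = time.perf_counter()
for _ in range(100):
    session.run(None, {"image": x})
print(f"mean latency: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")
```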