At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise as well as personal needs. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute - a suite that brings frontier intelligence to end-users.
We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.
Role Summary
About the Research Engineering team
The team spans Platform (shared infra & clean code) and Embedded (inside research squads). Engineers can move along the research↔production spectrum as needs or interests evolve.
As a Research Engineer – ML track, you’ll build and optimise the large-scale learning systems that power our open-weight models. Working hand-in-hand with Research Scientists, you’ll join either:
- Platform RE Team: Enhance the shared training framework, data pipelines and cluster tooling used by every team; or
- Embedded RE Team: Sit inside a research squad (Alignment, Pre-training, Multimodal, …) and turn fresh ideas into repeatable, scalable code.
What will you do
• Accelerate researchers by taking on the heavy parts of large-scale ML pipelines and building robust tools.
• Interface cutting-edge research with production: integrate checkpoints, streamline evaluation, and expose APIs.
• Conduct experiments on the latest deep-learning techniques (sparsified 70B+ runs, distributed training on thousands of GPUs).
• Design, implement and benchmark ML algorithms; write clear, efficient code in Python.
• Deliver prototypes that become production-grade components for Le Chat and our enterprise API.
About you
• Master’s or PhD in Computer Science (or equivalent proven track record).
• 4+ years working on large-scale ML codebases.
• Hands-on with PyTorch, JAX or TensorFlow; comfortable with distributed training (DeepSpeed / FSDP / SLURM / K8s).
• Experience in deep learning, NLP or LLMs; bonus for CUDA or data-pipeline chops.