At Agtonomy, we’re not just building tech—we’re transforming how vital industries get work done. Our Physical AI and fleet services turn heavy machinery into intelligent, autonomous systems that tackle the toughest challenges in agriculture, turf, and beyond. Partnering with industry-leading equipment manufacturers, we’re creating a future where labor shortages, environmental strain, and inefficiencies are relics of the past. Our team is a tight-knit group of bold thinkers—engineers, innovators, and industry experts—who thrive on turning audacious ideas into reality. If you want to shape the future of industries that matter, this is your shot.
About the Role
We’re looking for a skilled software engineer to build and refine perception algorithms that give our autonomous tractors human-like awareness in rugged environments. You’ll develop computer vision and machine learning systems to process noisy data from cameras, LiDAR, and radar, enabling tractors to navigate whatever dirt, dust, and debris they encounter. This role is hands-on: you’ll write production-grade software, optimize models for embedded hardware, and test your work on real tractors at operating farms all over the world. Working closely with team members across the autonomy stack, you’ll own critical pieces of our perception stack, driving innovations that make our systems generalized, safe, and reliable.
The US base salary range for this full-time position is $180,000 to $250,000, plus equity, benefits, and unlimited PTO.
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location, internal equity, and additional factors, including, but not limited to, job-related skills, experience, and relevant education or specialty training. Your recruiter can share more about the specific salary range during the hiring process.
What you'll do
Develop computer vision and machine learning models for real-time perception systems, enabling tractors to identify crops, obstacles, and terrain in varied, unpredictable conditions.
Build sensor fusion algorithms to combine camera, LiDAR, and radar data, creating robust 3D scene understanding that handles challenges like crop occlusions or GNSS drift.
Optimize models for low-latency inference on resource-constrained hardware, balancing accuracy and performance.
Design and test data pipelines to curate and label large sensor datasets, ensuring high-quality inputs for training and validation, with tools to visualize and debug failures.
Analyze performance metrics and iterate on algorithms to improve accuracy and efficiency of various perception subsystems.
What you’ll bring
An MS or PhD in Computer Science, AI, or a related field, or 5+ years of industry experience building vision-based perception systems.
Deep expertise in developing and deploying machine learning models, particularly for perception tasks such as object detection, segmentation, mono/stereo depth estimation, sensor fusion, and scene understanding.
Strong understanding of integrating data from multiple sensors like cameras, LiDAR, and radar.
Experience handling large datasets efficiently and organizing them for labeling, training, and evaluation.
Fluency in Python and experience with ML/CV frameworks like TensorFlow, PyTorch, or OpenCV, with the ability to write efficient, production-ready code for real-time applications.
Proven ability to design experiments, analyze performance metrics (e.g., mAP, IoU, latency), and optimize algorithms to meet stringent performance requirements in dynamic settings.
Eagerness to get your hands dirty, and agility in a fast-moving, collaborative, small-team environment with lots of ownership.
What makes you a strong fit
Experience architecting multi-sensor ML systems from scratch.
Experience with foundation models for robotics or Vision-Language-Action (VLA) models.
Experience with compute-constrained pipelines including optimizing models to balance the accuracy vs. performance tradeoff, leveraging TensorRT, model quantization, etc.
Experience implementing custom operations in CUDA.
Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).
Passion for sustainable agriculture and securing our food supply chain.
Benefits
• 100% covered medical, dental, and vision for the employee (partner, children, or family is additional)
• Commuter Benefits
• Flexible Spending Account (FSA)
• Life Insurance
• Short- and Long-Term Disability
• 401k Plan
• Stock Options
• Collaborative work environment alongside a passionate, mission-driven team!
Our interview process is generally conducted in five (5) phases:
1. Phone Screen with Hiring Manager (30 minutes)
2. Technical Evaluation in Domain (1 hour)
3. Software Engineering Evaluation (1 hour)
4. Panel Interview (Video interviews scheduled with key stakeholders, each interview will be 30 to 60 minutes)