
Applied Scientist - AI Inference (Agentic AI startup) at NinjaTech AI

NinjaTech AI · Sydney, Australia · Onsite

Apply now

We invite you to join NinjaTech AI as an Applied Scientist specializing in AI inference and distributed systems, helping us optimize and scale our AI models for production environments.

You will work at the intersection of deep learning and systems engineering, focusing on optimizing our inference infrastructure that powers millions of user interactions daily.

About us:

NinjaTech AI is a generative AI startup (B2C and B2B) with headquarters in Silicon Valley and offices in Sydney and Vancouver. Our Engineering team is mostly ex-AWS and our Product and UX team is ex-Google. Backed by Alexa Fund and Samsung Ventures, we're on track to raise Series A funding in 2025.

Our flagship product, SuperNinja, is an advanced agentic AI platform with full OS capabilities, performing website creation, end-to-end coding, advanced data analysis, and more.

This is a full-time onsite position, based at our Sydney office (great office space, free meals). Our team works in a fast-paced and collaborative work environment. We get a lot done when we ideate together and iterate quickly.

Why join NinjaTech AI:

This is a unique opportunity for a motivated Applied Scientist to join our LLM Inference optimization team to help us build the foundation of next-generation AI agent systems.

This position focuses on the critical infrastructure and optimization techniques that will enable autonomous agents to operate efficiently at scale. You'll be joining a team that values scientific rigor and engineering excellence, with the chance to make significant contributions to an emerging field.

What makes you a strong match for this Science position:

  • Experience in practical R&D (prototyping based on publications and literature review)

  • Proven ability to solve real-world problems using cutting-edge ideas and independent research

  • Problem-solving skills for design, creation, and testing of custom inference systems

  • Adept at adapting academic ideas and theoretical algorithms into production systems

  • Experience with hardware accelerators and specialized AI chips

  • Knowledge of model serving frameworks and inference optimization techniques

  • Experience with large language models and their deployment challenges

Key challenges you will work on:

  • Research and develop novel techniques for optimizing LLM inference, focusing on latency reduction, throughput improvement, and resource efficiency

  • Design and implement distributed inference architectures that scale efficiently across multiple GPUs and nodes

  • Develop and optimize memory management techniques for large language models, including attention mechanisms and KV cache strategies

  • Research and implement quantization methods to reduce model size while preserving quality

  • Explore speculative decoding and other algorithmic optimizations to improve inference speed

  • Collaborate with the engineering team to integrate innovations into production systems

  • Benchmark and evaluate different optimization approaches against key performance metrics

  • Stay current with the latest research in LLM inference optimization and contribute to the field through publications and open-source contributions

Experience Requirements:

  • Master's or PhD in Computer Science, Machine Learning, or a related field

  • Strong publication record (preferred)

  • 1+ years of industry experience (can be before or after PhD)

  • Strong proficiency in Python and PyTorch

  • Experience with GPU programming and optimization

  • Track record of solving complex technical problems with innovative approaches

Day-to-day responsibilities:

You will report directly to the Chief Science and Technology Officer/Co-founder and will have ownership in these areas:

  • Research, design, and build high-performance inference systems for autonomous AI agents

  • Reproduce cutting-edge model-optimization innovations from the academic literature and build on those research findings

  • Build rapid prototypes and proof of concepts to turn ideas into product features and infrastructure

  • Stay up-to-date with the latest advancements in AI inference, quantization, pruning, and distributed systems

  • Design and implement efficient inference pipelines that can operate at scale

  • Collaborate with cross-functional teams including engineering, product, and design

Benefits:

NinjaTech AI offers a competitive salary and excellent benefits:

  • Annual health insurance subsidy

  • Superannuation

  • Paid Time Off (Vacation, Sick & Holidays)

  • Paid lunches when you work on-site

  • Stock Option Plan

Apply now
