ML Inference Platform Intern (6 months) at AION

AION · Bengaluru, India · Onsite

Apply Now

About AION

AION is building the next generation of AI cloud platforms by transforming the future of high-performance computing (HPC) through its decentralized AI cloud. Purpose-built for bare-metal performance, AION democratizes access to compute power for AI training, fine-tuning, inference, data labeling, and the full AI/ML lifecycle.

Led by high-pedigree founders with previous exits, AION is well-funded by major VCs and backed by strategic global partnerships. Headquartered in the US with a global presence, the company is building its initial core team across India, London, and San Francisco.

Who You Are

You're an ML systems engineer who is passionate about building high-performance inference infrastructure. You don't need to be an expert in everything; the field is evolving too rapidly for that. But you have strong fundamentals and the curiosity to dive deep into optimization challenges. You thrive in early-stage environments where you'll learn cutting-edge techniques while building production systems. You think systematically about performance bottlenecks and are excited to push the boundaries of what's possible in AI infrastructure.

Requirements

Key Responsibilities

  • Learn and implement ML inference optimization techniques including KV-cache management, dynamic batching, and quantization under mentorship.
  • Contribute to GPU optimization projects using CUDA, with hands-on learning of Triton kernel development and performance tuning.
  • Build model benchmarking and evaluation frameworks to assess performance across different models and optimization strategies.
  • Research and experiment with trending open-source models (DeepSeek R1, Qwen 3, Llama variants) to understand optimization opportunities.
  • Implement cost-performance analysis tools to understand tradeoffs between speed, quality, and resource usage.
  • Explore agent system implementations and multi-step reasoning workflows for future platform capabilities.
  • Document what you learn and create technical guides for internal team knowledge sharing and customer education.
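To make the dynamic batching responsibility above concrete, here is a minimal, framework-free sketch. The function name and parameters are illustrative only; a real inference server would drain an asynchronous request queue under a latency deadline rather than batch a static list, and frameworks like vLLM implement far more sophisticated continuous batching.

```python
from collections import deque

def dynamic_batch(requests, max_batch_size=8):
    """Greedily drain a request queue into batches of at most
    max_batch_size, so a (hypothetical) model server can run one
    forward pass per batch instead of one pass per request."""
    queue = deque(requests)
    batches = []
    while queue:
        # Take up to max_batch_size requests for the next forward pass.
        batch = [queue.popleft() for _ in range(min(max_batch_size, len(queue)))]
        batches.append(batch)
    return batches
```

Batching amortizes per-pass overhead (kernel launches, weight reads) across requests, which is why it is a core lever for inference throughput.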

Skills & Experience

  • High agency individual with strong willingness to experiment and learn with the team.
  • Prior exposure such as internships or projects in ML infrastructure, contributions to PyTorch or other ML frameworks, competitive programming achievements, research experience in ML systems, or familiarity with agent systems and reasoning techniques.
  • Strong coding and implementation skills in Python and C++ with demonstrated ability to write performant, production-quality code.
  • Experience reading and contributing to large codebases with proof of open-source contributions (GitHub profile required).
  • Proof of technical work through projects like Google Summer of Code, hackathon wins, competitive programming, or significant open-source contributions.
  • Working knowledge of deep learning fundamentals including neural networks, transformers, and basic training/inference concepts.
  • Basic understanding of PyTorch including model development and tensor operations.
  • Fundamental knowledge of GPU computing or strong willingness to learn CUDA programming.
  • Working knowledge of at least one inference framework (vLLM, TensorRT-LLM, Hugging Face) through coursework or personal projects.
  • Understanding of distributed systems concepts and performance optimization principles.
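As one sketch of the cost-performance analysis work mentioned under Key Responsibilities: the core tradeoff is often reduced to cost per generated token at a sustained throughput. The function name and the example figures below are hypothetical, not AION pricing.

```python
def cost_per_million_tokens(throughput_tok_per_s: float, gpu_usd_per_hour: float) -> float:
    """USD cost to generate one million tokens on a GPU with a given
    hourly price, assuming the stated throughput is sustained."""
    tokens_per_hour = throughput_tok_per_s * 3600
    return gpu_usd_per_hour * 1_000_000 / tokens_per_hour
```

For example, a GPU rented at $2/hour sustaining 2,500 tokens/s works out to roughly $0.22 per million tokens; optimizations like quantization shift this number by raising throughput on the same hardware.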

Benefits

  • Get in on the ground floor of a mission-driven AI startup revolutionizing compute infrastructure.
  • Learn from world-class engineers and gain hands-on experience with cutting-edge inference optimization techniques.
  • Work with a high-caliber, globally distributed team backed by major VCs.
  • Significant learning and growth opportunity in one of the fastest-moving areas of AI infrastructure.
  • Competitive internship compensation with potential for full-time conversion.
  • Fast-paced, flexible work environment with room for ownership and impact.
