- Office in Bengaluru
Introduction
At IBM Infrastructure & Technology, we design and operate the systems that keep the world running. From high-resiliency mainframes and hybrid cloud platforms to networking, automation, and site reliability, our teams ensure the performance, security, and scalability that clients and industries depend on every day. Working in Infrastructure & Technology means tackling complex challenges with curiosity and collaboration. You’ll work with diverse technologies and colleagues worldwide to deliver resilient, future-ready solutions that power innovation. With continuous learning, career growth, and a supportive culture, IBM provides the opportunities to build expertise and shape the infrastructure that drives progress.
Your role and responsibilities
As an AI Engineer, you will enable and optimize Large Language Models (LLMs) on IBM Z platforms and AI Accelerators (IBM Spyre). This role sits at the intersection of LLM systems, performance engineering, and large-scale AI infrastructure, delivering production-ready AI systems at scale.
Key Responsibilities
- Enable and optimize LLMs for training and inference on IBM Z, GPUs, and AI accelerators
- Drive performance improvements (latency, throughput, memory efficiency) for production workloads
- Implement LLM optimizations such as KV cache management, efficient attention, and optimized execution strategies
- Evaluate and validate LLMs at the model and ops levels to ensure functional correctness, numerical accuracy, and model quality
- Evaluate LLMs using quality and benchmarking frameworks (RAGAS, DeepEval, etc.)
- Analyze and optimize tensor shapes, strides, and memory layouts to ensure efficient and correct execution across PyTorch and accelerator backends
- Build and scale distributed training and inference systems across multi-GPU and multi-node environments
- Develop high-performance kernels (CUDA/Triton) for compute-intensive workloads such as attention and quantization
- Profile and debug performance using PyTorch Profiler, TensorBoard, and system-level tools, focusing on compute, memory, and communication bottlenecks
- Build and maintain scalable infrastructure (Docker, Kubernetes) for reproducible and stable deployments
- Collaborate with compiler and backend teams, and contribute to the PyTorch ecosystem (TorchDynamo, TorchInductor)
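To make the KV-cache responsibility above concrete, here is a minimal, framework-free sketch of why caching keys and values makes autoregressive decoding cheaper: each step appends one key/value pair and attends over the cache, instead of reprojecting the whole prefix. All names (`KVCache`, `attend`) and the stand-in projections are illustrative assumptions, not part of any IBM or PyTorch API.

```python
# Toy single-head KV cache for autoregressive decoding.
# Illustrative only: names and stand-in projections are hypothetical.
import math


class KVCache:
    """Stores per-step key/value vectors so earlier tokens are never re-projected."""

    def __init__(self):
        self.keys = []
        self.values = []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)


def attend(query, cache):
    """Scaled dot-product attention of one query over all cached positions."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in cache.keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * v[i] for w, v in zip(weights, cache.values))
            for i in range(d)]


# Decode three steps: each step adds exactly one K/V pair, so per-step
# cost is O(cache length) rather than recomputing the full prefix.
cache = KVCache()
for step in range(3):
    k = v = q = [float(step + 1), 0.0]  # stand-in projections for one token
    cache.append(k, v)
    out = attend(q, cache)

print(len(cache.keys))  # 3 cached positions after 3 decode steps
```

Production systems layer paging, eviction, and batching on top of this idea (e.g. Paged Attention), but the cache-and-attend loop is the core mechanism.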
Required technical and professional expertise
- 5+ years of experience in AI/ML systems, deep learning, or performance engineering
- Strong programming skills in Python (must) and working knowledge of C++
- Strong understanding of PyTorch internals (Autograd, ATen, Dispatcher) and exposure to the compiler stack (TorchDynamo, TorchInductor, torch.compile)
- Good understanding of LLM architectures (Transformers, attention variants, KV cache, and efficient attention techniques such as Flash Attention or Paged Attention)
- Experience in model optimization and performance tuning (latency, throughput, memory)
- Strong understanding of tensor operations (shapes, strides, memory layouts) and their impact on execution
- Experience with distributed training/inference frameworks (FSDP, DeepSpeed, or similar)
- Familiarity with multi-GPU / multi-node environments and parallel execution
- Experience profiling and debugging with tools such as PyTorch Profiler, TensorBoard, or similar
- Good understanding of LLM evaluation and validation (performance and quality metrics)
- Experience with Linux environments and containerization (Docker)
- Strong problem-solving skills with the ability to debug complex system-level and model-level issues
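The shapes/strides/memory-layout requirement above can be illustrated with a short, plain-Python sketch (no torch dependency) of how a logical index maps to a flat memory offset, and why a transpose is just the same storage read with swapped strides. The helper names are hypothetical, chosen only for this example.

```python
# Hedged sketch of stride arithmetic -- the mechanics behind contiguous
# vs. non-contiguous tensor views in PyTorch. Helper names are illustrative.

def row_major_strides(shape):
    """Element strides for a C-contiguous layout, e.g. (3, 4) -> (4, 1)."""
    strides = []
    step = 1
    for dim in reversed(shape):
        strides.append(step)
        step *= dim
    return tuple(reversed(strides))


def flat_offset(index, strides):
    """Flat storage position of a multi-dimensional index given its strides."""
    return sum(i * s for i, s in zip(index, strides))


shape = (3, 4)
strides = row_major_strides(shape)                # (4, 1)
# A transpose reuses the same storage with swapped shape and strides:
t_shape, t_strides = shape[::-1], strides[::-1]   # (4, 3), (1, 4)

# Element [1][2] of the transpose aliases element [2][1] of the original.
assert flat_offset((1, 2), t_strides) == flat_offset((2, 1), strides)
print(strides, t_strides)  # (4, 1) (1, 4)
```

This is why a transposed view is "free" but non-contiguous: kernels that assume unit-stride inner loops may need a copy (`.contiguous()` in PyTorch) before they run efficiently.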
Preferred technical and professional experience
- Experience with AI/ML frameworks (PyTorch, TensorFlow) in production-scale deployments
- Strong understanding of model deployment workflows and end-to-end ML lifecycle management
- Familiarity with GPU computing, kernel optimization, and low-level performance debugging tools
- Experience in distributed systems, microservices architecture, and REST API-based services
- Experience integrating MLOps pipelines with CI/CD for continuous training and deployment
- Deep understanding of AI runtimes, memory hierarchies, and parallel execution models
- Strong knowledge of the PyTorch distributed runtime, parameter sharding, and memory management techniques
- Hands-on experience with torch.compile and TorchInductor for model acceleration
- Experience managing enterprise systems with long release cycles and strict compatibility requirements
- Experience working with the Hugging Face ecosystem for model enablement and deployment
- Exposure to model quality evaluation frameworks and validation pipelines
- Application of IBM Design Thinking to deliver user-centric, high-quality AI solutions
- Demonstrated technical leadership in AI/backend engineering or large-scale system projects
- Strong communication skills with the ability to engage technical and non-technical stakeholders effectively
- Commitment to engineering excellence, including code quality, performance, security, and best practices
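Latency and throughput come up repeatedly in this posting as the metrics to optimize. As a hedged, illustrative sketch (plain Python, no real measurement tooling; the helper names and sample numbers are hypothetical), here is how per-request latencies are commonly summarized into p50/p95 latency and aggregate token throughput:

```python
# Illustrative-only latency/throughput summary. Names and sample
# measurements are hypothetical, not from any IBM tooling.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1))))
    return ordered[rank]


def summarize(latencies_s, tokens_per_request):
    """p50/p95 latency plus aggregate token throughput."""
    total_time = sum(latencies_s)
    return {
        "p50_s": percentile(latencies_s, 50),
        "p95_s": percentile(latencies_s, 95),
        "tokens_per_s": tokens_per_request * len(latencies_s) / total_time,
    }


# Hypothetical measurements: 5 requests, 128 generated tokens each.
stats = summarize([0.10, 0.12, 0.11, 0.30, 0.10], tokens_per_request=128)
print(stats)
```

Reporting tail percentiles (p95/p99) alongside the median matters because a single slow request, like the 0.30 s outlier here, is invisible in the average but dominates the user-visible worst case.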
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Apply now