- Senior
- Optional office in Bengaluru
About Netskope
Today, more data and users live outside the enterprise than inside, dissolving the network perimeter as we know it. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security.
Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
About the role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as data services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.
As an MLOps SRE, you will deploy and manage the infrastructure at the heart of our AI/ML operations, and you will collaborate with AI/ML engineers and researchers to develop a robust CI/CD pipeline that supports safe and reproducible experiments. You will also set up and maintain monitoring, logging, and alerting systems to oversee extensive training runs and client-facing APIs, ensure that training environments are consistently available and efficiently managed across multiple clusters, and enhance our containerization and orchestration systems with tools like Docker and Kubernetes.
- Work closely with AI/ML engineers and researchers to design and architect AI/ML applications for scale and reliability. Design and deploy a CI/CD pipeline that ensures safe and reproducible experiments.
- Troubleshoot production issues in AI/ML application code as well as infrastructure configurations.
- Set up and manage monitoring, logging, and alerting systems for extensive training runs and client-facing APIs.
- Ensure training environments are consistently available and prepared across multiple clusters.
- Develop and manage containerization and orchestration systems utilizing tools such as Docker and Kubernetes.
- Operate and oversee large Kubernetes clusters with GPU workloads.
- Improve reliability, quality, and time-to-market of our suite of software solutions
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement
- Provide primary operational support and engineering for multiple large-scale distributed software applications
Is this you?
- You have professional experience with:
  - Model training
  - Hugging Face Transformers
  - PyTorch
  - LLMs
  - TensorRT
  - Infrastructure-as-code tools like Terraform
  - Scripting languages such as Python or Bash
  - Cloud platforms such as Google Cloud, AWS, or Azure
  - Git and GitHub workflows
  - Tracing and monitoring
- You are familiar with high-performance, large-scale ML systems
- You have a knack for troubleshooting complex systems and enjoy solving challenging problems
- You are proactive in identifying problems, performance bottlenecks, and areas for improvement
- You take pride in building and operating scalable, reliable, secure systems
- You are familiar with monitoring tools such as Prometheus, Grafana, or similar
- You are comfortable with ambiguity and rapid change
Preferred skills and experience:
- 8+ years building core infrastructure
- Experience running inference clusters at scale
- Experience operating orchestration systems such as Kubernetes at scale
#LI-DB1
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate.
Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.