
Hybrid Senior Software Engineer (Data Platform) at Sibros

Sibros · Pune, India · Hybrid

Apply now

About the Role:

Job Title: Senior Software Engineer

Reporting to: Engineering Manager

Location: Pune, India

Job Type: Full-Time

Experience: 6-9 years

At Sibros, we are building the foundational data infrastructure that powers the software-defined future of mobility. One of our most impactful products—Deep Logger—enables rich, scalable, and intelligent data collection from connected vehicles, unlocking insights that were previously inaccessible.

Our platform ingests high-frequency telemetry, diagnostic signals, user behavior, and system health data from vehicles across the globe. We transform this into actionable intelligence through real-time analytics, geofence-driven alerting, and predictive modeling for use cases like trip intelligence, fault detection, battery health, and driver safety.

We’re looking for a Senior Software Engineer to help scale the backend systems that support Deep Logger’s data pipeline—from ingestion and streaming analytics to long-term storage and ML model integration. You’ll play a key role in designing high-throughput, low-latency systems that operate reliably in production, even as data volumes scale to billions of events per day.

In this role, you’ll collaborate across firmware, data science, and product teams to deliver solutions that are not only technically robust, but also critical to safety, compliance, and business intelligence for OEMs and fleet operators.

This is a unique opportunity to shape the real-time intelligence layer of connected vehicles, working at the intersection of event-driven systems, cloud-native infrastructure, and automotive-grade reliability.

What you’ll do:

  • Lead the Design and Evolution of Scalable Data Systems: Architect end-to-end real-time and batch data processing pipelines that power mission-critical applications such as trip intelligence, predictive diagnostics, and geofence-based alerts. Drive system-level design decisions and guide the team through technology tradeoffs.
  • Mentor and Uplift the Engineering Team: Act as a technical mentor to junior and mid-level engineers. Conduct design reviews, help grow data engineering best practices, and champion engineering excellence across the team.
  • Partner Across the Stack and the Org: Collaborate cross-functionally with firmware, frontend, product, and data science teams to align on roadmap goals. Translate ambiguous business requirements into scalable, fault-tolerant data systems with high availability and performance guarantees.
  • Drive Innovation and Product Impact: Shape the technical vision for real-time and near-real-time data applications. Identify and introduce cutting-edge open-source or cloud-native tools that improve system reliability, observability, and cost efficiency.
  • Operationalize Systems at Scale: Own the reliability, scalability, and performance of the pipelines you and the team build. Lead incident postmortems, drive long-term stability improvements, and establish SLAs/SLOs that balance customer value with engineering complexity.
  • Contribute to Strategic Technical Direction: Provide thought leadership on evolving architectural patterns, such as transitioning from streaming-first to hybrid batch-stream systems for cost and scale efficiency. Proactively identify bottlenecks, tech debt, and scalability risks.

What you should know:

  • 7+ years of experience in software engineering with a strong emphasis on building and scaling distributed systems in production environments.
  • Deep understanding of computer science fundamentals including data structures, algorithms, concurrency, and distributed computing principles.
  • Proven expertise in designing, building, and maintaining large-scale, low-latency data systems for real-time and batch processing.
  • Hands-on experience with event-driven architectures and messaging systems like Apache Kafka, Pub/Sub, or equivalent technologies.
  • Strong proficiency in stream processing frameworks such as Apache Beam, Flink, or Google Cloud Dataflow, with a deep appreciation for time and windowing semantics, backpressure, and checkpointing.
  • Demonstrated ability to write production-grade code in Go or Java, following clean architecture principles and best practices in software design.
  • Solid experience with cloud-native infrastructure including Kubernetes, serverless compute (e.g., AWS Lambda, GCP Cloud Functions), and containerized deployments using CI/CD pipelines.
  • Proficiency with cloud platforms, especially Google Cloud Platform (GCP) or Amazon Web Services (AWS), and services like BigQuery, S3/GCS, IAM, and managed Kubernetes (GKE/EKS).
  • Familiarity with observability stacks (e.g., Prometheus, Grafana, OpenTelemetry) and an understanding of operational excellence in production environments.
  • Ability to balance pragmatism with technical rigor, navigating ambiguity to design scalable and cost-effective solutions.
  • Passion for building platforms that empower internal teams and deliver meaningful insights to customers, especially within the automotive, mobility, or IoT domains.
  • Strong communication and collaboration skills, with experience working closely across product, firmware, and analytics teams.

Preferred Qualifications

  • Experience architecting and building systems for large-scale IoT or telemetry-driven applications, including ingestion, enrichment, storage, and real-time analytics.
  • Deep expertise in both streaming and batch data processing paradigms, using tools such as Apache Kafka, Apache Flink, Apache Beam, or Google Cloud Dataflow.
  • Hands-on experience with cloud-native architectures on platforms like Google Cloud Platform (GCP), AWS, or Azure, leveraging services such as Pub/Sub, BigQuery, Cloud Functions, and Kinesis.
  • Experience working with high-performance time-series or analytical databases such as ClickHouse, Apache Druid, or InfluxDB, optimized for millisecond-level insights at scale.
  • Proven ability to design resilient, fault-tolerant pipelines that ensure data quality, integrity, and observability in high-throughput environments.
  • Familiarity with schema evolution, data contracts, and streaming-first data architecture patterns (e.g., Change Data Capture, event sourcing).
  • Experience working with geospatial data, telemetry, or real-time alerting systems is a strong plus.
  • Contributions to open-source projects in the data or infrastructure ecosystem, or active participation in relevant communities, are valued.

What We Offer:

  • Competitive compensation package with performance incentives.
  • A dynamic work environment with a flat hierarchy and the opportunity for rapid career advancement.
  • Collaboration with a team that’s passionate about solving complex problems in the automotive IoT space.
  • Access to continuous learning and development opportunities.
  • Flexible working hours to accommodate different time zones.
  • Comprehensive benefits package including health insurance and wellness programs.
  • A culture that values innovation and promotes a work-life balance.