- Senior
- Office in Bangalore
This position is based in Bangalore, India.
Calix is leading a service provider transformation to deliver a differentiated subscriber experience around the Smart Home and Business, while monetizing the network using Role-based Cloud Services, Telemetry, Analytics, Automation, and the deployment of Software-Driven Adaptive Networks.
As part of a high-performing global team, the Calix Cloud Data Engineer will play a significant role in architecture design, implementation, and technical leadership across data ingestion, extraction, transformation, and analytics.
Responsibilities and Duties:
- Work closely with Cloud product owners to understand and analyze product requirements and provide feedback.
- Develop conceptual, logical, and physical data models and metadata solutions.
- Design and manage an array of data design deliverables, including data models, data diagrams, data flows, and the corresponding data dictionary documentation.
- Determine database structural requirements by analyzing client operations, applications, and data from existing systems.
- Provide technical leadership of software design to meet requirements for service stability, reliability, scalability, and security.
- Guide technical discussions within the engineering group and make technical recommendations. Conduct design and code reviews with peer engineers.
- Guide the testing architecture for large-scale data ingestion and transformation.
- Serve in a customer-facing engineering role, debugging and resolving field issues.
Qualifications:
- This role may require travel to attend face-to-face meetings and Calix-sponsored events.
- 10-12 years of software engineering experience delivering quality products.
- 10+ years of development experience performing Data modeling, master data management and building ETL/data pipeline implementations.
- Cloud Platforms: Proficiency in both Google Cloud Platform (GCP) services (BigQuery, Dataflow, Dataproc, PubSub/Kafka, Cloud Storage) and AWS services (Redshift, Glue, Kinesis, S3).
- Data Pipelines: Proven experience in designing, building, and maintaining scalable data pipelines across GCP and AWS.
- Big Data Technologies: Knowledge of big data processing frameworks such as Apache Spark, Flink, and Beam, particularly in conjunction with Dataproc, EKS, and AWS EMR.
- Data Transformation: Proficient in using dbt/Dataform for data transformation and modeling within the data warehouse environment.
- Programming Languages: Strong knowledge of SQL and at least one programming language (Python, Java, or Scala).
- Table Formats: Proficient in working with open table formats such as Apache Hudi and Apache Iceberg for efficient data management and processing.
- Data Visualization: Experience with BI tools such as Google Data Studio, Looker, and ThoughtSpot, and with BigQuery BI Engine for optimized reporting.
- Containerization: Understanding of Docker and Kubernetes for deploying data applications.
- Data Governance: Knowledge of data catalog tools (e.g., DataHub, Collibra, Alation) for ingesting and maintaining metadata, data quality, automated lineage, schema changes, and tagging policies is a plus.
- Problem Solving: Strong analytical and troubleshooting skills, particularly in complex data scenarios.
- Collaboration: Ability to work effectively in a team environment and engage with cross-functional teams.
- Communication: Proficient in conveying complex technical concepts to stakeholders.
- Knowledge of data governance, security best practices, and compliance regulations in both GCP and AWS environments.
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications (e.g., Google Cloud Professional Data Engineer, AWS Certified Data Analytics – Specialty).
Location:
- India – Flexible hybrid work model (work from the Bangalore office 20 days per quarter)