- Senior
- Office in Bengaluru
Job Description:
For decades, Altera has been at the forefront of programmable logic technology. Our commitment to innovation has empowered countless customers to create groundbreaking solutions that have transformed industries.
Join us in our journey to becoming the #1 FPGA company!
We are looking for a highly skilled Senior Data Engineer to join our Data & Analytics team and help lead the design and development of modern data platforms. This role focuses on building scalable, high-performance data solutions using tools such as Microsoft Fabric, Azure Data Factory, and PySpark. The ideal candidate will be passionate about data warehousing, data engineering, data architecture, automation, and DevOps practices, and will work closely with business analysts, engineering teams, and data scientists to enable robust, real-time, and secure data operations.
Responsibilities
- Act as an expert in data engineering, ETL/ELT, and modern data platforms (MS Fabric, Synapse, Databricks, etc.).
- Design, build, and optimize scalable data pipelines using Azure Data Factory, Databricks, and Microsoft Fabric.
- Develop ETL/ELT workflows using PySpark, SQL, and data orchestration frameworks (a brief PySpark sketch follows this list).
- Architect and implement data lakehouse and data warehouse solutions on Microsoft Fabric.
- Apply a strong understanding of data warehousing, data modeling, data engineering principles, cloud computing, and the Microsoft Fabric platform.
- Demonstrate strong SQL development skills, including writing complex queries and stored procedures.
- Collaborate with data scientists and analysts to ensure efficient data delivery and model readiness.
- Leverage CI/CD practices and infrastructure-as-code (IaC) to deploy and maintain data pipelines in production.
- Monitor and troubleshoot data pipelines, ensuring performance, reliability, and data quality.
- Enforce data governance, security, and compliance best practices across platforms.
- Mentor junior engineers and contribute to architectural decisions, coding standards, and engineering best practices.
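To give a feel for the pipeline work described above, here is a minimal PySpark ETL sketch. It is an illustration only, assuming a generic Spark environment; the paths, table names, and column names are hypothetical and not part of this posting.

```python
# Minimal, illustrative PySpark ETL sketch.
# All paths and column names below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

# Extract: read raw events from a landing zone (hypothetical location).
raw = spark.read.json("/landing/orders/")

# Transform: deduplicate, drop incomplete records, derive a date column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate into a curated, analytics-ready table.
daily_revenue = (
    cleaned.groupBy("order_date")
           .agg(F.sum("amount").alias("revenue"))
)

# Load: write the curated output in a columnar, lakehouse-friendly format.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/curated/daily_revenue/")
```

In practice, a job like this would typically be orchestrated by Azure Data Factory or a Fabric pipeline and promoted through CI/CD rather than run ad hoc.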
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
- 7+ years of experience in data engineering with strong knowledge of cloud data platforms.
- Proficiency in PySpark, Azure Data Factory, Databricks, and SQL.
- Experience working with Microsoft Fabric or similar unified analytics platforms.
- Solid understanding of data lakehouse and data warehouse architectures.
- Solid understanding of data modeling concepts.
- Familiarity with CI/CD pipelines, version control (e.g., Git), and deployment automation tools (e.g., Azure DevOps, GitHub Actions).
- Strong problem-solving skills and ability to work across technical and business teams.
Preferred
- Experience with Delta Lake, Synapse Analytics, and other Azure analytics services (a minimal Delta Lake sketch follows this list).
- Knowledge of infrastructure-as-code tools (e.g., Terraform, ARM templates).
- Microsoft certifications in Azure Data Engineering or Fabric.
- Background in implementing data observability and lineage tools (e.g., Unity Catalog, Purview).
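As a small illustration of the preferred Delta Lake experience, here is a minimal upsert (MERGE) sketch. It assumes the delta-spark package is available (it ships with Databricks and Fabric Spark runtimes); the table path and key column are hypothetical.

```python
# Illustrative Delta Lake MERGE (upsert) sketch.
# Assumes delta-spark is installed and a Delta table already exists at the target path.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("delta_merge_example")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Staged updates (hypothetical path) and the curated Delta target.
updates = spark.read.parquet("/staging/customers/")
target = DeltaTable.forPath(spark, "/curated/customers/")

# Upsert: update rows that match on the key, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```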