
Data Engineer at Syngenta Group

Syngenta Group · Pune, India · Hybrid


Company Description

About Syngenta
At Syngenta Group, we're a global community of 56,000 innovators across 90 countries, united by a 250-year legacy of agricultural excellence. As the world's most local agricultural technology partner, we create tailor-made solutions that transform farming while protecting our planet, driven by our commitment to innovation, ethics, and integrity. Through our inclusive environment and diverse perspectives, we pioneer breakthrough solutions for farmers, society, and future generations. Join our worldwide teams of agricultural pioneers in creating a more resilient and equitable food system for all. 

Job Description

We’re hiring an experienced Data Engineer to support a large-scale data transformation project. You’ll be embedded within a high-performing team delivering mission-critical data platforms using Databricks and AWS.

This is a hands-on engineering role focused on architecture, implementation, and optimization of robust data solutions at scale.

Key Responsibilities

  • Design, build, and deploy data pipelines and platforms using Databricks and cloud infrastructure (preferably AWS)
  • Lead or contribute to end-to-end implementation of data solutions in enterprise environments
  • Collaborate with architects, analysts, and client stakeholders to define technical requirements
  • Optimize data systems for performance, scalability, and security
  • Ensure data governance, quality, and compliance in all solutions

Required Skills & Experience

  • 7+ years of experience in data engineering
  • Deep expertise with Databricks (Spark, Delta Lake, MLflow, Unity Catalog)
  • Strong experience with cloud platforms, ideally AWS (S3, Glue, Lambda, etc.)
  • Proven track record of delivering complex data solutions in commercial domains such as Sales, Marketing, Pricing, and Customer Insights
  • At least 4 years of hands-on data pipeline design and development experience with Databricks, including platform-specific features such as Delta Lake, UniForm (Iceberg), Delta Live Tables (Lakeflow Declarative Pipelines), and Unity Catalog
  • Strong programming skills in SQL, stored procedures, and object-oriented programming languages (Python, PySpark, etc.)
  • Experience with CI/CD for data pipelines and infrastructure-as-code tools (e.g., Terraform)
  • Strong understanding of data modeling, Lakehouse architectures, and data security best practices
  • Familiarity with NoSQL databases and container management systems
  • Exposure to AI/ML tools (e.g., MLflow), prompt engineering, and modern agentic data and AI workflows
  • The ideal candidate holds the Databricks Data Engineer Associate and/or Professional certification and has delivered multiple Databricks projects

Nice to Have

  • Experience with Azure or GCP in addition to AWS
  • Knowledge of DevOps practices in data engineering
  • Familiarity with regulatory frameworks (e.g., GDPR, SOC2, PCI-DSS)
  • Amazon Redshift and AWS Glue/Spark (Python, Scala)

Qualifications

Bachelor of Engineering in Computer Science

Additional Information

Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, marital or veteran status, disability, or any other legally protected status. 

#LI-Hybrid
