
Hybrid Data Engineer (Databricks) at Coherent Solutions

Coherent Solutions · Americas, United States · Hybrid


Project Description

We are looking for an experienced Data Engineer with deep expertise in Databricks to join our advanced analytics and data engineering team. The ideal candidate will play a key role in designing, building, and optimizing large-scale data solutions on the Databricks platform, supporting business intelligence, advanced analytics, and machine learning initiatives. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performance data pipelines and architectures.

Technologies

  • Databricks (including Spark, Delta Lake, MLflow)
  • Python/Scala
  • SQL
  • ETL concepts
  • Distributed data processing
  • Data warehousing
  • Cloud Platforms & Storage

What You'll Do

  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow);
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security;
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP);
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights;
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage;
  • Develop and maintain documentation, data models, and technical standards;
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments;
  • Stay current with emerging trends and technologies in data engineering and the Databricks ecosystem;

Job Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field;
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow);
  • Strong proficiency in Python and/or Scala for data processing;
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts;
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage);
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases;
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering;
  • Experience with data governance, security, and compliance in cloud environments;
  • Excellent problem-solving, communication, and leadership skills;
  • English: Upper Intermediate level or higher;

What We Offer

The global benefits package includes:

  • Technical and non-technical training for professional and personal growth;
  • Internal conferences and meetups to learn from industry experts;
  • Support and mentorship from an experienced employee to foster your professional growth and development;
  • Internal startup incubator;
  • Health insurance;
  • English courses;
  • Sports activities to promote a healthy lifestyle;
  • Flexible work options, including remote and hybrid opportunities;
  • Referral program for bringing in new talent;
  • Work anniversary program and additional vacation days.
