
Data Engineer at Weekday (Weekday's client services)

Weekday (Weekday's client services) · Delhi, India · Hybrid


This role is for one of Weekday’s clients
Minimum Experience: 5 years
Location: Delhi NCR
Job Type: Full-time

Requirements

We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will bring strong expertise in SQL, AWS cloud services, and data pipeline development to build and maintain scalable, reliable, and efficient data infrastructure. You will play a key role in designing and optimizing data workflows that power critical business insights and decision-making across the organization.

This position is perfect for an individual who thrives in fast-paced, data-driven environments and has a strong command of modern AWS data stack tools such as S3, Glue, Kinesis, Redshift, and Databricks.

Key Responsibilities

  • Data Pipeline Development:
    Design, build, and maintain robust, scalable, and high-performance data pipelines to support analytics, reporting, and machine learning initiatives.
    Implement ETL/ELT processes for structured and unstructured data from multiple sources into centralized data lakes or warehouses.
  • Data Integration & Transformation:
    Utilize AWS Glue, Databricks, and PySpark to transform, clean, and enrich large-scale datasets, ensuring consistency and quality.
    Optimize data transformations and queries for performance and cost efficiency.
  • Data Lake & Warehouse Management:
    Architect, manage, and monitor data lakes on S3 and data warehouses on Redshift, ensuring high data availability and reliability.
    Develop solutions that support both real-time streaming and batch data processing using AWS Kinesis and related services.
  • Automation & Orchestration:
    Build and automate workflows using Airflow, Step Functions, or equivalent orchestration tools to streamline data movement and reduce manual intervention.
  • Data Quality & Governance:
    Ensure data accuracy, completeness, and consistency across systems through validation checks, monitoring, and proactive troubleshooting.
    Collaborate with data analysts and business teams to understand data needs and provide reliable, timely datasets.
  • Performance Optimization:
    Continuously analyze and optimize SQL queries, data pipelines, and storage architecture to improve efficiency, scalability, and performance.
  • Collaboration & Documentation:
    Work closely with data scientists, analysts, and business stakeholders to deliver scalable data solutions that support business objectives.
    Maintain detailed technical documentation, data flow diagrams, and process manuals for ongoing support and scalability.

Key Skills and Requirements

  • Experience: Minimum 5 years of hands-on experience in data engineering with strong exposure to AWS-based data ecosystems.
  • Technical Expertise:
    • Proficiency in SQL, AWS Glue, Kinesis, Redshift, S3, Databricks, and PySpark.
    • Experience in Data Lake design, ETL pipeline orchestration, and workflow automation.
    • Familiarity with Airflow, Step Functions, or similar orchestration frameworks.
  • Data Management: Strong understanding of data modeling, performance tuning, and data quality management.
  • Analytical Skills: Ability to perform detailed data analysis and identify opportunities for process improvement.
  • Cloud Expertise: Deep understanding of AWS architectures, cost optimization, and infrastructure best practices.
  • Soft Skills: Excellent communication, problem-solving, and collaboration skills in cross-functional environments.
  • Work Mode: Ability to work onsite 2 days/week from CyberHub, Gurugram.
