Data Engineer (Hybrid)

Awign · Hybrid

About the job

About Awign Expert:

Awign Expert is a division of Awign, India's largest work-as-a-service platform. We connect skilled professionals with project-based opportunities at top companies, handling onboarding, feedback, conflict resolution, and payroll. Our mission is to let professionals focus on their work by managing administrative tasks for them and providing access to a network of renowned companies and rewarding assignments.

Contract Duration: 6 months initially.

Experience Required: 4-6 years

Mode of Engagement: Onsite

Basic Qualifications:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Proven experience as a Data Engineer with expertise in ETL techniques is a must.
  • Strong programming skills in languages such as Python, Java, or Scala.
  • Ability to scrape and transform data from publicly available web sources (see the sketch after this list).
  • Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
  • Proficiency in SQL and experience working with relational and non-relational databases.
  • Knowledge of data warehousing concepts and architectures.
  • Familiarity with big data technologies such as Hadoop, Spark, and Kafka.
  • Experience with data modeling tools and techniques.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
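
As an illustration of the web-scraping skill listed above, here is a minimal sketch of pulling a public HTML table and reshaping it with pandas; the URL is a hypothetical placeholder, not a source named in this posting.

    # Minimal sketch: scrape a public HTML table into a tidy DataFrame.
    # The URL below is a hypothetical placeholder.
    from io import StringIO

    import pandas as pd
    import requests
    from bs4 import BeautifulSoup

    def scrape_table(url: str) -> pd.DataFrame:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        table = BeautifulSoup(resp.text, "html.parser").find("table")
        if table is None:
            raise ValueError(f"no <table> found at {url}")
        df = pd.read_html(StringIO(str(table)))[0]  # parse the table into a DataFrame
        # Light transformation: normalize column names, drop fully empty rows.
        df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
        return df.dropna(how="all")

    if __name__ == "__main__":
        print(scrape_table("https://example.com/public-data").head())  # placeholder URL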

About the role:

As a Data Engineer on our Data Science team, you will play a critical role in enriching and maintaining the central repository of datasets that the team uses for advanced analytics and machine learning on financial and market data. You will work closely with cross-functional teams to develop and implement robust data pipelines that automate the updating of data in our cloud-based repository in ready-to-use form, increasing data accessibility for the entire organization.
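
To make the pipeline idea concrete, here is a minimal sketch of one automated refresh step, assuming AWS S3 as the cloud repository and boto3 for access; the bucket name, object key, and fetch step are hypothetical placeholders.

    # Minimal sketch: refresh one dataset in a cloud repository (assumed: S3 via boto3).
    # Bucket name, key layout, and the fetch step are hypothetical placeholders.
    import io

    import boto3
    import pandas as pd

    BUCKET = "central-datasets"          # hypothetical bucket
    KEY = "market/daily_prices.parquet"  # hypothetical dataset key

    def fetch_source_data() -> pd.DataFrame:
        # Stand-in for the real extraction (API call, scrape, DB query, ...).
        return pd.DataFrame({"ticker": ["AAA", "BBB"], "close": [101.5, 87.2]})

    def publish(df: pd.DataFrame) -> None:
        buf = io.BytesIO()
        df.to_parquet(buf, index=False)  # store in a ready-to-use columnar format
        boto3.client("s3").put_object(Bucket=BUCKET, Key=KEY, Body=buf.getvalue())

    if __name__ == "__main__":
        publish(fetch_source_data())

In practice a step like this would run on a scheduler (e.g., Airflow or cron) so the repository stays current without manual intervention.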

Key Responsibilities:

  • ETL Development:
      • Design, develop, and maintain efficient ETL processes for multi-scale data sets.
      • Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
      • Collaborate with cross-functional teams to understand data requirements and business logic.
  • Data Pipeline Architecture:
      • Architect, build, and maintain scalable, high-performance data pipelines.
      • Evaluate and implement new technologies to enhance data pipeline efficiency and reliability.
      • Build pipelines that extract ad-hoc, sector-specific datasets through scraping.
  • Data Modelling:
      • Develop and implement data models to support analytics and reporting needs.
      • Optimize database structures for performance and scalability.
  • Data Quality and Governance:
      • Implement data quality checks and governance processes to ensure data integrity (see the sketch after this list).
      • Collaborate with stakeholders to define and enforce data quality standards.
  • Documentation and Communication:
      • Document ETL processes, data models, and other relevant information.
      • Communicate complex technical concepts to non-technical stakeholders effectively.
  • Cross-functional Collaboration:
      • Collaborate internally with the Quant team and developers to build and optimize the data pipelines, and externally with stakeholders to understand the business requirements for enriching the cloud database.
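
As one concrete illustration of the validation and data-quality responsibilities above, here is a minimal sketch of assertion-style checks run before a dataset is published; the column names and rules are hypothetical, not taken from this posting.

    # Minimal sketch of pre-publish data quality checks; columns and rules are hypothetical.
    import pandas as pd

    def validate(df: pd.DataFrame) -> list[str]:
        """Return a list of rule violations; an empty list means the frame passes."""
        errors = []
        if df.empty:
            errors.append("dataset is empty")
        if df["ticker"].isna().any():
            errors.append("null values in key column 'ticker'")
        if df.duplicated(subset=["ticker", "date"]).any():
            errors.append("duplicate (ticker, date) rows")
        if (df["close"] <= 0).any():
            errors.append("non-positive values in 'close'")
        return errors

    sample = pd.DataFrame({
        "ticker": ["AAA", "BBB"],
        "date": ["2024-01-02", "2024-01-02"],
        "close": [101.5, 87.2],
    })
    problems = validate(sample)
    if problems:
        raise ValueError("quality checks failed: " + "; ".join(problems))
    print("all checks passed")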
