Hybrid Data Engineer (GCP, Azure)

Artefact · Pune, Maharashtra, India · Hybrid

Artefact is a new generation of data service providers specialising in data consulting and data-driven digital marketing. It is dedicated to transforming data into business impact across the entire value chain of organisations. We are proud to say we’re enjoying skyrocketing growth.

The backbone of our consulting missions, our Data Consulting team today has more than 400 consultants covering all of Artefact's offers (and more): data marketing, data governance, strategy consulting, product ownership…

What will you be doing?
As a Data Engineer, your role involves crafting and maintaining robust data pipelines in Python and SQL to ensure efficient extraction, transformation, and loading (ETL) of data (a minimal sketch of this kind of pipeline follows the responsibilities list below).

Your responsibilities will include:

  • Data Pipeline Development: Building and optimising data pipelines to facilitate seamless data flow across systems and platforms.
  • Database Management: Managing databases, ensuring their integrity, and implementing data storage and retrieval solutions.
  • Cloud Services Integration: Leveraging cloud services such as MS Azure, GCP, and AWS to architect and deploy scalable data solutions.
  • Machine Learning Integration: Collaborating with teams to integrate machine learning models into data pipelines for enhanced data processing.
  • Utilising Spark & Kafka: Implementing and operating Spark and Kafka for real-time data processing and analytics.

What are we looking for?

  • Proficiency in Python, SQL, and database management.
  • Experience with ETL data pipelines, cloud services (MS Azure and GCP preferred), ML modelling, and Spark & Kafka.
  • Proven problem-solving skills and a solution-oriented mindset.
  • Excellent communication skills to collaborate effectively within teams and with stakeholders.
  • Strong business acumen with an interest in business-facing roles.
  • Adaptability and a start-up mentality to thrive in a dynamic environment.
  • Candidates with similar skill sets and experience have typically excelled at technology or consultancy firms.
  • Successful candidates often hold degrees in Computer Science or Electronics and Communication Engineering.