About Us

DOPP is the first fully on-chain derivatives exchange built on Starknet, from the matching engine to settlement. As capital-efficient as a centralized exchange, DOPP aims to serve as the foundational layer for options products within DeFi.

Position Overview

We are seeking an experienced Senior Data Engineer with a strong background in Web3 technologies. You will be responsible for designing, implementing, and maintaining the data infrastructure that supports our exchange, ensuring scalability, reliability, and performance.

Key Responsibilities

  • Design and develop robust real-time data pipelines to collect, process, and store large volumes of data from various decentralized sources.
  • Design and implement scalable data streaming using Apache Kafka.
  • Take a leading role in designing a self-healing data pipeline that processes digital-asset and financial data.
  • Implement data architecture and data models that support scalable and efficient data processing and analytics.
  • Ensure correctness and reliability of real-time insights.
  • Collaborate with developers, data scientists, and product managers to define data requirements and ensure seamless data integration.
  • Develop and maintain scalable ETL/ELT pipelines using Apache Spark, Kafka, and Flink.
  • Explore and integrate cutting-edge Web3 indexers, data accessors, and other development tools to drive continuous innovation.
  • Alongside the team, take ownership of project documentation and the knowledge base, including manuals, static API specs, and any relevant playbooks.
  • Ensure data security and compliance with industry standards and regulations.
  • Stay up-to-date with the latest advancements in blockchain and data engineering, bringing innovative solutions to the team.

Qualifications

  • 4+ years of experience in data engineering, developing backend services, and working with data-pipeline tools within Web3 (mandatory); experience with derivatives platforms is a plus.
  • Strong proficiency in programming languages such as Python, Go, and Scala.
  • Extensive experience with SQL and NoSQL databases.
  • Proficient in designing and implementing data pipelines using Apache Spark, Kafka, Flink.
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
  • Strong understanding of data security principles and practices.
  • Self-driven learner who is highly motivated and unafraid to explore new technologies.
  • Effective communication and collaboration skills.
  • Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment.
  • Nice to have: experience with big data solutions and the ability to code in Rust.

What We Offer

  • Competitive salary and equity options.
  • Flexible work environment with remote work options.
  • Opportunity to work on cutting-edge technology in the rapidly evolving Web3 space.
  • Collaborative and inclusive company culture.
  • Professional development and continuous learning opportunities.