OneTeamAnywhere · Singapore · Remote
Job details
About The OTA Client
- Trailblazer in employing AI to transform the sourcing industry.
- As an AI-powered platform, we are devoted to enabling retailers to source smarter and sell better.
- Our primary mission is to digitize global B2B sourcing and international trade, focusing on making international sourcing straightforward.
- We understand the complexities businesses face in sourcing, driving us to offer seamless solutions.
- Our platform is designed as an affordable, easy-to-use solution that allows businesses to source products from abroad without any hassle.
- Headquartered in Singapore, we are a global company serving clients across various countries.
- Our growth is propelled by esteemed investors across Southeast Asia.
As a Data Engineer, you'll be a cornerstone of our AI team, playing a pivotal role in the design, development, and implementation of our data infrastructure. You'll collaborate closely with cross-functional teams to design and maintain robust data pipelines, empowering data-driven decision-making. This role offers the chance to architect and uphold a state-of-the-art cloud-based data platform, ensuring data accuracy, consistency, and reliability.
We are looking for passionate professionals with a knack for problem-solving, who are ready to build from scratch, are comfortable with the inherent risks of early-stage startups, and display a readiness to learn, adapt, and grow.
What You'll Be Doing
- Designing, building, and maintaining data storage and processing infrastructure
- Developing and managing data pipelines using modern tools and following industry best practices
- Ensuring data quality and reliability through comprehensive validation for accuracy and completeness
- Collaborating with product, tech, and other internal teams to fully understand business requirements and translate them into pragmatic and effective technical solutions
- Identifying and implementing process improvements, and continuously seeking new technologies, techniques, and methodologies to optimize data flow
- Exploring and implementing cutting-edge AI technology to build and bolster our AI capabilities
What We're Looking For
- 5-7+ years of experience in a data engineering role
- Proficiency in Python, particularly for building data pipelines and interacting with REST APIs and relational and non-relational databases
- Advanced proficiency in SQL, covering both transactional (OLTP) and analytical (OLAP) use cases
- Experience with data warehousing, data processing, and pipeline building and orchestration using modern tools such as Airflow/Dagster, Airbyte/dlt, and dbt (required)
- Experience designing and building dimensional models (Kimball) optimized for reporting and analytics (required)
- Competence in building solutions on the modern data tech stack, including the ability to set up and configure the infrastructure from scratch. Experience with AWS Services, Docker, CI/CD, Kubernetes, and Infrastructure as Code a strong plus
- Knowledge of AI-related concepts and tooling (e.g., LLMs, vector databases, Python frameworks) a strong plus
- Strong communication and collaboration skills, with the ability to engage effectively with both technical and non-technical stakeholders.
- Good problem-solving skills and ability to troubleshoot and optimize performance.
- Comfort with ambiguity and the ability to navigate a fast-paced start-up environment.
What We Offer
- A chance to play a key role in executing ideas and plans, with mentorship and support.
- An opportunity to work on significant projects and launch them from scratch.
- A fantastic working environment that promotes transparency and strong working relationships.
- A customer-centric culture.
- Competitive rewards and attractive benefits.
- An opportunity to further your career remotely, with occasional meetings in Singapore or Hong Kong as needed.