About the job
To succeed in this position, you should have strong analytical skills, the ability to combine data from different sources, and familiarity with several programming languages and machine learning methods. If you are detail-oriented, possess excellent organizational skills, and have experience in this field, we’d like to hear from you.
What are the main responsibilities of this role?
- Design, build, and maintain data infrastructure.
- Manage large datasets, ensuring data quality and reliability.
- Develop and manage ETL (Extract, Transform, Load) processes.
- Create and optimize data pipelines.
- Collaborate with Data Scientists to understand data requirements and support their needs.
- Monitor and maintain the performance of databases.
- Ensure data security and compliance with relevant regulations.
- Troubleshoot and resolve data-related issues.
- Document data processes and infrastructure.
What do you need to succeed in this role?
- 2+ years of experience working in data engineering or a related field.
- Advanced technical background, including different languages, platforms, and tools for maintaining data infrastructure.
- Knowledge of relational databases: Experience with SQL databases such as MySQL or PostgreSQL.
- Knowledge of NoSQL databases: Familiarity with NoSQL databases such as MongoDB, Cassandra, or DynamoDB.
- Data pipeline tools: Knowledge of data pipeline tools such as Apache Kafka and Apache Airflow.
- Cloud platforms: Experience working with cloud platforms such as AWS, Azure, or GCP.
- Search and analytics systems: Familiarity with search and analytics systems such as Elasticsearch, Solr, Splunk, or OpenSearch.
- Data warehousing and analytics: Experience with data storage and analytics on platforms like Snowflake, Databricks, Amazon Athena, Amazon Redshift, or Google BigQuery.
- Large-scale data storage and processing systems: Experience with systems like Hadoop and Azure Data Lake.
- Orchestration and containers: Proficiency in using orchestration and container tools such as Docker, Kubernetes, ECR, or ECS.
- Distributed data processing: Experience with distributed data processing using Apache Spark, Flink, or Dataflow.
- Intermediate to strong problem-solving skills.
- Intermediate to strong teamwork and communication skills.
- Advanced English skills (B2+).
- Intermediate ability to communicate with different stakeholders: team, client, and managers.
- Advanced ability to write documentation and provide clear explanations of complex data processes.
- Strong knowledge of scrum.
- Strong knowledge of estimation techniques.
- Strong background in writing user stories and acceptance criteria.
What do we offer?
- Access to Rootstrap University, conferences/certifications, and a mentorship program for your professional growth.
- A learning bonus.
- Opportunities to organize cross-functional initiatives and receive 360 Feedback to improve your skills continuously.
- The flexibility to work remotely or from our offices in Montevideo, Buenos Aires, and Medellín, with a flexible time schedule and Workation program.
- Gym benefits, psychological counseling, and weekly lunch reimbursements, with special meals served in the offices.
- An Onboarding kit and access to cutting-edge technologies and tools to make your work easier.
- Referral program bonuses, after-office events, prizes, and gifts for special occasions to celebrate your achievements.
- A People Care representative to support your well-being and personal development.
How We Work
- We offer a flexible and diverse work environment that fosters multicultural talent.
- We value autonomy and creativity; we want people to bring ideas to the table and to think like leaders.
- We put our focus on real human connections.
- We are committed to excellence.
All of our candidate searches are covered under Uruguayan Law 19691, which promotes the inclusion of people with disabilities.