Elasticsearch remote and work-from-home jobs
Software Engineer
Trimble Inc. · Bonn, North Rhine-Westphalia, Germany (Remote) · Hybrid
Solutions Engineer
SearchStax · United States (Remote) · Hybrid
Backend Software Engineer
Leadfeeder · Germany (Remote) · Hybrid
Solution Engineer
deepset · Germany (Remote) · Hybrid
Senior Member of Technical Staff
Oracle · Spain · Hybrid
Senior Python LLM Developer
EPAM Systems · Colombia (Remote) · Hybrid
Senior Data Engineer (Kafka, Python, Elasticsearch) (CPT Remote)
Datafin Recruitment · South Africa · Hybrid
About the job
Cape Town, Western Cape (Remote)
ENVIRONMENT:
An award-winning leader in contact-centre AI software seeks a passionate Data Engineer with expertise in Kafka pipelines and a thorough understanding of Elasticsearch, eager to contribute to cutting-edge technology and make a difference in the Financial Services industry. You will design, implement, and maintain robust data pipelines, and troubleshoot data pipeline and Elasticsearch issues while ensuring the data infrastructure aligns with business needs. The ideal candidate has proven experience in designing and implementing data pipelines, including end-to-end testing of analytics pipelines, and in managing and optimising Elasticsearch clusters, including performance tuning and scalability. Proficiency with Python, Scala, or Java, as well as with DevOps practices, is also required.
DUTIES:
- Design, implement, and maintain robust data pipelines, ensuring the efficient and reliable flow of data across systems.
- Develop and maintain Elasticsearch clusters, fine-tuning them for high performance and scalability.
- Collaborate with cross-functional teams to extract, transform, and load (ETL) data into Elasticsearch for advanced analytics and search capabilities.
- Troubleshoot data pipeline and Elasticsearch issues, ensuring the integrity and availability of data for analytics and reporting.
- Participate in the design and development of data models and schemas to support business requirements.
- Continuously monitor and optimise data pipeline and Elasticsearch performance to meet growing data demands.
- Collaborate with Data Scientists and Analysts to enable efficient data access and query performance.
- Contribute to the evaluation and implementation of new technologies and tools that enhance Data Engineering capabilities.
- Demonstrate strong analytical, problem-solving, and troubleshooting skills to address data-related challenges.
- Collaborate effectively with team members and stakeholders to ensure data infrastructure aligns with business needs.
- Embody the company values of playing to win, putting people over everything, driving results, pursuing knowledge, and working together.
- Implement standards, conventions and best practices.
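As a rough illustration of the ETL duties above, here is a minimal sketch of a transform step that reshapes a raw pipeline event into an Elasticsearch-ready document. The field names (`call_id`, `agent`, `duration_ms`) are hypothetical and not the employer's actual schema:

```python
import json
from datetime import datetime, timezone

def transform_event(raw: dict) -> dict:
    """Reshape a raw event into a document ready for Elasticsearch
    indexing. Field names are illustrative only, not a real schema."""
    return {
        "call_id": raw["id"],
        "agent": raw.get("agent", "unknown"),
        # source systems often emit duration as a millisecond string
        "duration_s": int(raw.get("duration_ms", 0)) // 1000,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    event = json.loads('{"id": "c-42", "agent": "thandi", "duration_ms": "95000"}')
    print(transform_event(event)["duration_s"])  # 95
```

Keeping the transform a pure function like this is what makes the end-to-end testing mentioned below practical: the same function can be exercised in unit tests and inside the Kafka consumer.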
REQUIREMENTS:
- Proven experience in designing and implementing data pipelines.
- Experience with End-to-End Testing of analytics pipelines.
- Expertise in managing and optimising Elasticsearch clusters, including performance tuning and scalability.
- Strong proficiency with data extraction, transformation, and loading (ETL) processes.
- Familiarity with data modeling and schema design for efficient data storage and retrieval.
- Good programming and scripting skills using languages like Python, Scala, or Java.
- Knowledge of DevOps and automation practices related to Data Engineering.
Data Pipelines:
- Kafka / ksqlDB
- Python
- Redis
- Elasticsearch, cluster management and optimisation
- AWS S3
- PostgreSQL
- AWS
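To show how pieces of the stack above could fit together, the following is a stdlib-only sketch (no live Kafka or Elasticsearch connection) that batches transformed records into the NDJSON body expected by the Elasticsearch `_bulk` API. The index name and record fields are hypothetical:

```python
import json

def to_bulk_payload(docs: list[dict], index: str = "contact-centre-analytics") -> str:
    """Build an NDJSON body for the Elasticsearch _bulk API:
    one action line followed by one source line per document."""
    lines = []
    for doc in docs:
        # action line tells Elasticsearch where to index the next line
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["call_id"]}}))
        lines.append(json.dumps(doc))
    # the _bulk API requires the body to end with a newline
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    payload = to_bulk_payload([{"call_id": "c-1", "agent": "sipho"}])
    print(payload.count("\n"))  # 2: one action line, one source line
```

In a real pipeline a Kafka consumer would accumulate a batch of transformed messages and POST this payload to the cluster; bulk-sizing that batch is one of the performance-tuning levers the role calls for.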
- Experience with Data Engineering in an Agile / Scrum environment.
- Familiarity with ksqlDB / Kafka or other stream processing frameworks.
- Familiarity with Data Lakes and how to query them.
- Experience with integrating Machine Learning models into data pipelines.
- Familiarity with other data-related technologies and tools.
- Strong analytical and problem-solving abilities, with a keen attention to detail.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- A commitment to staying up to date with the latest developments in Data Engineering and technology.
- Alignment with company values and a dedication to driving positive change through data.