About the job
What You'll Do:
- This role is for a consultant with strong Scala skills and solid DynamoDB experience
- The developer will use Scala and DynamoDB to improve the performance of Scala Spark jobs
- Develop and optimize Scala-based Spark jobs to process large datasets efficiently.
- Implement best practices in Scala coding to ensure high performance, reliability, and scalability of Spark applications.
- Design and implement data models and storage solutions using Amazon DynamoDB to support high-performance data processing tasks.
- Identify performance bottlenecks in existing Scala Spark jobs and DynamoDB interactions, and provide actionable solutions.
- Optimize Spark job configurations and data partitioning to reduce processing time and resource consumption.
- Fine-tune DynamoDB read/write operations, indexing, and data distribution for optimal performance.
- Work closely with data engineers, architects, and DevOps teams to integrate Scala Spark jobs into a larger data processing ecosystem.
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable technical solutions.
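For a sense of the day-to-day work, here is a minimal sketch of the kind of Spark tuning described above. It assumes a Spark runtime is available; the dataset path, the `userId` column, and the partition count of 64 are illustrative placeholders, not details from this role:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object EventCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-optimization-sketch")
      // Tune shuffle parallelism to the cluster size rather than the default of 200.
      .config("spark.sql.shuffle.partitions", "64")
      .getOrCreate()

    // Hypothetical input path and schema; adjust to the real dataset.
    val events = spark.read.parquet("s3://example-bucket/events/")

    // Repartition on the aggregation key to avoid skewed shuffles,
    // then cache only if the result is reused across multiple actions.
    val byUser = events.repartition(64, col("userId")).cache()

    val counts = byUser.groupBy("userId").count()
    counts.write.mode("overwrite").parquet("s3://example-bucket/event-counts/")

    spark.stop()
  }
}
```

In practice, choosing the partition count and key (and the matching DynamoDB partition-key design on the storage side) to match the data's actual distribution is the core of the optimization work this role calls for.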