
AVP, Data Engineer, Core Banking Technology, Group Technology at DBS Bank

DBS Bank · Singapore, Singapore · Onsite

Apply Now

Business Function

Group Technology enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group Technology, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.

 

Roles & Responsibilities

·       Create Scala/Spark/PySpark jobs for data transformation and aggregation

·       Produce unit tests for Spark transformations and helper methods

·       Use Spark and Spark SQL to read Parquet data and create Hive tables using the Scala API

·       Work closely with the Business Analyst team to review test results and obtain sign-off

·       Prepare necessary design/operations documentation for future use

·       Perform peer code quality reviews and act as gatekeeper for quality checks

·       Hands-on coding, usually in a pair programming environment

·       Working in highly collaborative teams and building quality code

·       The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation

·       Familiar with Oracle, MariaDB, GaussDB, MongoDB, Spark Streaming, Kafka and ML

·       Develop applications using the Hadoop tech stack and deliver them effectively, efficiently, on time, in specification and in a cost-effective manner

·       Ensure smooth production deployments as per plan and perform post-deployment verification

·       This Hadoop Developer will play a hands-on role, developing quality applications within the desired timeframes and resolving team queries
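For context, a job of the kind described in the bullets above (read Parquet data with Spark SQL, transform and aggregate it, and persist the result as a Hive table via the Scala API) might look like the following minimal sketch. All paths, table names and column names here are hypothetical illustrations, not DBS code, and a Spark session with Hive support is assumed:

```scala
// Illustrative sketch only: hypothetical paths, tables and columns.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyBalanceAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyBalanceAggregation")
      .enableHiveSupport()   // assumes a Hive-enabled Spark deployment
      .getOrCreate()

    // Read raw Parquet data (placeholder path)
    val txns = spark.read.parquet("/data/raw/transactions")

    // Transform and aggregate: total amount and count per account per day
    val daily = txns
      .withColumn("txn_date", to_date(col("txn_timestamp")))
      .groupBy(col("account_id"), col("txn_date"))
      .agg(sum("amount").as("total_amount"), count("*").as("txn_count"))

    // Persist the result as a Hive table (placeholder database/table name)
    daily.write.mode("overwrite").saveAsTable("analytics.daily_account_totals")

    spark.stop()
  }
}
```

The unit-testing responsibility above would typically exercise the transformation logic (the `groupBy`/`agg` step) against a small in-memory DataFrame using a local-mode Spark session.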

 

Requirements

·       Hadoop data engineer with 4–7 years of total experience and strong experience in Hadoop, Spark, Scala, Java, Hive, Impala, CI/CD, Git, Jenkins, Agile methodologies, DevOps and the Cloudera Distribution

·       Strong knowledge of data warehousing methodology

·       Relevant 4+ years of Hadoop and Spark/PySpark experience is mandatory

·       Strong in enterprise data architectures and data models

·       Good experience in the Core Banking and Finance domain

·       Good to have: cloud experience on AWS/GCP

·       Proficiency in Python and excellent SQL knowledge

 


We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognizes your achievements.

