Career Opportunities: Associate - BIM (10508)
Requisition ID 10508 - Posted - Noida - Candor Techspace, Tower 11
Position Summary
A driven business analyst who can work on complex analytical problems and help customers make better business decisions, especially in the pharma/life sciences domain.
Job Responsibilities
Engage with the client in requirement gathering, work status updates, and UAT, and be a key partner in the overall engagement
Participate in ETL design of new or changing mappings and workflows using a Python framework, working with the team to prepare technical specifications
Craft ETL mappings, mapplets, workflows, and worklets using Informatica PowerCenter
Write complex SQL queries with performance tuning and optimization
Handle tasks independently and lead the team
Coordinate with cross-functional teams to ensure project objectives are met.
Collaborate with data architects and engineers to design and implement data models.
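The SQL performance-tuning responsibility above can be sketched with a small, self-contained example. This uses SQLite and made-up order data purely for illustration; the table, columns, and threshold are hypothetical, and in practice the queries would run against the client's warehouse:

```python
import sqlite3

# Hypothetical sales data standing in for a client dataset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    INSERT INTO orders (region, amount) VALUES
        ('north', 120.0), ('north', 80.0), ('south', 200.0), ('south', 50.0);
    -- Indexing the grouped/filtered column is a basic tuning step.
    CREATE INDEX idx_orders_region ON orders(region);
""")

# Aggregate revenue per region, keeping only regions above a threshold.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING total > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('south', 250.0), ('north', 200.0)]
```

On real workloads, tuning would also involve inspecting the query plan (e.g. `EXPLAIN QUERY PLAN` in SQLite, or the engine's equivalent) to confirm the index is actually used.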
Education
BE/B.Tech or Master of Computer Applications
Work Experience
Advanced knowledge of PySpark, Python, pandas, and NumPy
Minimum 4 years of experience in the design, build, and deployment of Spark/PySpark pipelines for data integration
Deep experience developing data processing tasks in PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
Creating Spark jobs for data transformation and aggregation
Spark query tuning and performance optimization, including a good understanding of file formats (ORC, Parquet, Avro) and compression techniques to optimize queries and processing
Deep understanding of distributed systems (e.g. CAP theorem, partitioning, replication, consistency, and consensus)
Experience with modular and robust programming methodologies
ETL knowledge and hands-on ETL development using a Python framework
Prior experience with Databricks/Snowflake is preferred
Behavioural Competencies
Ownership
Teamwork & Leadership
Cultural Fit
Motivation to Learn and Grow
Technical Competencies
Problem Solving
Life Sciences Knowledge
Communication
Capability Building / Thought Leadership
Skills