
Work Schedule: Standard (Mon-Fri)
Environmental Conditions: Office
Job Description
Job Title: Staff Data Engineer
Job Location: Bangalore, India
About Company:
We help our customers accelerate life sciences research, solve complex analytical challenges, improve patient diagnostics, deliver medicines to market and increase laboratory efficiency.
Our team, Automation, AI, and Data (AAD), supports data engineering, analytics, automation, and AI across Thermo Fisher Scientific.
What will you do?
Strengthen data engineering capabilities by delivering pipelines and solutions via the Enterprise Data Platform (EDP).
General Job Functions
- Collaborate with functional analysts to translate requirements into data engineering pipelines.
- Collaborate with the scrum master on product backlogs and help with sprint planning.
- Build, test, and optimize data pipelines for various use cases, including real-time and batch processing, based on specific requirements (a batch pipeline sketch follows this list).
- Support the evolution of the EDP architecture and take part in roadmap activities for data platform initiatives and changes.
- Collaborate with leadership and partners to ensure data quality and integrity in the data warehouse (DWH) and AWS platforms for BI/analytical reporting.
- Offer hands-on mentorship and oversight for a group of projects.
- Identify potential risks in advance and communicate effectively with partners to develop and implement risk mitigation plans.
- Actively support development activities in data engineering, making bandwidth available when needed.
- Implement and follow agile development methodologies to deliver solutions and product features, adhering to DevOps practices.
- Ensure the teams follow the prescribed development processes and approaches.
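
To ground the pipeline responsibilities above, here is a minimal PySpark batch sketch of a quality-gated ingestion step. It is an illustration only: the S3 paths, column names, and app name are assumptions, not details of this role or of the EDP.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("edp-orders-batch").getOrCreate()

    # Hypothetical raw source; the path and schema are assumptions.
    raw = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Basic quality gate: require the key column, deduplicate, and stamp
    # the ingestion time for lineage.
    clean = (
        raw.dropna(subset=["order_id"])
           .dropDuplicates(["order_id"])
           .withColumn("ingested_at", F.current_timestamp())
    )

    # Partitioned write for downstream BI/analytical consumers.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )
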
Must have skills and experience
- 10+ years of overall work experience, of which 7+ years delivering data solutions.
- 5+ years of proven experience building Cloud BI solutions using AWS.
- Experience with agile development methodologies, following DevOps, DataOps, and DevSecOps practices.
- 5+ years of programming in SQL, PySpark, and Python (a combined SQL/PySpark example follows this list).
- Excellent written, verbal, interpersonal, and partner communication skills.
- Excellent analysis and business requirement documentation skills.
- Ability to work with multi-functional teams across multiple regions/time zones, communicating effectively through multiple channels (email, MS Teams voice and chat, meetings).
- Excellent prioritization and problem-solving skills.
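
As a sketch of the SQL-plus-PySpark requirement, the snippet below registers an in-memory DataFrame as a temporary view and aggregates it with Spark SQL; the sample rows and names are invented for the example.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-pyspark-demo").getOrCreate()

    # Invented sample data standing in for a warehouse table.
    orders = spark.createDataFrame(
        [("2024-01-01", 120.0), ("2024-01-01", 80.0), ("2024-01-02", 55.5)],
        ["order_date", "amount"],
    )
    orders.createOrReplaceTempView("orders")

    # SQL performs the aggregation; the result comes back as a DataFrame
    # for further processing in Python.
    daily = spark.sql("""
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """)
    daily.show()
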
Good to have skills
- Hands-on experience with Snowflake or Azure data engineering.
- Knowledge of SQL and NoSQL databases like PostgreSQL, MySQL, MongoDB, Cassandra.
- Experience in building data pipelines in Databricks (a minimal Delta Lake sketch follows this list).
- Data visualization experience using tools such as Power BI or Tableau.
- Knowledge of data governance practices, data quality, and data security.
- Relevant certifications in data engineering on cloud platforms.
- Basic understanding of machine learning and Generative AI concepts.
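
For the Databricks item above, a minimal Delta Lake write/read sketch, assuming either a Databricks runtime or a local delta-spark install; the path and rows are placeholders.

    from pyspark.sql import SparkSession

    # On Databricks these two configs are preset; they are only needed
    # for a local delta-spark setup.
    spark = (
        SparkSession.builder.appName("delta-demo")
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    df = spark.createDataFrame([(1, "widget"), (2, "gadget")], ["id", "name"])

    # Delta layers ACID transactions and time travel on top of Parquet files.
    df.write.format("delta").mode("overwrite").save("/tmp/demo/products")
    spark.read.format("delta").load("/tmp/demo/products").show()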