Senior Data Engineer (Databricks)
Snap Analytics · Hybrid
About the job
The headlines…
Role - Senior Data Engineer (Databricks)
Location - Remote

A bit about us…
Snap Analytics is a high-growth data analytics consultancy with offices in the UK and India. We work with enterprise clients to simplify complex data and drive business value. We're customer-centric in our approach and dedicated to helping organisations achieve their strategic goals through innovative cloud analytics solutions. We pride ourselves on using our innovative Snap 360 delivery framework, combined with a strong culture of teamwork and knowledge-sharing, to consistently deliver exceptional results and ensure 100% customer satisfaction. Joining Snap at this exciting stage of growth offers a unique opportunity to make a significant impact. As a rapidly expanding consultancy, you'll have the chance to shape our direction and play a key role in driving our success and strategy forward.

A bit about the role...
As a Senior Data Engineer at Snap, building on your wealth of experience, you'll have the opportunity to explore the latest technologies in analytics, from ETL and modern cloud data platforms to BI and ML/AI tools. We'll also look to you to lead client interaction and engagement, support and mentor more junior members of the team, and help build internal capabilities for use with clients.

Here's a breakdown of what you'll be doing…

Data Migration & Integration:
o Lead and manage complex data migration projects, ensuring smooth transitions from multiple data sources to cloud environments.
o Design and implement efficient ETL/ELT processes for seamless integration between systems.

Data Warehousing:
o Design, build, and optimise cloud-based data warehouses (e.g. Databricks (must have), Snowflake, AWS Redshift, Google BigQuery, Azure Synapse) to support advanced analytics and reporting needs.
o Ensure data accuracy, security, and compliance.

Data Modelling:
o Develop and maintain robust data models (star, snowflake, etc.) that ensure high-quality, scalable, and efficient data storage and retrieval.
o Implement best practices for data modelling to support analytics and reporting requirements.

Cloud Infrastructure:
o Develop and maintain scalable, robust, and cost-effective cloud-based data architectures on platforms such as AWS, Azure, and GCP.
o Provide best practices for data storage, governance, and access control.

Pipeline Development:
o Build, automate, and monitor data pipelines that handle large volumes of structured and unstructured data.
o Implement best practices for data cleansing, transformation, and enrichment.

Collaboration & Stakeholder Management:
o Collaborate with data architects, data scientists, and business teams to understand requirements and translate them into effective data engineering solutions.
o Communicate technical challenges and provide insights on data-driven solutions.

Performance Optimisation:
o Analyse and improve the performance of data systems and pipelines, ensuring low-latency data access, high availability, and optimised cloud resource usage.

Mentorship & Leadership:
o Mentor junior engineers, fostering a culture of collaboration and continuous learning.
o Provide technical guidance, code reviews, and best practices to elevate overall team performance.

Innovation & Continuous Improvement:
o Stay up to date with the latest trends and technologies in data engineering, cloud platforms, and big data to propose innovative solutions for clients' evolving data needs.

This role is for you if you have...
o Proven experience (5+ years) as a data engineer with a focus on data migration, integration, and cloud-based data warehousing, working with big data for enterprise organisations.
o Expertise in building data architecture on cloud platforms such as AWS, GCP, or Microsoft Azure.
o 5+ years of experience in ETL/ELT design and development using tools such as Matillion, Informatica, SAP Data Services, and Talend, and cloud data platforms including Databricks (must have), Snowflake, Redshift, or BigQuery.
o A Databricks Data Engineer Professional certification.
o A deep understanding of data modelling principles and techniques (e.g. star schema, snowflake schema), and experience in developing optimised models to support analytics and reporting.
o Experience with version control, CI/CD pipelines, and containerisation tools (e.g. Git, Jenkins, Docker, Kubernetes).
o Strong knowledge of relational and non-relational databases, and hands-on experience with cloud data warehouses including AWS Redshift, Google BigQuery, and Azure Synapse.
o A 2:1 university degree or above (or equivalent).
o Excellent communication skills, with the ability to work collaboratively with both technical and non-technical stakeholders.
o Strong problem-solving skills, with the ability to troubleshoot complex data issues and deliver optimised solutions.

It's useful (not a must) if you have exposure to…
o Data Science and Machine Learning.
o Tools like dbt and Dataiku.

Something to note…
Here at Snap, our values drive us, so aligning with them is incredibly important. Our values are: be Smart, be Nice, be Accountable, be Passionate. Be SNAP 😊 So, what does being SNAP look like?

All applicants are welcome! Research shows that some may hesitate to apply unless they meet every requirement. However, your unique experience, skills, and passion are what set you apart, so we'd love to hear from you even if you don't hit all the skills above.

So, what's next?
If you've got this far and you're feeling excited, apply! Our hiring process is as follows…
o Initial screening with a member of our Talent Acquisition Team.
o A technical interview with one of our Principal Consultants.
o A technical task and final interview with our Delivery Director.

If you need any adjustments made to our process, let us know so we can provide the right support and ensure you can perform at your best!