Insomniac Design is a global digital agency headquartered in Washington, D.C., with offices in London, Bucharest, and Chisinau. We're an agile, determined, and innovative team organized by functional areas of expertise: Creative, Technology, Strategy, and Management. We specialize in human-centered design, with a deep focus on design thinking and digital transformation.
At Insomniac, we leverage AI to free our teams from routine tasks so they can focus on the aspects of our functions that are most valuable. We thrive on creative problem-solving, collaboration, and innovation. By thoughtfully integrating AI into our workflows, we’re not only improving productivity but also ensuring our people have the tools to do their best work. This empowers us to build smarter solutions and deliver stronger results for our clients.
As a Senior Data Engineer, you will architect, build, and optimize scalable data platforms that empower our clients with unified, actionable insights. You will lead the design of robust data pipelines, enforce best practices in data governance and security, and mentor junior engineers. Your expertise will bridge technical execution and strategic vision, ensuring high-performance, secure, and scalable data solutions that drive business impact.
Technical Leadership & Strategy:
Architect and implement enterprise-grade data platforms, including data lakes, warehouses, and real-time pipelines.
Define and enforce data engineering standards, including ETL/ELT frameworks, data modeling, and quality controls.
Lead performance optimization of databases, queries, and pipelines (e.g., partitioning, indexing, cost-efficient cloud solutions); see the sketch after this list.
Stay updated with emerging trends, evaluating and integrating new technologies.
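For illustration only, the sketch below shows one way the partitioning and indexing work above might look in practice, using Python with psycopg2 to apply PostgreSQL range partitioning and an index; the connection string, table, and column names are hypothetical assumptions, not project specifics.

```python
# A minimal sketch, assuming PostgreSQL 12+ and psycopg2; the DSN, table,
# and column names are hypothetical.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS events (
    event_id   BIGINT NOT NULL,
    user_id    BIGINT NOT NULL,
    event_date DATE   NOT NULL,
    payload    JSONB
) PARTITION BY RANGE (event_date);

-- Monthly partition so date-filtered queries scan only the relevant slice
CREATE TABLE IF NOT EXISTS events_2024_01
    PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Index the most common lookup key
CREATE INDEX IF NOT EXISTS idx_events_user ON events (user_id);
"""

def apply_ddl(dsn: str) -> None:
    """Apply the partitioning and indexing DDL in a single transaction."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)

if __name__ == "__main__":
    apply_ddl("postgresql://localhost/analytics")  # hypothetical DSN
```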
Data Architecture & Infrastructure:
Design and implement scalable data architecture that ensures high availability, security, and performance.
Define best practices for data modeling, storage, retrieval, and governance.
Optimize database structures, schemas, indexing, and partitioning for efficient query performance.
Implement CI/CD for data pipelines and automate infrastructure using Infrastructure-as-Code (IaC) tools.
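As one possible shape for the Infrastructure-as-Code work above, here is a minimal sketch using the AWS CDK for Python to provision versioned, encrypted S3 buckets for a hypothetical data-lake layout; the stack and zone names are illustrative assumptions.

```python
# A minimal sketch, assuming AWS CDK v2 for Python; stack and zone names
# are hypothetical.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataLakeStack(cdk.Stack):
    """Provisions raw and curated zones of a hypothetical data lake."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Versioned, encrypted, non-public bucket per lake zone
        for zone in ("raw", "curated"):
            s3.Bucket(
                self,
                f"{zone.capitalize()}Zone",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )

app = cdk.App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```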
Data Pipelines & ETL Development:
Architect and develop end-to-end ETL/ELT pipelines for structured and unstructured data ingestion, processing, and transformation.
Automate and optimize data workflows using tools like Apache Spark, Kafka, Airflow, and dbt; see the sketch after this list.
Monitor, troubleshoot, and optimize data pipeline performance.
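To make the orchestration work above concrete, a minimal Apache Airflow 2.x sketch might look like the following; the DAG id, schedule, and task callables are placeholders rather than an actual client pipeline.

```python
# A minimal sketch, assuming Apache Airflow 2.x; dag_id, task names, and
# the extract/transform callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # Pull raw records from a source system (placeholder logic)
    print("extracting orders")

def transform_orders(**context):
    # Clean and reshape the extracted records (placeholder logic)
    print("transforming orders")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    extract >> transform
```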
Data Governance, Security & Compliance:
Implement data governance policies, including metadata management, lineage tracking, and role-based access control (RBAC).
Ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and enforce security best practices.
Develop frameworks for data quality validation, anomaly detection, and integrity checks.
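A lightweight illustration of the validation and anomaly checks above, written with pandas; the column names and thresholds are hypothetical, and a dedicated framework (e.g., dbt tests or Great Expectations) would typically back them in production.

```python
# A minimal sketch, assuming pandas and a hypothetical orders DataFrame;
# column names and thresholds are illustrative only.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].isna().any():
        failures.append("missing amount values")
    # Simple anomaly check: flag amounts more than 4 standard deviations from the mean
    z = (df["amount"] - df["amount"].mean()) / df["amount"].std()
    if (z.abs() > 4).any():
        failures.append("amount outliers beyond 4 sigma")
    return failures

if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, 5000.0]})
    print(validate_orders(sample))
```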
Business & Stakeholder Engagement:
Collaborate with stakeholders (clients, PMs) to translate business needs into technical specifications and scalable architectures.
Develop interactive dashboards and reporting solutions to provide actionable insights.
Innovation & Mentorship:
Lead technical discussions, workshops, and knowledge-sharing sessions within the team.
Mentor junior engineers, conduct code reviews, and set best practices.
Drive automation of data workflows, including CI/CD for pipelines and Infrastructure-as-Code (IaC) solutions.
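As an example of what CI for data pipelines can look like, the sketch below is a pytest-style unit test for a hypothetical transform, the kind of check that would run automatically on every merge; the function and sample data are illustrative.

```python
# A minimal sketch of a CI-friendly unit test for a hypothetical pipeline
# transform; run with `pytest` on every commit or pull request.
import pandas as pd

def deduplicate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transform: keep the latest record per order_id."""
    return (
        df.sort_values("updated_at")
          .drop_duplicates(subset="order_id", keep="last")
          .reset_index(drop=True)
    )

def test_deduplicate_orders_keeps_latest_record():
    raw = pd.DataFrame(
        {
            "order_id": [1, 1, 2],
            "amount": [10.0, 12.0, 7.5],
            "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
        }
    )
    result = deduplicate_orders(raw)
    assert len(result) == 2
    assert result.loc[result["order_id"] == 1, "amount"].item() == 12.0
```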
Technical Skills & Experience:
7+ years of experience in data engineering with a proven track record of building scalable data solutions.
Expertise in SQL and relational database management systems (MySQL, PostgreSQL, Oracle).
Proficiency in Python (or similar languages) for data processing and automation.
Deep understanding of cloud platforms (AWS, GCP, Azure) and cloud-native data services: AWS (Redshift, Glue, S3), Azure (Synapse, Data Factory), GCP (BigQuery, Dataflow).
Hands-on experience with big data tools: Apache Spark, Kafka, Airflow, dbt, Snowflake, Databricks (see the sketch after this list).
Expertise in data modeling.
Strong knowledge of data security, governance, and compliance frameworks.
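To anchor the tooling listed above, here is a short, hypothetical PySpark sketch of a common curation pattern: reading raw events, deduplicating, and writing date-partitioned Parquet; the paths and column names are assumptions for illustration.

```python
# A minimal sketch, assuming PySpark; input path, columns, and output
# location are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_curation").getOrCreate()

# Read raw JSON events, derive a date column, and drop duplicate events
events = spark.read.json("s3://example-raw-zone/events/")
curated = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .dropDuplicates(["event_id"])
)

# Write date-partitioned Parquet so downstream queries can prune partitions
(
    curated.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-zone/events/")
)
```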
Leadership & Soft Skills:
Proven track record of leading data projects, including migrations, warehousing, and real-time analytics.
Ability to lead cross-functional teams and manage client expectations.
Strong problem-solving skills for debugging distributed systems and resolving data anomalies.
Experience with Agile/Scrum methodologies, including tools like Jira and Git.
Excellent communication and stakeholder management skills.
Nice-to-Have (Bonus Skills):
Certifications in cloud/data technologies (e.g., AWS Certified Data Analytics, Google Cloud Data Engineer).
Experience with ML pipelines (e.g., feature stores, model deployment).
Contributions to open-source data projects.
Applicant Eligibility: Please note that only candidates who are eligible to work in the US without visa sponsorship may apply. We are not accepting applications from recruiters or staffing agencies.