Embark on a transformative journey with Cognite, a global SaaS forerunner in leveraging AI and data to solve complex business challenges through our cutting-edge offerings, including Cognite Atlas AI, an industrial agent workbench, and the Cognite Data Fusion (CDF) platform. Cognite was named the 2022 Technology Innovation Leader for Global Digital Industrial Platforms and the 2024 Microsoft Energy and Resources Partner of the Year. In the realm of industrial digital transformation, we stand at the forefront, reshaping the future of Oil & Gas, Chemicals, Pharma, and other Manufacturing and Energy sectors. Join us in this venture where AI and data meet ingenuity, and together we forge the path to a smarter, more connected industrial future.
Impact: Cogniters strive to make an impact in all that they do. We are results-oriented, always asking ourselves what impact our work will have.
Ownership: Cogniters embrace a culture of ownership. We go beyond our comfort zones to contribute to the greater good, fostering inclusivity and sharing responsibility for both challenges and successes.
Relentless: Cogniters are relentless in their pursuit of innovation. We are determined and deliberate (never ruthless or reckless), facing challenges head-on and viewing setbacks as opportunities for growth.
About Cognite & This Role
Cognite is revolutionizing industrial data management through our flagship product, Cognite Data Fusion - a state-of-the-art SaaS platform that transforms how industrial companies leverage their data. We're seeking a Senior Data Platform Engineer who excels at building high-performance distributed systems and thrives in a fast-paced startup environment. You'll be working on cutting-edge data infrastructure challenges that directly impact how Fortune 500 industrial companies manage their most critical operational data.
What You'll Build & Own
High-Performance Data Systems
Design and implement robust data processing pipelines using Apache Spark, Flink, and Kafka for terabyte-scale industrial datasets
Build efficient APIs and services that serve thousands of concurrent users with sub-second response times
Optimize data storage and retrieval patterns for time-series, sensor, and operational data
Implement advanced caching strategies using Redis and in-memory data structures
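To make the caching item directly above concrete, here is a minimal cache-aside sketch in Scala (one of the primary languages this posting lists). It is illustrative only, not Cognite's implementation: the `Cache` trait stands in for a Redis client, and `SensorReading`, `readThrough`, and the TTL are hypothetical names and values.

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.duration._

// Minimal cache abstraction; in production this would be backed by Redis
// via a Redis client, not an in-process map.
trait Cache[K, V] {
  def get(key: K): Option[V]
  def put(key: K, value: V, ttl: Duration): Unit
}

// In-memory stand-in used only to keep the sketch self-contained.
// Expired entries are ignored on read but never evicted here.
class InMemoryCache[K, V] extends Cache[K, V] {
  private val store = TrieMap.empty[K, (V, Long)]
  def get(key: K): Option[V] =
    store.get(key).collect { case (v, expiresAt) if expiresAt > System.nanoTime() => v }
  def put(key: K, value: V, ttl: Duration): Unit =
    store.put(key, (value, System.nanoTime() + ttl.toNanos))
}

// Hypothetical domain type for the example.
final case class SensorReading(sensorId: String, value: Double)

// Cache-aside lookup: try the cache first, fall back to the primary store
// on a miss, then populate the cache for subsequent reads.
def readThrough(
    cache: Cache[String, SensorReading],
    loadFromStore: String => SensorReading
)(sensorId: String): SensorReading =
  cache.get(sensorId).getOrElse {
    val reading = loadFromStore(sensorId)
    cache.put(sensorId, reading, 5.minutes)
    reading
  }
```

The production decisions that actually matter here (key design, TTL choice, invalidation on writes) are deliberately left out of the sketch.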
Distributed Processing Excellence
Engineer Spark applications with a deep understanding of the Catalyst optimizer, partitioning strategies, and performance tuning
Develop real-time streaming solutions processing millions of events per second with Kafka and Flink (see the streaming sketch after this list)
Design efficient data lake architectures using S3/GCS with optimized partitioning and file formats (Parquet, ORC)
Implement query optimization techniques for OLAP data stores like ClickHouse, Pinot, or Druid
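Below is the streaming sketch referenced above. It uses Spark Structured Streaming (one of the stacks named in this posting) to read sensor events from Kafka, aggregate them per minute, and write partitioned Parquet to a data lake, touching both the streaming and file-format items in the list. The broker address, topic, event schema, and output paths are hypothetical, and production concerns such as exactly-once delivery, schema registries, and compaction are left out.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object SensorStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sensor-stream").getOrCreate()
    import spark.implicits._

    // Hypothetical schema for JSON-encoded sensor readings.
    val schema = new StructType()
      .add("sensorId", StringType)
      .add("value", DoubleType)
      .add("eventTime", TimestampType)

    // Read raw events from Kafka (broker and topic names are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "sensor-events")
      .load()

    // Parse the Kafka value payload into typed columns.
    val parsed = raw
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Windowed aggregation: average value per sensor per minute,
    // with a watermark so append mode can emit finalized windows.
    val perMinute = parsed
      .withWatermark("eventTime", "10 minutes")
      .groupBy($"sensorId", window($"eventTime", "1 minute"))
      .agg(avg($"value").as("avgValue"))

    // Write partitioned Parquet to the data lake (paths are placeholders).
    perMinute
      .withColumn("day", to_date($"window.start"))
      .writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/sensor-aggregates")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/sensor-aggregates")
      .partitionBy("day")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```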
Scalability & Performance
Scale systems to 10K+ QPS while maintaining high availability and data consistency
Optimize JVM performance through garbage collection tuning and memory management
Implement comprehensive monitoring using Prometheus, Grafana, and distributed tracing
Design fault-tolerant architectures with proper circuit breakers and retry mechanisms
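To ground the fault-tolerance item directly above, here is a minimal retry-with-exponential-backoff sketch in Scala. It is a sketch only: real services would usually pair retries with a library-provided circuit breaker (for example, the ones in Akka or resilience4j) rather than hand-rolling failure handling, and the attempt count and delays below are arbitrary.

```scala
import scala.annotation.tailrec
import scala.concurrent.duration._
import scala.util.{Failure, Success, Try}

object Retry {
  // Retry a call up to `maxAttempts` times, doubling the delay after each
  // failure (exponential backoff). Rethrow once attempts are exhausted.
  @tailrec
  def withBackoff[A](maxAttempts: Int, delay: FiniteDuration, op: () => A): A =
    Try(op()) match {
      case Success(value) => value
      case Failure(_) if maxAttempts > 1 =>
        Thread.sleep(delay.toMillis) // a blocking sleep keeps the sketch simple
        withBackoff(maxAttempts - 1, delay * 2, op)
      case Failure(e) => throw e
    }
}

// Hypothetical usage: wrap a flaky downstream call such as a metadata lookup.
// val meta = Retry.withBackoff(maxAttempts = 5, delay = 100.millis, () => fetchMetadata(id))
```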
Technical Innovation
Contribute to open-source projects in the big data ecosystem (Spark, Kafka, Airflow)
Research and prototype new technologies for industrial data challenges
Collaborate with product teams to translate complex requirements into scalable technical solutions
Participate in architectural reviews and technical design discussions
What We're Looking For
Core Technical Requirements
Distributed Systems Experience (4-6 years)
Production Spark experience - built and optimized large-scale Spark applications with an understanding of internals
Streaming systems proficiency - implemented real-time data processing using Kafka, Flink, or Spark Streaming
JVM language expertise - strong programming skills in Java, Scala, or Kotlin with performance optimization experience
Data Platform Foundations (3+ years)
Big data storage systems - hands-on experience with data lakes, columnar formats, and table formats (Iceberg, Delta Lake)
OLAP query engines - worked with Presto/Trino, ClickHouse, Pinot, or similar high-performance analytical databases
ETL/ELT pipeline development - built robust data transformation pipelines using tools like dbt, Airflow, or custom frameworks
Infrastructure & Operations
Kubernetes production experience - deployed and operated containerized applications in production environments
Cloud platform proficiency - hands-on experience with AWS, Azure, or GCP data services
Monitoring & observability - implemented comprehensive logging, metrics, and alerting for data systems
Resource efficiency - optimized systems for cost while maintaining performance requirements
Concurrency expertise - designed thread-safe, high-concurrency data processing systems
Data Engineering Best Practices
Data quality frameworks - implemented validation, testing, and monitoring for data pipelines (see the sketch after this list)
Schema evolution - managed backwards-compatible schema changes in production systems
Data modeling expertise - designed efficient schemas for analytical workloads
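Below is the data quality sketch referenced above: a minimal null-rate assertion on a Spark DataFrame. The column name and threshold are hypothetical, and dedicated frameworks such as Deequ or Great Expectations express the same idea declaratively with much richer reporting.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

object NullRateCheck {
  // Fail the pipeline fast if the null rate of a required column exceeds
  // a threshold. Both the column and the threshold are illustrative.
  def requireNullRateBelow(df: DataFrame, column: String, maxNullRate: Double): Unit = {
    val total = df.count()
    if (total > 0) {
      val nulls = df.filter(col(column).isNull).count()
      val rate = nulls.toDouble / total
      require(
        rate <= maxNullRate,
        s"Data quality check failed: $column null rate $rate exceeds $maxNullRate"
      )
    }
  }
}

// Hypothetical usage inside a batch job:
// NullRateCheck.requireNullRateBelow(sensorReadings, "sensorId", maxNullRate = 0.01)
```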
Collaboration and Growth
Technical Collaboration
Cross-functional partnership - worked effectively with product managers, ML engineers, and data scientists
Code review excellence - provided thoughtful technical feedback and maintained high code quality standards
Documentation & knowledge sharing - created technical documentation and participated in knowledge transfer
Continuous Learning
Technology adoption - quickly learned and applied new technologies to solve business problems
Industry awareness - stayed current with big data ecosystem developments and best practices
Problem-solving approach - demonstrated a systematic approach to debugging complex distributed system issues
Startup Mindset
Execution Excellence
Rapid delivery - consistently shipped high-quality features within aggressive timelines
Technical pragmatism - made smart trade-offs between technical debt, velocity, and system reliability
End-to-end ownership - took responsibility for features from design through production deployment and monitoring
Ambiguity comfort - thrived in environments with evolving requirements and unclear specifications
Technology flexibility - adapted to new tools and frameworks based on project needs
Customer focus - understood how technical decisions impact user experience and business metrics
Bonus Points:
Open-source contributions to major Apache projects in the data space (e.g. Apache Spark or Kafka) are a big plus
Conference speaking or technical blog writing experience
Industrial domain knowledge - previous experience with IoT, manufacturing, or operational technology systems
Technical Stack
Primary Technologies:
Languages: Kotlin, Scala, Python, Java.
Big Data: Apache Spark, Apache Flink, Apache Kafka.
If you're excited about the opportunity to work at Cognite and make a difference in the tech industry, we encourage you to apply today! We welcome candidates of all backgrounds and identities to join our team.
We encourage you to follow Cognite on LinkedIn; we post all our openings there.