
Data Engineer at JPI

JPI · Washington, United States · Hybrid


Description

We are seeking a Data Engineer with 10+ years of experience to join our onsite team in the Washington, DC area. This role will support federal clients in designing and implementing scalable data pipelines, cloud-native architectures, and modern data platforms that directly drive mission impact. The ideal candidate combines technical expertise with a consulting mindset, capable of working across teams to translate data strategy into actionable solutions.


The Data Engineer will design and build ingestion pipelines, transform large datasets, and enable advanced analytics across cloud and hybrid environments. This includes working with structured and unstructured data, integrating APIs, and developing high-performance workflows with Databricks and Spark. The role involves close collaboration with analysts, developers, and mission stakeholders to ensure solutions are agile, secure, and aligned with evolving priorities.


This role is contingent upon funding award and requires onsite presence at the USCG St. Elizabeths campus three days per week.


At JPI, we strive to empower our people and excel for our clients. We hold ourselves to high standards and prioritize our values of being one team with unwavering integrity. We are motivated by our mission and driven to deliver solutions that exceed expectations. Will you join us?


RESPONSIBILITIES INCLUDE, BUT ARE NOT LIMITED TO:

  • Partner with clients, business leaders, and technical teams to align data architecture with mission and business goals.
  • Participate in Agile ceremonies, track deliverables, and support development of technical documentation and briefings.
  • Design and implement modern, scalable data pipelines (ETL/ELT) using Databricks, Spark, and AWS services.
  • Migrate and modernize legacy data platforms to cloud environments (AWS preferred; Azure/GCP acceptable).
  • Build ingestion mechanisms for batch and real-time data, integrating APIs and custom connectors.
  • Develop robust data models and schema designs for transactional, warehouse, and analytics systems.
  • Write efficient, well-documented Python and SQL code to enable scalable data processing.
  • Leverage orchestration tools (Airflow, Step Functions, etc.) to automate and optimize workflows.
  • Apply best practices for data governance, metadata management, and security.
  • Collaborate with BI developers and data scientists to deliver structured, high-quality data for analytics and machine learning.
  • Provide mentorship, conduct code reviews, and enforce engineering standards across the team.

Requirements

KEY REQUIREMENTS:

  • BA/BS with 10+ years of experience, or Master’s with 5+ years.
  • Active Public Trust clearance or ability to obtain one.
  • Strong experience with Python, SQL, and AWS services (EC2, S3, RDS, Redshift, Lambda, Glue, etc.).
  • Hands-on experience with Databricks (Notebooks, Spark clusters, Delta Lake, cloud storage integration).
  • Familiarity with Databricks SQL, MLflow, and job/workflow management within the Databricks workspace preferred.
  • Proficiency in SQL and NoSQL databases.
  • Proven ability to design, build, and deploy ETL/ELT pipelines.
  • Experience with big data tools (Spark, Hadoop, Kafka) and Spark performance tuning in Databricks environments.
  • Ability to work with APIs for data retrieval and system integration.
  • Strong understanding of data modeling, schema design, and performance tuning.
  • Experience with Agile methodologies and DevSecOps practices.
  • Excellent written and verbal communication skills.

PREFERRED QUALIFICATIONS:

  • Prior experience supporting Department of Homeland Security (DHS), U.S. Coast Guard, or similar federal agencies.
  • Hands-on experience using Databricks for large-scale data processing, Delta Lake management, and collaboration across data teams.
  • Experience with MLOps or enabling machine learning environments using tools such as MLflow or Databricks Workflows.
  • Familiarity with data governance and metadata management best practices in a cloud-native environment.
  • Exposure to operationalizing data pipelines in mission-critical or high-security contexts.

JPI is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. 
