
Staff Software Engineer (Remote) at eSimplicity

eSimplicity · Columbia, United States of America · Remote

US$116,250 – US$155,000


Description

About Us

eSimplicity is a modern digital services company that delivers innovative federal and commercial IT solutions designed to improve the health and lives of millions of Americans while defending our national interests. Our solutions and services improve healthcare, protect our borders, and defend our country on the battlefield by supporting the Air Force, Space Force, and Navy.

eSimplicity's people-centric approach aims to transform government services through innovative technologies. Our team’s experience spans diverse projects for a variety of federal civilian customers across our core competencies.


We’re seeking a Staff Software Engineer experienced in building scalable, resilient data pipelines that ingest, validate, and transform data rapidly and accurately. This person will emphasize observability and reliability while supporting the ongoing operation and re-architecture of our data ingestion capability, which routinely processes large volumes of Medicare and Medicaid data.


Responsibilities:

  • Leads and mentors all other data roles on the program.
  • Identifies and owns all technical solution requirements in developing enterprise-wide data architecture.
  • Creates project-specific technical designs, handles product and vendor selection, and defines application and technical architectures.
  • Provides subject matter expertise on data and data pipeline architecture and leads the decision process to identify the best options.
  • Serves as the owner of complex data architectures, with an eye toward constant reengineering and refactoring to ensure the simplest and most elegant system possible for the need at hand.
  • Ensures technical design and architecture stay strategically aligned with business growth and direction, and stays on top of emerging technologies.
  • Develops and manages product roadmaps, backlogs, and measurable success criteria, and writes user stories.
  • Expands and optimizes our data and data pipeline architecture, as well as data flow and collection for cross-functional teams.
  • Supports software developers, database architects, data analysts, and data scientists on data initiatives, and ensures that the optimal data delivery architecture is consistent throughout ongoing projects.
  • Develops new pipelines and maintains existing ones; updates Extract, Transform, Load (ETL) processes; develops new ETL features; builds proofs of concept (PoCs) with Redshift Spectrum, Databricks, etc.
  • Implements, with the support of project data specialists, large-dataset engineering: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, data algorithms, and data maturity models; develops data strategy recommendations.
  • Assembles large, complex data sets that meet functional and non-functional business requirements.
  • Identifies, designs, and implements internal process improvements, including re-designing data infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Builds the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies.
  • Builds analytical tools that utilize the data pipeline to provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition.
  • Works with stakeholders, including data, design, product, and government stakeholders, and assists them with data-related technical issues.
  • Writes unit and integration tests for all data processing code.
  • Works with DevOps engineers on continuous integration (CI), continuous delivery (CD), and infrastructure as code (IaC).
  • Reads specs and translates them into code and design documents.
  • Performs code reviews and develops processes for improving code quality.

Requirements

Required Qualifications: 

  • All candidates must pass a public trust clearance through the U.S. Federal Government. This requires candidates either to be U.S. citizens or to pass clearance through the Foreign National Government System, which requires having lived within the United States for at least 3 of the previous 5 years and having a valid, non-expired passport from their country of birth along with appropriate visa/work permit documentation.
  • Minimum of 10 years of related experience, including hands-on software development experience.
  • A Bachelor’s degree in Computer Science, Information Systems, Engineering, Business, or another related scientific or technical discipline. With ten years of general information technology experience and at least eight years of specialized experience, a degree is NOT required.
  • Extensive data pipeline experience using Python, Java, and cloud technologies.
  • Expert data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.  
  • Self-sufficient and comfortable supporting the data needs of multiple teams, systems, and products. 
  • Experienced in designing data architecture for shared services, scalability, and performance 
  • Experienced in designing data services including API, metadata, and data catalog.  
  • Experienced in data governance processes to ingest (batch, stream), curate, and share data with upstream and downstream data users.
  • Ability to build and optimize data sets, ‘big data’ data pipelines, and architectures.
  • Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions.
  • Excellent analytic skills associated with working on unstructured datasets.
  • Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
  • Demonstrated understanding of and experience with software and tools, including big data tools like Kafka, Spark, and Hadoop; NoSQL and relational SQL databases, including Cassandra and Postgres; workflow management and pipeline tools such as Airflow, Luigi, and Azkaban; AWS cloud services including Redshift, RDS, EMR, and EC2; stream-processing systems like Spark Streaming and Storm; and object-oriented/functional scripting languages including Scala, C++, Java, and Python.
  • Flexible and willing to accept a change in priorities as necessary. 
  • Ability to work in a fast-paced, team-oriented environment  
  • Experience with Agile methodology, using test-driven development. 
  • Experience with Atlassian Jira/Confluence. 
  • Excellent command of written and spoken English. 
  • Ability to obtain and maintain a Public Trust clearance while residing in the United States.

Desired Qualifications:

  • Federal Government contracting work experience. 
  • Google Professional Data Engineer certification, IBM Certified Data Engineer – Big Data certification, or Cloudera CCP Data Engineer certification.
  • Centers for Medicare & Medicaid Services (CMS) or healthcare industry experience.
  • Experience with healthcare quality data, including Medicaid and CHIP provider data, beneficiary data, claims data, and quality measure data. 

Working Environment:
eSimplicity supports a remote work environment operating within the Eastern time zone so we can work with and respond to our government clients. Expected hours are 9:00 AM to 5:00 PM Eastern unless otherwise directed by your manager.
Occasional travel for training and project meetings, estimated at less than 25% per year.


Benefits:
We offer highly competitive salaries and full healthcare benefits.


Equal Employment Opportunity:
eSimplicity is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender, age, status as a protected veteran, sexual orientation, gender identity, or status as a qualified individual with a disability.
