Associate Developer bei OnSite Partners

OnSite Partners · Chicago, United States · Onsite

$85,000 – $100,000

Apply now

OnSite Partners' mission is to empower each customer to achieve success through the design and delivery of collaborative, creative, and comprehensive energy solutions.

 

Position Summary

OnSite Partners is seeking an Associate Developer / Coding Analyst to design, build, and maintain reliable data pipelines and ETL processes that power client deliverables. Beyond core development tasks, you will focus on automating workflows and optimizing analytical tools to improve efficiency and scalability. Your work will involve SQL for data transformation, Power BI for dynamic reporting and dashboards, and Python for scripting and process automation. 

 

You’ll collaborate closely with our Data Scientist to transform raw inputs into governed, high‑quality datasets for analytics, portals, and reporting. You will also develop tools to help streamline and automate existing work processes. Expect to work across Microsoft Power Platform and AWS (Glue, Athena, S3), integrate APIs, and produce clear technical documentation for everything you build. This role is ideal for someone passionate about streamlining processes, reducing manual effort, and enabling data-driven insights through automation. 

 

Key Responsibilities


ETL & Data Pipelines 


  • Develop, schedule, and monitor ETL jobs (e.g., AWS Glue PySpark/Python Shell) that land, clean, and transform data into curated layers in S3; register/maintain schemas in the Glue Data Catalog and optimize for Athena (partitions, compression, columnar formats). 
  • Author performant SQL for Athena and other SQL engines; tune queries for cost/performance and data‑scanning efficiency. 
  • Implement automated data quality checks and reconciliation (row counts, referential integrity, null/valid range tests) with alerts and run validation reports for stakeholders. 
  • Automate repetitive data engineering tasks and workflows using Python and Power Platform tools to reduce manual effort and improve operational efficiency. 
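To give a concrete sense of the data-quality work described above, here is a minimal Python sketch of batch validation checks (row counts, null rates, valid ranges). The dataset shape, field names (`meter_id`, `kwh`), and thresholds are illustrative assumptions, not part of OnSite Partners' actual stack:

```python
# Minimal sketch of automated data-quality checks on a hypothetical
# batch of usage records: row-count, null-rate, and valid-range tests.

def run_quality_checks(rows, min_rows=1, max_null_rate=0.05,
                       kwh_range=(0, 1_000_000)):
    """Return (check_name, passed) pairs for a batch of usage records."""
    results = []

    # Row-count test: did the load land anything at all?
    results.append(("row_count", len(rows) >= min_rows))

    # Null-rate test on a key column (hypothetical 'meter_id').
    nulls = sum(1 for r in rows if r.get("meter_id") is None)
    null_rate = nulls / len(rows) if rows else 1.0
    results.append(("meter_id_null_rate", null_rate <= max_null_rate))

    # Valid-range test on a numeric measure (hypothetical 'kwh').
    lo, hi = kwh_range
    in_range = all(lo <= r["kwh"] <= hi
                   for r in rows if r.get("kwh") is not None)
    results.append(("kwh_in_range", in_range))

    return results


if __name__ == "__main__":
    batch = [
        {"meter_id": "M-1", "kwh": 420.5},
        {"meter_id": "M-2", "kwh": 13.2},
        {"meter_id": None, "kwh": 99.0},
    ]
    for name, passed in run_quality_checks(batch):
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

In a real pipeline, failed checks would feed the alerting and reconciliation reports mentioned above rather than just printing to stdout.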


Power Platform Development 


  • Build and maintain Power Automate flows, Power Apps (model-driven) and custom connectors to orchestrate data movement and operational workflows; design with resiliency, retries, and throttling awareness. 


API Integrations 


  • Ingest data from third‑party and internal REST APIs: handle authentication (OAuth 2.0), pagination, rate limits, schema drift, and error handling; normalize JSON payloads into consistent tabular models. 
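The pagination-and-normalization pattern above can be sketched as follows. `fetch_page` stands in for a real HTTP call (e.g., `requests.get` with an OAuth bearer token and retry/rate-limit handling), and the payload shape is a hypothetical example:

```python
# Sketch of paginated API ingestion with flattening to tabular rows.
# 'fetch_page' abstracts the HTTP layer; the JSON shape is hypothetical.

def ingest_all(fetch_page, page_size=100):
    """Walk a page-numbered endpoint until it returns no records."""
    rows, page = [], 1
    while True:
        payload = fetch_page(page=page, per_page=page_size)
        records = payload.get("data", [])
        if not records:
            break
        rows.extend(normalize(rec) for rec in records)
        page += 1
    return rows


def normalize(record):
    """Flatten one nested JSON record into a consistently named flat row."""
    return {
        "site_id": record.get("id"),
        "site_name": record.get("name"),
        # Nested objects are flattened with a prefix instead of kept as dicts,
        # which keeps downstream tabular models (Athena, Power BI) simple.
        "address_city": (record.get("address") or {}).get("city"),
    }
```

Keeping the HTTP call behind an injected function also makes schema-drift handling and unit testing straightforward, since pages can be faked in tests.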


Technical Documentation and Observability 


  • Document the data flow, including sources, transformations, and destinations, to provide clarity on pipeline architecture and dependencies. 
  • Include detailed setup and usage instructions, along with troubleshooting guidelines, to facilitate seamless deployment and maintenance of data pipelines. 
  • Instrument pipelines with structured logging and metrics; publish job status to dashboards and configure alerting (failures, SLA breaches). 
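Structured logging of the kind described above might look like the following Python sketch; the job name and fields are illustrative, not an actual OnSite pipeline:

```python
# Sketch of structured (JSON-formatted) job logging so dashboards and
# alerting can parse status events. Job names and fields are hypothetical.
import json
import logging
import time

def log_job_event(logger, job, status, **fields):
    """Emit one machine-parseable log line and return the event dict."""
    event = {"job": job, "status": status, "ts": time.time(), **fields}
    logger.info(json.dumps(event))
    return event


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("pipeline")

    log_job_event(log, job="daily_usage_load", status="started")
    log_job_event(log, job="daily_usage_load", status="succeeded",
                  rows_loaded=12345, duration_s=42.7)
```

Because every line is one JSON object, a log-metrics filter or dashboard query can key on `status` to trigger failure or SLA-breach alerts.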


Security, Compliance & Governance 


  • Apply least‑privilege IAM roles, secret management, and encryption-at-rest/in‑transit (e.g., KMS). 
  • Follow company QA/QC and data governance practices; contribute to catalog/lineage where required. 
  • Adhere to internal risk and security policies; partner with IT/security on reviews and remediations. 


Collaboration & Delivery

 

  • Work closely with the Data Scientist to translate analysis requirements into robust pipelines and reproducible datasets supporting client deliverables. 
  • Provide timely operational support for data loads and client deadlines as needed. 

 

Qualifications

Must‑Have 

  • 1–3 years’ experience (or internships/projects) in data engineering, analytics engineering, or software development. 
  • Working knowledge of Python (preferably PySpark) and SQL; comfort with JSON and data modeling basics. 
  • Hands‑on experience with source control (Git/GitHub or Azure DevOps). 
  • Exposure to AWS data services (Glue, Athena, S3) and building/operating ETL pipelines. 
  • Clear, concise technical writing skills. 
  • Strong debugging mindset, attention to detail, and a bias for documenting as you build. 
  • Alignment with company values: Create value, Commit to excellence, Promote a healthy environment, Care for others, Ownership, Develop and grow


Nice‑to‑Have 

  • Experience with Microsoft Power Platform (Power Automate, Power Apps) 
  • Experience with Glue Data Catalog, Glue Workflows/Triggers, and Athena cost/performance tuning. 
  • Power Platform custom connectors, Power Pages, or Dataverse. 
  • CI/CD for data (build/test/deploy), unit/integration tests for pipelines. 
  • Familiarity with data governance/lineage tools and basic security best practices.