JOB DESCRIPTION: Responsible for building and scaling production data pipelines that ingest, process, and deliver clean datasets for both internal and external data. Design and build data queries to support ad-hoc requests for analysis and insights. Responsible for developing processes to test and validate new and existing data pipelines and datasets. Capture business and technical requirements and translate them into clean code and data warehouse design. Responsible for documenting processes and policies governing the data integration landscape. Exhibit ownership, accountability, and a self-managed work style. Meet established milestones and schedules. Utilize knowledge of SQL querying and development; BI and reporting tools, including Stitch, Power BI, QuickSight, and Looker; REST-based APIs; and Python. Develop Extract, Transform, Load ("ETL") pipelines, including scripting and design of source-to-target mappings, and apply data warehousing concepts such as star schema, snowflake schema, and data marts. May work remotely anywhere in the continental U.S.
REQUIREMENTS: Master’s or foreign equivalent degree in Computer Science, Engineering, Engineering Management, or a related field.
Must have work/internship experience or completed graduate coursework in each of the following:
- Python, Tableau, Jira.
- Leveraging existing ETL processes to create and oversee new data feeds used in marketing performance reports.
- Documenting and QA testing all production code changes in a project management tracking suite.
- Ad-hoc data modeling for use in complex client deliverables.
- Data warehouse performance.
TO APPLY: Please send your resume to [email protected] and refer to this posting. EOE.