
Remote Data Engineer

Gradient AI  ·  United States of America · Remote

Apply Now

About the job

About Gradient AI:

Gradient AI is a leading provider of AI solutions for the Group Health and P&C insurance industries. Our solutions improve loss ratios and profitability by predicting underwriting and claim risks with greater accuracy, as well as by reducing quote turnaround times and claim expenses through intelligent automation. Gradient AI's SaaS platform leverages a vast industry data lake comprising tens of millions of policies and claims, providing insurers with high-resolution, data-driven insights. Customers include some of the most recognized insurance carriers, MGAs, MGUs, TPAs, risk pools, PEOs, and large self-insured employers across all major lines of insurance. Founded in 2018, Gradient has experienced strong growth every year and recently raised $56 million in Series C funding from top Insurtech investors.

About the Role:

We are looking for a Data Engineer to join our team to create and maintain optimal data pipeline architecture and assemble large, complex data sets. You'll have a hand in every stage of development and should be comfortable thinking creatively with tools such as Python, SQL, and the AWS ecosystem. The ideal candidate is willing to learn new skills and technologies as the task at hand requires and is adaptable to fluid, evolving project requirements. In this role, you will contribute materially toward shaping and realizing the vision of our business and toward fundamentally changing the way an entire industry does business (really).

Responsibilities:

  • Design, build, and implement data systems that fuel our ML and AI models
  • Develop tools to extract and process client data from different sources and tools to profile and validate data
  • Work cross-functionally with data scientists to transform large amounts of data and store it in a format that facilitates modeling
  • Contribute to production operations, data pipelines, workflow management, reliability engineering, and much more
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies
Qualifications:
  • BS in Computer Science (or another quantitative discipline); 4+ years of work experience
  • Experience working in Python in a professional environment
  • Desire to learn new skills and tools (e.g. Redshift, Tableau, AWS Lambda); bonus for experience with Apache Spark (PySpark), Databricks, Snowflake, or similar distributed compute platforms
  • Experience using data orchestration frameworks such as Airflow, Dagster, Prefect
  • Exposure to a cloud-computing environment (e.g. AWS EC2)
  • Comfortable with Linux, including developing shell scripts
  • Experience working in insurtech or on AI/ML products is a bonus
What We Offer:

We are an equal opportunity employer that offers a range of benefits and perks to accommodate all types. Bring your authentic self to work in our supportive workplace, where we offer:
  • A fun and fast-paced startup culture
  • A culture of employee engagement, diversity and inclusion
  • Full benefits package including medical, dental, vision, 401k, disability, life insurance, and more
  • Unlimited vacation days and ample holidays - we all work hard and take time for ourselves when we need it
  • Competitive salary and generous stock options - we all get to own a piece of what we're building
  • Ample opportunities to learn and take on new responsibilities
  • Paid parental leave
