The Platform team creates the technology that enables Spotify to learn quickly and scale easily, supporting rapid growth in our users and our business around the globe. Spanning many disciplines, we build the infrastructure, tooling, frameworks, and capabilities needed to welcome a billion customers.
Spotify is seeking a Data Scientist II to join Product Trust Insights (PTI) within Trust & Safety. PTI accelerates Spotify innovation by providing safety research, risk measurement, and evidence-based recommendations that enable R&D teams to confidently expand into new products, markets, and technologies.
PTI's research informs how Spotify designs, launches, and improves features across the platform, with a particular focus on AI-powered experiences, recommendations, social and messaging surfaces, and the safety needs of younger users. Our work helps product teams understand where risks concentrate, which user segments are most affected, and which interventions improve outcomes.
We aim to keep safety upstream of moderation by shaping product design before launch and measuring how features perform once they reach users.
What You'll Do
Lead end-to-end research and measurement projects that evaluate the safety of new and existing features, from scoping through delivery of actionable recommendations
Design and generate data for product risk assessments, stress tests, and evaluation of AI-powered features, including generative and agentic experiences
Develop longitudinal trust and safety metrics and use them to evaluate the effectiveness of product interventions over time
Translate complex research findings into clear narratives, tools, and recommendations for product, policy, and leadership audiences
Partner with product safety specialists, policy advisors, product leads, and engineering counterparts to ensure product launches reflect user safety needs and support thoughtful, “no regrets” design
Build and improve evaluation methods, including LLM-based evaluation approaches, behavioral instrumentation, and measurement frameworks in collaboration with data scientists and engineers
You will work closely with product managers, designers, engineers, policy specialists, researchers, and other data scientists. Your work will help inform decisions that often involve senior leadership.
Who You Are
You have 3+ years of experience leading data science or research projects with a focus on safety, integrity, responsible AI, fairness, or a related domain
You are experienced with SQL and Python and are comfortable working across quantitative and qualitative evidence
You have hands-on familiarity with modern AI and machine learning systems, including recommendation systems and large language models, and understand how risks emerge from system design
You are comfortable scoping ambiguous problems, including early-stage or zero-to-one research areas, and prioritizing them in a fast-moving environment
You communicate clearly with both technical and non-technical audiences, including explaining methodological choices to policy partners and leadership
You care about turning research into practice and are comfortable making concrete, evidence-based recommendations
You bring a thoughtful perspective on responsible product innovation and how to measure and improve platform safety
Where You'll Be
This role is based in New York or Boston
We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows for flexibility to work from home
This team operates within the Eastern Time zone for collaboration
Additional Information
The United States base range for this position is $117,000 – $167,000 USD, plus equity. The benefits available for this position include health insurance, six-month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, paid flexible holidays, and paid sick leave. These ranges may be modified in the future.