Introduction
The Center for AI Safety (CAIS) is a leading research and field-building organization on a mission to reduce societal-scale risks from AI. Alongside our sister organization, the CAIS Action Fund, we tackle the toughest AI issues with a mix of technical, societal, and policy solutions. Our work includes publishing a global statement on AI Risk signed by Geoffrey Hinton, Yoshua Bengio, and CEOs from the major AI labs, leading the charge on a major AI safety bill, and running a large compute cluster for global academic researchers working on AI safety research.
As a research engineer intern here, you will work closely with our researchers on projects in fields such as Trojans, Adversarial Robustness, Power Aversion, Machine Ethics, and Out-of-Distribution Detection. You will be assigned a dedicated mentor for the duration of your internship, but we will ultimately treat you as a colleague: you will have the opportunity to advocate for your own experiments and projects and defend their impact. You will plan and run experiments, conduct code reviews, and work in a small team to create a publication with outsized impact. You will leverage our internal compute cluster to run experiments at scale on large language models.
Timing
This application is for the full-time summer internship position. Applications are due by December 5, 2025.
This internship is unpaid; however, CAIS provides the above stipend to assist with academic pursuits and living expenses. The stipend is subject to tax.
You might be a good fit if you:
Are able to read an ML paper, understand its key result, and see how it fits into the broader literature.
Are comfortable setting up, launching, and debugging ML experiments.
Are familiar with relevant frameworks and libraries (e.g., PyTorch).
Communicate clearly and promptly with teammates.
Take ownership of your individual part in a project.
Have co-authored an ML paper at a top conference.
The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
If you require a reasonable accommodation during the application or interview process, please contact [email protected].
We value diversity and encourage individuals from all backgrounds to apply.