Not enough people are working on it. We help researchers, engineers, and policymakers pivot into AI safety — through training, research programs, and a community built for the long term.
The number of people working seriously on AI safety is a fraction of what the problem demands. The talent exists. The pipeline doesn't.
Roughly 1,100 people globally work on AI safety full-time. The capabilities field is growing 30–40% per year. The gap is not closing — it's widening. More researchers, more policymakers, more governance professionals are needed now.
The existing AI safety pipeline is concentrated in the US and UK. India has 1.4 billion people, world-class engineering talent, and significant policy weight — and almost no structured entry point into AI safety work.
AI safety requires both technical researchers who understand how models fail and policymakers who can translate that into governance. Building one without the other produces incomplete solutions. We run both tracks, together.
We don't run isolated courses. We run a structured pipeline — from first exposure to original contribution. Everyone starts at reading groups. The path forward depends on how deep you want to go.
Weekly sessions open to anyone curious about AI safety. Paper walkthroughs, discussions, guest speakers. No prerequisites. This is where people discover the field, understand the open problems, and decide if they want to go deeper.
A structured cohort covering the foundations of AI safety — what it is, why it matters, and where the open problems are. Participants choose a track based on their background and goals.
A deeper program for those ready to produce original work. Fellows work on specific research questions with mentor support, building toward a contribution to the field — a paper, a tool, a policy brief, or a role at a safety organisation.
India is where we started. Not because it's the only place that matters, but because the gap here is among the largest in the world — and we're from here.
India is home to one of the world's largest concentrations of AI builders. We run the only structured program routing that talent into safety-focused research and governance. Cohort 1 is complete. Cohort 2 is open.
Our alumni are working in AI safety organisations globally. We've seeded three university AI Safety clubs. We're building the SPAR research pipeline from India and collaborating with ENAIS and AI Safety Atlas internationally.
Apply to Cohort 2 →
Building the pipeline the global AI safety field is missing, starting with India.
Active researcher and field-builder, currently a SPAR Fellow working on technical AI safety.
Whether you want to learn, collaborate, or support — there's a place for you in this work.