Not enough people are working on AI safety. We help researchers, engineers, and policymakers pivot into the field — through training, research programs, and a community built for the long term.
The number of people working seriously on AI safety is a fraction of what the problem demands. The talent exists. The pipeline doesn't.
~600 people work on technical AI safety globally. The capabilities field is growing 30–40% per year. The gap is not closing — it's widening.
The pipeline is concentrated in the US and UK. India has 1.4 billion people and world-class engineering talent — with almost no structured entry point into AI safety.
Technical safety and governance together. Not one without the other. Starting with India, building toward a global model.
We don't run isolated courses. We run a structured pipeline — from first exposure to original contribution. Everyone starts at reading groups. The path forward depends on how deep you want to go.
Weekly sessions open to anyone curious about AI safety. Paper walkthroughs, discussions, guest speakers. No prerequisites. This is where people discover the field, understand the open problems, and decide if they want to go deeper.
A structured cohort covering the foundations of AI safety — what it is, why it matters, and where the open problems are. Participants choose a track based on their background and goals.
A deeper program for those ready to produce original work. Fellows work on specific research questions with mentor support, building toward a contribution to the field — a paper, a tool, a policy brief, or a role at a safety organisation.
Our first project. India's only structured program routing technical talent into AI safety research and governance.
Cohort 1 is done. Cohort 2 is open. We've seeded university clubs, built international partnerships, and submitted a research node proposal to the Cooperative AI Foundation.
Apply to Cohort 2 →

Building the pipeline the global AI safety field is missing — starting with India.

Active researcher and field-builder, currently a SPAR Fellow working on technical AI safety.
Whether you want to learn, collaborate, or support — there's a place for you in this work.