AI Safety · Technical + Governance

AI safety is one of the most urgent problems of our time.

Not enough people are working on it. We help researchers, engineers, and policymakers pivot into AI safety — through training, research programs, and a community built for the long term.

30+ AI safety practitioners trained · Cohort 1 complete
3 university clubs seeded · Across India
2 global partnerships · ENAIS · AI Safety Atlas
2 tracks running · Technical · Policy
Cohort 2 applications open · March 2026
Why it matters

AI is advancing faster than the field studying its risks.

The number of people working seriously on AI safety is a fraction of what the problem demands. The talent exists. The pipeline doesn't.

People working in AI — a comparison (2025 estimates)

AI capabilities (global workforce): 300,000+
Non-technical AI safety (governance, policy): ~500
Technical AI safety (global): ~620
AI safety researchers in India: very few

Source: EA Forum — AI Safety Field Growth Analysis 2025 · industry estimates for the capabilities workforce
01
The field is undersized for the stakes.

Roughly 620 people work on technical AI safety globally, about one for every 500 working on capabilities. The capabilities field is growing 30–40% per year. The gap is not closing. It is widening.

02
Most talent is outside the pipeline.

The pipeline is concentrated in the US and UK. India has 1.4 billion people and world-class engineering talent — with almost no structured entry point into AI safety.

03
We close the gap.

Technical safety and governance together. Not one without the other. Starting with India, building toward a global model.

What we do

One pipeline. Three stages. Two tracks.

We don't run isolated courses. We run a structured pipeline — from first exposure to original contribution. Everyone starts at reading groups. The path forward depends on how deep you want to go.

01
Entry point
Reading Groups

Weekly sessions open to anyone curious about AI safety. Paper walkthroughs, discussions, guest speakers. No prerequisites. This is where people discover the field, understand the open problems, and decide if they want to go deeper.

Those who want to go deeper join the fundamentals course
02
Core program
Fundamentals Course

A structured cohort covering the foundations of AI safety — what it is, why it matters, and where the open problems are. Participants choose a track based on their background and goals.

Technical track
For engineers and researchers. Alignment, interpretability, multi-agent systems, evals. Output: writeups, EA Forum posts, LessWrong articles, early tools.
Policy track
For lawyers, policymakers, social scientists. AI governance, regulation, institutional design. Output: policy memos, governance briefs, research notes.
Top graduates from the fundamentals course are invited to apply
03
Advanced
Research Fellowship

A deeper program for those ready to produce original work. Fellows work on specific research questions with mentor support, building toward a contribution to the field — a paper, a tool, a policy brief, or a role at a safety organisation.

Our work

AI Safety India Community

Our first project. India's only structured program routing technical talent into AI safety research and governance.

30+ practitioners trained · 3 university clubs · 2 global partnerships · 2 tracks running

Cohort 1 is done. Cohort 2 is open. We've seeded university clubs, built international partnerships, and submitted a research node proposal to the Cooperative AI Foundation.

Apply to Cohort 2 →
ENAIS · European Network for AI Safety
AI Safety Atlas · Global field-building network
About

Built by people working on the problem, not observing it.

Aditya Raj
Founder · AI Safety Researcher

Building the pipeline the global AI safety field is missing — starting with India. Active researcher and field-builder, currently a SPAR Fellow working on technical AI safety.

SPAR Research Fellow — technical AI safety researcher
BlueDot Impact alumnus — AI Safety Fundamentals
Jailbreak Hackathon — top 30 globally (Gray Swan)
Ran Cohort 1 — 30 researchers trained, 3 university clubs launched
"AI safety is urgent. The people who will solve it don't all live in San Francisco. We're building the infrastructure to find them, train them, and place them where it matters."

AI Safety Collective is a global organisation. AI Safety India Community is our current project. As the model is validated, we expand to other underserved geographies where technical talent exists and the pipeline doesn't.

We are actively building partnerships with global AI safety organisations, funders, and researchers. If you're working on the same problem, we want to talk.

India — Active · Southeast Asia — Next
Get involved

Three ways to work with us.

Whether you want to learn, collaborate, or support — there's a place for you in this work.

For researchers
Apply to Cohort 2
10 weeks. Technical or governance track. Open to students, engineers, and policy professionals. Applications close soon.
For organisations
Partner with us
Co-develop curriculum, host fellows, connect your research agenda to the India pipeline. We're open to serious collaborations.
For funders
Support the work
We are building the top of the funnel the global AI safety field is missing. Read our proposal or get in touch to discuss.