AI Safety India Community

420 million AI users.
5.4 million developers.
Fewer than 50
working to make it safe.

This is where it starts. Training, research, and a cohort of people already doing this work — built for India, connected to the global field.

30
Researchers trained
Cohort 1 complete
3
University clubs seeded
Across India
2
Global partnerships
ENAIS · AI Safety Atlas
1
Cohort completed
Cohort 2 open now
Programs

Three layers. Start anywhere. Go as deep as you want.

02 — Learn
Technical AI Safety
Unsafe AI is a technical problem. Learn how alignment fails, where the gaps are, and where your skills make the biggest difference.
10 weeks · Cohort-based
02 — Learn
AI Governance & Policy
AI is being deployed faster than the rules governing it. Learn how policy is made, where it's failing, and where you have leverage to change it.
10 weeks · Cohort-based
03 — Produce
12 weeks · Cohort-based · Stipend supported
Technical Research
Technical Safety Research
Work on unsolved safety problems grounded in India's AI infrastructure. Exit with a paper or benchmark contribution.
12 weeks · Cohort-based · Stipend supported
Governance Research
Governance & Policy Research
Work on real governance gaps — welfare systems, public AI infrastructure, regulatory frameworks. Exit with a policy brief for MeitY or NITI Aayog.
12 weeks · Cohort-based · Stipend supported
Bridge Track
AI Policy Engineering
Build evaluation tools, audit frameworks, and policy-relevant benchmarks. Exit with work that bridges both worlds.
12 weeks · Cohort-based · Stipend supported
What's happening now

Live view of current programs, open applications, and upcoming sessions. Updated in real time.

People who started here

AI Safety India was my entry point into AI safety. The cohort's facilitation model — not lecture-based — built real critical thinking rather than surface familiarity. I now apply this lens directly in my work on agentic and RPA automations, and through my role at the UNESCO Women for Ethical AI South Asia Chapter.

Neha
PM/BA · UNESCO Women for Ethical AI, South Asia

Coming from eight years in public health, I had seen how poorly designed systems cause unintentional harm. The AI Safety India cohort gave me the structured foundation I was missing. It connected global AI risks to local realities and made clear that AI safety is not a conversation reserved for advanced economies. That clarity pushed me from interest to responsibility. I've since co-founded Ethicore AI Uganda.

Sylvia
Co-Founder, Ethicore AI Uganda · AI Governance & Policy

I was exploring AI Safety on my own. It was scattered. The cohort fixed that — structure, consistency, and people who actually took it seriously. That combination changed how I approached it.

Prasanna
Now at an Impact-Aligned Startup

The weekly Wednesday sessions were highly interactive, giving us a platform to share ideas freely. The live hands-on sessions allowed us to apply concepts in real-time. Above all, the mentorship made the space feel friendly, approachable, and truly collaborative.

Chekuri Yukthamukhi
Student
Stay close to the work
Expression of Interest
Start here.

Whether you want to learn, research, fund, or build — tell us who you are and what brought you here.

We reply within 12 hours.