This is where it starts. Training, research, and a cohort of people already doing this work — built for India, connected to the global field.
Three layers. Start anywhere. Go as deep as you want.
Live view of current programs, open applications, and upcoming sessions. Updated in real time.
AI Safety India was my entry point into AI safety. The cohort's facilitation model — not lecture-based — built real critical thinking rather than surface familiarity. I now apply this lens directly in my work on agentic and RPA automations, and through my role at the UNESCO Women for Ethical AI South Asia Chapter.
Coming from eight years in public health, I had seen how poorly designed systems cause unintentional harm. The AI Safety India cohort gave me the structured foundation I was missing. It connected global AI risks to local realities and made clear that AI safety is not a conversation reserved for advanced economies. That clarity pushed me from interest to responsibility. I've since co-founded Ethicore AI Uganda.
I was exploring AI safety on my own, and my learning was scattered. The cohort fixed that: structure, consistency, and people who took the work seriously. That combination changed how I approached the field.
The weekly Wednesday sessions were highly interactive, giving us a platform to share ideas freely. The live hands-on sessions let us apply concepts in real time. Above all, the mentorship made the space feel friendly, approachable, and truly collaborative.
Whether you want to learn, research, fund, or build — tell us who you are and what brought you here.