Trustworthy AI Forum Singapore 2026

Date: 25 January 2026, 11:00 am
Location: Singapore University of Technology and Design (SUTD), Singapore

Building trustworthy AI ecosystems across India, Singapore, and the Global South.

  • Through editions of the TrAI Forum, we aim to engage a broad range of stakeholders in discussions and thought leadership around the safe and beneficial development and deployment of advanced AI systems.

  • The Secure AI Futures Lab (SAFL) is a dedicated hub for convening research dialogue and accelerating expertise for the safe and beneficial development and deployment of advanced AI. SAFL will hold iterations across tech hubs in India, South Asia, and Singapore, including New Delhi, Bengaluru, Chennai, Hyderabad, and Mumbai.

  • The Trustworthy AI Forum (TrAI Forum) convened researchers, policymakers, and industry practitioners for a focused, high-signal conversation on what it truly takes to build AI systems that are safe, accountable, and beneficial, especially from the vantage point of the Global South.

    Held on the sidelines of the Singapore AAAI-26 Week at SUTD, this inaugural edition explored the governance frameworks, technical challenges, and strategic choices facing nations like India and Singapore as AI reshapes economic and social structures at scale.

    Organised by Impact Academy with FAR.AI as the knowledge partner, the forum brought together five expert panellists from across academia, civil society, and the AI safety industry, alongside a lively audience of practitioners and academics from around the world.

    The Forum is designed as an annual, evolving series, with India set to host the second edition, marking the first time a major AI safety summit of this kind will be hosted by a Global South nation.

Core Themes Explored

  • No single law, regulator, or actor can govern AI alone. The Forum examined multi-stakeholder governance models, exploring the interplay between existing legal frameworks and sectoral regulation, and the gaps that remain unresolved across jurisdictions.

  • With AI infrastructure concentrated in a handful of Western firms, countries in the Global South risk "missed use" of AI as much as its misuse. Panellists debated sovereign model development, open-weight strategies, indigenous data, and the peril of relying solely on application-layer solutions.

  • AI's probabilistic, black-box nature makes exhaustive testing impossible. The Forum explored the limits of current red-teaming approaches, the role of agent-based testing, and the challenge of verifying neural networks at scale, particularly for CBRN and disinformation risks.

  • If AI is opaque and probabilistic, who is accountable when things go wrong? The session interrogated the limits of human culpability, the role of market mechanisms, and the importance of building technical oversight tools that make accountability meaningful in practice.

  • Beyond risks, the Forum concluded with a generative conversation on what success looks like: sector-specific deployment in India, diverse and distributed AI ecosystems, careful management of employment transitions, and the "organisational AI efficiency paradox."

  • Drawing on Singapore's experience with the IMDA joint red-teaming challenge, the Singapore Consensus on AI Safety Research Priorities, and regional efforts like SEA-LION, the Forum examined how interoperable governance frameworks can be built across diverse national contexts.

Key Participants

Prof. Balaraman Ravindran

Head, Wadhwani School of Data Science and AI (WSAI), and the Centre for Responsible AI (CeRAI) at IIT Madras; Head of the Safe & Trusted AI Pillar of the India AI Impact Summit

Prof. Toby Walsh

FAA FTSE FRSN – Laureate Fellow and Scientia Professor of AI, UNSW; Chief Scientist, UNSW AI Institute

Ms. Nur Syahidah Sahrom

Director (Ecosystem Development), Digital Economy Office; Director (Policy and Strategy), National AI Group, MDDI 

Dr. Yifan Jia

CEO, AIDXTech; Adjunct Fellow, SUTD

Prof. Mohan Kankanhalli

Director, NUS AI Institute; Deputy Executive Chairman, AI Singapore

Dr. Kellin Pelrine

Lead, Integrity Team, FAR.AI

Key Takeaways

01 No single actor can govern AI alone.

02 The Global South's primary risk is "missed use," not just misuse.

03 AI can never be fully tested.

04 Human accountability must be made meaningful.

05 Open-weight model access is critical to preventing dangerous power concentration.

06 AI safeguards against manipulation are woefully inadequate.

07 Transparency is necessary but not sufficient.

08 Beware of the organisational AI efficiency paradox.