IMDA sets guardrails for agentic AI with new framework
The Infocomm Media Development Authority (IMDA) has unveiled a new Model AI Governance Framework for Agentic AI, marking what it said is the world’s first framework designed to guide the safe and responsible deployment of autonomous AI agents.
Announced by minister for digital development and information Josephine Teo at the World Economic Forum on Wednesday, the framework builds on Singapore’s original Model AI Governance Framework launched in 2020. It is aimed at organisations deploying agentic AI, whether developed in-house or through third-party solutions, and emphasises that humans remain ultimately accountable for decisions made by AI agents.
Unlike traditional or generative AI systems, agentic AI can reason and take actions on behalf of users, such as updating databases or making payments. While this enables greater automation in areas such as customer service and enterprise productivity, IMDA said it also introduces new risks, including unauthorised actions, misuse of sensitive data and increased automation bias stemming from over-reliance on autonomous systems.
To address these concerns, the framework outlines guidance across four key areas: assessing and bounding risks upfront, ensuring meaningful human oversight, implementing technical controls throughout the AI lifecycle, and enabling end-user responsibility through transparency and training. The goal, IMDA said, is to put guardrails in place without stifling innovation, in line with Singapore’s “practical and balanced” approach to AI governance.
April Chin, co-chief executive officer at Resaro, said the framework fills a critical gap in policy guidance for agentic AI. "The framework establishes critical foundations for AI agent assurance. For example, it helps organisations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails."
IMDA added that the framework is a living document and will continue to evolve with industry feedback and real-world case studies. The authority is also developing additional testing guidelines for agentic AI applications, building on its existing starter kit for large language model-based systems.
The launch adds to Singapore’s broader push to position itself as a hub for trusted AI, alongside initiatives such as AI Verify and its work with regional and international partners through the ASEAN Working Group on AI Governance and the AI Safety Institute.
The agentic AI framework sits alongside Singapore’s earlier governance efforts around generative AI. In 2024, IMDA and the AI Verify Foundation jointly launched the Model Governance Framework for Generative AI, which outlined nine dimensions to support a trusted gen AI ecosystem. These include accountability, data governance, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment research, and the use of AI for public good. The framework calls on policymakers, industry players, researchers and the wider public to share responsibility in addressing risks while enabling innovation.