Deep Lex

Singapore AI Regulation

No regulation

Asia

Overview

  • No AI-specific legislation has been enacted or formally tabled.
  • Singapore's approach is voluntary and framework-based: the Model AI Governance Framework (updated 2020) and AI Verify governance testing toolkit.
  • IMDA launched a Model AI Governance Framework for Generative AI in 2024.
  • MAS is developing an AI risk management toolkit for the financial sector in partnership with industry (2025-2026).

Key Sources

IMDA — Model AI Governance Framework for Agentic AI
MAS — AI Risk Management Guidelines (November 2025)
MAS — Project MindForge Toolkit (March 2026)

This content is for informational and educational purposes only and does not constitute legal advice.

AI Regulation Timeline

  1. 01/04/2026
    implementation

    Obligations under code of practice for online safety for app distribution services, including age assurance measures, enter into force

    On 1 April 2026, the obligations under the code of practice for online safety for application (app) distribution services, including age assurance measures, enter into force. The code requires service providers to implement content guidelines and moderation measures covering six categories of harmful content, including sexual content, violent content, and cyberbullying, with pre-release app reviews and enforcement powers against non-compliant app providers. Providers must proactively detect and remove child sexual exploitation and terrorism content, and are required to introduce child-specific protections, including age assurance mechanisms, restrictive default account settings, and parental controls. It also requires that an accessible user reporting mechanism be in place, with expedited handling for the most serious content categories and notification obligations for affected users. Providers must also submit annual online safety reports to the Infocomm Media Development Authority, detailing the effectiveness of their safety measures and actions taken in response to user reports.

  2. 10/03/2026
    adoption

    Ministry of Health and Health Sciences Authority adopted Artificial Intelligence in Healthcare Guidelines (AIHGle 2.0)

    On 10 March 2026, the Ministry of Health (MOH) and the Health Sciences Authority (HSA) launched the revised Artificial Intelligence in Healthcare Guidelines (AIHGle 2.0). The guidelines aim to support patient safety and promote trust in the use of artificial intelligence (AI) in the healthcare sector. AIHGle 2.0 focuses on complex AI systems, including machine learning (ML) and deep learning (DL) models, as well as generative AI (GenAI) applications, which may present risks due to their complexity and the potential for model drift. The guidelines set out recommendations for developers, deployers, and users. Developers are expected to manage AI solutions through a Total Product Lifecycle (TPLC) approach, covering risk assessment, software validation, and post-market surveillance. They are also expected to provide clear and accurate information to healthcare partners on aspects such as system limitations, datasets, algorithms, and intended operating contexts.

    Healthcare organisations acting as deployers are expected to establish internal governance arrangements to oversee the use of AI systems, assess whether solutions are fit for purpose, and maintain a registry of deployed tools. They are also expected to provide guidance on the safe deployment of AI systems and ensure that cybersecurity and data protection requirements are met. The guidelines recommend that deployers adopt a risk-based approach to determine deployment models and governance measures proportionate to potential patient harm. They are further encouraged to test and validate AI systems prior to deployment, provide staff training, monitor performance periodically, and prepare adverse event response plans where appropriate. In addition, deployers are encouraged to establish communication mechanisms that support patient understanding and decision-making regarding the use of AI in medical management.
Healthcare professionals acting as users remain responsible for maintaining professional standards of care when using AI-supported tools. They are expected to assess the accuracy and suitability of AI inputs and outputs, participate in relevant training and monitoring processes, respond to adverse events where necessary, and communicate transparently with patients about the use of AI in care delivery.

  3. 22/01/2026
    adoption

    Ministry of Digital Development and Information announced Model AI Governance Framework for Agentic AI (Version 1.0)

    On 22 January 2026, the Ministry of Digital Development and Information (MDDI) released the Model AI Governance Framework for Agentic AI (Version 1.0), developed by the Infocomm Media Development Authority (IMDA). The framework applies to organisations that develop or deploy agentic AI systems capable of multi-step planning, autonomous action-taking, adaptation to new information, and interaction with other agents and external systems. It builds on the Model AI Governance Framework and sets out governance considerations for agentic AI across the design, development, deployment, and post-deployment lifecycle. The framework identifies four areas of focus: assessing and bounding risks upfront, making humans meaningfully accountable, implementing technical controls and processes, and enabling end-user responsibility. It specifies agent-related risk sources and risk types, including erroneous actions, unauthorised actions, biased or unfair outcomes, data breaches, and disruption to connected systems, including cascading effects in multi-agent setups. It sets expectations for defining agent limits on tools, data access, autonomy, and scope of impact; establishing agent identity and permissions; and allocating organisational responsibilities across the agent lifecycle. The guidance also defines human oversight checkpoints for high-risk or irreversible actions, recommends pre-deployment testing and gradual deployment, calls for continuous monitoring, logging, and auditing, and encourages transparency, information, and training for users interacting with or supervising agentic AI systems.

  4. 22/10/2025
    outline

    Cyber Security Agency of Singapore, Government Technology Agency of Singapore, and Infocomm Media Development Authority opened consultation on draft Quantum Readiness Index

    On 22 October 2025, the Cyber Security Agency of Singapore, the Government Technology Agency of Singapore, and the Infocomm Media Development Authority opened a consultation on the draft Quantum Readiness Index (QRI) until 31 December 2025. The QRI is a voluntary self-assessment tool designed to help organisations evaluate their preparedness for future quantum computing risks. It targets technology strategists, including Chief Technology and Chief Information Officers, and assesses readiness across five domains: governance, risk assessment, training and capability, external engagement, and technology. Organisations can rate their maturity from Level 0 (Not Started) to Level 3 (Operational) and receive recommendations for improving their security posture. The tool is intended for periodic use to track progress and adapt to the evolving quantum threat landscape. Developed with input from industry and academia, it aligns with international frameworks such as the World Economic Forum’s Quantum Readiness Toolkit and aims to support organisations beginning their transition towards quantum-safe technologies.
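    The five-domain, four-level structure described above can be sketched as a minimal self-assessment data model. The domain list and the named levels (0 "Not Started", 3 "Operational") come from the entry; the intermediate level labels and the "weakest domain" aggregation rule are illustrative assumptions, not features of the draft QRI.

```python
from dataclasses import dataclass

# Only Levels 0 and 3 are named in the entry above; 1 and 2 are placeholders.
LEVELS = {0: "Not Started", 1: "Level 1", 2: "Level 2", 3: "Operational"}

# The five assessment domains listed in the draft QRI.
DOMAINS = ["governance", "risk assessment", "training and capability",
           "external engagement", "technology"]

@dataclass
class QriSelfAssessment:
    ratings: dict  # domain name -> maturity level (0..3)

    def weakest_domains(self):
        """Domains at the lowest maturity level -- where to focus next (assumed rule)."""
        lowest = min(self.ratings.values())
        return [d for d, lvl in self.ratings.items() if lvl == lowest]

    def summary(self):
        """Human-readable level label per domain."""
        return {d: LEVELS[lvl] for d, lvl in self.ratings.items()}

# Hypothetical organisation part-way through its quantum-readiness journey.
assessment = QriSelfAssessment(ratings={
    "governance": 2, "risk assessment": 1, "training and capability": 0,
    "external engagement": 1, "technology": 2,
})
print(assessment.weakest_domains())  # -> ['training and capability']
```

    A real assessment would follow the QRI's own questionnaire and recommendations; this sketch only shows how periodic self-ratings across the five domains could be recorded and compared over time.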

  5. 22/10/2025
    outline

    Cyber Security Agency of Singapore, Government Technology Agency of Singapore, and Infocomm Media Development Authority opened consultation on draft guidance on quantum-safe migration

    On 22 October 2025, the Cyber Security Agency of Singapore (CSA), the Government Technology Agency of Singapore (GovTech), and the Infocomm Media Development Authority (IMDA) opened a consultation on a draft guidance on quantum-safe migration until 31 December 2025. The guidance notes that future quantum computers could compromise current public-key cryptography and offers voluntary recommendations to support organisations in planning a phased transition to quantum-safe systems. It advises conducting risk assessments, identifying vulnerable cryptographic assets, and establishing governance structures for migration. It also notes that post-quantum cryptography is the main approach, with quantum key distribution applicable in limited cases, and encourages staff training and vendor engagement.

  6. 22/10/2025
    outline

    Cyber Security Agency of Singapore opened consultation on addendum to guidelines and companion guide on securing Artificial Intelligence systems

    On 22 October 2025, the Cyber Security Agency of Singapore (CSA) opened a public consultation on an addendum to its guidelines and companion guide on securing Artificial Intelligence (AI) systems, until 31 December 2025. The addendum focuses on securing agentic AI systems that can plan, act, and make decisions independently. It provides voluntary, risk-based measures for system owners, AI practitioners, and cybersecurity professionals. The guidance covers controls including supply chain security, model and system hardening, authorisation, limiting system autonomy, and continuous monitoring. It also includes practical examples for use cases such as coding assistants, client onboarding, and fraud detection.

  7. 22/10/2025
    declaration

    Singapore and United Kingdom signed Memorandum of Understanding on mutual recognition of consumer Internet-of-Things cybersecurity regimes

    On 22 October 2025, Singapore’s Cyber Security Agency (CSA) and the United Kingdom’s Department for Science, Innovation and Technology (DSIT) signed a Memorandum of Understanding (MoU) on mutual recognition of consumer Internet-of-Things (IoT) cybersecurity regimes. The MoU applies to manufacturers of smart consumer devices, including home assistants, automation systems, and IoT hubs. It allows products certified under Singapore’s Cybersecurity Labelling Scheme for IoT (CLS(IoT)) to be recognised as compliant with the UK’s Product Security and Telecommunications Infrastructure (PSTI) Act, and vice versa, through a simplified application process. The MoU aims to reduce duplicated testing, lower compliance costs, and improve market access, and will take effect on 1 January 2026.

  8. 15/10/2025
    law

    Online Safety (Relief and Accountability) Bill (OSRA Bill) introduced to Parliament

    On 15 October 2025, the Ministry of Digital Development and Information (MDDI) and the Ministry of Law (MinLaw) introduced the Online Safety (Relief and Accountability) (OSRA) Bill in Parliament. The Bill establishes the Online Safety Commission (OSC), a new agency expected to be set up by the first half of 2026, which will administer a statutory reporting mechanism. Victims will generally report harm to online service providers first, but can approach the OSC directly for urgent harms such as intimate image abuse. The OSC will be empowered to issue directions, including content takedown or account restriction, to address online harm, with non-compliance constituting a criminal offence. Additionally, the Bill introduces statutory torts to clarify duties and liabilities for communicators, administrators, and online platforms concerning specified online harms. The Bill aims to enable victims to seek redress from the courts, such as compensatory damages and injunctions. Measures are also included to enhance accountability for communicators of online harms by allowing the OSC to require platforms to disclose identity information of suspected perpetrators, or, for platforms with greater reach, to collect additional identity information.

  9. 14/10/2025
    law

    Criminal Law (Miscellaneous Amendments) Bill 2025 was introduced in Parliament

    On 14 October 2025, the Ministry of Home Affairs (MHA) introduced the Criminal Law (Miscellaneous Amendments) Bill for first reading in Parliament. The Bill updates the Penal Code 1871, Organised Crime Act 2015, Computer Misuse Act 1993, and Miscellaneous Offences (Public Order and Nuisance) Act 1906. It introduces caning for scams and scam-related offences under Sections 420, 5, 6, 8A, 8B, 39B to 39G, 51, and 54, addressing conduct such as remote communication fraud, misuse of Singpass credentials, and SIM card offences. The Bill expands the definition of “intimate image” under Section 377BE to cover AI-generated material, criminalises non-consensual image production, and clarifies that computer-generated child abuse material is prohibited. It also amends Section 292 to criminalise online locations distributing obscene content and introduces graduated penalties for materials depicting minors. By aligning penalties with the scale and digital nature of these offences, the Bill aims to enhance deterrence and accountability across online and cross-border contexts and to keep Singapore’s criminal law responsive to technological developments, including AI-generated imagery, digital impersonation, and cyber-enabled scams.

  10. 30/09/2025
    outline

    Ministry of Law closed consultation on Guide for Using Generative Artificial Intelligence in Legal Sector

    On 30 September 2025, the Singapore Ministry of Law closed the consultation on the Guide for Using Generative Artificial Intelligence in the Legal Sector. The Guide sets out principles and provides practical guidance to support the responsible, ethical, and effective use of GenAI tools in Singapore’s legal sector. To protect the security of data entered into the AI system, the Guide recommends that when processing sensitive contract information using generative AI tools, legal offices consider whether input data will be stored, used for underlying model training, or could be inadvertently reproduced in outputs for unintended recipients. The Guide also recommends that they acquire clear assurance from the provider that confidential information will not be retained or utilised to train its models. The Guide also recommends that legal offices train staff to establish clear workflows and enforce usage protocols, including explicit prohibitions on entering confidential or sensitive information into systems, particularly free-to-use tools. The Guide further recommends that data access controls be configured within the law practice. These controls ensure client confidentiality is protected by regulating who can access specific data, including implementing differentiated access permissions for separate teams to prevent potential conflicts of interest.

  11. 29/09/2025
    inquiry

    Monetary Authority of Singapore and industry partners published report on quantum-safe communications in financial sector

    On 29 September 2025, the Monetary Authority of Singapore (MAS), together with Development Bank of Singapore, Hongkong and Shanghai Banking Corporation, Oversea-Chinese Banking Corporation, United Overseas Bank, Singapore Power Telecommunications and SpeQtral, published a technical report after completing a proof-of-concept sandbox on Quantum Key Distribution (QKD) for secure financial communications. The initiative applies to financial institutions involved in data transfer and communications infrastructure. The sandbox tested QKD’s technical viability and found that it can strengthen network security, including links between data centres and bank premises. The report highlighted the need for better interoperability between QKD providers and stronger security standards for tamper-resistant, auditable trusted nodes. The Monetary Authority of Singapore also noted that strong management support and sufficient resources are vital for advancing quantum-safe initiatives.

  12. 25/09/2025
    outline

    Monetary Authority of Singapore and Advertising Standards Authority of Singapore published guide on responsible financial content creation

    On 25 September 2025, the Monetary Authority of Singapore (MAS) and the Advertising Standards Authority of Singapore (ASAS) jointly published guidance setting out seven expectations for content creators and influencers sharing financial information online. The guidance applies to personal finance and investment-related content shared on social media platforms. It calls on creators to present accurate information explaining both risks and rewards, and to obtain a MAS licence when recommending the buying, selling, or holding of specific investment products, when tailoring advice to individuals’ circumstances, or when dealing in capital market products by helping investors submit orders or soliciting trades. Creators are also expected to verify the credibility of financial institutions through MAS’ Financial Institutions Directory and to avoid promoting entities on MAS’ Investor Alert List. The guidance further highlights compliance with the Singapore Code of Advertising Practice and disclosure of all sponsored content and compensation received.

  13. 25/09/2025
    order

    Monetary Authority of Singapore adopted guidelines on standards of conduct for digital advertising activities

    On 25 September 2025, the Monetary Authority of Singapore (MAS) issued Guidelines on Standards of Conduct for Digital Advertising Activities. They apply to financial institutions and their digital marketers, including employees, influencers, affiliates and agencies. The Guidelines require boards and senior management to take responsibility for all digital advertising. Institutions must assess the suitability of digital media, ensure disclosures are clear despite format limits, and monitor all advertising activities. They must also select and train marketers carefully, manage conflicts of interest, and take disciplinary action against misconduct. The Guidelines will take effect on 25 March 2026.

  14. 04/09/2025
    investigation

    Ministry of Digital Development and Information instructed POFMA Office to issue targeted correction directions to Meta Platforms and X Corp

    On 4 September 2025, the Minister for Digital Development and Information instructed the Protection from Online Falsehoods and Manipulation Act (POFMA) Office to issue Targeted Correction Directions (TCDs) to Meta Platforms and X Corp. after non-compliance with a Correction Direction issued on 1 September 2025 concerning false statements published on 27 August 2025. The false statements alleged that the Infocomm Media Development Authority (IMDA) required multiple edits to submitted material, including the removal of all references to Israel, Palestine, and the conflict in Gaza, and that IMDA subsequently rejected the Arts Entertainment Licence application on the basis that the applicant might go off-script on stage. The TCDs require Meta Platforms and X Corp. to communicate correction notices to all end-users in Singapore who had accessed, or will access, the original posts, ensuring that readers are provided with links to the Government’s clarification through the Factually article “Corrections regarding false statements by Sammy Obeid”, thereby enabling access to both the original content and the correction notice.

  15. 01/09/2025
    outline

    Ministry of Law opened consultation on Guide for Using Generative Artificial Intelligence in Legal Sector

    On 1 September 2025, the Singapore Ministry of Law opened a consultation on the Guide for Using Generative Artificial Intelligence in the Legal Sector until 30 September 2025. The Guide sets out principles and provides practical guidance to support the responsible, ethical, and effective use of GenAI tools in Singapore’s legal sector. To protect the security of data entered into the AI system, the Guide recommends that when processing sensitive contract information using generative AI tools, legal offices consider whether input data will be stored, used for underlying model training, or could be inadvertently reproduced in outputs for unintended recipients. The Guide also recommends that they acquire clear assurance from the provider that confidential information will not be retained or utilised to train its models. The Guide also recommends that legal offices train staff to establish clear workflows and enforce usage protocols, including explicit prohibitions on entering confidential or sensitive information into systems, particularly free-to-use tools. The Guide further recommends that data access controls be configured within the law practice. These controls ensure client confidentiality is protected by regulating who can access specific data, including implementing differentiated access permissions for separate teams to prevent potential conflicts of interest.

  16. 22/07/2025
    order

    Order designating The Online Citizen’s website and social media pages as declared online locations under Protection from Online Falsehoods and Manipulation Act entered into force

    On 22 July 2025, the order under the Protection from Online Falsehoods and Manipulation Act 2019 (POFMA), designating The Online Citizen’s (TOC) website and its Facebook, Instagram, and X pages as “Declared Online Locations”, entered into force. The order was issued due to TOC’s repeated dissemination of falsehoods over a two-year period on topics including the death penalty and financial and social assistance for Singaporeans. Under the order, operators of the platforms are prohibited from receiving financial or material benefits, and the platforms must display notices alerting users to the designation and history of falsehoods. Digital advertising service providers are required to take reasonable steps to prevent paid content from being communicated in Singapore via these platforms, and individuals or entities are barred from providing financial support.

  17. 11/02/2025
    adoption

    Ministry of Digital Development and Information announced Global AI Assurance Pilot

    On 11 February 2025, the Ministry of Digital Development and Information announced the launch of the Global AI Assurance Pilot at the AI Action Summit in Paris, France. Led by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA), this initiative aims to establish global best practices for technical testing of GenAI applications. It brings together AI assurance vendors and companies deploying real-world GenAI systems to develop technical testing standards and strengthen AI governance frameworks. The objective of the pilot is to stimulate the growth of local and international third-party AI assurance markets while shaping future AI regulatory and compliance standards.

Last updated: 01/04/2026