Asia
This content is for informational and educational purposes only and does not constitute legal advice.
On 26 March 2026, the Ministry of Public Security closes the consultation on the draft decree providing for administrative sanctions in the fields of cybersecurity and personal data protection. The draft decree establishes administrative violations, sanctions, and remedial measures applicable to Vietnamese and foreign organisations. The Law on Government Organisation 2025, the Law on Organisation of Local Government of 16 June 2025, and the Law on Handling of Administrative Violations of 20 June 2012, as amended and supplemented in 2025, provide the procedural and institutional framework for the decree, including the statute of limitations, forms of sanction, remedial measures, and sanctioning authority. Under the Law on Cybersecurity of 10 December 2025, primary sanctions take the form of monetary fines, with a maximum of VND 100 million for individuals and VND 200 million for organisations. Articles 7 to 11 and Articles 32, 33, and 35 sanction the provision or sharing of unlawful information in cyberspace, as well as the failure to remove such information within 24 hours of a request from the specialised cybersecurity protection force under the Ministry of Public Security. Article 36 sanctions failure to store data domestically or establish a branch or representative office in Vietnam as required under Decree No. 53/2022/NĐ-CP detailing certain articles of the Law on Cybersecurity. Articles 32 and 37 sanction failure to verify digital account holders, including the use of artificial intelligence (AI), deepfake technology, or other high-tech means to spoof biometric data. Under the Law on Personal Data Protection of 26 June 2025, fines reach VND 1.5 billion for individuals and VND 3 billion for organisations for violations including unlawful personal data processing, failure to uphold data subject rights, and failure to prepare a personal data processing impact assessment dossier.
Revenue-based fines of up to five percent of total revenue for the immediately preceding financial year in Vietnam apply to violations involving the cross-border transfer or leakage of personal data of 5 million or more Vietnamese citizens. Article 65(1)(d) sanctions collecting, processing, or using data to develop, train, test, or operate artificial intelligence systems in contravention of the law on data and personal data protection. Under the Law on Telecommunications of 24 November 2023 and the Law on Information Technology of 29 June 2006, Articles 38, 39, 48, and 53 sanction unlicensed operation of cybersecurity services, social networks, and electronic game services. Under the Law on Artificial Intelligence 2025, Article 14(1)(p) sanctions the use of AI to copy the style or content of original works to create derivative products without the agreement of the original author. Chapter III allocates sanctioning authority to the Department of Cybersecurity and High-Tech Crime Prevention, provincial-level Public Security Departments, People's Committees at all levels, the Border Defence Force, the Vietnam Coast Guard, and Government Cipher Committee Inspectorates. Supplementary sanctions include licence suspension, operational suspension, confiscation of exhibits and means, and expulsion of foreign individuals.
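The interaction between the fixed fine caps and the revenue-based ceiling can be sketched as a small lookup. This is an illustration only: the function name is ours, and the assumption that the 5% revenue-based ceiling supersedes the fixed cap where higher is our reading of the consultation text, not the decree's wording.

```python
# Hedged sketch of the fine ceilings described in the draft decree.
# All figures in VND; keys and names are illustrative, not statutory terms.

FIXED_CAPS_VND = {
    ("cybersecurity", "individual"): 100_000_000,
    ("cybersecurity", "organisation"): 200_000_000,
    ("personal_data", "individual"): 1_500_000_000,
    ("personal_data", "organisation"): 3_000_000_000,
}

def max_fine(regime: str, offender: str,
             prior_year_vn_revenue: int = 0,
             large_scale_transfer: bool = False) -> int:
    """Maximum monetary fine exposure in VND under the draft decree as described.

    large_scale_transfer: cross-border transfer or leakage of the personal
    data of 5 million or more Vietnamese citizens, which attracts a fine of
    up to 5% of the preceding financial year's revenue in Vietnam.
    """
    cap = FIXED_CAPS_VND[(regime, offender)]
    if regime == "personal_data" and large_scale_transfer:
        # assumption: the revenue-based ceiling applies where it is higher
        return max(cap, prior_year_vn_revenue * 5 // 100)
    return cap
```

Under this reading, an organisation with VND 100 billion of prior-year revenue in Vietnam would face a revenue-based ceiling of VND 5 billion, above the fixed VND 3 billion cap.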
On 26 March 2026, the Ministry of Public Security closes the consultation on the Decree on cybersecurity protection for information systems. The decree applies to agencies and organisations involved in building, managing, operating, upgrading, or expanding information systems in Vietnam, including systems used to provide online services. The decree defines terms including information processing, information system operators, specialised cybersecurity units, and online services, and establishes roles and responsibilities between system managers and operators. It sets out principles requiring cybersecurity to be ensured continuously across the system lifecycle, aligned with technical standards, and implemented in a coordinated and resource-efficient manner with priority given to higher-risk systems. The decree requires cybersecurity assurance plans to ensure system availability and minimise the impact of security incidents on the overall system when individual components are compromised. For level 3 and level 4 systems deployed using outsourced data centre or cloud services, it mandates logical separation from other systems, segmentation of network areas with access controls, and logically independent storage partitions. For level 5 systems or systems of national security importance, stricter requirements apply, including physical separation from other systems, physically independent storage partitions and main network equipment, and controlled access between system components. The decree also allows shared cybersecurity solutions for physically isolated systems only, where they are limited to monitoring, detection, warning, or edge protection functions and do not enable access to or control over internal system data or operations.
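The tiered separation requirements can be summarised in a small lookup table. This is an illustrative sketch: the field names and control descriptions paraphrase the draft and are not the decree's wording, and levels 1 and 2 are outside its scope.

```python
# Minimum separation controls per classified system level, paraphrasing the
# draft decree. Keys and labels are illustrative, not statutory terms.
ISOLATION_REQUIREMENTS = {
    # levels 3-4 on outsourced data centre or cloud services
    3: {"separation": "logical",
        "network": "segmented areas with access controls",
        "storage": "logically independent partitions"},
    4: {"separation": "logical",
        "network": "segmented areas with access controls",
        "storage": "logically independent partitions"},
    # level 5 or systems of national security importance
    5: {"separation": "physical",
        "network": "physically independent main network equipment",
        "storage": "physically independent partitions",
        "inter_component_access": "controlled"},
}

def required_controls(level: int) -> dict:
    """Return the minimum separation controls described for a system level."""
    if level not in ISOLATION_REQUIREMENTS:
        raise ValueError("only levels 3-5 are covered by this sketch")
    return ISOLATION_REQUIREMENTS[level]
```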
On 16 March 2026, the Ministry of Public Security opened a consultation on the draft decree providing for administrative sanctions in the fields of cybersecurity and personal data protection, until 26 March 2026. The draft decree establishes administrative violations, sanctions, and remedial measures applicable to Vietnamese and foreign organisations. The Law on Government Organisation 2025, the Law on Organisation of Local Government of 16 June 2025, and the Law on Handling of Administrative Violations of 20 June 2012, as amended and supplemented in 2025, provide the procedural and institutional framework for the decree, including the statute of limitations, forms of sanction, remedial measures, and sanctioning authority. Under the Law on Cybersecurity of 10 December 2025, primary sanctions take the form of monetary fines, with a maximum of VND 100 million for individuals and VND 200 million for organisations. Articles 7 to 11 and Articles 32, 33, and 35 sanction the provision or sharing of unlawful information in cyberspace, as well as the failure to remove such information within 24 hours of a request from the specialised cybersecurity protection force under the Ministry of Public Security. Article 36 sanctions failure to store data domestically or establish a branch or representative office in Vietnam as required under Decree No. 53/2022/NĐ-CP detailing certain articles of the Law on Cybersecurity. Articles 32 and 37 sanction failure to verify digital account holders, including the use of artificial intelligence (AI), deepfake technology, or other high-tech means to spoof biometric data. Under the Law on Personal Data Protection of 26 June 2025, fines reach VND 1.5 billion for individuals and VND 3 billion for organisations for violations including unlawful personal data processing, failure to uphold data subject rights, and failure to prepare a personal data processing impact assessment dossier.
Revenue-based fines of up to five percent of total revenue for the immediately preceding financial year in Vietnam apply to violations involving the cross-border transfer or leakage of personal data of 5 million or more Vietnamese citizens. Article 65(1)(d) sanctions collecting, processing, or using data to develop, train, test, or operate artificial intelligence systems in contravention of the law on data and personal data protection. Under the Law on Telecommunications of 24 November 2023 and the Law on Information Technology of 29 June 2006, Articles 38, 39, 48, and 53 sanction unlicensed operation of cybersecurity services, social networks, and electronic game services. Under the Law on Artificial Intelligence 2025, Article 14(1)(p) sanctions the use of AI to copy the style or content of original works to create derivative products without the agreement of the original author. Chapter III allocates sanctioning authority to the Department of Cybersecurity and High-Tech Crime Prevention, provincial-level Public Security Departments, People's Committees at all levels, the Border Defence Force, the Vietnam Coast Guard, and Government Cipher Committee Inspectorates. Supplementary sanctions include licence suspension, operational suspension, confiscation of exhibits and means, and expulsion of foreign individuals.
On 16 March 2026, the Ministry of Public Security opened a consultation on the draft decree on cybersecurity protection for information systems, until 26 March 2026. The decree applies to agencies and organisations involved in building, managing, operating, upgrading, or expanding information systems in Vietnam, including systems used to provide online services. The decree defines terms including information processing, information system operators, specialised cybersecurity units, and online services, and establishes roles and responsibilities between system managers and operators. It sets out principles requiring cybersecurity to be ensured continuously across the system lifecycle, aligned with technical standards, and implemented in a coordinated and resource-efficient manner with priority given to higher-risk systems. The decree requires cybersecurity assurance plans to ensure system availability and minimise the impact of security incidents on the overall system when individual components are compromised. For level 3 and level 4 systems deployed using outsourced data centre or cloud services, it mandates logical separation from other systems, segmentation of network areas with access controls, and logically independent storage partitions. For level 5 systems or systems of national security importance, stricter requirements apply, including physical separation from other systems, physically independent storage partitions and main network equipment, and controlled access between system components. The decree also allows shared cybersecurity solutions for physically isolated systems only, where they are limited to monitoring, detection, warning, or edge protection functions and do not enable access to or control over internal system data or operations.
On 1 March 2026, the Law on Artificial Intelligence enters into force. The Law applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. The Law introduces a risk-based classification of AI systems into high-, medium-, and low-risk categories and sets out measures for regulation and enforcement. Article 5 establishes state policies to promote AI as a driver of growth, innovation, and sustainable development, supporting access, learning, and social welfare while preserving national cultural identity. The State must prioritise investment in data and computing infrastructure, safe AI development, human resource training, and strategic AI platforms, and encourage public-private partnerships, international cooperation, ethical and socially trusted AI, and the use of AI in public administration and economic sectors. Articles 6 and 14 regulate sectoral applications and high-risk AI, requiring compliance with risk management principles and relevant laws, with additional requirements in healthcare, education, and areas affecting human life, health, rights, or social order. Providers of high-risk AI must establish and regularly review risk management measures and ensure the quality of training, testing, and operational data. Article 19 mandates the Government to issue and update a National AI Strategy, guiding technology, infrastructure, data, human resource development, research, application, safety, innovation, and national sovereignty. Finally, Articles 28 and 29 cover inspection, violation handling, and compensation, including administrative sanctions, criminal liability, and civil compensation for damages caused by AI systems, with exemptions for force majeure or third-party interference.
On 1 March 2026, the Law on Artificial Intelligence enters into force. The Law establishes a risk-based classification framework for AI systems, distinguishing between high-, medium-, and low-risk systems based on their potential impacts. High-risk AI systems are those that may cause significant harm to life or health, infringe legitimate rights and interests, or affect public interests or national security. Medium-risk systems are those that may mislead or manipulate users, particularly where users are unaware that they are interacting with an AI system or AI-generated content, while low-risk systems comprise all other AI systems that do not meet these criteria. The Government is tasked with issuing implementing rules for this classification framework. Under the Law, providers are required to self-classify AI systems prior to use, with medium- and high-risk systems supported by classification dossiers. Deployers are bound by the assigned risk classification and must ensure system safety and integrity throughout operation. Where modifications introduce new or higher risks, deployers must coordinate with providers to reassess and reclassify the system. Providers of medium- and high-risk AI systems must notify the Ministry of Science and Technology through a single AI portal before deployment, whereas developers of low-risk systems are encouraged to disclose basic system information publicly for transparency. Where the applicable risk level is uncertain, providers may seek guidance from the Ministry. Inspection and supervision are calibrated to risk. High-risk AI systems are subject to regular inspections or ad hoc reviews where violations are suspected. Medium-risk systems are monitored through reporting obligations, sampling, or independent evaluation, while low-risk systems are supervised only in response to incidents or safety concerns. 
Where inconsistencies are identified, authorities may require reclassification, the submission of additional documentation, or the temporary suspension of systems. The Government further regulates the content and procedures for notifications and provides technical guidance on risk classification.
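The precedence among the three tiers described above can be sketched as a simple check. This is illustrative only: the statutory criteria are qualitative, and the Government's implementing rules will govern actual classification.

```python
def classify_ai_system(may_cause_significant_harm: bool,
                       may_mislead_or_manipulate: bool) -> str:
    """Assign a risk tier in the order described by the Law: high-risk
    criteria are checked first, then medium-risk, with all remaining
    systems treated as low-risk."""
    if may_cause_significant_harm:
        # harm to life or health, legitimate rights and interests,
        # public interests, or national security
        return "high"
    if may_mislead_or_manipulate:
        # e.g. users unaware they face an AI system or AI-generated content
        return "medium"
    return "low"
```

The ordering matters: a system meeting both sets of criteria is high-risk, mirroring the Law's residual definition of low-risk systems as those meeting neither.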
On 1 March 2026, the Law on Artificial Intelligence enters into force. The Law introduces a risk-based classification of AI systems into high-, medium-, and low-risk categories. Articles 8 and 21 establish a framework for AI testing and experimentation. The Government is required to establish an AI portal to serve as a digital platform for registering participation in controlled experimentation, receiving classification notifications and incident and periodic reports, and publicly disclosing information on AI systems, conformity assessments, and violation handling. The national AI database is to be managed uniformly to support oversight, management, and public disclosure. Both the portal and database must ensure information security and protect state secrets, business secrets, and personal data, with the Government defining operational and management rules. Controlled experimentation is to be conducted under science, technology, and innovation regulations. Results are used by authorities to recognise conformity assessments and adjust compliance obligations. Competent authorities oversee applications, supervise experimentation, and may suspend or terminate tests if safety, security, or rights are at risk.
On 1 March 2026, the Law on Artificial Intelligence enters into force. The Law requires AI systems to be designed so that AI-generated output is clearly identifiable. Providers must mark AI-generated audio, image, and video content in machine-readable format, while deployers must notify users when such content could cause confusion about the authenticity of events or persons. Simulations of real people or events must be clearly labelled. For cinematographic, artistic, or creative works, labelling should be applied appropriately without obstructing display or enjoyment. Providers and deployers must maintain transparency throughout the provision of AI systems, products, or content, and the Government will define notification and labelling requirements. High-risk AI systems must be designed to allow human supervision and intervention.
On 1 March 2026, the Law on Artificial Intelligence enters into force, making data protection obligations enforceable. Article 7(3) prohibits any collection, processing, or use of data for artificial intelligence systems that violates data protection, intellectual property, or cybersecurity laws. Article 8(3) requires secure disclosure, connection, and sharing of data on the one-stop electronic portal on artificial intelligence and the national database on artificial intelligence systems, with protection of personal data, business secrets, and state secrets. Article 12(1) imposes duties on developers, suppliers, implementers, and users to ensure data safety and to detect and remediate incidents. Article 14(1)(b) and Article 14(2)(b) require secure management and confidentiality of training, testing, and operational data. Articles 17(1) to 17(4) regulate databases serving artificial intelligence under data protection and intellectual property law. Articles 31(1) to 31(3) enforce confidentiality, necessity, proportionality, and security for data provided to competent state authorities.
On 1 March 2026, the Law on Artificial Intelligence enters into force. The Law establishes measures regarding user interaction with AI systems. Article 11 sets out transparency responsibilities for AI providers and deployers. Providers must ensure that AI systems interacting directly with humans make it clear to users when they are engaging with the system. AI-generated audio, image, and video content must be marked in a machine-readable format according to Government regulations. Deployers must notify users when AI-generated or edited content could cause confusion about the authenticity of events or persons and ensure that simulations or imitations of real people or events are clearly distinguishable from real content. For cinematographic, artistic, or creative works, labelling should be applied appropriately without obstructing the display, performance, or enjoyment of the work. Providers and deployers are responsible for maintaining transparency throughout the provision of AI systems, products, or content. The Government will specify the forms of notification and labelling.
On 28 February 2026, the Prime Minister of Vietnam issued Decision No. 367/QĐ-TTg approving a plan for the implementation of the Artificial Intelligence Law (Law No. 134/2025/QH15). The decision assigns the Ministry of Science and Technology to coordinate nationwide implementation and requires ministries and local authorities to review existing legislation to ensure consistency with the law. The implementation plan provides for the development of several implementing instruments, including a government decree detailing the Artificial Intelligence Law and a decree establishing a National Artificial Intelligence Development Fund. The plan also foresees the issuance of Prime Ministerial decisions on datasets for artificial intelligence development and on the classification of high-risk artificial intelligence systems. In addition, the Ministry of Science and Technology is tasked with issuing a circular establishing a national artificial intelligence ethics framework. Implementation measures beginning in 2026 also include updating the national artificial intelligence strategy, developing artificial intelligence human resources, and establishing infrastructure such as a national artificial intelligence portal, shared computing resources, and innovation clusters.
On 15 February 2026, Government Decree No. 342/2025/ND-CP, which specifies certain provisions of the Law on Advertising and mandates specific design requirements for online advertising systems, enters into force. Article 17 establishes several design requirements for online advertising. Static image advertisements must be immediately closable without a waiting time, while video advertisements must be closable after a maximum of five seconds. Advertisements must also be closable with a single interaction, with no fake or hidden closing icons.
On 9 February 2026, the Ministry of Science and Technology opened a consultation on guidelines for controlled testing in Decree No. 2026/ND-CP implementing the Law on Artificial Intelligence. The law implemented by this Decree introduces the controlled testing mechanism to allow organisations to trial AI systems that may not yet comply with current laws, offering exemptions and state support to gather evidence for future regulatory adjustments. The Decree specifies that successful controlled testing provides the evidence needed to certify an AI system, tailor its ongoing legal obligations based on its risk level, and guide state agencies in updating the relevant regulations (Article 24). Participants in the trial gain preferential access to State infrastructure and data. To participate in controlled testing, an AI system must be innovative or in development, face a legal barrier that prevents normal deployment, and be accompanied by a comprehensive risk management plan from the applicant (Article 25). Article 26 sets limits on how long and where an AI system may be tested, allowing a maximum of three years per phase with one possible extension, and permitting the geographic scope to range from a single area to several provinces. Participation requires submitting an application via the Ministry's one-stop web portal, after which medium-risk systems benefit from a simplified approval process. The authority to issue testing certificates depends on the geographic scope. Provincial committees handle local tests, while specialised ministries or the Ministry of Science and Technology oversee multi-provincial or complex systems, with defence and security applications entirely exempt from this decree.
On 9 February 2026, the Ministry of Science and Technology opened a consultation on Decree No. 2026/ND-CP implementing the Law on AI, which establishes design requirements for labelling AI-generated audio, image, and video content. Article 18 requires suppliers to integrate metadata on content origin and system signatures, apply watermarks resistant to removal, incorporate optional user interface warnings, and maintain impact history records. Systems must have built-in marking capabilities before market entry and employ anti-tampering measures. Open-source model providers must offer content detection mechanisms and free authentication tools, while use policies must prohibit interference with identification marks. Article 19 mandates notifications and labels for AI-generated content made public that could cause confusion, with labels placed clearly at fixed positions. Simulated content depicting real people or events must be easily distinguishable from reality. Labelling requirements specify symbols in high contrast for images, verbal notices for audio, and screen corner displays for video, with minimum duration requirements. Cinematic and artistic works may follow adapted labelling that does not obstruct enjoyment as long as the labelling meets minimum requirements. Intermediary platforms must display lawfully provided labels but bear no obligation to actively identify AI-generated content. The Ministry of Science and Technology will issue implementation guidance ensuring feasibility without creating additional obligations or procedures.
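As a rough illustration of what a machine-readable provenance mark might carry, the sketch below emits a JSON record binding AI-generated media to its originating system by hash. The field names and format are entirely hypothetical; the decree prescribes metadata on content origin and system signatures but leaves the concrete format to Ministry guidance.

```python
import hashlib
import json

def make_ai_content_manifest(media_bytes: bytes, system_id: str,
                             provider: str) -> str:
    """Illustrative machine-readable record for AI-generated media.

    All field names here are hypothetical examples, not the decree's schema.
    """
    manifest = {
        "ai_generated": True,
        "system_id": system_id,  # e.g. an identifier issued via the one-stop portal
        "provider": provider,
        # hash binds the record to the exact content it describes
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)
```

Note that a detached record like this would not by itself satisfy the requirements for embedded metadata or removal-resistant watermarks, which must live inside the media file; it only illustrates the kind of information a machine-readable mark would convey.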
On 9 February 2026, the Ministry of Science and Technology opened a consultation on Decree No. 2026/ND-CP implementing the Law on AI. Article 16 establishes that transparency and labelling obligations must be balanced against the protection of rights and technical capabilities, serving as the foundational principle for all specific compliance requirements detailed later in the chapter. Article 29 requires organisations conducting tests to publicly communicate risks through mass media and directly to relevant parties, ensuring information about products is accurate, complete, and truthful. Testing entities must secure voluntary participation from users who have been clearly informed of the nature, objectives, and risks, and participants retain the right to withdraw at any time without explanation through a simple process. Organisations must maintain human intervention capabilities over AI system predictions, proposals, or decisions. A data processing procedure must be established that monitors high risks to data subjects' rights, with all personal data deleted upon test completion.
On 9 February 2026, the Ministry of Science and Technology opened a consultation on Decree No. 2026/ND-CP implementing the Law on AI. Article 38 requires agencies and individuals managing or using AI infrastructure and data to apply technical and management measures ensuring confidentiality, integrity, and availability of infrastructure and data. Incident notification obligations related to personal data, cybersecurity, and cyber information security follow existing laws on data protection and cybersecurity. Cybersecurity requirements in Article 38 do not create new obligations to disclose data unless otherwise provided for by law. The Decree also creates the National Database on Artificial Intelligence to support state management, research, and testing. The Database organises data into open data, conditionally open data, and commercial data accessible through agreements or licensing (Article 36).
On 9 February 2026, the Ministry of Science and Technology opened a consultation on Decree No. 2026/ND-CP implementing the Law on AI. The Decree mandates that suppliers of high-risk and medium-risk AI systems submit a classification dossier to the Ministry via the one-stop web portal before market circulation (Article 12). For high-risk systems, the dossier requires system identification, an overview description, data information, risk management details, test results, and guiding documents. Medium-risk systems require a streamlined dossier including system identification, a description of intended use, data information, risk management methods, and supporting documents. Suppliers of medium or high-risk AI systems must submit a declaration via the one-stop portal before market release, providing system details and risk classification for post-inspection purposes rather than seeking prior approval (Article 13). Classification dossiers must be retained by the supplier throughout the system's provision period and for at least five years after it ceases operation. Upon receiving valid information, the portal issues an electronic confirmation with a unique system identifier, and basic system information is added to the National Database on Artificial Intelligence. Suppliers must update their registration following significant changes to system functions, purpose, or risk level.
On 9 February 2026, the Ministry of Science and Technology opened a consultation on Decree No. 2026/ND-CP implementing the Law on AI. The Decree mandates risk-based supervision for high-risk AI systems, with the Ministry of Science and Technology providing implementation guidance and publishing oversight reports (Article 15). The law implemented by this Decree institutes a controlled testing mechanism for AI systems. Article 28 allocates the authority to issue testing certificates according to geographic scope, with defence and security applications exempt from the decree. Article 31 empowers issuing agencies to grant exemptions, conduct inspections, and revoke permits while also allowing them to propose regulations and exempting them from certain liabilities. Article 32 establishes principles of unification, openness, and security when building Vietnam's national AI infrastructure, defines its components, and assigns lead roles to the Ministry of Public Security for data centres and the Ministry of Science and Technology for technology development and coordination. Article 33 governs the provision of state-invested AI infrastructure. Article 35 assigns the Ministry of Finance the lead role in issuing guidance on preferential mechanisms. Article 36 establishes the National Database on AI as managed by the Ministry of Science and Technology (content) and hosted by the National Data Centre (infrastructure), requiring all ministry and provincial AI databases to connect uniformly according to state standards. Articles 39, 40, and 43 set national priorities for state support of AI technologies and specify fair and transparent measures of support. Article 41 establishes voluntary "AI Linkage Clusters" following a State-University-Enterprise "Golden Triangle" model, which can be recognised by the Ministry of Science and Technology through the one-stop AI Portal.
Article 42 provides cost support for compliance testing, free state-developed tools, and a voucher system for infrastructure and consultancy services, with implementation guided by the Ministry of Science and Technology.
On 1 January 2026, Law No. 75/2025/QH15 enters into force, applying the amended Articles 21, 22, and 23 of the Law on Advertising, which set binding design requirements for advertising in print, broadcast, and online environments. Advertising space is limited to a maximum of 30% of the total area of a newspaper issue and 40% of a magazine issue, excluding publications specialising in advertising, with mandatory visual separation from editorial content. Advertising time on promotional television channels is limited to 10% of total daily broadcast time and to 5% on paid television channels, excluding dedicated advertising channels. Programmes of less than 5 minutes may not be interrupted. Programmes of 5 to under 15 minutes may be interrupted once. Programmes of 15 minutes or more may include one additional interruption for each additional 15 minutes, with each interruption capped at 5 minutes. Scrolling or moving advertising text displayed alongside official information may not exceed 10% of the screen area and must not interfere with main content. Online advertisements must display clear identifying marks and provide user interface features enabling users to close advertisements, report violations, or refuse inappropriate advertising.
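On one reading of the broadcast interruption rules above (the text leaves open whether a programme of exactly 15 minutes earns an additional slot; here it does not), the permitted number of interruptions can be computed as:

```python
def max_ad_interruptions(duration_minutes: int) -> int:
    """Permitted advertising interruptions for a broadcast programme, per
    one reading of the amended Law on Advertising rules. Illustrative only."""
    if duration_minutes < 5:
        return 0  # programmes under 5 minutes may not be interrupted
    if duration_minutes < 15:
        return 1  # programmes of 5 to under 15 minutes: one interruption
    # 15 minutes or more: one interruption, plus one additional
    # interruption per additional full 15 minutes
    return 1 + (duration_minutes - 15) // 15

MAX_INTERRUPTION_MINUTES = 5  # each interruption is capped at 5 minutes
```

Under this reading, a 60-minute programme would permit four interruptions, each capped at five minutes.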
On 1 January 2026, the Law on Personal Data Protection enters into force. The Law states that organisations and individuals handling personal data must obtain clear consent from data subjects before processing their data, with special provisions for sensitive data and the personal data of children (Article 9, 2, 24). The processing of personal data must be conducted in accordance with the principles of purpose limitation, data minimisation, and accuracy (Article 3). The processing of personal data by automated means must be disclosed to data subjects, along with an explanation of the potential impact on their rights and interests (Article 9). Data subjects must be afforded the option to decline the processing of their data by AI systems (Article 4). The Law allows organisations to utilise personal data for the development of self-learning algorithms and automated systems, such as artificial intelligence (AI), provided they comply with these articles. Under Article 41 of the "Implementing Regulations of the Law on Personal Data Protection" (adopted 31 December 2025), small businesses have five years from the effective date of the Law on Personal Data Protection to appoint dedicated data protection personnel and conduct impact assessments. This exemption does not apply if they provide data processing services, directly handle sensitive data, or process the data of over 100,000 individuals.
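The small-business grace period in Article 41 can be read as a simple eligibility test. This is a sketch: the thresholds defining a "small business" sit elsewhere in Vietnamese law and are not modelled here.

```python
def sme_grace_period_applies(is_small_business: bool,
                             provides_processing_services: bool,
                             handles_sensitive_data: bool,
                             data_subject_count: int) -> bool:
    """Whether the five-year deferral of the duties to appoint data
    protection personnel and conduct impact assessments applies,
    per Article 41 as described. Parameter names are illustrative."""
    if not is_small_business:
        return False
    # any single disqualifying condition removes the exemption
    disqualified = (provides_processing_services
                    or handles_sensitive_data
                    or data_subject_count > 100_000)
    return not disqualified
```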
On 1 January 2026, the Law on Digital Technology Industry enters into force with a grace period. The law establishes the National Committee for the Promotion of the Digital Technology Industry as the organisation responsible for directing and coordinating the resolution of matters related to cooperation, investment, and the implementation of projects and programmes to promote the development of the digital technology industry. The law designates the Ministry of Information and Communications (MIC) to issue ethical guidelines for the development, deployment, and application of AI. Furthermore, the MIC will provide guidelines on the labelling of digital technology products created by AI. The MIC may also issue technical regulations and mandate the application of international, regional, foreign, and national standards in digital technology industry activities. Finally, the MIC provides guidance on risk level classification and defines the measures, obligations, and responsibilities required to mitigate risks associated with AI systems at each level.
On 1 January 2026, the Law on Digital Technology Industry, including testing requirements, enters into force. The law establishes a mechanism for the controlled and time-limited testing of digital technology products and services, subject to specific restrictions on location, duration, scope, and eligible participants. The mechanism applies to products and services that meet certain criteria, particularly in cases where no existing legislation governs them. Enterprises wishing to take part in the testing process must submit the required documentation in line with the established procedures. Participation in this mechanism does not constitute authorisation to place the tested products or services on the market. The testing period is limited to a maximum of two years and must take place within the territory of Vietnam. Enterprises approved for testing are exempt from civil liability for damages and from administrative or criminal liability, provided they fully comply with the conditions set out in the testing permit, unless they are aware of potential risks and fail to disclose them or take appropriate preventive action. Test results must be submitted every six months, and participants must adhere to the law’s provisions on user and consumer protection.
On 1 January 2026, the Law on Digital Technology Industry enters into force. The law sets out quality management principles and introduces a conformity assessment framework. The Ministry of Information and Communications (MIC) will issue ethical guidelines for the development, deployment, and application of artificial intelligence (AI). Additionally, digital technology products generated by AI must be clearly labelled in a machine-readable format to ensure they can be identified as artificially created or manipulated. The law also introduces a risk-based classification system for AI systems, based on their potential impact on human health and safety, individual rights and lawful interests, national critical information systems, and critical infrastructure.
On 31 December 2025, the Prime Minister adopted the Implementing Regulations of the Law on Personal Data Protection, which enter into force on 1 January 2026. The Implementing Regulations further specify the regulatory requirements established by the framework of the Law on Personal Data Protection. Articles 3 and 4 specify the distinction between basic personal data and sensitive personal data, with the latter requiring stricter security and confidentiality. Article 5 sets out the procedures for data controllers and processors to comply with data subject requests, including specific response and completion timelines, while Article 6 sets out the methods for obtaining data subject consent for the processing of personal data. The Implementing Regulations also set out additional data protection requirements for a range of specific processing activities, such as financial activities, big data processing, AI systems, blockchain, and cloud computing, including maintaining mechanisms to explain the use of personal data by algorithms to data subjects. Further, the Implementing Regulations specify the duties of internal data protection departments and providers of data protection services. The detailed requirements for personal data impact assessments, which controllers and processors must carry out, are set out in Articles 19 and 20. Controllers and processors must prepare, maintain, and submit an impact assessment dossier to the protection agency within 60 days, detailing their processing activities, risks, and safeguards. Such assessments, applicable to both the processing and the cross-border transfer of personal data, must be updated every 6 months or, in the case of major organisational or operational changes, within 10 days.
Under Article 41, small businesses are exempt from the requirement to appoint dedicated data protection personnel and conduct impact assessments for 5 years from the effective date of the Law on Personal Data Protection, unless they provide data processing services, directly handle sensitive data, or process the data of over 100,000 individuals. Finally, Article 28 of the Implementing Regulations specifies the necessary details for processors or controllers to include in notifications of violations of personal data protection regulations to data protection agencies.
On 26 December 2025, the Government adopted Decree No. 342/2025/ND-CP, which specifies certain provisions of the Law on Advertising, and which mandates specific design requirements for online advertising systems. The Decree enters into force on 15 February 2026. Article 17 establishes a number of design requirements for online advertisers. Static image advertisements must be immediately closable without a waiting time, while video advertisements must be closable after a maximum of five seconds. Advertisements must also be closable with a single interaction, with no fake or hidden closing icons.
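The Article 17 design rules lend themselves to an automated check. The following is a minimal sketch under our own assumptions: the `AdUnit` fields and rule wording are illustrative stand-ins, not terminology from the Decree.

```python
from dataclasses import dataclass

@dataclass
class AdUnit:
    kind: str                  # "static" or "video" (our simplified taxonomy)
    close_delay_seconds: float # time before the close control becomes active
    close_interactions: int    # taps/clicks needed to dismiss the ad
    has_fake_close_icon: bool  # fake or hidden closing icons

def violations(ad: AdUnit) -> list[str]:
    """Return a list of Article 17 design rules the ad unit appears to breach."""
    problems = []
    if ad.kind == "static" and ad.close_delay_seconds > 0:
        problems.append("static image ads must be immediately closable")
    if ad.kind == "video" and ad.close_delay_seconds > 5:
        problems.append("video ads must be closable after at most five seconds")
    if ad.close_interactions > 1:
        problems.append("closing must require a single interaction")
    if ad.has_fake_close_icon:
        problems.append("fake or hidden closing icons are not allowed")
    return problems
```

A compliant video ad with a 5-second close delay and a single real close button would return an empty list.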
On 11 December 2025, the National Assembly of Vietnam adopted the Law on Digital Transformation. The Law establishes design requirements for digital systems under Article 7. Article 7(1) requires systems to be designed based on digital platforms and shared components. Article 7(2) mandates the efficient use of cloud computing infrastructure. Article 7(3) requires connectivity and integration from the outset, based on open standards and standard application programming interfaces. Article 7(4) requires cybersecurity and data protection to be incorporated from the design and development stages. Article 7(5) mandates a data-centric approach with a one-time declaration by default. Article 7(6) requires a user-focused design, ensuring convenience and accessibility. Article 7(7) calls for flexibility, ease of upgrading, and modular architecture. Article 4(5) further requires the implementation of cybersecurity and data protection measures at the design stage.
On 11 December 2025, the National Assembly of Vietnam adopted the Law on Digital Transformation. The Law includes testing-related provisions applicable to organisations participating in digital transformation. Article 4(6) establishes activities to research, test, pilot, evaluate, and deploy digital technology products and services and new digital transformation models, including controlled testing. Article 4(11) confirms pilot development through building and test-operating digital systems, digital platforms, and digital services within a limited scope to evaluate effectiveness. Article 28 confirms that agencies, organisations, and businesses may conduct controlled trials of processes, solutions, products, services, and business models in digital transformation in accordance with relevant laws.
On 10 December 2025, the National Assembly adopted the Law on Artificial Intelligence (AI), which applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. The Law introduces a risk-based classification of AI systems into high, medium, and low-risk categories and sets out measures for regulation and enforcement. Article 5 establishes state policies to promote AI as a driver of growth, innovation, and sustainable development, supporting access, learning, and social welfare while preserving national cultural identity. The State must prioritise investment in data and computing infrastructure, safe AI development, human resource training, and strategic AI platforms, and encourage public-private partnerships, international cooperation, ethical and socially trusted AI, and the use of AI in public administration and economic sectors. Articles 6 and 14 regulate sectoral applications and high-risk AI, requiring compliance with risk management principles and relevant laws, with additional requirements in healthcare, education, and areas affecting human life, health, rights, or social order. Providers of high-risk AI must establish and regularly review risk management measures and ensure the quality of training, testing, and operational data. Article 19 mandates the Government to issue and update a National AI Strategy, guiding technology, infrastructure, data, human resource development, research, application, safety, innovation, and national sovereignty. Finally, Articles 28 and 29 cover inspection, violation handling, and compensation, including administrative sanctions, criminal liability, and civil compensation for damages caused by AI systems, with exemptions for force majeure or third-party interference.
On 10 December 2025, the National Assembly adopted the Law on Artificial Intelligence (AI), which applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. The Law establishes a risk-based classification framework for AI systems, distinguishing between high-, medium-, and low-risk systems based on their potential impacts. High-risk AI systems are those that may cause significant harm to life or health, infringe legitimate rights and interests, or affect public interests or national security. Medium-risk systems are those that may mislead or manipulate users, particularly where users are unaware that they are interacting with an AI system or AI-generated content, while low-risk systems comprise all other AI systems that do not meet these criteria. The Government is tasked with issuing implementing rules for this classification framework. Under the Law, providers are required to self-classify AI systems prior to use, with medium- and high-risk systems supported by classification dossiers. Deployers are bound by the assigned risk classification and must ensure system safety and integrity throughout operation. Where modifications introduce new or higher risks, deployers must coordinate with providers to reassess and reclassify the system. Providers of medium- and high-risk AI systems must notify the Ministry of Science and Technology through a single AI portal before deployment, whereas developers of low-risk systems are encouraged to disclose basic system information publicly for transparency. Where the applicable risk level is uncertain, providers may seek guidance from the Ministry. Inspection and supervision are calibrated to risk. High-risk AI systems are subject to regular inspections or ad hoc reviews where violations are suspected. 
Medium-risk systems are monitored through reporting obligations, sampling, or independent evaluation, while low-risk systems are supervised only in response to incidents or safety concerns. Where inconsistencies are identified, authorities may require reclassification, the submission of additional documentation, or the temporary suspension of systems. The Government further regulates the content and procedures for notifications and provides technical guidance on risk classification.
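The self-classification described above follows a simple precedence: significant-harm criteria dominate, then the misleading/manipulation criteria, with low risk as the residual category. The sketch below compresses the statutory criteria into two boolean inputs purely for illustration; real classification would require the Government's implementing rules.

```python
def classify(may_cause_significant_harm: bool,
             may_mislead_or_manipulate: bool) -> str:
    """Illustrative precedence of the high/medium/low classification."""
    if may_cause_significant_harm:
        # Harm to life or health, legitimate rights, public interest, security.
        return "high"
    if may_mislead_or_manipulate:
        # E.g. users unaware they face an AI system or AI-generated content.
        return "medium"
    # All other AI systems fall into the residual low-risk category.
    return "low"
```

Note that a system meeting both criteria is classified high, since the harm test is evaluated first.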
On 10 December 2025, Vietnam’s National Assembly adopted the Law on Artificial Intelligence (AI), which applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. The Law introduces a risk-based classification of AI systems into high, medium, and low-risk categories. Articles 8 and 21 establish a framework for AI testing and experimentation. The Government is required to establish an AI portal to serve as a digital platform for registering participation in controlled experimentation, receiving classification notifications, incident and periodic reports, and publicly disclosing information on AI systems, conformity assessments, and violation handling. The national AI database is to be managed uniformly to support oversight, management, and public disclosure. Both the portal and database must ensure information security and protect state secrets, business secrets, and personal data, with the Government defining operational and management rules. Controlled experimentation is to be conducted under science, technology, and innovation regulations. Results are used by authorities to recognise conformity assessments and adjust compliance obligations. Competent authorities oversee applications, supervise experimentation, and may suspend or terminate tests if safety, security, or rights are at risk.
On 10 December 2025, the National Assembly adopted the Law on Artificial Intelligence (AI), which applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. The Law requires AI systems to be designed so that AI-generated output is clearly identifiable. Providers must mark AI-generated audio, image, and video content in machine-readable format, while deployers must notify users when such content could cause confusion about the authenticity of events or persons. Simulations of real people or events must be clearly labeled. For cinematographic, artistic, or creative works, labeling should be applied appropriately without obstructing display or enjoyment. Providers and deployers must maintain transparency throughout the provision of AI systems, products, or content, and the Government will define notification and labeling requirements. High-risk AI systems must be designed to allow human supervision and intervention.
On 10 December 2025, the National Assembly of Vietnam adopted the Law on Artificial Intelligence, confirming binding data protection obligations for artificial intelligence systems. Article 7(3) prohibits unlawful collection, processing, or use of data in artificial intelligence development, training, testing, and operation. Article 8(3) mandates protection of personal data, business secrets, and state secrets in the disclosure and sharing of data through the one-stop electronic portal on artificial intelligence and the national database on artificial intelligence systems. Article 12(1) requires all developers, suppliers, implementers, and users to ensure data safety and respond to incidents. Article 14(1)(b) and Article 14(2)(b) impose obligations to securely manage training, testing, and operational data and maintain confidentiality. Article 17(1) to Article 17(4) govern the creation, sharing, and exploitation of databases serving artificial intelligence in accordance with data protection and intellectual property law. Article 31(1) to Article 31(3) require confidentiality, proportionality, and secure handling of data used for state management.
On 10 December 2025, the National Assembly adopted the Law on Artificial Intelligence (AI), which applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. The Law establishes measures regarding user interaction with AI systems. Article 11 sets out transparency responsibilities for AI providers and deployers. Providers must ensure that AI systems interacting directly with humans make it clear to users when they are engaging with the system. AI-generated audio, image, and video content must be marked in a machine-readable format according to Government regulations. Deployers must notify users when AI-generated or edited content could cause confusion about the authenticity of events or persons and ensure that simulations or imitations of real people or events are clearly distinguishable from real content. For cinematographic, artistic, or creative works, labeling should be applied appropriately without obstructing the display, performance, or enjoyment of the work. Providers and deployers are responsible for maintaining transparency throughout the provision of AI systems, products, or content. The Government will specify the forms of notification and labeling.
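The Law leaves the concrete machine-readable labelling format to Government regulations. One conceivable shape is a JSON sidecar accompanying the media file; the field names below are entirely our own invention, offered only to illustrate what "machine-readable" marking could look like.

```python
import json

def label_sidecar(media_file: str, generator: str) -> str:
    """Serialise a hypothetical machine-readable AI-content label as JSON."""
    label = {
        "file": media_file,
        "ai_generated": True,          # machine-readable flag
        "generator": generator,        # hypothetical provenance field
        "disclosure": "This content was generated by an AI system.",
    }
    return json.dumps(label, ensure_ascii=False)
```

A deployer could publish the sidecar next to the media file, keeping the visible work unobstructed while still satisfying a machine-readable check.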
On 27 November 2025, the Law on Artificial Intelligence was introduced to the National Assembly of Vietnam, establishing registration-related obligations for high-risk artificial intelligence systems applicable to suppliers. Article 10(1) requires the supplier to classify the artificial intelligence system before deployment. Article 10(3) requires suppliers of high-risk artificial intelligence systems to notify the Ministry of Science and Technology of the classification results through the one-stop electronic portal on artificial intelligence before putting the system into use. Article 8(1) provides that the one-stop electronic portal on artificial intelligence supports receipt of classification notifications and public disclosure of information on artificial intelligence systems. Article 8(2) provides for a national database on artificial intelligence systems serving management, monitoring, and public disclosure in accordance with the law.
On 27 November 2025, the Law on Artificial Intelligence was introduced to the National Assembly of Vietnam, setting out testing requirements applicable to organisations and individuals participating in artificial intelligence activities. The Law introduces a risk-based classification of AI systems into high, medium, and low-risk categories. Articles 8 and 21 establish a framework for AI testing and experimentation. The Government is required to establish an AI portal to serve as a digital platform for registering participation in controlled experimentation, receiving classification notifications, incident and periodic reports, and publicly disclosing information on AI systems, conformity assessments, and violation handling. The national AI database is to be managed uniformly to support oversight, management, and public disclosure. Both the portal and database must ensure information security and protect state secrets, business secrets, and personal data, with the Government defining operational and management rules. Controlled experimentation is to be conducted under science, technology, and innovation regulations. Results are used by authorities to recognise conformity assessments and adjust compliance obligations. Competent authorities oversee applications, supervise experimentation, and may suspend or terminate tests if safety, security, or rights are at risk.
On 27 November 2025, the Law on Artificial Intelligence was introduced to the National Assembly of Vietnam, setting out design requirements for high-risk artificial intelligence systems applicable to suppliers. Article 14(1)(a) requires the establishment and maintenance of risk management measures and regular review when significant changes or new risks arise. Article 14(1)(b) requires managing training, testing, and operational data to ensure quality within technical capabilities and in accordance with the intended use. Article 14(1)(c) requires establishing, updating, and maintaining technical records and operational logs for conformity assessment and post-commissioning inspection. Article 14(1)(d) requires designing the system to ensure the ability of humans to monitor and intervene. Article 11(1) requires that artificial intelligence systems interacting directly with humans are designed and operated so that users are aware of the interaction. Article 11(2) requires that audio, visual, and video content generated by artificial intelligence systems is marked in a machine-readable format.
On 27 November 2025, the Law on Artificial Intelligence was introduced to the National Assembly of Vietnam, establishing the jurisdiction of authorities responsible for overseeing organisations and individuals participating in artificial intelligence activities. Article 30(2)(a) establishes unified state management of artificial intelligence by the Government. Article 30(2)(b) designates the Ministry of Science and Technology as the focal authority responsible to the Government for nationwide state management of artificial intelligence. Article 30(2)(c) assigns ministries and ministerial-level agencies the responsibility to coordinate with the Ministry of Science and Technology within their respective functions, duties, and powers. Article 30(2)(d) assigns Provincial People’s Committees the responsibility for state management of artificial intelligence at the local level. These provisions establish the competent authorities overseeing suppliers, implementers, developers, and other organisations and individuals participating in artificial intelligence activities.
On 27 November 2025, the Law on Artificial Intelligence was introduced to the National Assembly of Vietnam, establishing data protection requirements for artificial intelligence activities. Article 7(3) prohibits collecting, processing, or using data for developing, training, testing, or operating artificial intelligence systems in violation of laws on data, personal data protection, intellectual property, and cybersecurity. Article 8(3) requires that public disclosure, connection, and sharing of data on the one-stop electronic portal on artificial intelligence and the national database on artificial intelligence systems ensure information security and protect personal data, business secrets, and state secrets. Article 12(1) requires developers, suppliers, implementers, and users to ensure data safety and timely detection and remediation of incidents. Article 14(1)(b) and Article 14(2)(b) require management of training, testing, and operational data to ensure data security and confidentiality. Article 17(1) to Article 17(4) regulate databases serving artificial intelligence in compliance with data protection and intellectual property law. Article 31(1) to Article 31(3) establish confidentiality, necessity, proportionality, and security requirements for data provided for state management purposes.
On 27 November 2025, the Law on Artificial Intelligence was introduced to the National Assembly of Vietnam, defining user rights and protections. Article 11(1) requires users to be able to recognise when they are interacting with an artificial intelligence system. Article 11(3) and Article 11(4) require clear notification and labelling when text, audio, images, or videos are generated or modified by artificial intelligence and may cause confusion about authenticity, including simulated persons or events. Article 3(7) defines affected parties whose legal rights and interests may be impacted. Article 14(1)(f) and Article 14(2)(d) require suppliers and implementing parties to provide users and affected parties with publicly available information on system functions, operating methods, and risk warnings. Article 15(2)(c) recognises the right of users to lawfully use low-risk artificial intelligence systems. Article 29(2) to Article 29(4) establish user and affected party rights to compensation where damage occurs.
On 31 October 2025, the Law on Digital Transformation was introduced to the National Assembly of Vietnam. The Law on Digital Transformation applies to domestic and foreign organisations and individuals directly involved in or related to digital transformation in Vietnam under Article 2. Article 7 sets out principles of digital system architecture and design, including the use of digital platforms and shared components under Article 7(1), efficient use of cloud computing infrastructure under Article 7(2), connectivity and integration based on open standards and standard application programming interfaces under Article 7(3), and ensuring cybersecurity and data protection from the design and development stages under Article 7(4). Article 7(5) requires a data-centric approach based on one-time declaration by default. Article 7(6) requires a focus on the user, including accessibility. Article 7(7) requires flexibility, ease of upgrading, and adaptability through modular architecture. Article 4(5) requires cybersecurity and data protection measures in the design of digital systems.
On 31 October 2025, the Law on Digital Transformation was introduced to the National Assembly of Vietnam. The Law on Digital Transformation applies to organisations and individuals directly involved in or related to digital transformation in Vietnam under Article 2. Article 4(6) provides for activities to research, test, pilot, evaluate, and deploy digital technology products and services and new models and solutions for digital transformation, and for the implementation of a controlled testing mechanism in digital transformation. Article 4(11) provides for pilot development, including building and test-operating digital systems, digital platforms, and digital services within a limited scope to evaluate effectiveness before investment, leasing, or procurement. Article 28 provides that agencies, organisations, and businesses may conduct controlled trials of processes, solutions, products, services, and business models in digital transformation in accordance with relevant laws.
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence, which prohibits unacceptably risky AI practices. These include manipulation causing harm, real-time remote biometric identification in public for law enforcement except under special legal authorisation, large-scale biometric databases, and emotion recognition in workplaces and education.
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence. The Draft Law requires that high-risk artificial intelligence systems be registered in the National Database on Artificial Intelligence before being put into use under Article 22(2). It provides that registration and updating must be carried out in the electronic environment and linked with the National Public Service Portal and relevant specialised databases. The Draft Law requires that registration information be updated when there are important changes to the system under Article 22(2). It further provides that basic information on high-risk artificial intelligence systems is made public, subject to limits necessary to protect personal data, business secrets and state secrets under Article 22(4). The Draft Law links registration obligations with supervision and inspection mechanisms, allowing competent state management agencies to verify compliance and request re-evaluation where higher risks are identified under Article 23(2).
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence. The Draft Law requires high-risk artificial intelligence systems subject to pre-inspection to undergo controlled environment testing before being put on the market under Article 16(1) and (2). It provides that testing results obtained in controlled environments may be used as a technical basis for conformity assessment and certification procedures under Article 16(2). The Draft Law requires suppliers and implementers of high-risk artificial intelligence systems to maintain test results and performance evaluation documents as part of mandatory technical records under Article 15(3). It further requires that certified high-risk artificial intelligence systems remain subject to post-certification testing and monitoring, including re-testing where major changes to algorithms, training data or intended use occur under Article 16(4). The Draft Law also provides for the establishment of artificial intelligence testing and inspection facilities to assess safety and reliability under Article 51.
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence setting design obligations for high-risk AI. Providers must ensure human oversight, proportional safety and cybersecurity measures, transparency towards affected individuals, and maintain documentation such as testing and risk-mitigation records. The proposal also requires that AI-generated content, including deepfakes, be clearly labelled.
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence. The Draft Law sets out binding cross-references to existing data protection regulation. Article 6(2) requires that activities related to artificial intelligence systems, where they involve personal data, comply with legal regulations on personal data protection and cybersecurity. Article 25(2)(d) requires that the development of national artificial intelligence infrastructure ensure the protection of personal privacy in accordance with the Law on Personal Data Protection, the Law on Data and the Law on Cybersecurity. Article 28(4) requires ministries, branches and local authorities to connect, share and exploit artificial intelligence infrastructure in compliance with laws on confidentiality, network security and personal data protection. Article 29(1) requires that the construction, management, connection, sharing and protection of national and specialised databases serving artificial intelligence comply with the Law on Data and the Law on Personal Data Protection. Article 22(4) requires that the public disclosure of basic information on high-risk artificial intelligence systems balance transparency with the protection of personal data. Articles 23 and 32 require compliance with personal data protection obligations in supervision, inspection and enforcement activities and in the storage, processing and transfer of personal data in artificial intelligence activities.
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence. The Draft Law establishes respect for privacy as a basic principle for artificial intelligence systems (Article 4) and requires personal data processing within such systems to comply with applicable data-protection law (Articles 6 and 32). It obliges suppliers and implementing parties of high-risk artificial intelligence systems to implement data governance measures covering the origin, quality and representativeness of training, testing and operational data and bias minimisation (Article 15). It prohibits or restricts uses posing serious risks to personal data and human rights, including large-scale facial recognition databases and real-time remote biometric identification in public places except where permitted by specialised laws (Article 11), and requires registration of high-risk artificial intelligence systems in the National Database on Artificial Intelligence with safeguards for personal data (Article 22).
On 20 October 2025, the Ministry of Science and Technology (MST) closed the public consultation, which had been open since 29 September 2025, on the Draft Law on Artificial Intelligence. The draft Law requires providers to ensure transparency for affected persons. Providers must disclose the nature and decision mechanisms of high-risk AI systems and establish a process for individuals to request human review of automated outcomes.
On 29 September 2025, the Ministry of Science and Technology (MST) opened a public consultation, until 20 October 2025, on the Draft Law on Artificial Intelligence. The draft Law prohibits unacceptably risky AI practices. These include manipulation causing harm, real-time remote biometric identification in public for law enforcement except under special legal authorisation, large-scale biometric databases, and emotion recognition in workplaces and education.
On 29 September 2025, the Ministry of Science and Technology (MST) opened a public consultation, running until 20 October 2025, on the Draft Law on Artificial Intelligence, which establishes design obligations for high-risk AI systems. Providers must ensure human oversight, proportional safety and cybersecurity measures, transparency towards affected individuals, and maintain documentation such as testing and risk-mitigation records. The draft Law also requires that AI-generated content, including deepfakes, be clearly labelled.
Last updated: 26/03/2026