This content is for informational and educational purposes only and does not constitute legal advice.
On 31 March 2026, the Personal Information Protection Commission (PIPC) adopted revised guidelines on the processing of pseudonymised information. The guidelines apply to all organisations processing pseudonymised personal data, including Artificial Intelligence (AI) providers. The revision introduces a standardised, risk-based framework under which internal data use is classified as low risk, while provision to third parties is classified as medium or high risk depending on the degree of control over the data environment. Documentation and review requirements are now proportionate to risk level, with required forms reduced from 24 to 10. The revision also allows organisations to pre-designate expandable purposes so that pseudonymised data may be reused for related activities without restarting the review process, and processing period criteria have been made more flexible to accommodate continuous AI training.
On 4 March 2026, the Personal Information Protection Commission (PIPC) issued recommendations following discussions with generative AI companies on improving personal information handling policies to increase transparency in Artificial Intelligence (AI) data processing. The discussions addressed the personal information processing policy evaluation system, which examines whether companies clearly disclose the categories of personal data they handle, the legal basis for such processing, retention periods, third-party data sharing practices, and the mechanisms available for users to exercise their rights under the Personal Information Protection Act. The Commission identified several areas for improvement in the privacy policies of generative AI providers, including vague descriptions of the data involved, unclear explanations of the legal grounds relied upon, and limited accessibility for users. It recommended increasing the specificity of these policies so that individuals can better understand how their input data may be used for AI training, the applicable retention periods, and the opt-out options available.
On 26 February 2026, a Bill amending the Information and Communications Network Act to address overseas-based manipulation of online public opinion was introduced to the National Assembly. The Bill applies to information and communications service providers that meet criteria to be specified by Presidential Decree, including thresholds based on average daily users, revenue, and business type. It requires covered providers to adopt technical and managerial measures to prevent abnormal use of their networks, including the bulk posting or transmission of information through automated programmes or bypass access such as Virtual Private Networks (VPNs). The Bill further mandates technical measures to display the user’s country of access and whether bypass access was used. The amendment would enter into force one year after promulgation.
On 9 February 2026, the Personal Information Protection Commission (PIPC) designated Gwangju Technopark as the operating institution for the Personal Information Innovation Zone through Notice No. 2026-05. The designated facility is located in Gwangju Metropolitan City and the AI Convergence Centre of Gwangju Technopark will oversee the operation of the Zone. The designation follows the selection process conducted by the Data Safety Policy Division and formalises the institution responsible for administering the Innovation Zone under the applicable data governance framework.
On 29 January 2026, the Ministry of Science and ICT adopted the revised guidelines on the determination of high-impact AI, building on the version released on 22 January 2026 following the entry into force of the Artificial Intelligence Basic Act. The guidelines specify the legal and technical criteria for determining whether an artificial intelligence system constitutes high-impact artificial intelligence under Article 33 of the Act. They establish a structured, two-stage determination methodology that first assesses whether the system is used within enumerated application domains and then evaluates whether its use may generate significant risks to human life, physical safety, or fundamental rights. The guidelines define assessment factors, including the function of the system, the degree of automation in decision-making, the scale and reversibility of potential harm, and the characteristics of affected persons. The revised version updates procedural guidance on applicability in criminal investigation and arrest contexts and clarifies the process for determining whether a person is subject to high-impact artificial intelligence, while maintaining the overall determination framework introduced on 22 January 2026.
On 27 January 2026, the Ministry of Science and ICT released the revised guidelines on ensuring transparency in Artificial Intelligence (AI) systems, based on the version released on 22 January 2026 pursuant to Article 31 of the Artificial Intelligence Basic Act. The guidelines provide operational interpretation of statutory transparency obligations applicable to AI business operators that provide products or services to users. They detail obligations of prior notice where products or services are operated using high-impact AI or generative AI, obligations to indicate that outputs are generated by artificial intelligence, and specific notice or marking requirements for deepfake outputs that are difficult to distinguish from reality. The guidelines clarify the scope of application, including allocation of responsibility between artificial intelligence developers and artificial intelligence users, and specify acceptable technical and organisational methods for notice and marking.
On 22 January 2026, the Ministry of Science and ICT released the artificial intelligence (AI) safety assurance guidelines under Article 32 of the Artificial Intelligence Basic Act and its Enforcement Decree. The Guidelines provide structured technical and procedural guidance on the implementation of statutory safety assurance obligations for applicable artificial intelligence systems. They address the determination of applicability and responsible entities, systematic identification, evaluation, and mitigation of reasonably foreseeable risks across the artificial intelligence lifecycle, and establishment of continuous monitoring mechanisms. The Guidelines also set out requirements for safety incident response, including internal escalation and external reporting, and procedures for submission of safety assurance results to the competent authority. They clarify that the Guidelines function as a reference instrument to support compliance with statutory and subordinate legislation and may be revised to reflect legal, technological, or risk-related developments.
On 22 January 2026, the Ministry of Science and ICT released the high-impact artificial intelligence (AI) operator obligations guidelines to support the implementation of Article 34 of the Artificial Intelligence Basic Act. The guidelines elaborate on the statutory responsibilities of operators that provide or use high-impact artificial intelligence products or services. They specify obligations relating to the establishment and operation of risk management measures, explanations of artificial intelligence outputs and decision criteria within technically feasible limits, disclosure of information on training data used in development and operation, user protection measures, human oversight mechanisms, and the preparation and retention of documentation demonstrating safety and reliability measures. The guidelines explain how these obligations apply across the full lifecycle of high-impact artificial intelligence and clarify their role as interpretative guidance supporting compliance with existing legal obligations rather than creating additional binding requirements.
On 22 January 2026, the Ministry of Science and ICT released the artificial intelligence (AI) impact assessment guidelines following the entry into force of the Artificial Intelligence Basic Act. The guidelines provide methodological guidance on the conduct of artificial intelligence impact assessments, with a primary focus on high-impact artificial intelligence systems. They define the concept and objectives of impact assessments as procedures for identifying, analysing, and managing potential impacts on human dignity, fundamental rights, and safety prior to the provision of products or services. The guidelines set out recommended assessment timing, assessment subjects, and responsible entities, including artificial intelligence developers and artificial intelligence users, and describe the relationship between artificial intelligence impact assessments and other assessment frameworks. They structure the assessment process into preparatory, execution, and follow-up stages and include annexes with standard assessment templates, illustrative risk scenarios by system function, and practical implementation guidance.
On 22 January 2026, the Enforcement Decree of the Framework Act on the Development of Artificial Intelligence and Creation of a Trust Base (Presidential Decree No. 36053) entered into force. The Enforcement Decree repeals the Regulations on the Establishment and Operation of the National Artificial Intelligence Strategy Committee pursuant to Supplementary Provisions Article 2. The Enforcement Decree establishes transitional governance arrangements pursuant to Supplementary Provisions Article 4. Under Article 4(1) and Article 4(2), the National Artificial Intelligence Strategy Committee established under the repealed regulations is deemed to be the National Artificial Intelligence Strategy Committee under Article 7 of the Framework Act on the Development of Artificial Intelligence and Creation of a Trust Base, and its affairs are succeeded accordingly. Provisions concerning digital medical devices listed in Appendix 1 take effect on 24 January 2026.
On 22 January 2026, the Bill on Development of Artificial Intelligence and Establishment of Trust-Based Foundations (Bill no. DD20131) entered into force. The Bill guides the ethical and responsible development and use of artificial intelligence (AI) technologies and incorporates 19 Bills for establishing rules for AI. The Bill's purpose is to protect the rights, interests, and dignity of the people. The Bill defines the term "high-impact artificial intelligence" as an artificial intelligence system that may affect or endanger human life, physical safety, or fundamental rights and that is used in areas such as energy supply, food production, medical device development, nuclear material management, and biometric information analysis. Providers of such high-impact AI or generative AI are obligated to disclose that the product or service is based on AI. Products and services generated by generative AI must be indicated as such. When providing virtual sounds, images, or videos, the provider must include a notice or indication, clearly recognisable to the user, that the output originates from an AI system.
On 22 January 2026, the Bill on Development of Artificial Intelligence and Establishment of Trust-Based Foundations (Bill no. DD20131) entered into force. The Bill establishes the National AI Committee, the AI Policy Centre, the AI Safety Research Institute, the Civilian Voluntary AI Ethics Committee, and the Trust Foundation for AI. The National AI Committee is tasked with matters related to policies for the promotion of the AI industry, the research and development strategy in the AI field, the AI-related investment strategy, AI data centres, and promoting international cooperation and usage in industrial sectors. The AI Policy Centre is designated to share its expertise in the development of master plans, AI-related measures, and the research and analysis of the impact on society, trends, and social and cultural changes. The AI Safety Research Institute is tasked with defining and analysing safety risks, research, evaluation criteria and methods, AI safety technology, and standardisation research. The Civilian Voluntary AI Ethics Committee is designated to ensure compliance with the Ethics Principles.
On 22 January 2026, the Bill on Development of Artificial Intelligence and Establishment of Trust-Based Foundations (Bill no. DD20131) entered into force. The Bill guides the ethical and responsible development and use of artificial intelligence (AI) technologies and incorporates 19 Bills for establishing rules for AI. The Bill defines the term "high-impact artificial intelligence" as an artificial intelligence system that may affect or endanger human life, physical safety, or fundamental rights and that is used in areas such as energy supply, food production, medical device development, nuclear material management, and biometric information analysis. The Bill enables the government to establish AI ethics principles, including safety and reliability, accessibility, and assurance of AI that contributes to human life and prosperity. Lastly, the government may support verification and certification projects voluntarily undertaken by organisations to ensure safe and reliable AI. These projects include the preparation of guidelines on AI development.
On 21 January 2026, the President signed the Enforcement Decree of the Framework Act on the Development of Artificial Intelligence and Creation of a Trust Base (Presidential Decree No. 36053). The Enforcement Decree, as promulgated, does not include elements announced during the drafting stage on 15 January 2025 concerning the establishment of a primary subordinate legislative framework to operationalise national governance measures, industrial development measures, risk prevention mechanisms, obligations of providers, transparency requirements, safety requirements, accountability mechanisms, or enforcement and compliance measures under the Act. The final Enforcement Decree text is limited to provisions on the enforcement date, the repeal of the Regulations on the Establishment and Operation of the National Artificial Intelligence Strategy Committee, and transitional measures relating to institutional succession and organisational continuity as set out in the Supplementary Provisions.
On 21 January 2026, the President signed the Enforcement Decree of the Framework Act on the Development of Artificial Intelligence and Creation of a Trust Base (Presidential Decree No. 36053). The Enforcement Decree, as promulgated, does not include provisions previously announced during the drafting phase concerning operationalisation of obligations of providers, transparency, safety, accountability, risk prevention, industrial development measures, or enforcement and compliance mechanisms. The final Enforcement Decree text is limited to the determination of enforcement dates pursuant to Supplementary Provisions Article 1, the repeal of the Regulations on the Establishment and Operation of the National Artificial Intelligence Strategy Committee pursuant to Supplementary Provisions Article 2, and transitional and institutional succession measures concerning the National Artificial Intelligence Strategy Committee, its support team, and dispatched personnel pursuant to Supplementary Provisions Articles 3 to 6.
On 21 January 2026, the President signed the Enforcement Decree of the Framework Act on the Development of Artificial Intelligence and Creation of a Trust Base (Presidential Decree No. 36053). The Enforcement Decree repeals the Regulations on the Establishment and Operation of the National Artificial Intelligence Strategy Committee pursuant to Supplementary Provisions Article 2. The Enforcement Decree establishes transitional governance arrangements pursuant to Supplementary Provisions Article 4. Under Article 4(1) and Article 4(2), the National Artificial Intelligence Strategy Committee established under the repealed regulations is deemed to be the National Artificial Intelligence Strategy Committee under Article 7 of the Framework Act on the Development of Artificial Intelligence and Creation of a Trust Base, and its affairs are succeeded accordingly. The Enforcement Decree applies from 22 January 2026. Provisions concerning digital medical devices listed in Appendix 1 take effect on 24 January 2026.
On 20 January 2026, the Broadcasting and Media Communications Commission (KMCC) published legal guidance on telecommunications-related laws for user protection as they apply to AI service providers, following a review of how existing telecommunications statutes apply to artificial intelligence services provided via information networks. The guidance analyses user-protection provisions under the Telecommunications Business Act and the Information and Communications Network Act, noting that AI services may fall within the scope of value-added telecommunications services and information and communications services, while the applicability of specific provisions can depend on the service form and mode of provision. It states that, where applicable, AI service providers should comply with duties under the Telecommunications Business Act, including the prohibitions on conduct that harms user interests and on failing to notify users of significant matters, and it reviews provisions under the Information and Communications Network Act on preventing the distribution of illegal or harmful information and protecting children and adolescents. The KMCC developed the legal guidance with the Korea Information Society Development Institute (KISDI) and external legal experts.
On 8 January 2026, the Special Act on the Promotion of Artificial Intelligence Data Centres (Bill no. 2215928) was introduced to the National Assembly. The Act aims to simplify licensing procedures by establishing a centralised window under the Minister of Science and Information and Communications Technology (ICT) for the batch processing of permits. Applicants for the construction and operation of these centres may apply through the new process, which mandates that relevant agencies initiate review procedures within a set time frame. The legislation introduces a timeout mechanism under which permits are deemed granted if authorities do not provide review results within the prescribed period. Additionally, the Act allows licensed artificial intelligence data centres located outside metropolitan areas to engage in direct electricity trading with large-capacity power sources. These measures intend to facilitate the expansion of infrastructure for machine learning and artificial intelligence development while addressing current institutional inconsistencies regarding power access.
On 8 January 2026, the Partial Amendment to the Telecommunications Business Act (Bill No. 2215922) was introduced to the National Assembly. The Bill seeks to prohibit value-added service providers from using dark patterns in the design and operation of online interfaces, such as websites and mobile applications. Specifically, the amendment prohibits practices that interfere with users’ rational decision-making, including making cancellation procedures more complex than the subscription process without justifiable cause, or limiting cancellation to methods other than those used at the time of subscription.
On 22 December 2025, the Ministry of Science and Information and Communication Technology closed the public consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation, which had been open since 12 November 2025 to gather opinions on user-rights-related provisions contained in the draft Enforcement Decree. The consulted draft included transparency obligations requiring prior notification to users when artificial intelligence is used in products or services and clear labelling of generative artificial intelligence outputs, including deepfake outputs, taking into account user age and physical conditions. It required artificial intelligence operators to publish risk management plans, explanation procedures, user protection plans, and human oversight information, and to retain documentation for five years, except for trade secrets. It also included artificial intelligence impact assessment requirements specifying affected groups, affected fundamental rights, impact extent, use patterns, evaluation indicators, mitigation and recovery measures, and improvement plans, determining the information that artificial intelligence businesses must evaluate and disclose regarding impacts on users.
On 22 December 2025, the Ministry of Science and Information and Communication Technology closed the public consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation, which had been open since 12 November 2025 to gather opinions on governance provisions included in the draft Enforcement Decree. The consulted draft contained governance arrangements specifying the designation and operation of an artificial intelligence safety research institute conducting artificial intelligence safety and trust functions, an artificial intelligence policy centre supporting artificial intelligence policy development and the establishment and dissemination of international norms, and an artificial intelligence cluster dedicated management body providing integrated support for artificial intelligence cluster tasks. These governance elements form part of the institutional structure to implement artificial intelligence industry promotion and the safety and trust foundation required under the Framework Act.
On 22 December 2025, the Ministry of Science and Information and Communication Technology closed the public consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation, which had been open since 12 November 2025 to gather opinions on design requirements in the draft Enforcement Decree. The consulted draft contained design requirements including criteria for artificial intelligence research and development, learning data construction, and artificial intelligence adoption and utilisation; procedures and criteria for artificial intelligence cluster designation and management; transparency obligations requiring advance notice to users regarding artificial intelligence operation in products or services and clear labelling of generative artificial intelligence outputs, including deepfakes; safety obligations defining artificial intelligence system thresholds using cumulative compute of 10²⁶ floating-point operations; criteria for determining high-impact artificial intelligence, including use area, impact on fundamental rights, severity, frequency, and sector-specific characteristics; and artificial intelligence impact assessment requirements specifying affected groups, affected fundamental rights, impact extent, use patterns, evaluation indicators, mitigation and recovery measures, and improvement plans.
On 2 December 2025, the National Assembly adopted the Bill on Partial Amendment to the Act on Promotion of Industrial Digital Transformation (Bill No. 2214589). The amendment revised Articles 6(3), 7, and 8 to rename the Industrial Digital Transformation and Artificial Intelligence Utilisation Committee and to expand its mandate. The amended provisions define the Committee’s composition and extend its deliberative and oversight functions to policies, plans, and implementation relating to industrial artificial intelligence.
On 27 November 2025, the Government released the Regulatory Rationalisation Roadmap for the AI Sector. The document outlines a cross-government plan involving 25 ministries and 67 regulatory-improvement tasks covering technology development, service use, infrastructure, and trust-and-safety measures. It identifies a wide range of regulatory barriers, including issues related to AI training data, the application of copyright and industrial-property rights, the use of synthetic data, pseudonymisation and data-combination procedures, the opening of public data, autonomous-driving demonstration zones, and the safety certification of outdoor mobile robots. It also addresses questions concerning the use of personal data for AI training, data-centre artwork, and lift-installation rules, the definition of high-impact AI, and the development of AI-related hiring guidelines. The roadmap notes growing global competition, rapid AI-technology development, and the need for proactive regulatory updates. It sets out timelines from 2025 to 2032 for legislative amendments, the preparation of guidelines, the construction of data spaces, the development of criteria for AI-ready public data, standard models for manufacturing data, and improvements to the provision of pseudonymised data by public institutions.
On 27 November 2025, the Personal Information Protection Commission (PIPC) issued an investigation finding during the final review of the 2025 Proactive Administration Best Case Competition. The finding concerned the personal information processing practices of the Chinese generative AI service DeepSeek. The PIPC noted that, immediately after DeepSeek’s launch in January 2025 and amid wider global attention to personal information risks, it established direct contact with the company’s headquarters and carried out its own preliminary analysis. The PIPC reported deficiencies in DeepSeek’s handling of personal information and recommended a temporary suspension of the service until these issues were addressed. DeepSeek accepted the recommendation and suspended the service globally, then resumed operations after implementing the improvements and supplementary measures requested by the PIPC. The PIPC stated that the finding affected users in the Republic of Korea and in other jurisdictions, and represented the first instance in which an overseas operator, at an early stage of service launch, committed to follow domestic law in response to the PIPC’s proactive engagement.
On 26 November 2025, the Bill on Partial Amendment to the Act on Promotion of Industrial Digital Transformation (Bill No. 2214589) was introduced to the National Assembly. The Bill would amend Articles 6(3), 7, and 8 to rename the Industrial Digital Transformation and Artificial Intelligence Utilisation Committee and to expand its mandate. The Bill would define the Committee’s composition and extend its deliberative and oversight functions to policies, plans, and implementation relating to industrial artificial intelligence.
On 12 November 2025, the Ministry of Science and Information and Communication Technology opened the public consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation until 22 December 2025 to receive opinions on provisions affecting user rights. The draft Enforcement Decree includes transparency obligations requiring businesses to notify users in advance when products or services operate based on artificial intelligence and to ensure that users can clearly recognise when generative artificial intelligence outputs, including deepfakes, are generated. It requires notices and labelling to consider user age and physical conditions. The draft requires artificial intelligence operators to publish risk management plans, explanation procedures, user protection plans, human oversight information and documentation retention for five years, excluding trade secrets. It also includes artificial intelligence impact assessment requirements identifying affected groups, affected fundamental rights, impact scope, use patterns, evaluation indicators, mitigation measures and improvement plans, determining user-related information to be assessed by artificial intelligence businesses.
On 12 November 2025, the Ministry of Science and Information and Communication Technology opened the public consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation until 22 December 2025 to collect opinions on governance provisions in the draft Enforcement Decree. The draft specifies designation and operation of institutions supporting artificial intelligence policy and safety, including the artificial intelligence safety research institute responsible for artificial intelligence safety and trust functions, the artificial intelligence policy centre responsible for artificial intelligence policy development and support for establishing and spreading international norms, and the artificial intelligence cluster dedicated management body responsible for integrated support for artificial intelligence cluster tasks. These governance provisions were prepared following earlier opinion collection from relevant ministries and are part of the institutional framework for implementing artificial intelligence industry development and the safety and trust foundation under the Framework Act.
On 12 November 2025, the Ministry of Science and Information and Communication Technology opened the public consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation until 22 December 2025 to collect opinions on design requirements defined in the draft Enforcement Decree. The draft Enforcement Decree specifies and clarifies delegated matters under the Framework Act, including criteria for learning data construction and artificial intelligence adoption and utilisation and safety obligations defining artificial intelligence system thresholds using cumulative compute of 10²⁶ floating-point operations.
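The compute threshold in the draft Enforcement Decree can be illustrated with a short sketch. The decree sets the threshold at 10²⁶ cumulative floating-point operations; the 6 × parameters × tokens estimate used below is a common community heuristic for dense-model training compute, not a calculation method prescribed by the decree, and the figures in the example are hypothetical.

```python
# Illustrative sketch only. The 1e26 FLOP threshold comes from the draft
# Enforcement Decree; the 6*N*D training-compute estimate is a common
# heuristic (roughly 6 FLOPs per parameter per training token) and is
# NOT a method prescribed by the decree.

THRESHOLD_FLOPS = 1e26  # cumulative compute threshold in the draft decree


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the 6*N*D heuristic."""
    return 6.0 * parameters * training_tokens


def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """Check whether estimated training compute meets the 1e26 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) >= THRESHOLD_FLOPS


# Hypothetical example: a 70e9-parameter model trained on 15e12 tokens
# gives 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the 1e26 threshold.
```

Under this heuristic, only very large training runs (for instance, a trillion-parameter model trained on tens of trillions of tokens) would cross the 10²⁶ mark, which is consistent with the threshold's apparent aim of capturing frontier-scale systems.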
On 24 October 2025, the Fair Trade Commission's amended consumer protection guidelines in e-commerce, establishing specific interpretation standards and recommendations for dark pattern regulation, entered into force. The amendments follow the entry into force of the Amended Electronic Commerce Act in February 2025. The revised guidelines clarify application criteria for six regulated types of online dark patterns, including hidden renewal, sequential price disclosure (drip pricing), pre-selected options, misleading visual hierarchy, obstruction of withdrawal or cancellation, and repetitive interference. Hidden renewals involve automatically increasing subscription fees or converting free trials to paid services without proper consent. The revised guidelines specify that businesses must obtain separate, explicit consent from consumers before any price increases or conversions, not just general permission at signup. If proper consent is not obtained, businesses must take necessary measures, such as cancelling the automatic payment rather than allowing charges to proceed. Drip pricing refers to showing only partial costs in initial product listings rather than the total price, including taxes, fees, and shipping. The guidelines specify which screens must display complete pricing information and which costs must be included in the total amount shown to consumers from the first viewing. The revision also provides concrete examples of other prohibited practices, including pre-selecting add-on purchases, using visual design to manipulate consumer choices, making cancellation procedures unnecessarily complicated, and repeatedly asking consumers to reconsider their decisions through pop-ups. Beyond explicit prohibitions, the guidelines include recommendations for businesses to voluntarily improve their practices.
The recommendations encourage clear communication of variable pricing conditions, transparent disclosure when optional selections involve additional costs, and intuitive placement of cancellation and withdrawal buttons in easily accessible locations.
On 23 October 2025, the Constitutional Court ruled in case 2021Hun-Ma290, with case 2021Hun-Ma1521 consolidated into it, concerning constitutional petitions challenging Article 22-5, Paragraph 2 of the Telecommunications Business Act (Act No. 17352, amended 9 June 2020) and Article 30-6, Paragraphs 1 and 2 of the Enforcement Decree of the Telecommunications Business Act (Presidential Decree No. 31223, amended 8 December 2020), which impose obligations on value-added telecommunications service providers to implement technical and administrative measures to prevent the distribution of illegally filmed materials. The Constitutional Court held that these obligations do not infringe freedom of expression or freedom of communication and dismissed the constitutional challenge to the above provisions. The Court dismissed the remaining petitions concerning Article 22-5, Paragraph 1 of the former Telecommunications Business Act, Article 95-2, Paragraph 1-3 of the Telecommunications Business Act, and Article 44-7, Paragraph 1, Subparagraph 1 and Paragraphs 2 and 3 of the former Act on Promotion of Information and Communications Network Utilisation and Information Protection as inadmissible for reasons including untimely filing, lack of self-relevance, or lack of direct infringement.
On 23 October 2025, the Korea Fair Trade Commission (KFTC) adopted amendments to the consumer protection guidelines in e-commerce, establishing specific interpretation standards and recommendations for dark pattern regulation following the entry into force of the Amended Electronic Commerce Act in February 2025. The amended guidelines clarify application criteria for the six regulated types of online dark patterns and entered into force on 24 October 2025.
On 15 October 2025, the Korea Fair Trade Commission (KFTC) issued a ruling against Spotify AB with a fine of KRW 1 million over subscription-related disclosure failures. The ruling found that Spotify AB violated the E-Commerce Act by failing to disclose operator identity and withdrawal information on its Korean online platform. The ruling affects the subscription-based streaming service Spotify Premium. The Commission noted that Spotify did not provide the required business details, including the representative’s name, address, and registration number, and failed to inform users of their cancellation rights. The Commission issued a corrective order, and Spotify has since corrected its disclosures and updated user guidance on withdrawal procedures.
On 15 October 2025, the Korea Fair Trade Commission (KFTC) issued a ruling against NHN Bugs with a fine of KRW 3 million over hindering subscription cancellations. The ruling was issued over multiple violations of the E-Commerce Act, including hindering subscription cancellations and failing to disclose withdrawal rights. The ruling applies to NHN Bugs' online music service, Bugs. The company omitted key information about mid-term cancellations and withdrawal procedures before purchase. The Commission ordered remedial measures, and NHN Bugs has since updated its website and mobile application to display accurate cancellation and consumer rights information.
On 15 October 2025, the Korea Fair Trade Commission (KFTC) issued a ruling against Content Wavve with a fine of KRW 4 million for obstructing consumer contract cancellations in violation of the E-Commerce Act. The ruling applies to Content Wavve's subscription-based video streaming service. The ruling highlighted that Content Wavve failed to provide clear guidance on mid-term cancellations, instead only presenting the less favourable standard cancellation option. The KFTC ordered corrective action, and the company has since amended its user interface and online information to include information on mid-term termination procedures.
On 15 October 2025, the Korea Fair Trade Commission (KFTC) issued a ruling against Coupang Inc. with a fine of KRW 2.5 million for deceptive practices breaching the Act on Consumer Protection in Electronic Commerce. The ruling applies to Coupang’s e-commerce and subscription service activities, particularly the Wow Membership programme. The Commission found that Coupang used dark patterns to induce existing members to consent to a subscription price increase by manipulating button placement and payment interfaces. The KFTC issued a corrective order, and Coupang has since adjusted its interface and allowed users to review or withdraw consent.
On 13 October 2025, the Fair Trade Commission ordered Coupang Eats to amend its merchant contract terms to address unfair practices. The company must revise ten categories of terms, including stopping the practice of charging intermediation and payment processing fees based on pre-discount prices and instead calculating fees on actual transaction amounts. The order also requires Coupang Eats to clarify and limit restaurant exposure distance restrictions with mandatory merchant notification, specify reasons for withholding payment settlements, extend objection periods to seven days, and pay interest for platform-related delays. In addition, the company must replace public notices with individual 30-day notifications for disadvantageous contract changes, restrict liability exemptions to cases without operator intent or negligence, and introduce notification and three-business-day objection procedures before deleting merchant reviews. It must also remove advertising refund period limits and eliminate excessive compensation obligations and unfair cost-sharing requirements.
On 13 October 2025, the Korea Fair Trade Commission (KFTC) ordered Baemin, a food delivery platform, to amend nine categories of unfair merchant contract terms. The order requires Baemin to clarify and limit restaurant exposure distance restrictions with mandatory merchant notification, to specify reasons for withholding payment settlements with extended seven-day objection periods and payment of interest for platform-caused delays, to replace public notices with individual 30-day notifications for disadvantageous terms changes, to limit liability exemptions to cases without operator intent or negligence, to constrain the scope of arbitrary merchant obligations to named service areas, and to stop imposing core obligations through subsidiary policies or guidelines.
On 9 October 2025, the Korea Fair Trade Commission (KFTC) concluded its investigation into contractual terms used by three talent market platforms, namely Brave Mobile Co (Soongo), Kmong Co, and Tal-ing. The investigation examined intermediary liability exemptions, clauses transferring responsibility for personal data leaks to users, and restrictions on monetary rights and restitution obligations. It identified 26 unfair clauses across 10 categories, contravening Articles 6, 7, 9, and 10 of the Act on the Regulation of Terms and Conditions. The KFTC required the platforms to amend these terms to ensure liability in cases of intent or gross negligence, to refund unused cyber-money upon contract termination, and to align refund and withdrawal procedures with the Electronic Commerce Act. The enforcement action aims to enhance transparency and fairness in service intermediation and to protect both freelancers and consumers. The KFTC stated that similar monitoring and corrective measures will continue across other online platform sectors, including taxi-hailing, camping, cross-border e-commerce, and interior-design services.
On 2 October 2025, the amendment to the Enforcement Decree of the Personal Information Protection Act entered into force. The amendment aims to strengthen the domestic agent system for overseas businesses and expand obligations for local government-funded and -invested institutions. The amendment requires overseas operators with domestic subsidiaries to appoint local representatives responsible for management, annual training, planning, and inspections. The scope of public institutions now includes local government-funded entities. These entities must register personal information files within 60 days and, if needed, conduct impact assessments within two years.
On 2 October 2025, the amendment to the Personal Information Protection Act entered into force. The amendment requires overseas businesses with a local subsidiary to designate that subsidiary as their domestic representative. It introduces new obligations and sanctions for non-compliance and aims to strengthen South Korea’s domestic agent system and adjust personal data protection requirements for foreign entities operating in the country. The Personal Information Protection Commission (PIPC) is responsible for revising enforcement ordinances and conducting inspections to support the implementation of the new provisions.
On 30 September 2025, the Korea Fair Trade Commission (KFTC) announced corrective measures against dark pattern practices by large online platforms, following the enforcement of the amended Electronic Commerce Act in February 2025. The measures apply to 36 firms in the over-the-top (OTT) platform, music, e-book, e-commerce, rental, and travel sectors. The KFTC required corrections or improvement plans to address practices including blocking cancellations, hidden renewals, misleading layouts, and partial price disclosures. Platforms are required to allow online cancellations, obtain explicit consent for renewals or price changes, and show total prices upfront.
On 18 September 2025, the Korea Fair Trade Commission (KFTC) issued a conditional approval ruling on the merger between Shinsegae and Alibaba Group, culminating in the formation of the Gmarket-AliExpress joint venture. The deal involves Apollo Korea, a Shinsegae affiliate, acquiring a 50% stake in Grand Ophus Holdings, an Alibaba affiliate. This acquisition will give Shinsegae and Alibaba joint control over Grand Ophus Holdings, which in turn will fully own both Gmarket and AliExpress Korea. Gmarket is currently the third-highest ranked e-commerce provider in Korea, while AliExpress Korea has rapidly grown its influence and user base, recently surpassing Gmarket in monthly active users. The KFTC considered that the merger would significantly impact the Korean e-commerce market and treated the transaction as a horizontal combination in the open-market sector. The KFTC identified an anti-competitive effect in combining Gmarket's rich domestic consumer data with AliExpress's global consumer preference data and advanced analytics, and imposed conditions for approval. The primary condition mandates the separation of domestic consumer data between Gmarket and AliExpress, preventing its shared use in direct purchase markets; the KFTC also specifically prohibited the mutual use of consumer data for the overseas direct purchase market. The ruling further requires Gmarket and AliExpress to operate independently to maintain personal information protection and data security standards. These orders are valid for three years, extendable upon further KFTC review. The decision, which addresses the anti-competitive risk of combining comprehensive domestic and global consumer data sets, sets a precedent in treating consumer information as a competitive asset in merger evaluations.
On 18 September 2025, the Korea Fair Trade Commission (KFTC) closed the consultation on draft amended consumer protection guidelines in e-commerce, establishing specific interpretation standards and recommendations for dark pattern regulation. The draft amended guidelines were issued following the implementation of the amended E-commerce Act, which defines six types of dark patterns, namely hidden renewal, drip pricing, pre-selected options, misdirection, obstruction of cancellation or withdrawal, and repeated interference, and allows the KFTC to impose corrective measures and fines for violations. The guidelines clarify how businesses must obtain explicit consent for price increases or subscription conversions, display total costs upfront, avoid pre-selected or misleading options, prevent repeated interference with consumer choices, and ensure cancellation or withdrawal processes are as accessible as purchase procedures. The amendments also include recommendations for transparent pricing, clear selection items, and prominently displayed cancellation options to promote voluntary compliance and fair online interfaces.
On 18 March 2025, an amendment to the Network Act (Act on Promotion of Information and Communication Network Utilisation and Information Protection) was introduced to the National Assembly of the Republic of Korea. The proposed amendment would introduce procedural requirements for handling requests to remove online content that violates privacy or is defamatory. It would require ISPs to notify users of the action taken and inform them of their options to request information about the uploader or to seek dispute resolution. The amendment would set out rules for the mediation process, including application procedures, referral mechanisms, decision deadlines, and conditions for acceptance. It also specifies situations in which mediation may be refused or suspended and sets out confidentiality obligations for participants.
On 16 September 2025, the Cabinet approved the amendment to the Enforcement Decree of the Personal Information Protection Act, which strengthens the domestic agent system for overseas businesses and expands obligations for local government-funded and -invested institutions. The measures took effect on 2 October 2025.
On 16 September 2025, the Ministry of Science and ICT (MSIT) opened an investigation into allegations by an international hacking group concerning the theft and sale of SK Telecom customer data. In coordination with the Korea Internet and Security Agency (KISA), the MSIT requested materials from SK Telecom and began on-site inspections to assess the claims. The investigation seeks to determine the circumstances of the alleged unauthorised access and sale of customer information. The Cyber Intrusion Response Division within the MSIT is leading the inquiry, with support from KISA’s Threat Analysis Division.
On 15 September 2025, the Personal Information Protection Commission announced measures on data and digital industries, including fair use guidelines for copyrighted works and a transaction and compensation system for copyrighted data. Public and pseudonymised data will be made more accessible, with exemptions for civil servants intended to encourage disclosure. Original video data will be usable for autonomous driving development within the year, and pilot zones for autonomous vehicles will expand to the city level. Regulations for Artificial Intelligence (AI) robots in daily life and industry will also be updated.
On 12 September 2025, the Personal Information Protection Commission (PIPC) announced that it is cooperating with Jeollanam-do and Sejong Special Self-Governing City to review a total of 1,793 ordinances (985 in Jeollanam-do and 808 in Sejong) in order to identify and address personal information infringement factors not subject to the assessment scope under the Personal Information Protection Act. Of these, 263 ordinances (115 in Jeollanam-do and 148 in Sejong) involved the processing of personal information, and 38 ordinances (19 in each jurisdiction) were found to contain infringement risks, including the unnecessary collection of excessive personal data beyond the intended administrative purpose, the processing of resident registration numbers without the legal basis required by presidential decree under the Personal Information Protection Act, and provisions inconsistent with the purpose or content of the Act. Examples included replacing requirements to provide resident registration numbers with date of birth, removing insufficiently grounded collection of resident registration card copies, and ensuring that when debt-related information on parents of children and adolescents is collected for legal aid purposes, the purpose, items, and retention period of the data are specified in compliance with Article 15(2) of the Act. The local governments plan to revise the ordinances in stages to better protect personal data, and the PIPC will share these findings with other municipalities. The PIPC noted that it expects the changes to reduce unnecessary data collection, strengthen safeguards against leaks, and enhance public trust in local governance.
On 11 September 2025, the Ministry of Trade, Industry and Energy closed the public consultation on the English and Korean drafts of the Korea-EU Digital Trade Agreement (DTA). The DTA establishes a framework to facilitate electronic commerce and to support legal certainty and an open, fair online environment. Its core provisions centre on trusted data flows, requiring parties not to restrict cross-border data transfers through measures such as data localisation, while allowing for legitimate public policy objectives such as cybersecurity and privacy protection, provided measures are non-discriminatory and proportionate. The DTA also features consumer protection measures against fraudulent practices, promotes transparency in electronic transactions, ensures effective redress mechanisms, prohibits customs duties on electronic transmissions, and mandates legal frameworks for personal data protection that allow cross-border transfers under appropriate safeguards.
Last updated: 31/03/2026