Europe · CoE Framework Convention signatory
This content is for informational and educational purposes only and does not constitute legal advice.
On 31 March 2026, the Information Commissioner's Office (ICO) opened a consultation on guidance on automated decision-making, including profiling, with responses due by 29 May 2026. The guidance applies to all organisations carrying out automated decision-making, including those using in-house tools or third-party vendors, and is directed at data protection officers, compliance professionals, and technical leads. It defines automated decision-making as any decision based solely on automated processing, including profiling, that produces a legal or similarly significant effect on an individual, such as decisions affecting access to credit, employment, housing, public services, or financial circumstances. Profiling, which involves analysing or predicting aspects of a person's behaviour, characteristics, or circumstances, frequently underpins automated decision-making and may involve algorithmic systems or artificial intelligence. The guidance requires organisations to assess whether three triggering conditions are met: that a system is making a decision about a person, that the decision is significant, and that it is solely automated. Where all three are met, organisations must establish a lawful basis, meet additional conditions for special category data, and implement safeguards, including providing information about decisions, enabling representations, ensuring human intervention, and facilitating the right to contest outcomes. It also highlights particular risks such as discriminatory outcomes, bias in training data, and heightened vulnerability for children and those in precarious circumstances.
On 31 March 2026, the Information Commissioner's Office (ICO) published a report and draft guidance on the use of automated decision-making (ADM) in recruitment. The guidance would apply to employers that use automated tools to process job applications and make hiring decisions, including systems that score, rank, or filter candidates without direct human involvement. The draft guidance sets out the ICO's expectations for organisations using ADM in recruitment. Organisations would be required to test their automated tools regularly for bias, to inform candidates when ADM is used, and to explain how such tools function and affect the application process. Candidates would have the right to contest automated decisions and to request review by a human recruiter. Alongside the publication, the ICO engaged with more than 30 employers on their use of automation in recruitment and wrote to 16 organisations considered likely to be using ADM in hiring.
On 18 March 2026, the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport published a report on copyright and artificial intelligence (AI) pursuant to section 136 of the Data (Use and Access) Act. The report evaluates the use of copyrighted works in developing AI systems, focusing on access to data, transparency regarding training sets, technical standards for access control, and licensing of copyright works for AI development. It addresses the enforcement of restrictions on AI outputs that utilise protected material. Additionally, the report explores the risks associated with digital replicas, such as AI-generated voices or likenesses, noting that existing legal frameworks, including copyright and performers' rights, may not provide sufficient redress for unauthorised impersonation. The report suggests that while current laws require a substantial part of a work to be copied for a violation to occur, they do not adequately cover new AI-generated performances. Consequently, the government proposes to explore new policy options, including the potential introduction of a personality right, to address these gaps while supporting innovation in the creative and technology sectors. The publication was issued alongside an economic impact assessment to outline the financial implications of potential regulatory changes for AI developers and rights holders.
On 10 March 2026, the Office of Communications (Ofcom) closes its consultation gathering evidence on the impact of artificial intelligence (AI) in United Kingdom telecoms markets. The consultation covers how AI tools are being adopted across the telecoms value chain and the customer journey, and how these uses affect residential and business customers. The work is structured around three questions: how AI tools are being deployed, the risks and opportunities for residential and business customers, and whether changes to Ofcom’s rules may be needed to support responsible innovation and protect customers. Ofcom plans to publish its findings in the second half of 2026 and to indicate any next steps at that stage.
On 24 February 2026, the Financial Conduct Authority (FCA) closes its consultation gathering evidence for a review of the long-term impact of artificial intelligence (AI) on United Kingdom retail financial services. Known as The Mills Review, the initiative examines how AI could reshape retail financial services for consumers, firms, markets, and regulators by 2030 and beyond. The review is structured around 4 themes: the future evolution of AI technology, the impact of AI on markets and firms, future consumer trends, and the future regulatory approach. The FCA indicates that the review will inform recommendations to support an outcomes-based regulatory approach as the sector adopts AI, while considering effects on competition and innovation. The review discusses potential benefits such as personalisation and lower-cost services, and risks including AI-enabled fraud, bias and discriminatory outcomes, and opaque decision-making.
On 5 February 2026, the Information Commissioner’s Office (ICO) fined MediaLab.AI, owner of the image sharing and hosting platform Imgur, GBP 247,590 for failing to use children’s personal information lawfully under the UK General Data Protection Regulation (UK GDPR). The ICO found that MediaLab allowed children to use Imgur without checking users’ ages, processed the personal information of children under 13 without parental consent or another lawful basis, and did not carry out a data protection impact assessment to identify and reduce risks to children. These failures meant MediaLab could not know which users were children and did not put basic protections in place. The ICO set the fine taking into account the length of the breaches, the number of children affected, the risk of harm, MediaLab’s global turnover, and MediaLab’s acceptance of provisional findings issued in a Notice of Intent in September 2025, and stated that further regulatory action may follow if committed measures are not implemented.
On 3 February 2026, the Information Commissioner’s Office (ICO) opened formal investigations into X Internet Unlimited Company (XIUC) and X.AI LLC (X.AI) regarding the processing of personal data by the Grok artificial intelligence (AI) system. The investigation follows reports that the AI tool was used to generate non-consensual sexual imagery, including content involving children, which raises concerns regarding compliance with the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. The ICO aims to evaluate whether personal data was processed lawfully, fairly, and transparently, and if appropriate safeguards were implemented during the design and deployment of Grok to prevent the generation of harmful synthetic content. The ICO will gather evidence, examine technical design choices, and assess risk mitigation strategies. If infringements are confirmed, the ICO may issue enforcement notices or impose financial penalties of up to GBP 17.5 million or 4% of the annual worldwide turnover of the organisations.
On 27 January 2026, the Financial Conduct Authority (FCA) opened a consultation gathering evidence for a review of the long-term impact of artificial intelligence (AI) on United Kingdom retail financial services, with responses requested by 24 February 2026. Known as The Mills Review, the initiative examines how AI could reshape retail financial services for consumers, firms, markets, and regulators by 2030 and beyond. The review is structured around 4 themes: the future evolution of AI technology, the impact of AI on markets and firms, future consumer trends, and the future regulatory approach. The FCA indicates that the review will inform recommendations to support an outcomes-based regulatory approach as the sector adopts AI, while considering effects on competition and innovation. The review discusses potential benefits such as personalisation and lower-cost services, and risks including AI-enabled fraud, bias and discriminatory outcomes, and opaque decision-making.
On 27 January 2026, the Office of Communications (Ofcom) opened a consultation on the inquiry into the impact of artificial intelligence (AI) in the telecommunications sector, with submissions due by 10 March 2026. The Office of Communications is inviting input from telecoms providers, technology developers, consumer organisations, and other stakeholders to understand current and emerging AI use cases across the telecoms value chain and the customer journey. The work is structured around 3 questions on how AI tools are being deployed, the risks and opportunities for residential and business customers, and whether changes to the Office of Communications’ rules may be needed to support responsible innovation and protect customers. The Office of Communications plans to publish its findings in the second half of 2026 and to indicate any next steps at that stage.
On 20 January 2026, the Treasury Committee published its final report concluding the inquiry into the opportunities and risks of artificial intelligence (AI) in the financial services sector. The inquiry, which was originally launched on 3 February 2025, examined the impact of automated technologies on consumer protection and financial stability across insurers, international banks, and digital payment providers. The report highlights that approximately 75% of UK financial services firms currently utilise AI, representing a higher rate of adoption than other national economic sectors. The findings indicate that while AI may facilitate faster services and enhanced cybersecurity defences, it introduces significant risks regarding non-transparent decision-making in credit and insurance, potential financial exclusion for disadvantaged consumers, unregulated AI financial advice, and increased levels of fraud. The investigation clarified that the United Kingdom currently lacks AI-specific financial legislation, leaving the Financial Conduct Authority (FCA) and the Bank of England to supervise firms using the existing Senior Managers and Certification Regime and Consumer Duty. The committee identified concerns from industry stakeholders regarding a lack of practical clarity in applying existing rules to AI models, specifically concerning the application of the Senior Managers and Certification Regime to situations involving AI. Consequently, the report recommends that the FCA publish comprehensive guidance on consumer protection and accountability by the end of 2026. The committee also recommends that the Bank of England and FCA conduct AI-specific stress testing to address market shocks. Finally, the report recommends that HM Treasury designate major AI and cloud providers as critical third parties as part of the UK Critical Third Parties Regime, which would bring those parties under special oversight.
On 2 January 2026, the Department for Science, Innovation and Technology (DSIT) closes the public consultation regarding the establishment of an AI Growth Lab. The proposed Lab aims to support innovation in AI-enabled products and services by relaxing targeted regulations imposed upon the AI sector within a controlled environment. The initiative responds to concerns that current regulations, designed before AI existed, may unnecessarily assume human involvement and static products. The Lab would enable testing of AI innovations in sectors such as healthcare, planning, and transport. The government is seeking views on the Lab's design, including whether it should be centrally operated or led by sector regulators. Questions also focus on which sectors to prioritise, what regulations could be modified, and which should be permanent "red lines" for safety and rights. Questions also explore effective Lab oversight and whether successful pilots should lead to streamlined, permanent regulatory reforms. There is also consideration of extending this sandbox model to other emerging technologies, such as quantum and clean energy. The received information will guide policy development.
On 8 December 2025, the Office of Communications (Ofcom) closes the consultation on recommendations on designing for media literacy. The recommendations set out a framework of good practice to guide service providers in promoting media literacy across the UK, building on Ofcom’s statutory duty as part of its Online Safety Roadmap. They apply to service providers that create, host, and distribute content and media to significant UK audiences. Specific areas covered include embedding media literacy by design, offering transparent choices, equipping users with practical management tools, supporting critical assessment of content credibility, and assisting parents and caregivers. They also encourage engagement with expert third parties, educational initiatives, and support for underserved communities. The recommendations aim to empower individuals by enabling meaningful content choices, personalised online experiences, and strengthening critical engagement skills.
On 4 November 2025, the High Court of Justice in England and Wales ruled that Stability AI had partially infringed Getty Images’ ISTOCK and GETTY IMAGES trademarks. The Court found that synthetic watermark images produced by Stable Diffusion version 1.x (used through DreamStudio and the Developer Platform) breached the ISTOCK trademark, and that a few examples from version 2.1 infringed the GETTY IMAGES mark. However, the Court dismissed the rest of Getty’s trademark claims, including those related to versions 1.6 and SD XL, and made no separate ruling on passing off. It also rejected the secondary copyright claim, stating that the Stable Diffusion models were not “infringing copies” under UK copyright law, so Stability AI was not liable for copyright infringement. The Court further clarified that Stability AI was not responsible for models shared on CompVis GitHub or Hugging Face, and noted that Getty Images had withdrawn earlier claims about model training, output generation, and database rights.
On 3 November 2025, the Communications and Digital Committee of the House of Lords opened an inquiry into artificial intelligence (AI) and copyright. The inquiry examines practical measures that would enable creative rightsholders to reserve and enforce rights in relation to AI systems, the levels of transparency and accountability that can reasonably be expected from AI developers, and how licensing, attribution and labelling tools could support a marketplace for creative content. The inquiry includes oral evidence sessions, including sessions focused on international approaches to the use of copyrighted works in AI training and the operation of rights-reservation and opt-out mechanisms. A final report addressing issues including text and data mining and transparency obligations is expected in early 2026.
On 31 October 2025, the Information Commissioner's Office (ICO) closes the consultation on its framework for handling data protection complaints. The consultation sought input on the ICO's approach to handling data protection complaints under the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. The framework proposes triage and threshold-based models to prioritise high-impact cases, reduce backlogs, and focus on systemic risks. It also notes that organisations may face regulatory scrutiny if complaints exceed set thresholds, while lower-risk cases will see lighter engagement.
On 31 October 2025, the Information Commissioner’s Office (ICO) closes its consultation, which had been open since 30 July 2025, on its guidance on profiling tools for online safety. The guidance outlines data protection and privacy requirements for profiling tools deployed in trust and safety systems, including where they are used to meet the obligations under the Online Safety Act 2023. The consultation was conducted via a structured survey with sections on respondent details, application of the guidance, and its effects. Data was gathered through the Citizen Space platform operated by UK supplier Delib. The ICO indicated that responses may inform statistical and general qualitative reporting without attribution to individual respondents.
On 30 October 2025, the Information Commissioner’s Office (ICO) closes the consultation on draft guidance on recognised legitimate interest as a lawful basis for processing data. The draft guidance clarifies the recognised legitimate interest basis, how it differs from legitimate interests, and when it can be applied. Recognised legitimate interest is a new lawful basis for handling personal information under the UK General Data Protection Regulation, introduced by the Data (Use and Access) Act 2025. It is distinct from the existing legitimate interests basis and applies only where one of the five pre-approved conditions in the public interest is met. These conditions allow personal data to be processed for public task disclosure requests, national and public security or defence, emergencies, crime prevention and investigation, and safeguarding. Unlike the legitimate interests basis, organisations using recognised legitimate interests do not need to balance their purposes against people’s rights and freedoms, as the law has already done this. The main benefit is therefore greater certainty and reduced compliance burden when handling personal information for these specific purposes. However, organisations must still show that processing is necessary for the relevant condition and comply with all other data protection obligations. While recognised legitimate interest is narrower in scope than legitimate interests, it provides a secure lawful basis for certain public interest activities. Organisations already relying on legitimate interests for purposes that fall within these conditions do not need to change their lawful basis.
On 29 October 2025, the Office of Communications (Ofcom) opened a consultation on a draft rule on combating mobile messaging scams, with responses due by 28 January 2026. The draft rule would require mobile providers to limit messaging volumes on pay-as-you-go SIMs, block numbers reported for scams, and detect and stop scam messages in transit. For business messaging, providers and aggregators would need to conduct thorough “know your customer” and “know your traffic” checks, verify sender identities, manage incidents by addressing scam activity, and block malicious messages.
On 28 October 2025, the Office of Communications (Ofcom) closes the consultation on the Guidance for Data Preservation Notices under the Data (Use and Access) Act 2025. The Notices require providers of regulated services to retain data on a child’s online activity after their death, including content, metadata, search requests, friend lists, and channels followed. The Guidance sets out response requirements, extensions, and consequences for non-compliance. The consultation sought input on Ofcom’s proposed approach to issuing Data Preservation Notices, including what information must be preserved and how notices should be managed, extended, or cancelled. It also sought views on optional information coroners may share, such as usernames or mobile numbers, and on ensuring the process is effective while minimising the burden on bereaved families.
On 23 October 2025, the Competition Appeal Tribunal delivered a ruling in the collective proceedings case of Kent v Apple under the Competition Act 1998. The Tribunal found that Apple holds a near-absolute monopoly in both iOS app distribution and iOS in-app payment services, and that it had imposed exclusionary practices in both markets, including exclusive dealing and tying. Apple's requirement that apps be distributed only through its App Store was ruled an illegal exclusionary practice. Similarly, forcing developers to use Apple's own payment systems was found to be an unlawful tie. Apple also charged developers a 30% commission between 1 October 2015 and 15 November 2024, which was judged to be an excessive and unfair price. Apple's justifications, such as user security and privacy, were rejected as neither necessary nor proportionate. The Tribunal set the competitive commission rate for distribution at 17.5% and for payments at 10%. Kent's class was awarded damages for these overcharges, with half the overcharge deemed to have been passed on to consumers, plus 8% interest.
On 22 October 2025, the Competition and Markets Authority (CMA) published its final decision designating Apple as having Strategic Market Status (SMS) under the Digital Markets, Competition and Consumers Act. The designation covers Apple’s smartphone and tablet operating systems (iOS and iPadOS), native app distribution (App Store), and mobile browser and browser engine (Safari and WebKit). The CMA concluded that these interconnected services form a single digital activity, Apple’s Mobile Platform, through which users access and engage with digital content and services on mobile devices. The SMS designation, effective from 23 October 2025 to 22 October 2030, gives the CMA powers to oversee Apple’s conduct in these areas.
On 22 October 2025, the Competition and Markets Authority (CMA) published its final decision designating Google as having Strategic Market Status (SMS) under the Digital Markets, Competition and Consumers Act. The designation applies to Google’s mobile platform, which includes its mobile operating system (Android), native app distribution (Play Store), and mobile browser and browser engine (Chrome and Blink). The CMA found that these interconnected services constitute a single digital activity, Google’s Mobile Platform, through which users access and interact with digital content and services on mobile devices. The designation, effective from 23 October 2025 to 22 October 2030, grants the CMA oversight powers over Google’s conduct. The CMA may review, extend, or revoke the designation before its expiry and issue revised notices if there are material changes in Google’s operations.
On 22 October 2025, the Department for Science, Innovation and Technology (DSIT) closes the consultation on amendments to the Telecommunications Security Code of Practice. The proposed revisions to the 2022 edition of the Code address evolving threats and technologies by strengthening security requirements on public telecommunications providers. The changes expand technical measures for eSIM technology, encryption standards, privileged access workstations, and Application Programming Interfaces (APIs), and introduce updated compliance deadlines extending to 2028. The consultation seeks stakeholder views on these adjusted obligations, technical guidance, and phased implementation schedules, structured under the Telecommunications (Security) Act 2021 and the Electronic Communications (Security Measures) Regulations 2022.
On 21 October 2025, the Secretary of State laid before Parliament the draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025. The draft statutory instrument updates Schedule 7 of the Online Safety Act to include cyberflashing (section 66A of the Sexual Offences Act 2003) and encouraging or assisting serious self-harm (section 184 of the Online Safety Act 2023) as priority offences. Following the entry into force of the regulations, online service providers, including social media platforms and search services, will be required to prioritise these offences under their Online Safety Act duties for illegal content. They must take steps to ensure their services are not used to facilitate or commit cyberflashing or encourage or assist serious self-harm. Providers will also need to remove and limit users’ exposure to such content in accordance with the Office of Communications’ codes of practice.
On 21 October 2025, the Department for Science, Innovation and Technology (DSIT) opened a public consultation regarding the establishment of an AI Growth Lab, with responses due by 2 January 2026. The proposed Lab aims to support innovation in AI-enabled products and services by relaxing targeted regulations imposed upon the AI sector within a controlled environment. The initiative responds to concerns that current regulations, designed before AI existed, may unnecessarily assume human involvement and static products. The Lab would enable testing of AI innovations in sectors such as healthcare, planning, and transport. The government is seeking views on the Lab's design, including whether it should be centrally operated or led by sector regulators. Questions also focus on which sectors to prioritise, what regulations could be modified, and which should be permanent "red lines" for safety and rights. Questions also explore effective Lab oversight and whether successful pilots should lead to streamlined, permanent regulatory reforms. There is also consideration of extending this sandbox model to other emerging technologies, such as quantum and clean energy. The received information will guide policy development.
On 20 October 2025, the Information Commissioner's Office (ICO) published guidance on “consent or pay” advertising models in the United Kingdom. The guidance defines “consent or pay” as a business model funding online services by offering users a choice. Under consent or pay, a user may consent to personal information use for personalised advertising, pay a fee to access the service without personalised advertising, or not use the service. Organisations present this choice when users access their platform. If users accept personalised advertising, targeted ads are delivered based on a profile formed from provided data (e.g., age), observed data (activity), and inferred data (assumptions). Opting to pay a fee means personal information must not be used for targeted advertising, though contextual advertising may still appear. The guidance states that “consent or pay” models can comply with data protection law, provided consent for personalised advertising is freely given. Organisations must transparently explain how personal information will be utilised for this purpose. Consent given under this model cannot extend to unrelated uses of personal information, which require separate consent requests. Users retain the right to withdraw consent for personalised advertising at any time through an easy mechanism. Upon withdrawal, the organisation may revert to the initial “consent or pay” choice, requiring the user to pay or leave the service. The right to object to direct marketing can be addressed via the same process. Complaints regarding unfair options or non-compliant models should initially be directed to the organisation, with unresolved issues reported to the ICO.
On 20 October 2025, the Competition and Markets Authority (CMA) issued its Phase 1 decision on Getty Images Holdings’ proposed acquisition of Shutterstock. The CMA found that the merger may result in a substantial lessening of competition in the supply of editorial content in the United Kingdom and the supply of stock content globally, including in the United Kingdom, due to horizontal unilateral effects. The CMA concluded that the merger could reduce competition by limiting alternative suppliers in editorial and stock content markets, particularly affecting UK media and creative sectors. The parties have until 27 October 2025 to offer undertakings to address the CMA’s concerns. If acceptable undertakings are not provided, the case will be referred for a Phase 2 investigation. The proposed transaction, valued at approximately GBP 245 million in cash and 319.4 million Getty Images shares, would result in a combined entity with an enterprise value exceeding GBP 3 billion.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Illegal Content Codes of Practice for user-to-user services, which was initiated on 30 June 2025. These codes outline risk assessment provisions under sections 5.4 to 5.6, requiring providers to determine whether their service is at medium or high risk of specified kinds of illegal harm, as defined in Table C. A service is classified as multi-risk if it faces medium or high risk in two or more harm categories, excluding image-based Child Sexual Abuse Material (CSAM), CSAM URLs, and grooming. Risk levels are identified through the provider’s most recent risk assessment under section 9 of the Online Safety Act 2023 or via a confirmation decision under section 134 of the Act. The provisions specify that offences include inchoate offences such as aiding, abetting, or attempting the commission of listed crimes. Providers must apply these classifications to implement relevant recommended measures under the Codes. The updates form part of broader changes including new content moderation measures (ICU C11-C16), enhanced reporting requirements (ICU D15-D17), recommender system safeguards (ICU E2), settings (ICU F3), and user access (ICU H2-H3). Modifications affect existing provisions on content moderation (ICU C1, C4, C9), reporting (ICU D8-D11, D13), settings (ICU F1-F2), terms of service (ICU G1), and user controls (ICU J1-J2).
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Illegal Content Codes of Practice for user-to-user services, which was initiated on 30 June 2025. These codes outline measures for anti-harassment user controls under ICU J1 and ICU J2, applicable to large services and certain smaller services likely to be accessed by children. Providers must offer registered users the ability to block or mute other accounts, including preventing connected or unconnected users from sending direct messages or interacting with posted content. Blocking restricts all mutual encounters of content, while muting limits visibility unless the muted account’s profile is directly accessed. Services must also enable users to disable comments on their content and provide clear, accessible information on these controls, tailored to the youngest permitted users. These measures apply to services at medium or high risk of harassment, stalking, threats, or abuse, as defined in the risk assessment under the Online Safety Act 2023.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Illegal Content Codes of Practice for user-to-user services, which was initiated on 30 June 2025. These codes outline measures for fraud reporting, including ICU D14, which mandates large services at medium or high risk of fraud to establish dedicated reporting channels for trusted flaggers, such as law enforcement and regulatory bodies. The measure requires providers to publish clear policies on these channels, engage with trusted flaggers to understand their needs, and ensure complaints about suspected fraudulent content are reviewed under ICU C1.3. Providers must also seek feedback biennially to improve channel operations and maintain records of interactions. The recommended trusted flaggers include entities such as the City of London Police, the Financial Conduct Authority, and the National Crime Agency. These steps align with the illegal content safety duties under section 10(3) of the Online Safety Act 2023.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Illegal Content Codes of Practice for user-to-user services, which was initiated on 30 June 2025. These codes outline safety defaults for child-user accounts under Recommendation ICU F1, applicable to services capable of determining user age or age range. These defaults disable network expansion prompts, restrict connection lists to exclude child-user accounts, and limit direct messaging functionality to pre-approved connections unless time-critical interactions require prior informed consent. Automated location display is disabled by default for child accounts to mitigate grooming risks identified in risk assessments. Providers must apply these settings to all child-user accounts, excluding those verified as adults through highly effective age assurance meeting ICU B1's criteria for technical accuracy, robustness, reliability, and fairness.
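The ICU F1 defaults described above can be pictured as a settings object applied at account creation. The sketch below is purely illustrative: the field and function names are assumptions for this example, not a schema drawn from the draft codes or from any actual service.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountSafetySettings:
    """Illustrative per-account safety settings (field names are assumed)."""
    network_expansion_prompts: bool    # prompts suggesting new connections
    visible_in_connection_lists: bool  # appears in other users' connection lists
    dms_restricted_to_approved: bool   # DMs limited to pre-approved connections
    location_display: bool             # automated location display

# Defaults sketching ICU F1, applied to child-user accounts unless the user
# is verified as an adult through highly effective age assurance (ICU B1).
CHILD_DEFAULTS = AccountSafetySettings(
    network_expansion_prompts=False,
    visible_in_connection_lists=False,
    dms_restricted_to_approved=True,
    location_display=False,
)

def settings_for_new_account(is_verified_adult: bool) -> AccountSafetySettings:
    """Return safety defaults for a new account based on age-assurance outcome."""
    if is_verified_adult:
        # Adult defaults shown here are hypothetical, not from the codes.
        return AccountSafetySettings(True, True, False, True)
    return CHILD_DEFAULTS
```

The point of the sketch is that the safer configuration is the default and relaxing it requires an affirmative, verified age-assurance outcome, mirroring the structure of the recommendation.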
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on proposed amendments to Chapter 4 of the Illegal Content Judgements Guidance, which was initiated on 30 June 2025. The amendments enhance the guidelines for identifying and handling illegal content related to child sexual exploitation and abuse (CSEA). They introduce new paragraphs and boxes on 'Usage examples' and 'Reasonably available information for user-to-user services'. The guidance outlines the duty of providers to report detected but unreported CSEA content to the National Crime Agency, details criteria for identifying indecent images, pseudo-photographs, and prohibited images of children, addresses the use of generative artificial intelligence in creating illegal content, and covers the responsibilities of service providers in inferring the age of subjects in potentially illegal material.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft guidance on highly effective age assurance for Part 3 Services, which was initiated on 30 June 2025. During the consultation period, stakeholders were invited to review and provide feedback on the proposed guidance, which covers aspects of age assurance, including the use of age estimation technologies and the implementation of access controls. The guidance also discusses the principles of accessibility and interoperability, ensuring that age assurance methods are easy to use and work effectively for all users. Following the closure of the consultation, Ofcom will review the feedback received to finalise the guidance, which will help service providers in adopting recommended measures to protect children and comply with regulatory obligations under the Online Safety Act 2023.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Protection of Children Code of Practice for user-to-user services, which had opened on 30 June 2025. The draft code introduces measures including PCU C9, which requires providers to assess proactive technology for detecting content harmful to children, PCU C10 for evaluating existing proactive technology, PCU C11 for crisis response protocols, and PCU H2 for user sanctions. Amendments were made to PCU B1, refining criteria for highly effective age assurance, PCU C4 on performance targets for content moderation, PCU D8 and PCU D10 on handling content appeals, and PCU G1, updating terms of service requirements to include provisions on proactive technology and sanctions. These measures aim to enhance compliance with safety duties protecting children under the Online Safety Act 2023.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Illegal Content Codes of Practice for Search Services, which began on 30 June 2025. These codes introduce the Search moderation measure ICS C8 – Hash matching for intimate image abuse content, which mandates large general search services to implement perceptual hash matching technology to detect intimate image abuse content. The measure requires providers to analyse relevant content, defined as photographs, videos, or visual images encountered by United Kingdom users, using a suitable perceptual hash function and an appropriate set of hashes. Providers must review detected content, prioritising cases with higher likelihood of false positives, and ensure the technology balances precision and recall. The hashes must be sourced from verified databases or the provider’s own records, with regular updates and security measures to prevent unauthorised access. Safeguards for freedom of expression and privacy are integrated, including human review processes and alignment with complaints procedures under section 33 of the Online Safety Act 2023.
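As an illustration of the hash-matching approach that ICS C8 describes, the sketch below computes a simple average hash over an 8x8 grayscale grid and matches it against a set of known hashes by Hamming distance. This is a toy stand-in, assuming nothing about real deployments, which use dedicated perceptual hash functions and hashes sourced from verified databases; all function names here are invented for the example.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: set if the pixel is at or above the mean.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_match(candidate_hash, known_hashes, threshold=8):
    """Match if any known hash is within the Hamming-distance threshold.

    The threshold embodies the precision/recall balance the measure refers
    to: a higher threshold catches more altered copies (recall) at the cost
    of more false positives needing human review (precision).
    """
    return any(hamming(candidate_hash, h) <= threshold for h in known_hashes)
```

In this framing, anything flagged by `is_match` would go to the human-review and complaints processes the measure requires as safeguards, rather than being actioned automatically.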
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft Illegal Content Codes of Practice for user-to-user services, which was initiated on 30 June 2025. These codes detail recommended measures relating to content moderation for providers to comply with their illegal content safety duties. The measures require all services to maintain a content moderation function capable of reviewing suspected illegal content (ICU C1) and ensuring the swift takedown of such content (ICU C2), unless technically infeasible. Large or multi-risk services must also establish internal content policies (ICU C3), performance targets (ICU C4), prioritisation strategies (ICU C5), adequate resourcing (ICU C6), and training and materials for moderators (ICU C7–C8). Additional obligations apply to high-risk or large services, including the use of perceptual hash-matching to detect and remove child sexual abuse material (ICU C9), the detection of listed Child Sexual Abuse Material (CSAM) URLs (ICU C10), and proactive technology assessments (ICU C11–C12).
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on the draft guidance on proactive technology measures. The guidance aims to assist providers of regulated user-to-user services in implementing proactive technology for detecting illegal content and content harmful to children. It outlines criteria for assessing proactive technology, including the use of high-quality data, addressing biases, evaluating performance, safeguarding against misuse, contextual testing, maintenance, human review, and incorporating feedback. The guidance is structured to support providers through initial assessment, testing and configuration, and deployment and ongoing monitoring stages.
On 20 October 2025, the Office of Communications (Ofcom) closes its consultation on additional safety measures for online platforms. The consultation, which was opened on 30 June 2025, sought input on proposed measures to enhance online safety. These measures include improving livestreaming protections, employing proactive technologies for content detection, and implementing stricter age assurance mechanisms. The aim is to address risks associated with illegal content, such as terrorism, child sexual abuse material (CSAM), and content encouraging suicide or self-harm. Feedback from stakeholders, including service providers and civil society organisations, will be considered.
On 19 October 2025, the Information Commissioner’s Office (ICO) closes its consultation on draft guidance for organisations on handling data protection complaints under the rules introduced by the Data (Use and Access) Act. The Act added a new section, 164A, to the Data Protection Act 2018, requiring organisations to provide a way for people to complain, acknowledge receipt within 30 days, investigate without undue delay while keeping the complainant informed, and communicate the outcome promptly. The ICO’s guidance explains what organisations must, should, and could do, offering information for each stage. It stresses the importance of a clear complaints procedure, using plain language, keeping proper records, training staff, and considering additional obligations such as the Equality Act 2010. Special attention is required when dealing with complaints from or on behalf of children, including mechanisms for urgent issues and safeguarding. The guidance sets out how to acknowledge complaints, conduct investigations, maintain records, and provide clear outcomes, including information on the right to escalate complaints to the ICO.
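As a minimal illustration of the section 164A timeline, a complaints-handling system might track the 30-day acknowledgment window like this. The function names are assumptions for the sketch, and whether the statutory window runs in calendar days from receipt is a simplifying assumption here, not a statement of the guidance.

```python
from datetime import date, timedelta

# Section 164A: receipt of a complaint must be acknowledged within 30 days.
# Treating these as calendar days from the date of receipt is an assumption
# of this sketch.
ACK_WINDOW_DAYS = 30

def acknowledgement_deadline(received: date) -> date:
    """Last date by which receipt of the complaint should be acknowledged."""
    return received + timedelta(days=ACK_WINDOW_DAYS)

def is_ack_overdue(received: date, today: date) -> bool:
    """True once the acknowledgment window has passed without acknowledgment."""
    return today > acknowledgement_deadline(received)
```

A real tracker would also record the investigation and outcome stages, which the Act requires to proceed "without undue delay" rather than against a fixed statutory deadline.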
On 17 October 2025, the Competition and Markets Authority (CMA) issued a decision to release the commitments it had accepted from Google in February 2022 concerning the company’s Privacy Sandbox proposals under the Competition Act 1998. The commitments were originally imposed to prevent Google’s plans to phase out third-party cookies (TPCs) in Chrome from distorting competition in online advertising. As Google later abandoned its plans to remove or restrict TPCs and announced the retirement of various Privacy Sandbox technologies, the CMA concluded that its competition concerns no longer applied and released the commitments.
On 16 October 2025, the Office of Communications (Ofcom) opened a consultation, running until 11 December 2025, on its proposed designation of Radio Selection Services (RSS) in the United Kingdom. Ofcom recommended that Amazon Alexa, Google Assistant, and Apple Siri be designated under the Media Act 2024 as services providing UK radio streams through voice commands. The recommendation applies to RSS providers serving over 700,000 UK users listening to internet radio. Designated services must enable users to select and play radio through voice, ensure uninterrupted playback, and comply with radio providers' technical requests without charging fees. Ofcom highlighted that smart speakers account for 97% of current voice-activated radio listening, though smartphone and in-car listening also occurs.
On 16 October 2025, the Competition and Markets Authority (CMA) opened a consultation, running until 13 November 2025, on revised merger remedies guidance. The guidance updates the 2018 framework to reflect the CMA’s “4Ps” principles of pace, predictability, proportionality, and process, aimed at improving efficiency, transparency, and business confidence. The proposed guidance expands the circumstances under which behavioural remedies may be accepted, clarifying that while structural remedies remain preferred, behavioural solutions can be effective in certain cases and their risks can be mitigated. It also notes that remedies may secure efficiencies and customer benefits that promote competition, supporting pro-growth deals beneficial to UK consumers. In addition, the updated guidance reflects recent procedural changes aimed at increasing transparency and engagement with merging businesses, potentially enabling earlier case clearance.
On 15 October 2025, the UK Information Commissioner's Office (ICO) imposed penalties totalling GBP 14 million on Capita, with Capita plc fined GBP 8 million and Capita Pension Solutions Limited GBP 6 million, for breaches of the data security requirements of the UK General Data Protection Regulation (UK GDPR). The penalties relate to a March 2023 ransomware attack that resulted in the exfiltration of 6,656,037 personal data records, including sensitive financial and special category data. The ICO found that Capita failed to implement appropriate technical measures to prevent unauthorised lateral movement within its network despite penetration tests identifying these vulnerabilities as early as August 2022. It also found that Capita's security operations centre took 58 hours to respond to a critical security alert against a one-hour target, allowing threat actors to escalate privileges and access data across multiple domains.
On 14 October 2025, the Office of Communications (Ofcom) released guidance outlining the regulatory obligations of online video game providers under the Online Safety Act. The guidance specifies that the Act applies to online services with user-to-user functions enabling users to create, share or upload content that can be encountered by others, including features such as matchmaking, in-game chat, livestreaming, and user-generated avatars or environments. It emphasises Ofcom’s role in enforcing online safety duties related to illegal content and content harmful to children. The guidance references Ofcom’s Register of Risks for 17 categories of illegal content and 12 types of content harmful to children, including terrorism, grooming, harassment, and violent or bullying content. Providers are instructed to assess risk, implement safety measures, conduct children’s access and risk assessments, and maintain records in accordance with the Act’s safety and related duties.
On 14 October 2025, the United Kingdom’s National Cyber Security Centre (NCSC) launched the Cyber Action Toolkit, aiming to assist small businesses in strengthening their cyber defences. The toolkit is available to traders, micro businesses and small organisations. It offers personalised, step-by-step guidance tailored to each organisation's size and needs, focuses on simple, high-impact actions, and uses gamification to track progress through Foundation, Improver and Enhanced levels.
On 13 October 2025, the Office of Communications (Ofcom) provisionally determined that there are reasonable grounds to believe AVS Group Ltd has failed, and continues to fail, to comply with Section 12 of the Online Safety Act 2023, which imposes a duty on providers of regulated pornographic services to ensure that children are prevented from encountering pornographic content through the use of highly effective age-assurance measures. Consequently, Ofcom issued a provisional notice of contravention to AVS Group Ltd on 10 October 2025 under Section 130 of the Act, setting out its provisional findings and the actions proposed. The notice also states Ofcom’s provisional view that AVS Group Ltd breached Section 102(8) of the Act by failing to respond to a statutory information request issued as part of the investigation. The investigation, initiated on 31 July 2025 and expanded on 28 August 2025, concerns the deployment of age-assurance mechanisms across AVS Group Ltd’s pornography platforms, which collectively attract over 9 million monthly UK visits across 34 targeted websites. AVS Group Ltd has 20 working days to submit representations before Ofcom reaches a final decision.
On 13 October 2025, the Office of Communications (Ofcom) closed its investigation into Nippydrive’s compliance with its duties under the Online Safety Act 2023. The investigation, opened on 10 June 2025 under case reference CW/01303/06/25, examined potential non-compliance with statutory obligations applicable to regulated user-to-user services, including failure to respond to an information notice issued on 1 April 2025, to complete and retain a suitable and sufficient illegal-content risk assessment, and to comply with safety duties relating to illegal content that took effect on 17 March 2025. These duties require services to assess risks and implement proportionate measures to prevent users from encountering priority illegal content, including image-based child sexual abuse material. Nippydrive became unavailable to users in the United Kingdom and, to Ofcom’s knowledge, more widely from around 15 June 2025. In light of this, Ofcom determined that it was no longer an administrative priority to pursue enforcement action, while reserving the right to re-open the investigation if the service becomes available again.
On 13 October 2025, the Office of Communications (Ofcom) closed its investigation into the provider of Krakenfiles regarding compliance with obligations under the Online Safety Act. The investigation, opened on 10 June 2025 under Case No. CW/01301/06/25, examined potential failures to comply with statutory information requests issued on 1 April 2025 under Section 100 of the Act, which required providers to submit details of compliance and copies of their illegal content risk assessments by 1 May 2025. The investigation formed part of an enforcement programme launched on 17 March 2025 to assess measures taken by file-sharing and file-storage services to address image-based child sexual abuse material. Ofcom confirmed that the provider had restricted access to users with UK IP addresses from 25 July 2025, thereby reducing exposure to illegal or harmful content in the United Kingdom. Having monitored the website since that date, Ofcom determined the investigation was no longer an administrative priority and therefore closed it, but reserved the right to reopen proceedings should restrictions not be maintained. Non-compliance under the Act may result in financial penalties of up to GBP 18 million or 10% of qualifying worldwide revenue.
On 13 October 2025, the Secretary of State for Science, Innovation and Technology, the Secretary of State for Business and Trade, the Minister for Security, and the Director General of the National Crime Agency issued letters to large businesses on cybersecurity responsibilities. The letters are addressed to businesses in the United Kingdom across all sectors and request three actions: adopting the Cyber Governance Code of Practice with board training and incident exercises, registering for the National Cyber Security Centre’s early warning service, and implementing Cyber Essentials across supply chains and internal systems. The measures aim to improve resilience, protect profitability, and support economic stability.
On 13 October 2025, the Office of Communications (Ofcom) closed its investigation into Nippyshare’s compliance with its duties under the Online Safety Act 2023. The investigation, opened on 10 June 2025 under Case No. CW/01304/06/25, examined potential non-compliance with statutory obligations applicable to regulated user-to-user services, including duties to complete and retain an illegal content risk assessment and to respond to an information notice issued on 1 April 2025 under Section 100 of the Act, failure to respond to which falls under Section 102(8). The provider was required to submit the requested information by 1 May 2025. However, shortly after Ofcom initiated the investigation, Nippyshare became unavailable to users in the United Kingdom and more widely from around 15 June 2025. Given this development, Ofcom determined that it was no longer an administrative priority to continue enforcement action, while reserving the right to reopen the case should the service resume operations. Ofcom continues to monitor the service’s availability and may impose financial penalties of up to GBP 18 million or 10% of qualifying worldwide revenue or seek court orders restricting access if future non-compliance is identified.
Last updated: 31/03/2026