On 25 February 2026, the Committee of Ministers of the Council of Europe approved the HUDERIA Model. The Model complements the HUDERIA Methodology adopted in 2024 by providing supporting materials and resources for its implementation, including flexible tools, illustrative resources, and scalable recommendations linked to the Methodology's elements: the Context-Based Risk Analysis (COBRA), the Stakeholder Engagement Process, the Risk and Impact Assessment, and the Mitigation Plan. The resources in the HUDERIA Model are not presented as best-practice examples and do not set minimum standards. The Model may serve as a basis for developing interactive tools, such as online platforms or structured workflows, to facilitate risk and impact assessments of AI systems.
On 23 February 2026, 61 Data Protection Authorities, including those from Australia, Spain, Hong Kong, New Zealand, Korea, Singapore, and Switzerland, and the European Data Protection Board, adopted a joint statement raising concerns about Artificial Intelligence (AI) systems that generate realistic images and videos of identifiable individuals without consent. The statement is addressed to organisations that develop or use AI content-generation systems. It notes that such tools enable non-consensual intimate imagery, defamatory content, and serious harms to children and other vulnerable groups, and it reminds organisations that AI systems must comply with existing privacy and data protection laws; creating non-consensual intimate imagery may also constitute a criminal offence in many jurisdictions. The statement calls for robust safeguards, meaningful transparency, fast and accessible content removal mechanisms, and enhanced protections for children. It also highlights that regulators are committed to coordinated action through enforcement, policy, and education.
On 20 February 2026, 87 countries and international organisations, including Australia, the Philippines, Japan, Singapore, South Korea, the European Union, and the International Fund for Agricultural Development, endorsed the Artificial Intelligence (AI) Impact Summit Declaration, setting out a shared, voluntary international framework for advancing inclusive, trustworthy, and development-oriented AI. The Declaration applies across the AI ecosystem, covering AI developers, deployers, and users in sectors such as infrastructure, research, public services, industry, and workforce development. It advances a set of voluntary, non-binding principles and cooperative mechanisms across seven defined pillars. The Declaration prioritises widening access to AI through affordable digital infrastructure, shared foundational resources, and voluntary frameworks that support locally relevant innovation. It promotes the diffusion of AI for economic growth and social good through open, scalable AI approaches, supported by platforms for replicating successful use cases. It also advances secure and trusted AI through voluntary technical standards and collaborative tools, strengthens international cooperation on AI for science, supports AI adoption for social empowerment, and commits to large-scale AI skilling and reskilling. Finally, it emphasises the development of energy-efficient, resilient, and affordable AI systems guided by non-binding principles.
Last updated: 25/02/2026