Europe · CoE Framework Convention signatory
Implementation is underway but faces significant delays across multiple fronts.
The Commission missed its 2 February 2026 deadline to publish guidelines on the practical application of Article 6 (high-risk AI classification), citing the need to integrate substantial stakeholder feedback. A revised set of guidelines is being prepared over the course of 2026, covering high-risk classification, transparency requirements, incident reporting, fundamental rights impact assessments, provider and deployer obligations, and the interplay of the AI Act with other EU legislation.
The European standardisation bodies CEN and CENELEC failed to deliver harmonised technical standards for high-risk AI requirements by their August 2025 target. Standardisation work remains ongoing, with delivery now expected by the end of 2026.
Many member states have not yet formally designated their national competent authorities (market surveillance and notifying authorities), despite the 2 August 2025 deadline. As of early 2026, only three member states had completed full designation, approximately ten had legislative proposals pending, and fourteen had yet to act.
On 13 March 2026, the Council of the EU adopted its negotiating mandate on the Digital Omnibus on AI, proposing to defer high-risk enforcement dates and link them to the availability of harmonised standards and Commission guidance. The Council mandate proposes backstop dates of 2 December 2027 for standalone high-risk AI systems (Annex III) and 2 August 2028 for embedded high-risk AI systems in regulated products (Annex I). The Council mandate also introduces a new prohibition on AI practices relating to the generation of non-consensual sexual and intimate content or child sexual abuse material. Negotiations with the European Parliament are now underway.
This content is for informational and educational purposes only and does not constitute legal advice.
Last updated: 22 March 2026