CoE Framework Convention signatory
This content is for informational and educational purposes only and does not constitute legal advice.
On 27 January 2026, the Spanish Data Protection Agency (AEPD) released a guide on protecting privacy when using AI tools. The guide sets out ten recommendations, including advising users not to upload personal data and, in particular, to avoid sensitive data such as health, financial, contractual, geolocation, and travel or accommodation data. It further advises users to review AI service terms and select safer versions, noting data flows including cookies, IP addresses, tracking and usage data, metadata, device data, contacts, and location. Users should respect the privacy of third parties, especially minors, refrain from using images of other people to generate content, and avoid inputting confidential professional information. Finally, the guide advises users who need help with personal situations to seek specialised, emotional, or psychological support from qualified professionals rather than from AI.
On 13 January 2026, the Spanish Data Protection Agency (AEPD) published an information note analysing the data protection implications of using third parties’ images in Artificial Intelligence (AI) systems. The note applies to AI providers, platforms, and users that upload, generate, modify, or disseminate images or videos of identifiable persons. It highlights visible risks, including sexualisation and synthetic intimate content, attribution of false events with reputational effects, decontextualisation, wide dissemination, and heightened impact on minors and vulnerable persons. It also identifies less visible risks arising merely from uploading images, including effective loss of control due to third-party processing, hidden retention and copies, involvement of multiple actors, additional provider purposes, metadata and internal inferences, persistent identification across generated content, information asymmetry limiting the exercise of rights, and exposure through errors or security incidents. The note clarifies that some uses may fall outside the General Data Protection Regulation in strictly personal or household contexts, that images of deceased persons are generally excluded, and that other legal regimes, including image rights and criminal law, may apply. The note signals particular supervisory attention where risks are amplified, including loss of control over one’s image, generation of plausible but false content, sexualisation, humiliation or discredit, involvement of minors or vulnerable individuals, and dissemination with significant personal, social, or professional impact.
On 22 August 2025, the Spanish Data Protection Agency (AEPD) fined World 2 Meet EUR 70,000 for the unnecessary collection of personal data. The fine was reduced to EUR 56,000 for acknowledgement of liability and further to EUR 42,000 for voluntary payment, each reduction amounting to 20 per cent of the original fine. World 2 Meet, the travel division of the Iberostar Group, had required a complainant to submit a copy of his identity document to complete traveller registration. The complainant argued this was unnecessary, as he had already provided the required data for himself and his travel party. The company maintained that copies of the full documents were needed to communicate with the Civil Guard. The AEPD noted that World 2 Meet was notified of the claim on 5 December 2024. On 5 August 2024, the company responded that the information was required to verify customer identities in line with Article 4.3 of Royal Decree 933/2021. However, the AEPD found that compliance with Article 4.3 could be achieved by having customers complete a form limited to the data required under sections A.3 and B.3 of Annex I of the Royal Decree. Verification of authenticity, the AEPD explained, could be carried out either in person, by visually checking the information against the identity document, or remotely, using mechanisms such as digital certificates.
On 29 July 2025, the National Commission for Markets and Competition (CNMC) expanded its investigation into Apple over potential anti-competitive practices in its App Store. The CNMC is now examining whether Apple established a pricing schedule that developers are required to follow in order to distribute their apps in its stores. Such conduct would constitute a restrictive practice between undertakings, expanding the scope of the case to include Article 1 of Law 15/2007 on the Defence of Competition (LDC) and Article 101 of the Treaty on the Functioning of the European Union (TFEU). The investigation also covers allegations that Apple may be imposing unfair trading conditions on developers distributing applications via the App Store; such practices would violate Article 2 of the LDC and Article 102 of the TFEU.
On 15 July 2025, the Spanish Data Protection Agency (AEPD) issued guidance clarifying that it can already act against prohibited AI systems that process personal data, even before the EU Artificial Intelligence (AI) Act is fully in force. From 2 August 2025, the supervisory and sanctioning provisions covering AI systems banned under Article 5 of the AI Act, such as real-time biometric identification in public spaces, will begin to apply. Although Spain has not yet adopted its national AI law and the AEPD has not formally been designated as a market surveillance authority, the AEPD remains empowered to act in its capacity as data protection authority. This includes overseeing AI-driven data processing where it infringes data protection rights. The AEPD advises organisations using AI to begin preparing for AI Act compliance and is assessing the internal resources it will need to fulfil future responsibilities under the regulation.
On 26 March 2025, the State Secretariat for Telecommunications and Digital Infrastructures closed its consultation on the draft Law for the proper use and governance of Artificial Intelligence (AI). The law applies to developers and users of AI across sectors including infrastructure, biometrics, justice, and elections. It aligns with the European Union's AI Act, introducing a right to withdraw harmful AI, mandatory labelling of AI-generated content, and sector-specific oversight. High-risk AI systems face stricter rules, and banned practices including subliminal manipulation may incur fines of up to EUR 35 million or 7% of global turnover.
On 18 March 2025, the State Secretariat for Telecommunications and Digital Infrastructures opened a consultation, running until 26 March 2025, on the draft Law for the proper use and governance of Artificial Intelligence (AI). The law applies to developers and users of AI across sectors including infrastructure, biometrics, justice, and elections. It aligns with the European Union's AI Act, introducing a right to withdraw harmful AI, mandatory labelling of AI-generated content, and sector-specific oversight. High-risk AI systems face stricter rules, and banned practices including subliminal manipulation may incur fines of up to EUR 35 million or 7% of global turnover.
On 11 March 2025, the Council of Ministers approved the Preliminary Draft Law for the proper use and governance of Artificial Intelligence (AI), for the purposes set forth in Article 26.4 of Law 50/1997, aligning Spanish legislation with the European AI regulation to ensure ethical, inclusive, and beneficial AI use. The draft law introduces a digital right to withdraw AI systems that cause serious incidents and mandates clear labelling of AI-generated content. Oversight responsibilities are assigned to specific authorities. The Spanish Data Protection Agency will oversee biometric systems and border management, while the General Council of the Judiciary will be responsible for AI in the justice sector. The Central Electoral Board will supervise AI systems affecting democratic processes, and the Spanish Agency for the Supervision of Artificial Intelligence will cover all other cases. This structure is intended to ensure effective supervision and enforcement across different AI applications. The European AI regulation prohibits AI practices such as subliminal manipulation, exploitation of vulnerabilities, and biometric classification based on sensitive traits, with penalties ranging from EUR 7.5 million to EUR 35 million or up to 7% of global turnover. High-risk AI systems, including those in critical infrastructure, biometrics, and justice, must comply with additional obligations such as risk management, human oversight, and transparency, with penalties for non-compliance.
On 11 March 2025, the Spanish Data Protection Agency (AEPD) published an article on artificial intelligence (AI) and data protection, referring to a recent publication by the UK's Information Commissioner's Office (ICO). The ICO's consultation on generative AI identified several points for clarification, including that even incidental processing of personal data falls within the scope of data protection laws. The ICO stated that data protection rules apply to any processing of personal data, regardless of intent, and noted that common industry practices do not necessarily reflect individuals' expectations of how their data will be used. The publication also notes that the legal definition of personal data covers a wider range of information than 'personally identifiable information' (PII), and that generative AI models may store or disclose personal data. The AEPD outlines that AI systems are subject to existing data protection regulations, referring to compliance requirements and transparency obligations in AI development.
On 28 January 2025, the Catalan Data Protection Authority (APDcat) published a document titled "Model for the EIDF: Guide and Use Cases", which provides a practical methodology for conducting Fundamental Rights Impact Assessments (EIDF) in the design and development of artificial intelligence (AI) systems. The first part of the document outlines the EIDF methodology, including phases such as planning, data collection, analysis, and risk management, along with a template for implementation. The second part features practical use cases, such as an advanced learning analytics platform and a human resource management tool, demonstrating the application of the EIDF in real-world scenarios. The model aims to ensure that AI systems are developed in compliance with fundamental rights and data protection regulations.
Last updated: 27/01/2026