Deep Lex

Germany AI Regulation

Law(s) enacted · Treaty

Europe · CoE Framework Convention signatory

Overview

EU AI Act
  • The EU AI Act (Regulation 2024/1689) applies directly across all member states. Prohibitions on unacceptable-risk AI systems have been in force since 2 February 2025; GPAI model rules since 2 August 2025. High-risk AI obligations are due from 2 August 2026, subject to the Digital Omnibus proposal which may defer enforcement. For the full implementation timeline, governance structure, and current status, see the European Union overview.
  • The Federal Network Agency (BNetzA) has been proposed as lead market surveillance authority; the Federal Data Protection Commissioner (BfDI) is active on AI and data governance. The national AI Strategy was updated in 2023. AI liability and domestic implementation legislation is in preparation.

Key Sources

EU AI Act (Regulation 2024/1689)
Bundesnetzagentur (BNetzA) — proposed lead MSA
EU AI Act National Implementation Tracker
BaFin — co-MSA for financial sector
Council of Europe Framework Convention on AI (CETS 225)

This content is for informational and educational purposes only and does not constitute legal advice.

AI Regulation Timeline

  1. 09/03/2026
    introduction

    Act implementing EU Regulation on Artificial Intelligence (Draft Act 21/4594) was introduced to Parliament

On 9 March 2026, the German Federal Government submitted the draft Act to implement the Artificial Intelligence Regulation (No. 21/4594) to the Parliament (German Bundestag). The Act would apply to providers, deployers, importers and distributors of artificial intelligence systems as defined in Regulation (EU) 2024/1689, as well as authorised representatives of non-EU providers and product manufacturers who place artificial intelligence systems on the market under their own name or trademark. The Act would designate the Federal Network Agency as the primary market surveillance and notification authority for AI systems not covered by other designated bodies. The Federal Financial Supervisory Authority would be designated as the competent market surveillance authority for AI systems used in connection with regulated financial activities, while the competent authorities of the federal states would remain responsible for systems deployed by public bodies and media service providers. The Act would further establish an independent Artificial Intelligence Market Surveillance Chamber within the Federal Network Agency for certain high-risk AI systems used in law enforcement, border management and the justice system. It would also establish a coordination and competence centre within the Federal Network Agency to support cooperation between the relevant bodies and act as the central contact point under Regulation (EU) 2024/1689. The Act would require the Federal Network Agency to operate at least one artificial intelligence regulatory sandbox and to act as a central complaints office for violations of Regulation (EU) 2024/1689. It would also allow for administrative fines of up to EUR 50,000 for infringements not covered by Regulation (EU) 2024/1689.

  2. 12/02/2026
    introduction

    Act implementing EU Regulation on Artificial Intelligence (Draft Act 97/26) was submitted to Federal Council

On 13 February 2026, the German Federal Government submitted the draft Act to implement the Artificial Intelligence Regulation (No. 97/26) to the Federal Council (Bundesrat). This Act would establish a national supervisory framework for artificial intelligence. The Act would apply to providers and deployers of AI systems, including entities operating in regulated financial sectors. It would designate the Federal Network Agency as the primary market surveillance authority under the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), as well as the single point of contact under Article 70 of the Regulation. It would also establish an independent AI Market Surveillance Chamber within the Federal Network Agency for high-risk AI systems used in law enforcement, border management, the justice system, and democratic processes. It would also designate the Federal Financial Supervisory Authority as the competent authority for AI systems connected to regulated financial activities. Additionally, the Act would establish a central Coordination and Competence Centre and require the Federal Network Agency to operate at least one AI regulatory sandbox. It would also grant the Agency powers to conduct on-site inspections and issue administrative fines of up to EUR 50,000 for specific procedural violations. The Act would also amend the Whistleblower Protection Act to include violations relating to the regulation of artificial intelligence and update the relevant provisions in the Social Code relating to data processing. The Federal Council is expected to respond by 27 March 2026. The Act would enter into force the day after its promulgation.

  3. 10/12/2025
    interim ruling

    Hanseatic Higher Regional Court upheld Hamburg Regional Court's judgment on lawfulness of use of photograph in AI training dataset

On 10 December 2025, the 5th Civil Senate of the Hanseatic Higher Regional Court dismissed an appeal against a Hamburg Regional Court judgment, holding that the use of a photograph in an Artificial Intelligence (AI) training dataset by an association was lawful. The case concerned the creation of a publicly available image–text dataset, used to train generative AI models, which involved temporarily downloading the photograph from a photo agency website to compare it with descriptive data. The Court ruled that the use was permitted under the text and data mining exception in Section 44b of the German Copyright Act, as the usage restriction on the agency’s website was not machine-readable as required by law, and therefore not legally effective. It further held that the activity was independently justified under the scientific research exception for text and data mining of copyrighted works in Section 60d of the Act. The ruling found that the dataset creation constituted a systematic, verifiable process aimed at future knowledge acquisition and attributable to applied research, notwithstanding the possibility of subsequent commercial use and in the absence of decisive private-sector influence.

  4. 11/11/2025
    ruling

    Munich Regional Court issued ruling on lawsuit concerning alleged copyright infringement of song lyrics by AI systems

    On 11 November 2025, the 42nd Civil Chamber of the Munich Regional Court largely upheld GEMA’s claims for injunctive relief and damages against two OpenAI group companies. GEMA, a collecting society, argued that lyrics by nine German songwriters had been memorised by OpenAI’s language models and reproduced in chatbot outputs, thereby infringing copyright exploitation rights. OpenAI maintained that its models consist of learned parameters rather than stored data, that users are responsible for generated outputs, and that any reproductions were covered by the text and data mining limitation (Section 44b UrhG) or the incidental inclusion exception (Section 57 UrhG). The court found that the lyrics were embedded in the models and reproduced in chatbot responses, constituting infringements of exploitation rights under Article 2 of the InfoSoc Directive and Section 16 of the German Copyright Act. It held that text and data mining limitations apply only to reproductions required for data analysis, not to reproductions incorporated within a model. The defence of incidental inclusion was dismissed on the basis that the lyrics were not incidental within the dataset. The court further determined that no consent from rights holders could be implied, as model training did not qualify as an expected use. Responsibility for the infringing outputs was attributed to OpenAI, given its control over the models, training data, and content generation architecture. Claims concerning personality rights arising from the incorrect attribution of altered lyrics were rejected.

  5. 17/10/2025
    adoption

    Data Protection Supervisory Authorities of Federal and State Governments adopted guidance on data protection for generative AI systems using retrieval augmented generation methodology

    On 17 October 2025, the Conference of the Independent Data Protection Supervisory Authorities of the Federal and State Governments (DSK) adopted guidance on data protection for generative Artificial Intelligence (AI) systems using Retrieval Augmented Generation (RAG) methodology. The guidance applies to all organisations processing personal data through RAG systems, including those using embedded models and vector databases. The guidance stresses that controllers must ensure reference documents are accurate and current and implement tenant separation and access controls to protect personal data. It also states that data in vector databases must be deletable and tied to specific purposes, and that personal data cannot be unnecessarily stored. Data subject rights to access, rectify, and delete data must be maintained, and system prompts must instruct AI to answer only from referenced sources.
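The DSK guidance is a legal text, not a technical specification, but the requirements it names — tenant separation, access controls scoped to the requesting organisation, per-record deletability in the vector database, and a system prompt restricting answers to referenced sources — map onto concrete RAG design choices. The following is a minimal, illustrative Python sketch of those four points; the class and function names, the toy bag-of-words "embedding", and the prompt wording are all hypothetical, not drawn from the guidance:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a trained model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TenantVectorStore:
    """Minimal vector store with tenant separation and per-record deletion."""

    def __init__(self):
        self._records = {}  # (tenant_id, doc_id) -> (vector, text)

    def add(self, tenant_id, doc_id, text):
        self._records[(tenant_id, doc_id)] = (embed(text), text)

    def delete(self, tenant_id, doc_id):
        # Data-subject erasure: the record is removed entirely, not flagged.
        self._records.pop((tenant_id, doc_id), None)

    def search(self, tenant_id, query, k=3):
        q = embed(query)
        hits = [(cosine(q, vec), text)
                for (tid, _), (vec, text) in self._records.items()
                if tid == tenant_id]  # access control: this tenant's data only
        return [text for score, text in sorted(hits, reverse=True)[:k]
                if score > 0]

SYSTEM_PROMPT = ("Answer only from the referenced sources below. "
                 "If the sources do not contain the answer, say so.")

def build_prompt(tenant_id, query, store):
    # The system prompt constrains the model to the retrieved references.
    sources = store.search(tenant_id, query)
    context = "\n".join(f"- {s}" for s in sources)
    return f"{SYSTEM_PROMPT}\n\nSources:\n{context}\n\nQuestion: {query}"
```

In this sketch, deleting a record makes it unavailable to all subsequent retrievals, and a query for one tenant can never surface another tenant's documents — the two properties the guidance ties to purpose limitation and data-subject rights.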

  6. 08/10/2025
    investigation

    Federal Cartel Office announced investigation against Temu over alleged price restrictions on sellers

On 8 October 2025, the Federal Cartel Office opened an investigation against Temu to review the seller terms and pricing practices on its German marketplace. The investigation will examine whether Temu unlawfully influenced sellers’ pricing, including by setting final prices itself. The authority stated that such conduct could restrict competition and raise prices in other sales channels, and noted that Temu has operated in Germany since 2023 and has 19.3 million users in the country. The investigation will determine whether Temu’s practices breach competition law.

  7. 18/09/2025
    investigation

    Federal Cartel Office approved the acquisition of Ceconomy by JD

On 18 September 2025, the Federal Cartel Office approved JD's acquisition of Ceconomy, owner of MediaMarkt and Saturn. The decision centres on the online retail and e-commerce sector, where Ceconomy has a presence through its digital sales channels while JD has so far had only limited activity in Germany. The authority found no significant overlap in online competition and noted that any security or foreign trade implications fall to the Federal Ministry for Economic Affairs and Energy.

  8. 13/08/2025
    outline

    Federal Commissioner for Data Protection and Freedom of Information released updated Information Brochure on General Data Protection Regulation and Federal Data Protection Act

    On 13 August 2025, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) published an informational brochure on the General Data Protection Regulation (GDPR) and the Federal Data Protection Act (BDSG). It was highlighted that the GDPR applies to all controllers and processors of personal data across the European Union, affecting over 449 million citizens, and imposes obligations including transparency, purpose limitation, data minimisation, technical and organisational safeguards, and consent rules for children. The brochure outlines instruments including the One-Stop-Shop and consistency mechanism, highlights rights, including access, rectification, erasure, portability, and protection from automated decisions, and stresses the importance of public awareness, digital literacy, and the link between data protection, data security, and responsible data use.

  9. 12/08/2025
    interim ruling

    Schleswig-Holstein Higher Regional Court rejected emergency application by Stichting Onderzoek Marktinformatie against Meta to prohibit use of certain customer data for AI training purposes (Case No. 6 UKI 3/25)

    On 12 August 2025, the Schleswig-Holstein Higher Regional Court rejected an application filed by Stichting Onderzoek Marktinformatie (SOMI) against Meta to prohibit the use of certain customer data from Facebook and Instagram services for artificial intelligence (AI) training purposes, on the grounds of lack of urgency, in case number 6 UKI 3/25. The court found that Meta had publicly announced in 2024, and by direct email to users, including SOMI in April 2025, its intention to use certain public profile data of adult customers, de-identified and tokenised, for AI model development and improvement, including the AI service Llama, beginning on 27 May 2025. This data could include information from posts, comments, and images that may contain personal data of children, unregistered third parties, and special category data under Article 9 of the General Data Protection Regulation (GDPR), such as ethnic or racial origin, sexual orientation, or political opinions, where such data had not been made public by the data subjects. The Court noted that SOMI, despite awareness of the potential unauthorised processing since April 2025, delayed filing its application until 27 June 2025, one month after processing commenced, whereas the Consumer Advice Centre of North Rhine-Westphalia had sought interim relief before the Cologne Higher Regional Court (I-15 UKl 2/25) prior to data use. The Court concluded that any alleged violations must be pursued in the main proceedings, with the judgment scheduled for publication in the State case law database.

  10. 10/08/2025
    consultation closed

Federal Commissioner for Data Protection and Freedom of Information closed consultation on data protection-compliant handling of personal data in large language models

On 10 August 2025, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) closed the consultation on data protection-compliant handling of personal data in large language models (LLMs). The consultation applied to stakeholders in science, industry, and civil society. It sought insights on issues including anonymisation limits, memorisation of personal data, risks of data extraction, and enforcement of General Data Protection Regulation data subject rights in Artificial Intelligence (AI) systems. The findings will support the development of compliant approaches to managing memorised data in AI.

  11. 04/08/2025
    investigation

    Federal Cartel Office approved acquisition of Informatica by Salesforce

    On 4 August 2025, the German Federal Cartel Office (FCO) approved the acquisition of sole control of Informatica by Salesforce, both headquartered in the United States, following a merger control assessment of vertical integration in digital markets. Salesforce is the world’s largest provider of customer relationship management (CRM) software, while Informatica is a leading provider of data integration tools, integration platform as a service (iPaaS), data quality solutions, and data and analytics management platforms. The FCO’s market analysis and customer surveys confirmed the continued availability of a broad range of alternative suppliers in the field of data integration and management solutions, including SAP, Oracle, Microsoft, IBM, Qlik, Ab Initio, and Pentaho. The FCO concluded that the merger would not result in a significant strengthening of the parties’ market position and therefore granted clearance.

  12. 24/07/2025
    adoption

    Federal Office for Information Security published white paper on bias in Artificial Intelligence

    On 24 July 2025, the German Federal Office for Information Security (BSI) published a white paper on bias in Artificial Intelligence (AI). The paper applies to developers, providers, and operators of AI systems across all sectors. The paper highlights practices including designating bias-responsible personnel, implementing organisational and technical measures during data collection to reduce bias, and prioritising pre-processing and in-processing mitigation methods over post-processing approaches. It also establishes continuous bias detection and mitigation as an ongoing process throughout AI model lifecycles. The paper details 11 bias types, including historical bias and automation bias, and provides detection methodologies using fairness metrics and statistical analysis. It also outlines 13 mitigation techniques spanning pre-processing to post-processing approaches and emphasises bias as a cybersecurity risk affecting confidentiality, integrity, and availability.
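The BSI paper's "detection methodologies using fairness metrics" can be illustrated with a toy calculation. The specific metric below — the demographic parity difference between group selection rates — is one common fairness metric chosen here for illustration; the function names and data are invented, not taken from the white paper:

```python
def selection_rate(predictions, groups, group):
    """Share of positive decisions the model gives to one group."""
    decisions = [p for p, g in zip(predictions, groups) if g == group]
    return sum(decisions) / len(decisions)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rates across groups; 0.0 means parity."""
    rates = {g: selection_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy screening decisions (1 = selected) for two demographic groups.
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A continuous-monitoring setup of the kind the paper recommends would recompute such a statistic throughout the model lifecycle and trigger mitigation (pre-, in-, or post-processing) when the gap exceeds a chosen threshold.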

  13. 10/07/2025
    consultation opened

    Federal Commissioner for Data Protection and Freedom of Information opened consultation on data protection-compliant handling of personal data in large language models

    On 10 July 2025, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) opened a consultation on data protection-compliant handling of personal data in large language models (LLMs), until 10 August 2025. The consultation applies to stakeholders in science, industry, and civil society. It seeks insights on issues including anonymisation limits, memorisation of personal data, risks of data extraction, and enforcement of General Data Protection Regulation data subject rights in Artificial Intelligence (AI) systems. The findings will support the development of compliant approaches to managing memorised data in AI.

  14. 16/06/2025
    adoption

    Conference of the Independent Data Protection Supervisory Authorities of the Federal and State Governments adopted guidance on recommended technical and organisational measures for the development and operation of artificial intelligence systems

    On 16 June 2025, the Conference of the Independent Data Protection Supervisory Authorities of the Federal and State Governments (DSK) adopted the guidance on recommended technical and organisational measures for the development and operation of artificial intelligence (AI) systems. The guidance applies to manufacturers and developers of AI systems processing personal data, especially where high-risk activities are involved. It sets out data protection obligations aligned with the principle of data protection by design and includes practical measures across all lifecycle phases including design, development, deployment, and operation. The measures include responsible data handling, secure software updates, and continuous monitoring.

  15. 27/05/2025
    interim ruling

Hamburg Commissioner for Data Protection and Freedom of Information decided against initiating proceedings over Meta's training of Artificial Intelligence models with user data

    On 27 May 2025, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), in agreement with the German data protection supervisory authorities, decided against initiating provisional proceedings to prohibit Meta from training its Artificial Intelligence (AI) models using social network user data. The decision, which considered the final ruling by the Cologne Higher Regional Court issued on 23 May 2025, aims to ensure a consistent European approach among data protection supervisory authorities. The HmbBfDI determined that an isolated urgency procedure for Germany would not be a suitable instrument for resolving existing assessment differences across Europe, in view of the planned evaluation of Meta's approach by the EU supervisory authorities.

  16. 23/05/2025
    ruling

    Cologne Higher Regional Court issued a decision finding Meta’s use of public user data for AI training compliant with GDPR and DMA

    On 23 May 2025, the 15th Civil Senate of the Cologne Higher Regional Court rejected an application by the Consumer Advice Center NRW against Meta Platforms Ireland Limited in an expedited proceeding. The application sought to prevent Meta from using publicly made user data for artificial intelligence (AI) training, which was scheduled to begin on 27 May 2025. Following a preliminary review, the Court determined that Meta had not violated provisions of the General Data Protection Regulation (GDPR) or the Digital Markets Act (DMA). The assessment aligned with the Irish Data Protection Authority (DPA) and the Hamburg Commissioner for Data Protection and Freedom of Information, who considered the processing legally possible under certain conditions. The Court preliminarily assessed the announced use of data for AI training as lawful under Article 6(1)(f) of the GDPR without requiring user consent, citing Meta's legitimate purpose, the necessity of large data volumes, and the outweighing of Meta's interests after considering implemented mitigation measures. The Court also found no violation of DMA Article 5(2) regarding data merging, noting that Meta's approach does not combine data from different services for a single user. The judgment was issued in summary proceedings and is final for this expedited case.

  17. 17/04/2025
    adoption

    Saxon Data Protection and Transparency Commissioner issued advisory on Meta's data processing for Artificial Intelligence training

    On 17 April 2025, the Saxon Commissioner for Data Protection and Transparency issued an advisory on Meta's plans to use the personal data of all adult European Facebook and Instagram users for Artificial Intelligence (AI) training from the end of May 2025. This includes public posts and photos, which will be used to improve services, including the Meta-AI chatbot on WhatsApp and language models, including Llama. The Commissioner highlighted that users have the right to object to this use of their data, while no action is required from those who consent. It was emphasised that, given current technology, data cannot be removed from the AI model once the training has occurred.

  18. 15/04/2025
    adoption

    Hamburg Commissioner for Data Protection and Freedom of Information issued guidance on Meta's data processing for AI training

    On 15 April 2025, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) issued guidance on Meta’s planned use of personal data for artificial intelligence (AI) training and informed users of their right to object. Meta’s AI training, originally scheduled for 2024, was postponed following concerns raised by the Irish Data Protection Commission regarding legal basis and transparency. Meta now plans to begin using the publicly accessible Facebook and Instagram data of adult European users from the end of May 2025. To prevent this, users must actively object. The HmbBfDI stressed that objections must be submitted before the end of May 2025, as data already used for training cannot be removed from AI models afterwards.

  19. 21/02/2025
    adoption

State Commissioner for Data Protection in Lower Saxony (LfD Lower Saxony) adopted guidelines on use of DeepSeek

    On 21 February 2025, the State Commissioner for Data Protection in Lower Saxony (LfD Lower Saxony) issued a recommendation highlighting the risks of using the AI tool DeepSeek R1 (DeepSeek), a generative AI chatbot developed by the Chinese company DeepSeek. While freely available online and through app stores in the EU, DeepSeek may not comply with the European AI Regulation or the General Data Protection Regulation (GDPR). The tool's privacy policy indicates unrestricted collection and processing of user data, with the potential obligation to share data with Chinese security authorities. The absence of a legal representative in the EU raises concerns about enforcing GDPR compliance and protecting user rights. The LfD Lower Saxony advises against using AI applications from non-EU providers without a legal EU representative and recommends prioritising AI tools that demonstrate compliance with European data protection standards.

  20. 30/01/2025
    adoption

    Data protection authority adopted guidance on obligations and prohibitions under the EU Artificial Intelligence Act

On 30 January 2025, the Data Protection Authority of Hamburg adopted a guidance on obligations and prohibitions under the European Union's Artificial Intelligence (AI) Act. It was highlighted that certain AI competence requirements and bans on certain AI practices under the AI Act enter into force on 2 February 2025. The guidance highlighted that organisations deploying AI must ensure employees understand the specific technology they use, aligning competence with the AI system’s purpose. Prohibited AI practices include social scoring, AI-driven emotion analysis in workplaces and schools (except for medical or safety reasons), predictive policing based solely on behavioural traits, and large-scale facial recognition database creation. The AI Act also bans manipulative AI practices that exploit vulnerabilities, particularly in social media targeting minors.

  21. 29/01/2025
    adoption

Conference of the Independent Data Protection Authorities announced plans for guidelines on anonymisation and pseudonymisation of personal data

    On 29 January 2025, the German Conference of Independent Data Protection Supervisory Authorities of the Federal and State Governments (DSK) announced plans to develop guidance on the effective anonymisation and pseudonymisation of personal data. The plan would aim to assist entities in research, business, and the public sector in selecting appropriate methods for data processing. The guidance will build upon existing European Data Protection Board guidelines and clarify procedures and requirements for anonymisation and pseudonymisation, especially in areas including medical research, artificial intelligence development, and statistics.

  22. 14/01/2025
    announcement

    State data protection authorities announced investigation into DeepSeek to assess compliance with representative designation under General Data Protection Regulation

On 14 January 2025, the state data protection supervisory authorities of Rhineland-Palatinate, Baden-Württemberg, Thuringia, Saxony-Anhalt, Hesse, Bremen, and Berlin announced an investigation into DeepSeek to assess compliance with representative designation under the General Data Protection Regulation (GDPR). The investigation focuses on whether the two Chinese companies behind DeepSeek have appointed a representative in the European Union (EU) as required under Article 27(1) GDPR. This provision mandates that non-EU entities offering services to EU data subjects designate a representative to act as a contact point for supervisory authorities and data subjects, with fines possible for non-compliance.

  23. 10/01/2025
    announcement

    Free Democratic Party parliamentary group launched an inquiry into competition distortions caused by online retailers from third countries (No. 20/14579)

On 10 January 2025, the Free Democratic Party parliamentary group launched an inquiry into competition distortions caused by online retailers from third countries. The inquiry focuses on consumer protection, product safety, and customs compliance. Citing studies showing widespread non-compliance, the inquiry highlights that 95% of tested Temu toys violated European Union safety rules and that hazardous chemicals were found in Shein clothing. The inquiry questions what measures the government is taking to address regulatory gaps. It also requests details on customs inspections conducted in September 2024, discrepancies between enforcement and consumer group findings, the scale of non-compliant imports, and revenue losses from misdeclared goods.

Last updated: 09/03/2026