Deep Lex

Germany AI Regulation

Law(s) enacted · Treaty

Europe · CoE Framework Convention signatory

Overview

EU AI Act
  • The EU AI Act (Regulation 2024/1689) applies directly across all member states. Prohibitions on unacceptable-risk AI systems have been in force since 2 February 2025; GPAI model rules since 2 August 2025. High-risk AI obligations are due from 2 August 2026, subject to the Digital Omnibus proposal which may defer enforcement. For the full implementation timeline, governance structure, and current status, see the European Union overview.
  • The Federal Network Agency (BNetzA) has been proposed as lead market surveillance authority; the Federal Data Protection Commissioner (BfDI) is active on AI and data governance. The national AI Strategy was updated in 2023. AI liability and domestic implementation legislation is in preparation.

Key Sources

EU AI Act (Regulation 2024/1689)
Bundesnetzagentur (BNetzA) — proposed lead MSA
EU AI Act National Implementation Tracker
BaFin — co-MSA for financial sector
Council of Europe Framework Convention on AI (CETS 225)

This content is for informational and educational purposes only and does not constitute legal advice.

AI Regulation Timeline

  1. 09/03/2026
    introduction

    Act implementing EU Regulation on Artificial Intelligence (Draft Act 21/4594) was introduced to Parliament

    On 9 March 2026, the German Federal Government submitted the draft Act to implement the Artificial Intelligence Regulation (No. 21/4594) to the Parliament (German Bundestag). The Act would apply to providers, deployers, importers and distributors of artificial intelligence systems as defined in Regulation (EU) 2024/1689, as well as authorised representatives of non-EU providers and product manufacturers who place artificial intelligence systems on the market under their own name or trademark. The Act would designate the Federal Network Agency as the primary market surveillance and notification authority for AI systems not covered by other designated bodies. The Federal Financial Supervisory Authority would be designated as the competent market surveillance authority for AI systems used in connection with regulated financial activities, while the competent authorities of the federal states would remain responsible for systems deployed by public bodies and media service providers. The Act would further establish an independent Artificial Intelligence Market Surveillance Chamber within the Federal Network Agency for certain high-risk AI systems used in law enforcement, border management and the justice system. It would also establish a coordination and competence centre within the Federal Network Agency to support cooperation between the relevant bodies and act as the central contact point under Regulation (EU) 2024/1689. The Act would require the Federal Network Agency to operate at least one artificial intelligence regulatory sandbox and to act as a central complaints office for violations of Regulation (EU) 2024/1689. It would also allow for administrative fines of up to EUR 50'000 for infringements not covered by Regulation (EU) 2024/1689.

  2. 12/02/2026
    introduction

    Act implementing EU Regulation on Artificial Intelligence (Draft Act 97/26) was submitted to Federal Council

    On 13 February 2026, the German federal government submitted the draft Act to implement the Artificial Intelligence Regulation (No. 97/26) to the Federal Council (Bundesrat). The Act would establish a national supervisory framework for artificial intelligence and apply to providers and deployers of AI systems, including entities operating in regulated financial sectors. It would designate the Federal Network Agency as the primary market surveillance authority under the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), as well as the single point of contact under Article 70 of that Regulation, and would establish an independent AI Market Surveillance Chamber within the Federal Network Agency for high-risk AI systems used in law enforcement, border management, the justice system, and democratic processes. The Federal Financial Supervisory Authority would be designated as the competent authority for AI systems connected to regulated financial activities. Additionally, the Act would establish a central Coordination and Competence Centre and require the Federal Network Agency to operate at least one AI regulatory sandbox. It would also grant the Agency powers to conduct on-site inspections and issue administrative fines of up to EUR 50'000 for specific procedural violations. The Act would further amend the Whistleblower Protection Act to cover violations relating to the regulation of artificial intelligence and update the relevant data-processing provisions in the Social Code. The Federal Council is expected to respond by 27 March 2026. The Act would enter into force the day after its promulgation.

  3. 10/12/2025
    interim ruling

    Hanseatic Higher Regional Court upheld Hamburg Regional Court's judgment on lawfulness of use of photograph in AI training dataset

    On 10 December 2025, the 5th Civil Senate of the Hanseatic Higher Regional Court dismissed an appeal against a Hamburg Regional Court judgment, holding that the use of a photograph in an Artificial Intelligence (AI) training dataset by an association was lawful. The case concerned the creation of a publicly available image–text dataset, used to train generative AI models, which involved temporarily downloading the photograph from a photo agency website to compare it with descriptive data. The Court ruled that the use was permitted under the text and data mining exception in Section 44b of the German Copyright Act, as the usage restriction on the agency's website was not machine-readable as required by law and was therefore not legally effective. It further held that the activity was independently justified under the scientific research exception for text and data mining of copyrighted works in Section 60d of the Act. The ruling found that the dataset creation constituted a systematic, verifiable process aimed at future knowledge acquisition and attributable to applied research, notwithstanding the possibility of subsequent commercial use and in the absence of decisive private-sector influence.

  4. 11/11/2025
    ruling

    Munich Regional Court issued ruling on lawsuit concerning alleged copyright infringement of song lyrics by AI systems

    On 11 November 2025, the 42nd Civil Chamber of the Munich Regional Court largely upheld GEMA’s claims for injunctive relief and damages against two OpenAI group companies. GEMA, a collecting society, argued that lyrics by nine German songwriters had been memorised by OpenAI’s language models and reproduced in chatbot outputs, thereby infringing copyright exploitation rights. OpenAI maintained that its models consist of learned parameters rather than stored data, that users are responsible for generated outputs, and that any reproductions were covered by the text and data mining limitation (Section 44b UrhG) or the incidental inclusion exception (Section 57 UrhG). The court found that the lyrics were embedded in the models and reproduced in chatbot responses, constituting infringements of exploitation rights under Article 2 of the InfoSoc Directive and Section 16 of the German Copyright Act. It held that text and data mining limitations apply only to reproductions required for data analysis, not to reproductions incorporated within a model. The defence of incidental inclusion was dismissed on the basis that the lyrics were not incidental within the dataset. The court further determined that no consent from rights holders could be implied, as model training did not qualify as an expected use. Responsibility for the infringing outputs was attributed to OpenAI, given its control over the models, training data, and content generation architecture. Claims concerning personality rights arising from the incorrect attribution of altered lyrics were rejected.

  5. 17/10/2025
    adoption

    Data Protection Supervisory Authorities of Federal and State Governments adopted guidance on data protection for generative AI systems using retrieval augmented generation methodology

    On 17 October 2025, the Conference of the Independent Data Protection Supervisory Authorities of the Federal and State Governments (DSK) adopted guidance on data protection for generative Artificial Intelligence (AI) systems using Retrieval Augmented Generation (RAG) methodology. The guidance applies to all organisations processing personal data through RAG systems, including those using embedded models and vector databases. It stresses that controllers must ensure reference documents are accurate and current, and must implement tenant separation and access controls to protect personal data. It also states that data in vector databases must be deletable and tied to specific purposes, and that personal data must not be stored unnecessarily. Data subject rights to access, rectify, and delete data must be maintained, and system prompts must instruct the AI to answer only from referenced sources.

  6. 08/10/2025
    investigation

    Federal Cartel Office announced investigation against Temu over alleged price restrictions on sellers

    On 8 October 2025, the Federal Cartel Office opened an investigation against Temu to review the seller terms and pricing practices on its German marketplace. The investigation will examine whether Temu unlawfully influenced sellers' pricing, including by setting final prices itself. The authority stated that such conduct could restrict competition and raise prices in other sales channels, and noted that Temu has operated in Germany since 2023 and has 19.3 million users in the country. The investigation will determine whether Temu's practices breach competition law.

  7. 18/09/2025
    investigation

    Federal Cartel Office approved the acquisition of Ceconomy by JD

    On 18 September 2025, the Federal Cartel Office approved JD's acquisition of Ceconomy, owner of MediaMarkt and Saturn. The decision centres on the online retail and e-commerce sector, where Ceconomy has a presence through its digital sales channels while JD has so far had only limited activity in Germany. The authority found no significant overlap in online competition and noted that any security or foreign trade implications fall to the Federal Ministry for Economic Affairs and Energy.

  8. 13/08/2025
    outline

    Federal Commissioner for Data Protection and Freedom of Information released updated Information Brochure on General Data Protection Regulation and Federal Data Protection Act

    On 13 August 2025, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) published an informational brochure on the General Data Protection Regulation (GDPR) and the Federal Data Protection Act (BDSG). It was highlighted that the GDPR applies to all controllers and processors of personal data across the European Union, affecting over 449 million citizens, and imposes obligations including transparency, purpose limitation, data minimisation, technical and organisational safeguards, and consent rules for children. The brochure outlines instruments including the One-Stop-Shop and consistency mechanism, highlights rights, including access, rectification, erasure, portability, and protection from automated decisions, and stresses the importance of public awareness, digital literacy, and the link between data protection, data security, and responsible data use.

  9. 12/08/2025
    interim ruling

    Schleswig-Holstein Higher Regional Court rejected emergency application by Stichting Onderzoek Marktinformatie against Meta to prohibit use of certain customer data for AI training purposes (Case No. 6 UKI 3/25)

    On 12 August 2025, the Schleswig-Holstein Higher Regional Court rejected an application filed by Stichting Onderzoek Marktinformatie (SOMI) against Meta to prohibit the use of certain customer data from Facebook and Instagram services for artificial intelligence (AI) training purposes, on the grounds of lack of urgency, in case number 6 UKI 3/25. The court found that Meta had publicly announced in 2024, and by direct email to users, including SOMI in April 2025, its intention to use certain public profile data of adult customers, de-identified and tokenised, for AI model development and improvement, including the AI service Llama, beginning on 27 May 2025. This data could include information from posts, comments, and images that may contain personal data of children, unregistered third parties, and special category data under Article 9 of the General Data Protection Regulation (GDPR), such as ethnic or racial origin, sexual orientation, or political opinions, where such data had not been made public by the data subjects. The Court noted that SOMI, despite awareness of the potential unauthorised processing since April 2025, delayed filing its application until 27 June 2025, one month after processing commenced, whereas the Consumer Advice Centre of North Rhine-Westphalia had sought interim relief before the Cologne Higher Regional Court (I-15 UKl 2/25) prior to data use. The Court concluded that any alleged violations must be pursued in the main proceedings, with the judgment scheduled for publication in the State case law database.

  10. 10/08/2025
    consultation closed

    Federal Commissioner for Data Protection and Freedom of Information closed consultation on data protection-compliant handling of personal data in large language models

    On 10 August 2025, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) closed the consultation on data protection-compliant handling of personal data in large language models (LLMs). The consultation was addressed to stakeholders in science, industry, and civil society. It sought insights on issues including anonymisation limits, memorisation of personal data, risks of data extraction, and enforcement of General Data Protection Regulation data subject rights in Artificial Intelligence (AI) systems. The findings will support the development of compliant approaches to managing memorised data in AI.

  11. 04/08/2025
    investigation

    Federal Cartel Office approved acquisition of Informatica by Salesforce

    On 4 August 2025, the German Federal Cartel Office (FCO) approved the acquisition of sole control of Informatica by Salesforce, both headquartered in the United States, following a merger control assessment of vertical integration in digital markets. Salesforce is the world’s largest provider of customer relationship management (CRM) software, while Informatica is a leading provider of data integration tools, integration platform as a service (iPaaS), data quality solutions, and data and analytics management platforms. The FCO’s market analysis and customer surveys confirmed the continued availability of a broad range of alternative suppliers in the field of data integration and management solutions, including SAP, Oracle, Microsoft, IBM, Qlik, Ab Initio, and Pentaho. The FCO concluded that the merger would not result in a significant strengthening of the parties’ market position and therefore granted clearance.

  12. 24/07/2025
    adoption

    Federal Office for Information Security published white paper on bias in Artificial Intelligence

    On 24 July 2025, the German Federal Office for Information Security (BSI) published a white paper on bias in Artificial Intelligence (AI). The paper applies to developers, providers, and operators of AI systems across all sectors. The paper highlights practices including designating bias-responsible personnel, implementing organisational and technical measures during data collection to reduce bias, and prioritising pre-processing and in-processing mitigation methods over post-processing approaches. It also establishes continuous bias detection and mitigation as an ongoing process throughout AI model lifecycles. The paper details 11 bias types, including historical bias and automation bias, and provides detection methodologies using fairness metrics and statistical analysis. It also outlines 13 mitigation techniques spanning pre-processing to post-processing approaches and emphasises bias as a cybersecurity risk affecting confidentiality, integrity, and availability.

Last updated: 09/03/2026