On 20 March 2026, the White House published its National Policy Framework for Artificial Intelligence: Legislative Recommendations. The four-page document sets out the administration's position on what federal AI legislation should cover across seven policy areas, and identifies several questions it considers better resolved by the courts than by Congress.
The framework was issued pursuant to Executive Order 14365 of December 2025, which sought to forestall individual state regulation of AI. That order itself followed Executive Order 14179 of January 2025, which revoked the Biden administration's AI executive order and directed agencies to remove barriers to AI development.
The Seven Pillars
1. Protecting Children and Empowering Parents. The framework calls for age-assurance requirements on AI platforms likely to be accessed by minors, parental controls over privacy and content exposure, and confirmation that existing child privacy protections apply to AI systems, including limits on data collection for model training. It builds on the Take It Down Act and asks Congress to avoid "ambiguous standards" that could give rise to excessive litigation. States would retain the ability to enforce generally applicable laws protecting children, including prohibitions on AI-generated child sexual abuse material.
2. Safeguarding and Strengthening American Communities. This section addresses consumer protection and AI infrastructure. Congress is asked to protect residential ratepayers from electricity cost increases driven by data centre construction, streamline federal permitting for AI infrastructure, combat AI-enabled fraud targeting vulnerable populations, and provide grants, tax incentives and technical assistance to small businesses adopting AI. It also calls for national security agencies to develop sufficient technical capacity to assess frontier model capabilities.
3. Respecting Intellectual Property Rights and Supporting Creators. Discussed in detail below.
4. Preventing Censorship and Protecting Free Speech. The framework proposes prohibiting federal agencies from coercing AI providers into banning or altering content on partisan or ideological grounds, and providing individuals with a right of redress where agencies attempt to censor expression through AI platforms.
5. Enabling Innovation and Ensuring American AI Dominance. Congress is asked to establish regulatory sandboxes for AI applications, make federal datasets available in AI-ready formats, and rely on existing sector-specific regulators and industry-led standards rather than creating any new federal rulemaking body for AI.
6. Educating Americans and Developing an AI-Ready Workforce. The framework calls for AI training to be incorporated into existing education and apprenticeship programmes, expanded federal study of task-level workforce displacement, and strengthened capabilities at land-grant institutions. The administration has already begun operationalising this pillar through the Department of Labor's AI Literacy Framework, published in February 2026, which sets out five foundational content areas and seven delivery principles for workforce training.
7. Establishing a Federal Policy Framework and Preempting State AI Laws. The preemption provisions are broad. Congress is asked to preempt state AI laws that impose "undue burdens," while preserving states' traditional police powers, zoning authority, and rules governing a state's own use of AI (including procurement, law enforcement and public education). States would be prohibited from regulating AI development, which the framework characterises as an "inherently interstate" activity with foreign policy and national security implications. States would also be prevented from penalising AI developers for a third party's unlawful conduct involving their models.
The IP and Copyright Provisions
The intellectual property section addresses training data, collective licensing, and digital replicas, but is notable as much for what it defers as for what it proposes.
On training data, the administration states that it "believes that training of AI models on copyrighted material does not violate copyright laws," but acknowledges that arguments to the contrary exist and supports allowing the courts to resolve the question. Congress is asked not to take any action that would affect the judiciary's determination of whether training on copyrighted material constitutes fair use.
This position sits against a substantial and contested body of analysis. The US Copyright Office's Part 3 report on Generative AI Training, released in May 2025, concluded that fair use cannot be presumed and must be assessed on a case-by-case basis, noting that commercial use of copyrighted works to produce expressive content competing in existing markets is unlikely to qualify. The report also rejected the argument that AI training is inherently transformative, observing that language models absorb not just meaning but the selection and arrangement of expression at the sentence and paragraph level. The Authors Guild, which has cited the report in support of its ongoing litigation against OpenAI, has argued that unlicensed training threatens to erode the creative ecosystem.
On the other side, Professor Edward Lee has argued in Fair Use and the Origin of AI Training (63 Hou. L. Rev. 104, 2025) that training AI models serves a transformative purpose rooted in decades of university-led research, and that the Copyright Office's endorsement of a "market dilution" theory under the fourth fair use factor is both untested and constitutionally problematic. Lee contends that technological progress should be weighed alongside creative production in the fair use balance, and that courts should evaluate training as a further purpose distinct from the expressive outputs of deployed models.
The courts have begun to weigh in, though the early rulings reveal a notable divergence. In June 2025, two judges in the Northern District of California issued summary judgment decisions within days of each other in cases involving substantially similar facts: the use of copyrighted books obtained from shadow libraries such as Library Genesis to train large language models.
In Bartz v. Anthropic (N.D. Cal., Judge Alsup, 23 June 2025), the court held that using copyrighted books to train Claude was "quintessentially transformative" and constituted fair use. However, Judge Alsup drew a sharp distinction between training and acquisition. Anthropic had built what it described as a "central library of all the books in the world," purchasing and digitising millions of titles while also downloading over seven million pirated books from shadow libraries. The court held that the fair use defence did not extend to the pirated library, regardless of whether individual titles within it were later selected for training.

With class certification granted and statutory damages of up to $150,000 per work at stake, Anthropic settled in August 2025 for $1.5 billion plus interest, the largest copyright settlement in US history, amounting to approximately $3,000 per work for roughly 500,000 titles. Anthropic also agreed to destroy all works downloaded from pirate sites. The settlement covers historical use only; it does not grant Anthropic any licence for future training on copyrighted works. Final approval proceedings are ongoing.
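As a back-of-envelope illustration using the approximate figures reported above (not terms drawn from the settlement agreement itself), the agreed amount can be set against the theoretical ceiling of the statutory damages in play:

$$
\underbrace{500{,}000 \times \$3{,}000}_{\text{settlement}} \approx \$1.5\text{ billion}
\qquad\text{vs.}\qquad
\underbrace{500{,}000 \times \$150{,}000}_{\text{statutory maximum per work}} = \$75\text{ billion}
$$

On these rough figures, the settlement comes to about 2 per cent of the theoretical maximum exposure.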
In Kadrey v. Meta (N.D. Cal., Judge Chhabria, 25 June 2025), the court reached a similar outcome on fair use but through different reasoning. Judge Chhabria treated Meta's downloading of books from shadow libraries and their subsequent use in training as part of one unified process directed at a transformative end, declining to evaluate the source of the copies as a separate question. The court granted summary judgment to Meta on the copying claims, finding that the plaintiffs had "made the wrong arguments" and failed to develop a sufficient record on market harm. In a separate order two days later, the court also dismissed the plaintiffs' DMCA claim on the basis that, since the copying constituted fair use, Meta's removal of copyright management information could not have furthered an act of infringement.
However, the ruling was far from an unqualified endorsement of AI training practices. Judge Chhabria noted that market harm remains "the single most important element of fair use," and stated that "in cases involving uses like Meta's, it seems like the plaintiffs will often win, at least where those cases have better-developed records on the market effects of the defendant's use." He directly criticised Judge Alsup's analogy in Bartz between AI training and teaching schoolchildren to write, calling it "inapt" as a basis for setting aside the fourth fair use factor. The case continues with copyright claims related to Meta's distribution of pirated books during the torrenting process. The plaintiffs have moved to amend their complaint to add contributory infringement and uploading-based claims, with a summary judgment hearing on distribution claims scheduled for February 2027.
The tension between these two rulings is significant. Bartz treats acquisition and training as severable uses, with piracy capable of vitiating fair use even where the training itself is transformative. Kadrey treats the entire process as a single use, assessed by its ultimate transformative purpose. Both courts found AI training to be transformative, but they diverged on whether the provenance of the training data matters to the fair use analysis. A third case, Thomson Reuters v. ROSS Intelligence (D. Del., 2025), reached the opposite conclusion, rejecting the fair use defence entirely, and is currently on interlocutory appeal to the Third Circuit.
The practical effect of the framework's position is to preserve the status quo: training continues without a statutory licence requirement while these and other pending federal cases work through the courts. The Deep Lex Disputes Tracker follows these and other AI copyright cases as they develop.
On licensing, the framework suggests Congress consider enabling collective rights systems that would allow rights holders to negotiate compensation from AI providers without incurring antitrust liability. However, it specifies that any such legislation "should not address when or whether such licensing is required." The framework endorses creating a mechanism for negotiation without mandating its use.
On digital replicas, the framework calls for a federal right protecting individuals from the unauthorised distribution or commercial use of AI-generated reproductions of their voice, likeness, or other identifiable attributes, with exceptions for parody, satire, news reporting, and other expression protected by the First Amendment.
Notable Absences
Several areas addressed in other jurisdictions' AI governance frameworks do not appear in this document. There is no proposal for algorithmic transparency or audit requirements, no general-purpose AI safety evaluation framework (though the national security capacity-building provision touches on frontier model assessment), and no engagement with the copyrightability of AI-generated outputs, a question the US Copyright Office addressed in Part 2 of its report in January 2025.
The framework does not propose sector-specific obligations for high-risk AI systems comparable to those in the EU AI Act, nor does it address foundation model governance or general-purpose AI classification.
Outlook
The framework is a nonbinding document and does not itself impose new legal obligations. As Holland & Knight and Mintz have both noted, congressional efforts to operationalise these priorities are already underway, including through Senator Blackburn's TRUMP AMERICA AI Act discussion draft. At the same time, Democratic lawmakers have raised concerns about preemption without sufficiently robust federal safeguards, and a bill opposing the administration's preemption efforts (H.R. 8031) was introduced in the House on the same day the framework was published.
State AI laws remain in effect unless and until Congress enacts new legislation. Several of the areas the framework leaves to the courts, particularly training data copyright, are the subject of active litigation and may take years to reach definitive resolution.
References
- White House, National Policy Framework for Artificial Intelligence: Legislative Recommendations (20 March 2026), whitehouse.gov
- Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence (11 December 2025), Federal Register
- Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence (23 January 2025), Federal Register
- US Department of Labor, AI Literacy Framework (13 February 2026), dol.gov
- US Copyright Office, Copyright and Artificial Intelligence, Part 3: Generative AI Training (pre-publication, 9 May 2025), copyright.gov
- US Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (29 January 2025), copyright.gov
- Edward Lee, Fair Use and the Origin of AI Training, 63 Hou. L. Rev. 104 (2025), Houston Law Review
- Bartz v. Anthropic PBC, No. 3:24-cv-05417 (N.D. Cal. 2025), Deep Lex
- Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal. 2025), Deep Lex
- Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., No. 1:20-cv-00613 (D. Del. 2025), Deep Lex
- Holland & Knight, White House Releases a National Policy Framework for Artificial Intelligence (March 2026), hklaw.com
- Mintz, White House Releases National AI Legislative Framework, as Competing Congressional Proposals Sharpen the Federal-State Divide (31 March 2026), mintz.com
- Sullivan & Cromwell, Trump Administration Releases National Policy Framework on Artificial Intelligence (March 2026), sullcrom.com
- Crowell & Moring, White House National AI Policy Framework Calls for Preempting State Laws, Protecting Children (March 2026), crowell.com
