Deep Lex

United States AI Regulation

Ad hoc state legislation · Treaty

North America · CoE Framework Convention signatory

Overview

Status: No comprehensive federal AI law; legislative framework proposed
  • The United States has no comprehensive federal AI legislation. Regulation has developed through a sequence of executive orders and, at the state level, through a rapidly growing body of enacted laws.
  • On 20 March 2026, the Trump administration published a National AI Legislative Framework setting out detailed recommendations to Congress across seven pillars: protecting children and empowering parents; safeguarding communities; respecting intellectual property rights; preventing censorship; enabling innovation; workforce development; and establishing federal preemption of state AI laws. The framework is a set of legislative recommendations, not law — Congress must act on it.
  • This follows three earlier executive actions: EO 14179 (January 2025) revoking the Biden administration's AI Executive Order; America's AI Action Plan (July 2025) setting out 90 policy positions across three pillars; and EO 14365 (December 2025) directing the DOJ to establish an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy.
  • The framework recommends that Congress not create any new federal rulemaking body to regulate AI and instead support sector-specific regulation through existing bodies. On copyright, the administration states its belief that training AI models on copyrighted material does not violate copyright law, but supports allowing the courts to resolve the issue. On state preemption, it recommends that Congress preempt state AI laws that impose undue burdens while preserving states' traditional police powers, and that states not be permitted to regulate AI development, which the framework characterises as an inherently interstate phenomenon with national security implications. The framework also calls for regulatory sandboxes, federal dataset access for AI training, age-assurance requirements for AI platforms accessed by minors, and expanded workforce development programmes.
  • In the absence of comprehensive federal legislation, regulation has developed at the state level. In 2025, 38 states adopted or enacted around 100 AI-related measures. Key state laws include the Colorado AI Act SB 24-205 (effective February 2026); Texas Responsible AI Governance Act HB 149 (effective January 2026); California AI Transparency Act SB 942 (effective August 2026, delayed from January 2026); Utah AI Policy Act SB 149 (effective May 2024); and Illinois AI Video Interview Act (2020, amended 2025). A separate state legislation tracker is coming soon to Deep Lex.
  • There is no federal AI regulator. NIST maintains the AI Risk Management Framework (voluntary). The FTC, SEC, EEOC, and HHS assert jurisdiction over AI within existing mandates. The DOJ AI Litigation Task Force (established per EO 14365) challenges state AI laws inconsistent with federal policy. CISA handles AI cybersecurity within DHS.

Key Sources

National AI Legislative Framework — full text (March 2026)
White House announcement — National AI Legislative Framework (March 2026)
EO 14365 — Ensuring a National Policy Framework for AI (December 2025)
America's AI Action Plan (July 2025)
EO 14179 — Removing Barriers to American Leadership in AI (January 2025)
NIST AI Risk Management Framework

This content is for informational and educational purposes only and does not constitute legal advice.

AI Regulation Timeline

  1. 27/03/2026
    signing

    Responsible AI Safety and Education (RAISE) Act was signed by Governor

    On 27 March 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act was signed by the Governor of New York. The Act establishes transparency and safety requirements for developers of frontier artificial intelligence models. The Act applies to frontier AI developers, particularly large developers with annual revenues exceeding USD 500 million, engaged in developing, deploying, or operating high-compute foundation models in New York. It imposes obligations, including the publication of frontier AI frameworks detailing risk assessment and mitigation processes, mandatory transparency reports before deployment, regular updates to safety frameworks, and prohibitions on misleading statements regarding risks. It further requires reporting of critical safety incidents within 72 hours or 24 hours where imminent harm is identified, periodic submission of internal risk assessments, and compliance with disclosure and registration requirements overseen by the Department of Financial Services, with enforcement through civil penalties of up to USD 1 million for initial violations and USD 3 million for subsequent violations. The Act also establishes reporting mechanisms, annual public safety summaries from 2028, and rulemaking authority for implementation.

  2. 26/03/2026
    interim ruling

    Northern District Court of California granted preliminary injunction in favour of Anthropic over usage restrictions on autonomous weapons and mass surveillance

    On 26 March 2026, the United States District Court for the Northern District of California granted a preliminary injunction in favour of Anthropic, blocking three government actions taken against the company following its public refusal to remove certain usage restrictions on its artificial intelligence model, Claude. These restrictions included prohibitions on uses such as mass surveillance and lethal autonomous warfare. The measures targeted Anthropic in its capacity as an AI developer supplying services to federal agencies and defence contractors. They included a Presidential Directive barring federal agencies from using Anthropic’s technology, a directive from the Secretary of War excluding Anthropic from engagement with defence contractors, and a formal designation of Anthropic as a “supply chain risk to national security”. The Court found that the measures were likely unlawful. It considered that they may constitute retaliation in violation of the First Amendment to the United States Constitution, may infringe protections under the Fifth Amendment to the United States Constitution due to the absence of prior notice or procedural safeguards, and may involve an incorrect application of the statutory framework governing supply chain risks.

  3. 26/03/2026
    announcement

    Federal Trade Commission issued letter to Stripe over discriminatory service denial

    On 26 March 2026, the Federal Trade Commission (FTC) issued a letter to Stripe concerning potential violations of Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. The letter alleges that Stripe may have denied consumers access to its payment processing services, including digital wallet transactions, based on their political or religious beliefs, despite Stripe's own public representations that it does not discriminate based on political affiliation or viewpoints. The FTC cautioned that such conduct could constitute both an unfair and a deceptive practice under the Act, particularly where it is inconsistent with Stripe's terms of service or causes substantial harm that consumers cannot reasonably avoid. The letter references the Executive Order on fair banking access as further grounds for the FTC's position. Stripe has been put on notice that continued non-compliance could result in a formal investigation and enforcement action.

  4. 26/03/2026
    announcement

    Federal Trade Commission issued letter to PayPal over discriminatory service denial

    On 26 March 2026, the Federal Trade Commission (FTC) issued a letter to PayPal concerning potential violations of Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. The letter alleges that PayPal may have denied consumers access to its payment processing services, including digital wallet transactions, based on their political or religious beliefs. The FTC cautioned that such conduct could constitute both an unfair and a deceptive practice under the Act, particularly where it is inconsistent with PayPal's own terms of service or causes substantial harm that consumers cannot reasonably avoid. The letter referenced the Executive Order of 7 August 2025 on fair banking access as further grounds for the FTC's position. PayPal has been put on notice that continued non-compliance could result in a formal investigation and enforcement action.

  5. 24/03/2026
    opinion

    Attorneys General of 17 states issued a joint letter to Congress on federal government access to personal data for surveillance

    On 24 March 2026, the Attorneys General of 17 US states, including California, Minnesota, New Jersey and Connecticut, issued a joint letter to Congress on federal government access to data for surveillance. The letter urged Congress to devise legislation preventing federal agencies from using commercial data brokers and artificial intelligence (AI) to conduct mass surveillance of American citizens. The letter stated that agencies, including the Federal Bureau of Investigation, Immigration and Customs Enforcement and the Transportation Security Administration, have been purchasing large datasets from private brokers without judicial oversight or consumer knowledge. The data purchased includes geolocation information, travel records, web browsing histories, and detailed behavioural profiles. The Attorneys General argued that existing laws, including the Privacy Act of 1974 and the E-Government Act of 2002, have failed to keep pace with modern surveillance capabilities. The letter called for a prohibition on federal purchases of sensitive personal data from brokers and also demanded mandatory judicial warrants before agencies acquire or search personal location data, browsing histories, or use AI to identify individuals. Further, they called for the deletion of unlawfully collected data and any algorithms trained on it. They also urged federal regulation of the data brokerage industry, without displacing stronger state-level protections. The Attorneys General endorsed the Government Surveillance Reform Act of 2026 as a suitable legislative vehicle.

  6. 20/03/2026
    adoption

    Responsible AI Safety and Education (RAISE) Act was adopted by Legislature

    On 20 March 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act was adopted by the New York Legislature. The Act aims to establish transparency and safety requirements for developers of frontier artificial intelligence models. The Act applies to frontier AI developers, particularly large developers with annual revenues exceeding USD 500 million, engaged in developing, deploying, or operating high-compute foundation models in New York. It imposes obligations, including the publication of frontier AI frameworks detailing risk assessment and mitigation processes, mandatory transparency reports before deployment, regular updates to safety frameworks, and prohibitions on misleading statements regarding risks. It further requires reporting of critical safety incidents within 72 hours or 24 hours where imminent harm is identified, periodic submission of internal risk assessments, and compliance with disclosure and registration requirements overseen by the Department of Financial Services, with enforcement through civil penalties of up to USD 1 million for initial violations and USD 3 million for subsequent violations. The Act also establishes reporting mechanisms, annual public safety summaries from 2028, and rulemaking authority for implementation.

  7. 20/03/2026
    introduction

    Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards Act (GUARDRAILS Act) was introduced to House of Representatives

    On 20 March 2026, the Guaranteeing and Upholding Americans' Right to Decide Responsible Artificial Intelligence (AI) Laws and Standards Act (GUARDRAILS Act / HR 8031) was introduced to the House of Representatives. The Act would repeal the Executive Order entitled "Ensuring a National Policy Framework for Artificial Intelligence", issued on 11 December 2025, which aims to establish a moratorium on state-level AI policies. The Act would render the Executive Order void and prohibit the use of federal funds to implement, administer, enforce, or carry out the Executive Order. The sponsors aim to preserve the authority of states to enact AI-related regulations. The Act was referred to the House Committee on Energy and Commerce and the Committee on the Judiciary.

  8. 20/03/2026
    adoption

    White House released National Policy Framework for Artificial Intelligence

    On 20 March 2026, the White House released A National Policy Framework for Artificial Intelligence (NPFAI), a set of legislative recommendations addressing artificial intelligence (AI) policy across six objectives. Firstly, the NPFAI calls on Congress to empower parents with tools to manage their children's privacy settings, screen time, content exposure, and account controls. The NPFAI would establish commercially reasonable, privacy-protective age-assurance requirements such as parental attestation for AI platforms likely to be accessed by minors and would require such platforms to implement features reducing the risks of sexual exploitation and self-harm. The NPFAI would affirm that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising. Secondly, in accordance with the Ratepayer Protection Pledge, the NPFAI would ensure residential ratepayers do not experience increased electricity costs from new AI data centre construction and calls on Congress to streamline federal permitting for on-site and behind-the-meter power generation. The NPFAI calls on Congress to augment law enforcement efforts to combat AI-enabled impersonation scams and fraud and to ensure national security agencies possess sufficient technical capacity to understand frontier AI model capabilities. Thirdly, the NPFAI supports allowing courts to resolve whether training AI models on copyrighted material constitutes fair use and calls on Congress to consider enabling licensing frameworks or collective rights systems for rights holders to negotiate compensation from AI providers. The NPFAI calls on Congress to consider a federal framework protecting individuals from unauthorised distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes, with exceptions for parody, satire, news reporting, and other expressive works. Fourthly, the NPFAI would prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas and would provide Americans with a means to seek redress from the federal government for censorship efforts on AI platforms. Fifthly, the NPFAI calls on Congress to establish regulatory sandboxes for AI applications and to make federal datasets accessible in AI-ready formats for training AI models. The NPFAI states that Congress should not create any new federal rulemaking body to regulate AI and should instead support sector-specific AI applications through existing regulatory bodies and industry-led standards. Sixthly, the NPFAI calls on Congress to ensure existing education and workforce training programmes incorporate AI training. The NPFAI calls on Congress to pre-empt state AI laws that impose undue burdens while preserving states' traditional police powers. The NPFAI states that states should not be permitted to regulate AI development as an inherently interstate phenomenon with foreign policy and national security implications, should not unduly burden Americans' use of AI for activity that would be lawful if performed without AI, and should not be permitted to penalise AI developers for a third party's unlawful conduct involving their models.

  9. 19/03/2026
    implementation

    Responsible AI Safety and Education Act enters into force (SB 6953B/AB 6453A)

    On 19 March 2026, the Responsible Artificial Intelligence (AI) Safety and Education Act (RAISE Act) enters into force, 90 days after becoming law. The Act imposes obligations on large developers of high-risk AI systems, referred to as frontier models. The Act requires that, before deployment, developers create a written safety and security protocol, keep an unredacted version for at least five years post-deployment, and publish a redacted version while sharing it with the Attorney General. They must also retain detailed testing records, prevent unreasonable risks of critical harm, and review their protocols annually to reflect changes in model capabilities or best practices. Independent third-party audits of compliance must be conducted yearly, with unredacted audit reports kept on record and redacted versions submitted to the Attorney General. Developers must report computing costs annually and disclose any safety incidents within 72 hours. Employees, including contractors and advisors, are protected from retaliation when reporting risks and must be informed of their rights. Civil penalties for non-compliance may reach 15% of computing costs or USD 10,000 per affected employee. The Act prohibits contractual waivers of liability and allows courts to pierce corporate structures that deliberately evade responsibility.

  10. 18/03/2026
    drafting

    Senator released draft of TRUMP AMERICA AI Act including user rights

    On 18 March 2026, a United States Senator released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act. The Act would establish rights for users that covered platforms know to be minors, including safeguards to limit communications, restrict public access to personal data, limit by default design features encouraging compulsive usage, restrict geolocation sharing, and provide an option to limit time spent on the platform. Covered platforms would be required to offer prominently displayed options to opt out of personalized recommendation systems and to limit categories of recommendations, and to provide clear information about safeguards, parental tools, and recommendation systems prior to registration by a known minor, with verifiable parental consent required for children under 13. Parental tools, including the ability to manage account settings, restrict purchases, and restrict time spent on the platform, would be enabled by default for users known to be children. All users of platforms using opaque algorithms would have the right to switch to an input-transparent algorithm without differential pricing. The Act would establish a property right for individuals to authorise the use of their voice or visual likeness in digital replicas, licensable and surviving the death of the individual. The Act would further establish a federal product liability framework for AI systems under which developers would be liable for defective design, failure to warn, express warranty breaches, and unreasonably dangerous products. Strict liability would apply to unreasonably dangerous or defective AI products regardless of the care exercised. Deployers would be liable where they substantially modify a system or intentionally misuse it. Contractual terms waiving, restricting, or unreasonably limiting liability in both developer-deployer contracts and end-user terms and conditions would be unenforceable. Individuals and classes of individuals would have the right to bring a federal civil action to obtain damages, restitution, injunctive relief, and attorney fees, with a 4-year limitation period.

  11. 18/03/2026
    drafting

    Senator released draft of TRUMP AMERICA AI Act including copyright protection regulation

    On 18 March 2026, a United States Senator released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act. The NO FAKES Act of 2026 (Section 1202) would establish the digital replication right, granting each individual or right holder the right to authorise the use of their voice or visual likeness in a digital replica. The right would be a property right, licensable but not assignable during the life of the individual, and would survive death for a minimum of 10 years, renewable in 5-year periods upon registration with the Register of Copyrights, up to a maximum of 70 years after death. Statutory damages for violations would range from USD 5'000 per work for individuals to USD 750'000 per work for non-compliant online service providers. The Transparency and Responsibility for Artificial Intelligence Networks Act (Section 1302) would amend chapter 5 of title 17, United States Code, to enable the legal or beneficial owner of an exclusive right under a copyright to request a subpoena requiring a developer to disclose copies of or records identifying copyrighted works used to train a generative AI model, with developer non-compliance creating a rebuttable presumption that the developer copied the work. The Artificial Intelligence Copyright, Transparency, and Training Data Accountability title (Section 1501) would amend section 107 of title 17, United States Code, to provide that unauthorised reproduction or computational processing of copyrighted works for AI training shall not constitute fair use. The section would further provide that any AI created through inference, distillation, or similar processes is deemed to incorporate the copyrighted materials used in training the source model, unless the developer establishes by clear and convincing evidence that only authorised materials were used or that no copyrighted expression is embedded in or reproducible by the derivative AI, and that AI generation of content that reproduces or derives from copyrighted works constitutes infringement. The same title (Section 1502) would amend section 103 of title 17, United States Code, to provide that derivative works generated, synthesised, or produced by an AI system without the authorisation of the copyright owner of the underlying work shall be deemed infringing works ineligible for copyright protection, regardless of whether the absence of human authorship would otherwise limit a finding of infringement.

  12. 18/03/2026
    drafting

    Senator released draft of TRUMP AMERICA AI Act including artificial intelligence authority governance

    On 18 March 2026, a United States Senator released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act. The Act would direct the Secretary of Energy to establish an Advanced Artificial Intelligence Evaluation Programme within the Department of Energy within 90 days of enactment, terminating 7 years after enactment unless renewed by Congress. The Act would amend the National Institute of Standards and Technology Act to establish the Center for Artificial Intelligence Standards and Innovation within NIST within 90 days of enactment, directed to assist the private sector and agencies in developing voluntary best practices for the assessment of AI systems. The Director of NIST would further be required to establish the Center for Artificial Intelligence Standards and Innovation Consortium within 180 days of enactment. The Act would require the Director of the National Science Foundation to establish the National Artificial Intelligence Research Resource within 1 year of enactment, overseen by a NAIRR Steering Subcommittee chaired by the Director of the Office of Science and Technology Policy, with a Program Management Office within the National Science Foundation overseeing day-to-day operations. The Act would establish an 11-member Kids Online Safety Council to advise Congress on the safety of minors online, with appointments to be made within 180 days of enactment. The Council would submit an interim report within 1 year and a final report within 3 years of its initial meeting, after which it would terminate. The Attorney General would be required to maintain a publicly accessible registry of designated agents of foreign AI developers and prohibit non-compliant foreign developers from deploying AI products in the United States. The Comptroller General would be required to submit to Congress within 1 year a report on regulatory impediments to AI innovation and federal agency adoption of AI.

  13. 18/03/2026
    drafting

    Senator released draft of TRUMP AMERICA AI Act including testing requirement

    On 18 March 2026, a United States Senator released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act. The Act would require each developer of an advanced AI system, defined as a system trained using more than 10²⁶ computing operations, to participate in the Advanced Artificial Intelligence Evaluation Programme and to provide the Secretary of Energy, on request, with materials including the underlying code, training data, model weights, the interface engine, and detailed information regarding training and model architecture. No person would be permitted to deploy an advanced AI system for use in interstate or foreign commerce unless in compliance with these obligations, with a penalty of not less than USD 1'000'000 per day of violation. The Secretary of Energy would be required to establish the Programme within 90 days of enactment. The Programme would conduct standardised and classified testing and adversarial red-team testing at a level matching sophisticated malicious actors, facilitate independent third-party blind model evaluations, and provide participating entities with formal reports identifying evaluated risks. The Programme would terminate 7 years after enactment unless renewed by Congress. The Act would further require each provider of a high-risk AI system to subject that system to an annual independent third-party audit to detect viewpoint discrimination or discrimination based on political affiliation, with a report submitted to the FTC not later than 180 days after completion of each audit. Each covered entity would also be required to provide all personnel with annual AI ethics training using a curriculum established by the FTC. The Act would additionally require NIST to support the development of voluntary, consensus-based testing standards for AI system components and to establish AI blue-teaming capabilities, and would require the Under Secretary of Commerce and Secretary of Energy to jointly establish a testbed programme within 1 year of enactment for voluntary testing, evaluation, and security risk assessment of AI systems, including a voluntary foundation model test programme. The Under Secretary of Commerce would further be required to establish a public-private partnership to develop guidelines, metrics, and practices for evaluating synthetic content detection tools, including through AI red-teaming and blue-teaming.

  14. 18/03/2026
    drafting

    Senator released draft of TRUMP AMERICA AI Act including design requirement

    On 18 March 2026, a United States Senator released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act. The Act would establish a duty of care governing the design, development, and operation of AI chatbots, with minimum reasonable safeguards to be established by the Federal Trade Commission (FTC). With respect to covered platforms, the Act would regulate design features affecting minors, including infinite scrolling, auto playing, rewards for time spent on the platform, and notifications, and would require that default settings for minors reflect the most protective level of control offered by the platform. The Act would further prohibit platforms from designing or manipulating user interfaces with the purpose or substantial effect of impairing user autonomy with respect to safeguards or parental tools, and would prohibit minors from accessing AI companion services following age verification. Platforms using opaque algorithms would be required to notify users of the algorithm's use, features, inputs, parameters, data categories, and optimisation targets, and to enable users to switch to an input-transparent alternative without differential pricing. AI chatbots would be required to disclose their non-human status at the initiation of each conversation and at 30-minute intervals, and to disclose that they do not provide medical, legal, financial, or psychological services. Beginning 2 years after enactment, persons making available for a commercial purpose a tool used primarily for creating synthetic or synthetically-modified content would be required to provide users with the ability to include content provenance information and to establish reasonable security measures ensuring such information is machine-readable and not easily removed, altered, or separated from the underlying content.

  15. 09/03/2026
    announcement

    Anthropic filed lawsuit challenging its blacklisting over usage restrictions on autonomous weapons and mass surveillance

    On 9 March 2026, Anthropic filed a complaint for injunctive relief in the United States District Court for the Northern District of California against the Department of War and sixteen other federal agencies, challenging three government actions taken against the company after it refused to remove usage restrictions on its Artificial Intelligence (AI) model Claude. The complaint stated that the actions targeted Anthropic as an AI developer that had been supplying services to federal defence and intelligence agencies since 2024, held a Top Secret facility security clearance, and projected public sector revenue of several hundred million dollars in 2026. Anthropic brought five claims against the government, including that the supply chain risk designation was legally baseless and improperly applied, that punishing Anthropic for publicly expressing its views on AI safety violated the First Amendment, that the Presidential Directive exceeded the President's constitutional and statutory powers, that all three actions deprived Anthropic of its rights without any prior notice or opportunity to respond, in breach of the Fifth Amendment, and that the wider federal agency crackdown lacked any lawful authority. Anthropic sought permanent injunctions against all defendants, vacatur of the Secretarial Order and Letter, a declaration that the Presidential Directive is unconstitutional, and rescission of all implementing guidance issued across the federal government.

  16. 05/03/2026
    adoption

    Utah Legislature adopted Artificial Intelligence Modifications Bill (HB 276) including design requirement

    On 5 March 2026, the Utah Legislature adopted the Artificial Intelligence Modifications Bill (HB 276) after the House concurred with the Senate amendment. The adopted Bill retains requirements for large online platforms to detect compliant system provenance data embedded in distributed content and to provide a user interface disclosing its availability, and retains the prohibition on knowingly stripping system provenance data or digital signatures compliant with widely adopted specifications of an established standards-setting body. Requirements for covered providers to include latent disclosures in image, video, or audio content created or substantially modified by a generative artificial intelligence system, and the requirement for capture device manufacturers to include latent disclosures in captured content for devices produced for sale in Utah on or after 1 January 2028, are also unchanged. Relative to the introduced version, the definition of "large online platform" is amended to remove file-sharing platforms from scope. The definition of "covered provider" is amended to exclude persons producing a generative artificial intelligence system used exclusively for internal business operations and not made publicly accessible. The user inspection provision is amended to allow download of compliant system provenance data rather than a version of the content with attached system provenance data. The Bill takes effect on 1 January 2027.

  17. 03/03/2026
    introduction

    Kids Internet and Digital Safety Act (KIDS Act/HB 7757) including design requirement was introduced to House of Representatives

    On 3 March 2026, the Kids Internet and Digital Safety Act (KIDS Act/HB 7757) was introduced in the House of Representatives. Under the KIDS Act, a minor is defined as an individual under the age of 17, with children defined as individuals under the age of 13, and teen covered users as covered users who have attained the age of 13. Under the Kids Online Safety Act (Title II, Subtitle A), providers of covered platforms would be required to set the default for any safeguard for a known minor user to the most protective level of control with respect to privacy and safety, covering controls over personalized recommendation systems, geolocation information sharing, design features that result in compulsive usage, and communication with other users. Providers would be prohibited from knowingly using a user interface with the purpose or substantial effect of impairing a known minor user's or parent's use of any safeguard or parental tool. Under the Safer GAMING Act (Title III), online video game providers would be required to enable communication safeguards by default on covered user accounts at the most restrictive setting, adjustable only by a parent. Under the SAFE BOTs Act (Title IV), chatbot providers would be required to establish policies to advise covered users to take a break after 3 continuous hours of interaction and to address covered users' access to sexual material harmful to minors, restricted gambling, and promotion of narcotic drugs, tobacco products, or alcohol.

  18. 02/03/2026
    ruling

    Supreme Court declines review of AI-generated work copyright case (Thaler v Perlmutter)

    On 2 March 2026, the Supreme Court of the United States denied a petition for a writ of certiorari in Stephen Thaler v Shira Perlmutter, Register of Copyrights and Director of the United States Copyright Office. The case concerned the copyrightability of “A Recent Entrance to Paradise”, a visual work generated by an artificial intelligence system known as the Creativity Machine. The petitioner challenged earlier decisions of the District Court for the District of Columbia and the Court of Appeals for the District of Columbia Circuit, which held that the Copyright Act requires human authorship. The petition for certiorari was filed on 9 October 2025. Following briefing by the parties, the Supreme Court declined to review the case. As a result, the lower court rulings remain in place, maintaining the position that copyright protection under US law applies only to works created by human authors.

  19. 26/02/2026
    introduction

    South Carolina Artificial Intelligence Act (S 963) was introduced to Senate including non-discrimination requirements

    On 26 February 2026, the South Carolina Artificial Intelligence Act (S 963) was introduced to the Senate. The Act requires developers and deployers of artificial intelligence (AI) systems to exercise reasonable care in protecting consumers from foreseeable risks of discrimination arising from intended uses. Developers must provide statements describing data governance measures, training data types, and specific steps taken to mitigate biases before making systems available. Additionally, developers must notify the Attorney General and all deployers within 90 days if testing reveals that a system has caused or is likely to cause discriminatory outcomes. Deployers are required to implement iterative risk management programmes that specify the personnel and processes used to mitigate algorithmic bias. These programmes must align with recognised standards such as ISO/IEC 42001 or NIST frameworks. The Attorney General holds exclusive authority to enforce these provisions as unfair trade practices. Entities can establish a rebuttable presumption of compliance by documenting their adherence to these non-discrimination safeguards and relevant risk management frameworks. The Act clarifies that no private right of action is created for alleged discriminatory impacts. The Act takes effect upon approval by the Governor.

  20. 26/02/2026
    introduction

    South Carolina Artificial Intelligence Act (S 963) introduced to Senate including testing and documentation requirements

    On 26 February 2026, the South Carolina Artificial Intelligence Act (S 963) was introduced to the Senate to mandate technical documentation and impact assessments for high-risk artificial intelligence (AI) systems. The Act requires developers to provide deployers with model cards, dataset cards, and performance evaluations to facilitate comprehensive testing. Deployers are obligated to complete initial and annual impact assessments that analyse the purpose, input data, and output metrics of the system, alongside post-deployment monitoring results. These assessments must be maintained for three years following the final deployment of the system. The Attorney General is authorised to request these documents for compliance review, though they remain exempt from public disclosure under freedom of information laws to protect trade secrets. Enforcement is managed by the Attorney General, and entities may use a rebuttable presumption of reasonable care if they adhere to recognised frameworks such as NIST's AI Risk Management Framework or ISO/IEC 42001. Violations of these testing standards constitute unfair trade practices, though the Act provides an affirmative defence for entities that discover and cure violations through internal red-teaming or review processes before enforcement actions commence. The Act takes effect upon approval by the Governor.

  21. 26/02/2026
    introduction

    South Carolina Artificial Intelligence Act (S 963) was introduced to Senate including disclosure obligations

    On 26 February 2026, the South Carolina Artificial Intelligence Act was introduced to the Senate to establish transparency obligations for artificial intelligence (AI) systems interacting with consumers. The Act mandates that developers and deployers making an AI system available for consumer interaction must disclose that the consumer is communicating with an automated system, except when such interaction is obvious to a reasonable person. Furthermore, deployers must provide clear notices before using high-risk AI to make consequential decisions, including a description of the system, its purpose, and instructions for human review or appeals. Consumers are granted the right to correct personal data processed by the system and to receive explanations regarding the data sources and logic contributing to adverse decisions. The South Carolina Attorney General holds exclusive enforcement authority, treating violations as unfair trade practices. Certain federal entities and healthcare recommendations are exempt from these provisions. While the Act establishes a rebuttable presumption of reasonable care for compliant entities, it explicitly denies a private right of action for individual consumers. To support implementation, the Attorney General is empowered to promulgate specific rules regarding the format and timing of these mandatory consumer disclosures. The Act takes effect upon approval by the Governor.

  22. 17/02/2026
    announcement

    Center for AI Standards and Innovation announced AI Agent Standards Initiative

    On 17 February 2026, the Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST) announced the launch of the AI Agent Standards Initiative. The initiative aims to support the secure and interoperable adoption of AI agents capable of autonomous actions across the digital ecosystem. In coordination with the Information Technology Laboratory (ITL) at NIST, and in collaboration with the National Science Foundation and other federal partners, CAISI will advance the Initiative along three pillars: facilitating industry-led development of agent standards and US leadership in international standards bodies, fostering community-led open-source protocol development and maintenance for AI agents, and advancing research in AI agent security and identity. NIST has stated that it will leverage public input mechanisms, including convenings, listening sessions, and requests for information. Stakeholders may respond to CAISI’s Request for Information on AI Agent Security, with responses due by 9 March 2026, and to ITL’s AI Agent Identity and Authorisation Concept Paper, with feedback due by 2 April 2026. Further research, guidelines, and deliverables are expected to follow.

  23. 11/02/2026
    introduction

    Bill amending Kentucky Consumer Privacy Act (HB 633) including data protection regulation for covered minors was introduced to House of Representatives

    On 11 February 2026, the Bill amending the Kentucky Consumer Privacy Act (KRS 367.3611 to 367.3629) (HB 633) was introduced to the Kentucky House of Representatives. The Bill applies to covered online services that conduct business in Kentucky, determine the purposes and means of processing consumers' personal data, and annually process personal data of at least 100'000 consumers or derive at least 50% of annual gross revenue from the sale or sharing of personal data. It defines covered minors as users that a covered online service knows or should have known, based on objective knowledge or circumstances, to be under 18 years. The Bill requires covered online services to collect and use only the minimum amount of a covered minor's personal data necessary to provide the specific elements of an online service with which the covered minor has knowingly engaged. It mandates that collected personal data not be used for reasons other than those for which it was collected. The Bill requires covered online services to retain personal data only as long as necessary to provide the specific elements of an online service with which the covered minor has knowingly engaged. It prohibits profiling of covered minors unless profiling is necessary to provide a covered online service requested by a covered minor. The Bill requires covered online services to provide tools allowing covered minors to request deletion of account profiles, media, and personal data. It allows parents to make deletion requests on behalf of the child. The Bill requires compliance with deletion requests not later than 15 days after receiving the request. It requires covered online services to provide accessible and user-friendly tools that allow a covered minor to opt out of the use of the covered minor's personal data to select, recommend, or prioritise media in an algorithmic feed, with specified exceptions.

  24. 11/02/2026
    introduction

    Bill amending Kentucky Consumer Privacy Act (HB 633) including design requirements for covered minors was introduced to House of Representatives

    On 11 February 2026, the Bill amending the Kentucky Consumer Privacy Act (KRS 367.3611 to 367.3629) (HB 633) was introduced to the Kentucky House of Representatives. The Bill applies to covered online services that conduct business in Kentucky, determine the purposes and means of processing consumers' personal data, and annually process personal data of at least 100'000 consumers or derive at least 50% of annual gross revenue from the sale or sharing of personal data. It defines covered minors as users that a covered online service knows or should have known, based on objective knowledge or circumstances, to be under 18 years. The Bill requires covered online services to configure all default privacy settings provided to a covered minor to the highest level of privacy. It requires covered online services to disable by default all interaction counts, including counts of reactions and comments on all of the covered minor's media. The Bill prohibits covered online services from, by default, using an algorithmic recommendation system to recommend to any known adult user that they connect to a covered minor as a friend, follower, or contact or follow a covered minor's media unless accounts were connected prior to the recommendation. It prohibits covered online services from, by default, displaying a covered minor's friends, followers, or contacts, enabling search engine indexing of a covered minor's account profile and media, or displaying the location of any covered minor to any other user unless the covered minor has expressly chosen to share location. The Bill prohibits covered online services from, by default, sending push notifications to any covered minor. It requires covered online services to establish mechanisms for covered minors and parents to report harms. The Bill requires covered online services to provide a tool giving a covered minor the option to block specific users.

  25. 05/02/2026
    implementation

    South Carolina Social Media Regulation Act (Act No. 96) including design requirement enters into force

    On 5 February 2026, the Governor signed the South Carolina Social Media Regulation Act (Act No. 96), and the Act entered into force upon approval. The Act introduces a new “Age-Appropriate Code Design” chapter applicable to online services reasonably likely to be accessed by minors. It defines “covered design features” as features that encourage increased use by minors, including infinite scroll, autoplay, gamification elements, visible engagement metrics, notifications, in-app purchases, personalised recommendation systems, and appearance-altering filters. “Dark pattern” is defined as a user interface design that substantially impairs user autonomy or decision-making; the Act prohibits the use of dark patterns and classifies such use as an unlawful trade practice under Section 39-5-20 of the South Carolina Unfair Trade Practices Act. It requires covered online services to exercise reasonable care in the use of minors’ personal data and in service design to prevent specified harms, including compulsive usage, severe psychological harm, discrimination, and financial injury. The Act requires high-level privacy settings and safeguards to be set by default for individuals known to be minors and mandates the provision of accessible tools to control design features and personalised recommendation systems. The Act restricts the collection and profiling of minors’ personal data, prohibits facilitating targeted advertising to minors, and requires parental control tools. The Act requires annual independent audit reports to be submitted to the Attorney General and provides for enforcement, including treble damages.

  26. 03/02/2026
    implementation

    Obligation to develop a strategy addressing debanking practices under the Executive Order on Fair Banking enters into force

    On 3 February 2026, provisions of the Executive Order on Fair Banking enter into force, including the obligation for the Treasury to develop a strategy addressing politicised or unlawful debanking practices. The order also requires federal banking regulators to review supervisory and complaint data to identify potential instances of religiously motivated debanking. Where non-compliance is identified, institutions may be referred to the Attorney General for potential civil enforcement.

  27. 01/02/2026
    implementation

    Bill for an Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (SB 24-205) including user rights enters into force

    On 1 February 2026, the Bill for an Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (SB 24-205), including user rights, enters into force. The Bill requires deployers of high-risk artificial intelligence (AI) systems to inform consumers that such systems are deployed and to provide consumers with a statement disclosing the system's purpose, the nature of the decision, contact information for the deployer, and a clear description of the system's components and their role in the decision-making process. In addition, deployers must offer information to consumers about their right to opt out of personal data processing for profiling purposes. This information must be provided in a clear and readily available manner. High-risk AI systems are defined as systems developed or substantially modified to make consequential decisions that impact a consumer's access to, or the availability, cost, or terms of, various aspects of their life, including criminal justice remedies, education, employment, essential goods or services, financial or lending services, government services, healthcare, housing, insurance, or legal services.

  28. 01/02/2026
    implementation

    Bill for an Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (SB 24-205) including testing requirement enters into force

    On 1 February 2026, the Bill for an Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (SB 24-205), including testing requirements, enters into force. The Bill includes measures for developers and deployers of high-risk artificial intelligence (AI) systems. High-risk AI systems are defined as systems developed or substantially modified to make consequential decisions that impact a consumer's access to, or the availability, cost, or terms of, various aspects of their life, including criminal justice remedies, education, employment, essential goods or services, financial or lending services, government services, healthcare, housing, insurance, or legal services. In particular, developers are required to provide deployers with information about the high-risk system, including the information necessary to conduct an impact assessment, and to issue public statements listing the types of high-risk systems developed or modified, along with details on known or potential risks of algorithmic discrimination and measures for addressing them. In addition, developers must notify the Attorney General and known deployers of any discovered or anticipated algorithmic discrimination risks. Furthermore, deployers must develop a risk management policy, conduct impact assessments, inform consumers of consequential decisions, and publicly disclose system details while reporting discrimination discoveries to authorities. Moreover, before an AI system or model is marketed, deployed, or put into service, the developer must conduct extensive research, testing, and development. This testing should not be conducted under real-world conditions but should ensure the AI system's safety and compliance with relevant standards.

  29. 28/01/2026
    passage

    Responsible AI Safety and Education (RAISE) Act was passed by Senate

    On 28 January 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act was passed by the New York Senate. The Act aims to establish transparency and safety requirements for developers of frontier artificial intelligence models. The Act applies to frontier AI developers, particularly large developers with annual revenues exceeding USD 500 million, engaged in developing, deploying, or operating high-compute foundation models in New York. It imposes obligations, including the publication of frontier AI frameworks detailing risk assessment and mitigation processes, mandatory transparency reports before deployment, regular updates to safety frameworks, and prohibitions on misleading statements regarding risks. It further requires reporting of critical safety incidents within 72 hours or 24 hours where imminent harm is identified, periodic submission of internal risk assessments, and compliance with disclosure and registration requirements overseen by the Department of Financial Services, with enforcement through civil penalties of up to USD 1 million for initial violations and USD 3 million for subsequent violations. The Act also establishes reporting mechanisms, annual public safety summaries from 2028, and rulemaking authority for implementation.

  30. 27/01/2026
    announcement

    California Department of Justice launched investigative sweep into surveillance pricing under California Consumer Privacy Act

    On 27 January 2026, the California Department of Justice launched an investigative sweep into businesses’ use of consumers’ personal information to set targeted, individualised prices for goods and services, referred to as surveillance pricing. The Department stated that surveillance pricing may trigger obligations under and violate the California Consumer Privacy Act, including the purpose limitation principle that limits the use of personal information to purposes consistent with consumers’ reasonable expectations. As part of the investigative sweep, the Department announced that it was sending letters to businesses with a significant online presence in the retail, grocery, and hotel sectors requesting information on how these businesses use consumers’ shopping history, internet browsing history, location data, demographic data, inferential data, or other personal information to set prices. The letters also request information on policies and public disclosures relating to personalised pricing, pricing experiments, and measures taken to comply with algorithmic pricing, competition, and civil rights laws.

  31. 26/01/2026
    implementation

    Governor of Texas updated Prohibited Technologies List for state employees and devices

    On 26 January 2026, the Governor of Texas updated the Prohibited Technologies List for state employees and devices to include certain hardware and software associated with the People's Republic of China. Following a threat assessment by the Texas Cyber Command (TXCC), the update extends restrictions to address potential risks to sensitive state information. The TXCC identifies technologies that may pose risks to state infrastructure. The expanded list covers sectors such as machine learning, network equipment, and e-commerce, including products from Baidu, Temu, Alibaba, and Shein. In total, 20 entities are included. The policy aims to strengthen cybersecurity and safeguard state data.

  32. 22/01/2026
    introduction

    Children Harmed by AI Technology Act (HR 7218) including data protection regulations was introduced in House of Representatives

    On 22 January 2026, the Children Harmed by AI Technology Act (HR 7218), including data protection regulations, was introduced in the House of Representatives. The Act applies to any person or organisation ("covered entity") that owns, operates, or makes available a "companion AI chatbot" to users in the United States. It explicitly excludes bots used for customer service, business operations, video games (with limited scope), and standard voice-activated virtual assistants. A covered entity must strictly limit the collection, use, and storage of a user's age verification data only to the purposes of verifying age, obtaining parental consent, or maintaining compliance records, thereby ensuring the information's confidentiality. The Act will be enforced primarily by the Federal Trade Commission (FTC), which must issue compliance guidance within 180 days and can treat violations as unfair or deceptive practices under the FTC Act. State attorneys general may also bring civil enforcement actions on behalf of their residents, subject to notifying the FTC. The FTC maintains the right to transfer the filing to a different court, be heard on the matter, and file appeals. A "safe harbour" provision protects covered entities from liability if they act in good faith by relying on user-provided age information, following FTC guidance, and conforming to accepted industry standards for age verification. The law will take effect one year after its enactment.

  33. 22/01/2026
    introduction

    Children Harmed by AI Technology Act (CHAT Act/HR 7218) including user notification rights was introduced in House of Representatives

    On 22 January 2026, the Children Harmed by AI Technology Act (HR 7218), including a user notification right, was introduced in the House of Representatives. The Act applies to any person or organisation ("covered entity") that owns, operates, or makes available a "companion AI chatbot" to users in the United States. It explicitly excludes bots used for customer service, business operations, video games (with limited scope), and standard voice-activated virtual assistants. A covered entity must display a clear pop-up notification at the start and at least every 60 minutes during an interaction to inform the user they are not speaking with a human. The Act will be enforced primarily by the Federal Trade Commission (FTC), which must issue compliance guidance within 180 days and can treat violations as unfair or deceptive practices under the FTC Act. State attorneys general may also bring civil enforcement actions on behalf of their residents, subject to notifying the FTC. The FTC maintains the right to transfer the filing to a different court, be heard on the matter, and file appeals. A "safe harbour" provision protects covered entities from liability if they act in good faith by relying on user-provided age information, following FTC guidance, and conforming to accepted industry standards for age verification. The law will take effect one year after its enactment.

  34. 21/01/2026
    adoption

    Senate adopted and enrolled South Carolina Social Media Regulation Act (H 3431) including design requirement

    On 21 January 2026, the South Carolina Senate adopted the South Carolina Social Media Regulation Act (H.3431) by concurring in the House amendment and enrolling the Bill. The Act establishes an “Age-Appropriate Code Design” chapter applicable to covered online services reasonably likely to be accessed by minors. The Act requires covered online services to exercise reasonable care in the design and operation of services and in the use of minors’ personal data to prevent specified harms. The Act requires default high-level safeguards for individuals known to be minors, including user and parental control tools, restrictions on data collection and profiling, and a prohibition on facilitating targeted advertising to minors. The Act prohibits advertisements directed to minors for products prohibited for minors. The Act prohibits the use of dark patterns and classifies their use as an unlawful trade practice under the South Carolina Unfair Trade Practices Act. The Act further requires clear disclosures regarding personalised recommendation systems, annual independent audit reporting, and enforcement by the Attorney General, including treble damages for violations.

  35. 16/01/2026
    introduction

    Artificial Intelligence Modifications Bill (HB 276) including design requirement was introduced to House of Representatives

    On 16 January 2026, the Artificial Intelligence Modifications Bill (HB 276) was introduced to the Utah House of Representatives. The Bill would establish the Digital Content Provenance Standards Act, requiring large online platforms to detect compliant system provenance data embedded in distributed content, provide a user interface disclosing the availability of system provenance data, and allow users to inspect all available compliant system provenance data. Large online platforms would be prohibited from knowingly stripping system provenance data or digital signatures compliant with widely adopted specifications of an established standards-setting body. Covered providers would be required to include latent disclosures in image, video, or audio content created or substantially modified by a generative artificial intelligence system. Capture device manufacturers would be required to include latent disclosures in captured content for devices produced for sale in Utah on or after 1 January 2028.

  36. 14/01/2026
    introduction

    House Bill 148 including ban on surveillance-based price and wage setting was introduced to the House of Delegates

    On 14 January 2026, House Bill 148 was introduced to the Maryland House of Delegates to prohibit surveillance-based price and wage setting through automated decision systems. The Bill prohibits persons from using surveillance data in conjunction with automated decision systems to offer customised prices for goods or services. These restrictions do not apply to customised pricing based on actual cost differences or rewards programs where eligibility is disclosed and based on voluntarily provided information. Furthermore, the Bill prohibits employers from using surveillance data and automated systems to set customised wages for employees. Employers are permitted to set wages using such systems only if the data is directly related to the task or the cost of living at the work location, provided that the employer discloses how the system considers the data before hiring. The Commissioner of Labour and Industry is authorised to investigate complaints and may seek mediation or request the Attorney General to bring an action for injunctive relief or damages. The non-discrimination provisions of this Act enter into force on 1 October 2026.

  37. 08/01/2026
    introduction

    Responsible AI Safety and Education (RAISE) Act was introduced to Senate

    On 8 January 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act was introduced to the New York Senate. The Act aims to establish transparency and safety requirements for developers of frontier artificial intelligence models. The Act applies to frontier AI developers, particularly large developers with annual revenues exceeding USD 500 million, engaged in developing, deploying, or operating high-compute foundation models in New York. It imposes obligations, including the publication of frontier AI frameworks detailing risk assessment and mitigation processes, mandatory transparency reports before deployment, regular updates to safety frameworks, and prohibitions on misleading statements regarding risks. It further requires reporting of critical safety incidents within 72 hours or 24 hours where imminent harm is identified, periodic submission of internal risk assessments, and compliance with disclosure and registration requirements overseen by the Department of Financial Services, with enforcement through civil penalties of up to USD 1 million for initial violations and USD 3 million for subsequent violations. The Act also establishes reporting mechanisms, annual public safety summaries from 2028, and rulemaking authority for implementation.

  38. 08/01/2026
    announcement

    Attorney General of Kentucky filed lawsuit against Character Technologies, Inc. over alleged child safety violations on AI chatbot platform

    On 8 January 2026, the Attorney General of Kentucky filed a lawsuit against Character Technologies, Inc. over alleged child safety violations on its Artificial Intelligence (AI) chatbot platform, Character AI. The complaint alleged violations of Kentucky's Consumer Protection Act, consumer data protection laws, and privacy protections relating to the platform, which reportedly has over 20 million monthly active users. The lawsuit alleged that Character AI encourages suicide, self-harm, isolation, and psychological manipulation, exposes minors to sexual content, violence, and substance abuse, and noted that two teenage suicides had been linked to the platform. The complaint further alleged that the company misrepresented the platform as safe despite knowing of harmful interactions, failed to implement effective age verification or content filtering, and unlawfully collected and monetised children's personal data without parental consent. The lawsuit seeks permanent injunctions, civil penalties of USD 2'000 per wilful violation and up to USD 25'000 per injunction violation, disgorgement of profits, and costs.

  39. 01/01/2026
    implementation

    Law on Artificial Intelligence: Defences (AB 316) enters into force

    On 1 January 2026, the Law on Artificial Intelligence: Defences (AB 316) enters into force in California. The Act amends the California Civil Code through the addition of Section 1714.46, which defines artificial intelligence as an engineered or machine-based system with varying levels of autonomy that can, for explicit or implicit objectives, infer from input how to generate outputs capable of influencing physical or virtual environments. Section 1714.46 provides that, in any civil action against a defendant who developed, modified, or used artificial intelligence alleged to have caused harm, it does not constitute a defence to claim that the harm was autonomously caused by the system. The Law also preserves the right of defendants to raise other affirmative defences, including those related to causation, foreseeability, or comparative fault of other persons or entities.

  40. 01/01/2026
    implementation

    Transparency in Frontier Artificial Intelligence Act (SB 53) including transparency requirements and whistleblower protections enters into force

    On 1 January 2026, the Transparency in Frontier Artificial Intelligence Act (SB 53) enters into force. The Law applies to frontier developers and large frontier developers of foundation models, including developers with annual revenue above USD 500 million and models trained using more than 10^26 integer or floating-point operations. The Law requires large frontier developers to publish and annually update a frontier AI framework, issue transparency reports before deploying new or substantially modified frontier models, and transmit summaries of internal catastrophic-risk assessments to the Office of Emergency Services, with critical safety incidents reported within 15 days. Furthermore, the Law creates whistleblower protections for covered employees, requires an anonymous internal reporting channel with periodic status updates, and authorises civil penalties of up to USD 1 million per violation, while pre-empting local rules on catastrophic-risk management and exempting specified reports from public records disclosure. The Law also establishes a consortium within the Government Operations Agency to design “CalCompute,” a public cloud computing cluster, with a framework report due to the Legislature by 1 January 2027 and operation subject to appropriation.

  41. 01/01/2026
    implementation

    Texas Responsible Artificial Intelligence Governance Bill (HB 149) including testing requirements enters into force

    On 1 January 2026, the Responsible Artificial Intelligence Governance Bill (HB 149) enters into force. The Bill establishes a regulatory sandbox programme under Chapter 553 of the Business and Commerce Code, enabling the unlicensed testing of artificial intelligence systems for up to 36 months, with possible extensions granted by the Texas Department of Information Resources. Applicants must provide detailed descriptions of the system and its intended use, a benefit-risk assessment covering potential impacts on consumers and public safety, and mitigation strategies for foreseeable harms. Compliance with applicable federal artificial intelligence laws must also be demonstrated. During the testing period, participants are subject to testing requirements, including the submission of quarterly reports that document system performance, applied risk control mechanisms, and stakeholder feedback. The Department protects trade secrets and proprietary information but may terminate participation if the system presents undue safety risks or breaches mandatory provisions of Subchapter B, Chapter 552. While the sandbox provides immunity from enforcement by the Attorney General or state agencies during testing, this exemption applies only if all testing activities comply with the programme’s requirements.

  42. 01/01/2026
    implementation

    Texas Responsible Artificial Intelligence Governance Bill (HB 149) including design requirements enters into force

    On 1 January 2026, the Responsible Artificial Intelligence Governance Bill (HB 149) enters into force. The Bill imposes design requirements on artificial intelligence systems under Subchapter B, Chapter 552 of the Business and Commerce Code. In particular, developers and deployers must ensure that systems are not designed to unlawfully capture biometric data, including through untargeted collection from publicly available sources without explicit consent. The Bill prohibits the deployment of systems intended to discriminate against protected classes such as race, sex, or disability, while clarifying that a disparate impact alone does not establish intent. Additional design prohibitions include the generation of child exploitation content using deepfake techniques, as defined in Sections 21.165 and 43.26 of the Penal Code, and text-based simulations of sexualised dialogue impersonating minors. Systems must also be designed to avoid infringement of constitutional rights, including free speech. The Bill requires that all consumer-facing disclosures be written in plain language, and in healthcare settings, integrated into patient documentation. These obligations pre-empt any conflicting local regulations, creating a uniform compliance baseline statewide.

  43. 01/01/2026
    implementation

    Texas Responsible Artificial Intelligence Governance Bill (HB 149) establishing Artificial Intelligence Council enters into force

    On 1 January 2026, the Responsible Artificial Intelligence Governance Bill (HB 149) enters into force. The Bill establishes the Texas Artificial Intelligence Council under Chapter 554 of the Business and Commerce Code. The Council is tasked with overseeing artificial intelligence governance in the state without possessing rulemaking or enforcement authority. Its responsibilities include identifying legal and regulatory barriers to innovation, evaluating the societal and public safety impact of artificial intelligence systems, and issuing legislative and policy recommendations. The Council is administratively supported by the Texas Department of Information Resources and monitors the state’s regulatory sandbox programme established under Chapter 553. The Council comprises 10 members, including public appointees and non-voting legislators, with expertise in AI ethics, data security, and risk management. The Council may publish advisory reports concerning compliance, ethical deployment, and potential legal implications of artificial intelligence systems, but it is not authorised to issue binding rules or intervene in the operations of state agencies.

  44. 01/01/2026
    implementation

    Texas Responsible Artificial Intelligence Governance Bill (HB 149) including consumer protection measures enters into force

    On 1 January 2026, the Responsible Artificial Intelligence Governance Bill (HB 149) enters into force. It mandates that any governmental agency or entity deploying Artificial Intelligence (AI) systems for consumer interaction provide clear and conspicuous disclosures prior to engagement, irrespective of whether the AI’s nature is obvious. Disclosures must avoid dark patterns (as defined in Section 541.001) and may be delivered via hyperlink or, in healthcare contexts, embedded in patient consent forms. The Bill prohibits AI systems developed or deployed with the intent to manipulate individuals into committing self-harm, harming others, or engaging in criminal activity. It also bans governmental use of artificial intelligence systems for social scoring that classifies individuals based on inferred personal characteristics in ways that may lead to unjustified or disproportionate treatment. Violations are subject to civil penalties ranging from USD 10’000 to 200’000, with a 60-day cure period available for remediable violations.

  45. 01/01/2026
    in force with grace period

    Age-Appropriate Online Design Code Act (LB 504) enters into force with grace period

    On 1 January 2026, the Age-Appropriate Online Design Code Act (LB 504) enters into force with a grace period. The Act defines terms such as “covered online service,” “covered minor,” and “covered design features,” and imposes requirements on such services to implement safeguards, including default high-protection settings, tools to limit data collection, and restrictions on targeted advertising and profiling. It also mandates parental controls for users known to be under 13 and prohibits dark patterns and advertisements for prohibited products. Violations are classified as deceptive trade practices, with civil penalties of up to USD 50'000 per violation enforceable from 1 July 2026. The Act includes severability provisions so that the remaining sections stay valid if any part is held invalid.

  46. 01/01/2026
    implementation

    Implemented Generative Artificial Intelligence: Training Data Transparency Bill (AB 2013)

    On 1 January 2026, Assembly Bill No. 2013 regarding generative artificial intelligence training data transparency is implemented. The Bill defines generative AI as artificial intelligence that can create synthetic content such as text, images, video, and audio based on its training data. The Bill mandates that, by 1 January 2026, developers of generative artificial intelligence systems or services released on or after 1 January 2022 publish documentation on their websites detailing the data used to train those systems; the same obligation applies before each subsequent public release or substantial modification of such a system or service. The documentation must include a high-level summary of the datasets, including their sources, purposes, data points, and whether they contain any copyrighted or personal information. The disclosure must also state whether synthetic data generation was used. Certain AI systems are exempt from these requirements, including those designed solely for security, integrity, aircraft operation, or national security purposes.

  47. 01/01/2026
    implementation

    Implemented California AI Transparency Act (SB 942)

    On 1 January 2026, the California AI Transparency Act (SB 942) was scheduled to become effective; its operative date has since been delayed to August 2026. The Act adds new consumer protection measures related to artificial intelligence (AI) under California law, requiring providers of generative AI systems with over 1 million monthly users to offer a free AI detection tool that allows users to verify whether content was AI-generated. This tool must be publicly accessible, support uploads or URLs, and provide metadata related to content authenticity. Furthermore, providers must include hidden (latent) disclosures in AI-generated content to indicate that it was created or altered by AI and give users the option to include an open (manifest) disclosure. The latent disclosures must contain specific information about the provider, the AI system, and the content creation date. If a provider's generative AI system is licensed to third parties, the provider must ensure the system's disclosure capabilities are maintained. Violations incur a USD 5'000 fine per incident, with the Attorney General or local authorities enforcing compliance.

  48. 22/12/2025
    introduction

    Artificial Intelligence Bill of Rights (SB 482) including design requirement was introduced to Florida Senate

    On 22 December 2025, the Artificial Intelligence Bill of Rights (SB 482), including design requirements, was introduced to the Florida Senate. Disclosures would be required to inform users that they are interacting with artificial intelligence. For minor account holders, default notifications would be required at the beginning of interactions and at least once every hour, stating that the companion chatbot is artificially generated and not human. Reasonable measures would be required to prevent companion chatbots from producing or sharing materials harmful to minors. Separately, operators of bots would be required to display pop-up notifications at the beginning of interactions and at least once every hour, informing users they are not engaging with a human counterpart. Platforms would be required to provide timely notifications to parents or guardians when a minor account holder expresses intent to engage in self-harm or harm to others. These obligations would apply to accounts held by minors and operate alongside parental consent requirements. Violations would be enforceable by the Department of Legal Affairs of the State of Florida with civil penalties of up to USD 50'000 per violation. Entry into force is set for 1 July 2026.

  49. 22/12/2025
    introduction

    Artificial Intelligence Bill of Rights (SB 482) including public procurement blacklisting was introduced to Florida Senate

    On 22 December 2025, the Artificial Intelligence Bill of Rights (SB 482), including public procurement blacklisting, was introduced to the Florida Senate. Governmental entities would be prohibited from knowingly entering into contracts for artificial intelligence technology, software, or products with entities owned by, controlled by, organised under the laws of, or having their principal place of business in, a foreign country of concern. These prohibitions would apply to contracts in whole or in part, including options within broader procurement arrangements. Entry into force is set for 1 July 2026.

  50. 22/12/2025
    introduction

    Artificial Intelligence Bill of Rights (SB 482) including data protection regulation was introduced to Florida Senate

    On 22 December 2025, the Artificial Intelligence Bill of Rights (SB 482), including data protection regulation, was introduced to the Florida Senate. The Bill would regulate the sale and disclosure of personal information by artificial intelligence technology companies. It would prohibit the sale or disclosure of personal information unless the information is deidentified data. Artificial intelligence technology companies would be required to ensure that deidentified data cannot be associated with an individual, to maintain and use the data only in deidentified form, and to prohibit reidentification except for testing deidentification processes. Contractual obligations would apply to recipients of deidentified data, alongside business processes to prevent inadvertent release. Violations would be treated as deceptive or unfair trade practices enforceable solely by the Department of Legal Affairs of the State of Florida, with civil penalties of up to USD 50'000 per violation. The Bill would establish statutory rights of Floridians relating to the use of artificial intelligence. These rights would include supervision, access, limitation, and control of minor children’s use of artificial intelligence. The Bill would also provide the right to know whether a person is communicating with a human being or an artificial intelligence system, program, or chatbot. Further rights would include knowledge of whether artificial intelligence technology companies collect personal information or biometric data, and an expectation of protection and deidentification of such data. Additional rights would cover transparency for political advertisements created using artificial intelligence and civil remedies for defamation or unauthorised commercial use of name, image, or likeness involving artificial intelligence. Entry into force is set for 1 July 2026.

Last updated: 27/03/2026