
The State of State AI Law: What’s Coming Now that the Federal Moratorium Is Dead

State legislators believe they have a window to shape the future of AI governance for their own citizens—and for everyone else.

Published on July 10, 2025

The dramatic collapse of the proposed federal AI regulatory moratorium, which was defeated last month in a 99-to-1 Senate vote, has put the spotlight back on state efforts to govern AI. Across the country, state legislators are pushing to establish frameworks that they hope will protect their citizens from AI risks and perhaps, in the absence of congressional action, become de facto national standards.

The most significant of these proposed laws are three recent bills that focus on the models advancing the frontier of AI capabilities and the unique risks those models may pose. These bills coalesce around a core set of policies to strengthen transparency, although they differ in some notable provisions. That focus on transparency is in part a response to the controversy over the California AI bill, known as SB-1047, vetoed by California Governor Gavin Newsom last year. All three bills depart from some of SB-1047’s most controversial provisions, such as liability for AI developers. But the bills’ differences show that debates over what comes next in AI governance aren’t settled yet.

In California, Democratic State Senator Scott Wiener, the author of SB-1047, this week proposed a set of policies that would increase transparency requirements on frontier AI companies and strengthen whistleblower protections for AI workers, amending a previously introduced bill known as SB-53.1 In New York, the state legislature last month passed the Responsible AI Safety and Education (RAISE) Act, which would bar model releases that pose certain risks, impose transparency requirements on model developers, and require AI companies to report safety incidents to the public. And in Michigan, Republican State Representative Sarah Lightner recently introduced the Artificial Intelligence Safety and Security Transparency Act, which would create similar transparency requirements and whistleblower protections while also subjecting AI developers to third-party audits.

The interplay between these frontier AI bills—and whatever new proposals emerge—will shape the future of the AI debate not only in the states, but also on Capitol Hill. If states end up with dramatically different approaches, they could create the compliance patchwork that proponents of the AI moratorium feared. But if they converge on a similar set of principles, they could lay the groundwork for broader, harmonized standards. That makes it important for observers to understand what each would and wouldn’t do and to watch closely as state efforts develop.

The World After SB-1047

Much of the current debate over state regulation of frontier AI stems from the controversy over SB-1047. When it was introduced in February 2024, SB-1047 was the first bill at the state or national level focused specifically on extreme risks from frontier AI models. Warning that AI could enable the proliferation of weapons of mass destruction and dangerous cyber capabilities, the bill’s proponents argued that if Congress wasn’t going to put guardrails on AI, California should. In the months that followed, legislators, industry, and civil society debated what measures were appropriate, how to decide which models would be covered, and whether states should pass laws aimed at addressing frontier AI risks at all.

SB-1047 proved controversial for many reasons. For example, it would have imposed new statutory liability on AI developers whose models caused or materially enabled “critical harms,” a move that could have created significant additional legal risk for AI companies. The bill would also have required developers to build “full shutdown” capabilities into their models and mandated that providers of cloud computing services monitor their customers’ AI development activities. The shutdown provision provoked debate about its impact on open-source development, and the cloud customer monitoring requirements sparked worries about surveillance and mandatory sharing of business information between competitors.

SB-1047 also came in for criticism for relying on a rigid computational threshold for regulation. The policies the bill outlined would have applied to any model trained using more than 10²⁶ floating-point operations (FLOPs) at a cost of more than $100 million. Critics argued that without the ability to update the threshold to incorporate other metrics as the technology developed, the bill risked becoming obsolete, potentially capturing routine, nonfrontier AI development while missing smaller models that achieved concerning capabilities through more efficient training methods.

Making Sense of Post-SB-1047 State Bills Focused on Frontier AI Risks

Partly in response to the controversy over SB-1047, several state legislative efforts have shifted toward a greater focus on transparency and away from liability. The three most significant recent bills—California’s SB-53, New York’s RAISE Act, and Michigan’s AI Transparency Act—have several important overlapping features (see the appendix for more detail).

  1. Transparency requirements: All three bills would require major AI developers to publish “Safety and Security Protocols (SSPs).” Although the exact details differ, the bills generally specify that these documents must explain the company’s risk assessment and mitigation policies, its security measures, and its plans for safety testing. Leading AI companies already publish some of this information through what are often called “preparedness” or “responsible scaling” policies. These usually include explanations of the different levels of risk the company’s models might pose and the mitigations the company plans to adopt. Practices aren’t fully consistent across companies, however, and there’s no legal requirement to publish these plans.

    SB-53 would also require developers to publish a “transparency report” for each major new model, with specific details of safety tests and the rationale for releasing the model. These reports would likely overlap with the current developer practice of publishing model cards, which often explain technical details of the model, its performance on major benchmarks, and safety tests conducted by the company or third-party evaluators. As with other data sharing, however, developers aren’t always consistent about when, or even whether, they publish model cards or what information they include, and the practice is voluntary.2 SB-53 would additionally require companies to conduct and publish risk assessments for models they deploy internally, even if those models are not made available to the public, something no major developer currently does.
  2. Incident reporting: Both SB-53 and the RAISE Act would require developers to report certain safety incidents to their state’s attorney general, while the Michigan bill does not mandate such reporting.

    Triggers for reporting, which are similar in the California and New York bills, include cases in which an unauthorized actor gains access to the model weights and incidents in which an AI model causes more than 100 deaths or $1 billion in economic damages. SB-53 would also require reporting if a model evades human control in a way that causes harm. SB-53 requires reporting within fifteen days of the developer learning of an incident; the RAISE Act requires it within seventy-two hours.
  3. Whistleblower protections: Both the California and Michigan bills would strengthen whistleblower protections for employees of AI developers, requiring companies to create anonymous internal channels for employees to report to company leadership any legal violations or concerns that the developer’s activities pose catastrophic risks. The bills would also bar companies from retaliating against whistleblowers. The RAISE Act does not include whistleblower protections.

    Congress is also currently discussing whistleblower protections in the bipartisan proposed Artificial Intelligence Whistleblower Protection Act. This act would not require companies to create anonymous reporting channels, but it would bar employers from retaliating against employees who report serious risks posed by AI models, security vulnerabilities, or legal violations to their employer or state or federal authorities.
  4. External auditing: The Michigan bill would require major developers to conduct an annual third-party audit to assess the developer’s compliance with both the law and the developer’s Safety and Security Protocol. Neither SB-53 nor the RAISE Act includes such a requirement, although SB-53 would require developers to disclose any third-party testing of models in their transparency reports.
  5. Deployment restrictions: The RAISE Act would bar companies developing large models from deploying them in New York if doing so would create an “unreasonable” risk that the model would cause a safety incident involving more than 100 deaths or $1 billion in economic damage.3 Neither the California nor the Michigan bill includes deployment restrictions.
  6. Developer-based thresholds: Rather than focusing on specific models, all three bills set triggers for regulation that would apply to the AI developer as a whole. Both the New York and the Michigan bills eschew FLOP-based thresholds. Instead, they apply to AI developers that have spent at least $100 million in aggregate to train AI models4 and, in the case of the Michigan proposal, also have spent at least $5 million on a single model. The California bill, meanwhile, applies to any developer that trains a model on more than 10²⁶ FLOPs before January 1, 2027.

    Although the SB-53 threshold resembles the compute-based threshold of SB-1047, the bill gives the state attorney general the authority to adopt a different threshold based on alternative metrics, such as cost of development, model performance on benchmarks, or the size of the company (see the illustrative sketch below).
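
As a rough illustration of how the three coverage triggers differ, the sketch below encodes the thresholds described above as simple checks. It is illustrative only: the function and parameter names are hypothetical, and the bills’ detailed definitions, exemptions, and SB-53’s January 1, 2027, cutoff are omitted.

```python
# Illustrative sketch only; hypothetical names and simplified logic.
FLOP_THRESHOLD = 1e26           # California SB-53: per-model training compute trigger
AGGREGATE_SPEND = 100_000_000   # RAISE Act and Michigan bill: aggregate training spend (USD)
SINGLE_MODEL_SPEND = 5_000_000  # Michigan bill: additional per-model spend condition (USD)

def covered_under_sb53(max_model_training_flops: float) -> bool:
    # SB-53 applies to developers that train any model above the compute threshold
    # (the bill's January 1, 2027, cutoff is ignored here).
    return max_model_training_flops > FLOP_THRESHOLD

def covered_under_raise_act(total_training_spend_usd: float) -> bool:
    # The RAISE Act's trigger is based on aggregate training spend, with no time limit.
    return total_training_spend_usd >= AGGREGATE_SPEND

def covered_under_michigan_bill(trailing_12mo_training_spend_usd: float,
                                max_single_model_spend_usd: float) -> bool:
    # The Michigan bill looks at aggregate spend over the preceding twelve months
    # plus at least $5 million spent on a single model.
    return (trailing_12mo_training_spend_usd >= AGGREGATE_SPEND
            and max_single_model_spend_usd >= SINGLE_MODEL_SPEND)
```

Under these simplified checks, a developer could, for example, meet the RAISE Act’s aggregate-spend trigger without satisfying Michigan’s additional per-model spend condition.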

Next Steps for Frontier AI State Bills

The three bills are at different stages, and they could all still change in response to feedback from industry, civil society, other states, and the federal government. The RAISE Act is the furthest along, having passed the New York legislature. But Governor Kathy Hochul has until December 31, 2025, to negotiate with legislative leaders and with the act’s sponsors, Assemblymember Alex Bores and Senator Andrew Gounardes, if she wants to revise the bill. As a result, there is time for developments in California, Michigan, and elsewhere to influence Hochul’s decision.

SB-53 is earlier in the process. It still needs to pass through multiple committees, followed by the full California Assembly and Senate, before it reaches the governor’s desk. An upcoming hearing in the Assembly’s Committee on Privacy and Consumer Protection on July 16 will offer the first indication of whether SB-53 will be further amended. The Michigan bill is the newest, having been introduced for the first time in late June.

It’s not yet clear how other actors, including other U.S. states, the federal government, and foreign countries, will react if some or all of these proposals become law. But given the size and importance of the states involved, laws they enact are likely to influence what happens beyond their borders. For now at least, state legislators believe they have a window to shape the future of AI governance for their own citizens—and for everyone else.

Notes

  1. The bill states that it draws on the policy principles set out in the California Report on Frontier AI Policy, which was commissioned by Newsom following his veto of SB-1047. One of us (Scott) was a lead writer of that report. The views reflected in this piece are the authors’ own and do not reflect the views of the Joint California Policy Working Group on AI Frontier Models.

  2. Many leading AI developers are signatories to the 2023 White House Voluntary AI Commitments, which include a commitment to publishing information on system capabilities and risks, but it’s not clear whether the Trump administration will hold AI companies to these promises.

  3. The bill doesn’t specify how model developers would ensure that their models weren’t deployed in New York, or how the developer should evaluate the likelihood of the risks involved.

  4. The Michigan bill requires this amount to have been spent in the preceding twelve months; the RAISE Act has no time limit.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.