The war is on for Congress’ AI law ban


“This is absurd.”

That’s all Amba Kak, co-executive director of the AI Now Institute, recalls thinking when she first heard about the proposed moratorium on state AI regulation tucked into President Donald Trump’s “big, beautiful bill” — the same funding bill that had Trump and Elon Musk recently trading barbs online. According to the bill’s text, no state “may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for a 10-year period, which would start the same day the bill is passed.

The moratorium was worse than doing nothing at all to regulate AI, she remembers thinking. Not only was the proposed rule stopping such regulation in the future, but it was also “rolling back the very few protections we have.” It could scuttle measures covering everything from data privacy to facial recognition in Washington, Colorado, and other states.

“It’s turning the clock back, and it’s freezing it there,” Kak says. Days after she learned about the moratorium, she was called to testify about it at the House Committee on Energy and Commerce.

The AI moratorium passed without issue as part of the House bill, and it’s been preserved in the current Senate version with a few changes. But for weeks, it’s been making waves among Democrats and Republicans alike, and every day brings new developments and draws new battle lines. That’s true not just in Washington, but also for the AI industry and its critics — who are working out what the rule could mean for business and society at large.

The basic contours of the debate are simple. Many Republicans and tech leaders — including OpenAI CEO Sam Altman — think the moratorium will cut through patchwork state AI regulations that could hamper US companies competing against rivals like China’s DeepSeek. Many Democrats and AI researchers, on the other hand, believe it will kneecap a broad range of tech regulation as states wait for federal action and pave the way for AI systems even less controllable than the ones we have today. Within those factions, things are a little more complicated.

To Jutta Williams, who has worked in regulatory compliance and responsible AI at tech companies like Google, Reddit, X, and Meta, a “patchwork quilt” of regulation “just confuses the issues and makes it impossible to do anything.” Now an advisor to startups focused on social good, Williams says she has spent 25 years working on compliance in all types of industries, mostly in the data governance space, which is similar to AI in many ways. When regulation is done “in a fragmented sort of way,” she says, “the net net is a lot of cost, a lot of internal friction, a lot of confusion, and no progress.”

Williams says that although the federal government “has not done their job in managing interstate issues,” states should be focused on societal components and the things they can control instead of regulating AI businesses.

OpenAI lobbied publicly for a moratorium on state laws, citing SB 1047, a California AI safety bill vetoed by Gov. Gavin Newsom last year. Google, Microsoft, Meta, and Apple have stayed largely quiet and did not respond to requests for comment. Perplexity AI spokesperson Jesse Dwyer offered a relatively neutral statement with a stray shot at the hybrid nonprofit/for-profit OpenAI. “Some model builders have already shown us they’re going to do whatever they want, regardless of regulation—policies governing non-profits for instance—whereas we’re confident our commitment to accurate and trustworthy AI positions us well in any regulatory environment,” Dwyer told The Verge. And Josh Gartner, head of communications for Cohere, told The Verge that “the most effective way to promote the healthy and safe development of AI is a federal standard that provides a consistent and predictable regulatory framework.”

There’s one example of high-profile opposition from inside the industry. On June 5th, Anthropic CEO Dario Amodei published an op-ed in The New York Times arguing that while he understood the motivations behind the proposed moratorium, it’s “far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Amodei called for a federal transparency standard instead, mandating that leading AI companies be required “to publicly disclose on their company websites … how they plan to test for and mitigate national security and other catastrophic risks. They would also have to be upfront about the steps they took, in light of test results, to make sure their models were safe before releasing them to the public.”

“I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off,” Amodei wrote. “Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.”

For Kak, Amodei’s op-ed was important. It was a welcome change to see an industry player stick their neck out and acknowledge the need for any kind of regulation. But, she says, “We have no such federal standard, and the last 10 years do not inspire confidence that any [such] standards are coming.”

“In this debate, we’re going to hear a lot, especially from industry players, around punting the question of regulation to the federal level,” Kak says, adding, “It can’t be industry players acting as the messiahs of dictating that regulatory agenda because there’s a clear conflict of interest. That needs to come from industry-independent, public perspectives.”

As for the popular argument that state-level AI regulation will hurt AI startups, Kak says that’s a “smokescreen.”

While some proponents of the moratorium have floated a claim that there are about 1,000 state laws regulating AI, that’s not the case. Although more than 1,000 pieces of AI-related legislation have been introduced so far in 2025, just over 75 have been adopted or enacted, according to Chelsea Canada of the National Conference of State Legislatures (NCSL). In 2024, out of 500 proposed AI bills, just 84 were enacted and 12 resolutions or memorials were adopted.

What we have now, Kak says, is “very far from a patchwork of U.S. regulation — it’s a straightforward laundry list of targeted rules that get at the worst actors in the market.”

Another key concern, according to Kyle Morse, deputy executive director of the Tech Oversight Project, is that the provision bans a broad range of laws and may prohibit state-level regulation on any sort of “automated decision system.”

“It would apply to not just AI-specific laws — it would apply to state-level consumer protection laws,” Morse said. “We’re talking about civil rights laws. We’re talking about so much more than just AI companies’ abilities to do their jobs.”

So many businesses are now billing themselves as operating AI services, Morse said, that it’s possible that under the moratorium, “companies in healthcare or housing can claim to be AI companies [to get out of regulation], where AI is integrated into their business but it’s not their core business model.”

The rule’s fate in the Senate seems uncertain. On Tuesday, Sen. Edward J. Markey (D-MA), a member of the Commerce, Science, and Transportation Committee, announced he plans to file an amendment to the bill that would block the moratorium.

“Despite the overwhelming opposition to their plan to block states from regulating artificial intelligence for the next decade, Republicans are refusing to back down on this irresponsible and short-sighted provision,” he said in a statement.

And last Tuesday, he delivered remarks on the Senate floor calling the provision a “backdoor AI moratorium” that “is not serious, it’s not responsible, and it’s not acceptable.”

“They are choosing Big Tech over kids, families, seniors, and disadvantaged communities across this country,” Markey said. “We cannot allow that to happen. I am committed to fighting this 10-year ban with every tool at my disposal.” That same day, a bipartisan group of 260 state lawmakers from all 50 states sent a letter to Congress calling for opposition to the moratorium. Last Wednesday, Americans for Responsible Innovation and a group of policy and advocacy nonprofits announced a campaign to mobilize voters against the moratorium; the effort gathered 25,000 petitions opposing the provision in two weeks, according to ARI.

Rep. Marjorie Taylor Greene (R-GA) publicly opposed the bill due to the moratorium after voting in its favor, and some GOP senators have said they plan to vote against its current text.

On Thursday, the Senate Commerce, Science, and Transportation Committee proposed alternative language for the moratorium, shifting from a blanket ban on state AI regulation to a condition that states forgo regulating AI if they want to receive federal broadband funding.

If the Senate does make changes to the bill’s language, the House will need to vote on it again. Proponents would also need to overcome the Byrd Rule, which bars provisions extraneous to the budget from being included in reconciliation bills. For those trying to pass the bill, Kak said, “the emphasis right now is on finding a way to have a sweeping rollback on state legislation survive the Byrd Rule, given that on the face of it, this proposal would never survive.”

The moratorium, if passed, puts regulating the AI industry at large — a sector that’s predicted to surpass $1 trillion in revenue in less than seven years — squarely in the hands of a Congress that has railed against Big Tech but failed to pass everything from a digital privacy framework to an antitrust overhaul.

“If past is prologue,” Morse said, “Congress has struggled to get meaningful safeguards and protections over the finish line, and a moratorium isn’t the way to do that.”
