Pages 278-279 of the OBBB strip states of the right to regulate AI for 10 years.
This is the debate: unified national standards versus a patchwork of 50 different state experiments. (Also: this article is in no way a discussion of the overall OBBB, its merits or lack thereof, its sponsors, costs, or other concerns; it is limited to a debate on the section relating to regulating AI.)
Over the past two decades, I've led innovation teams at Intel working on digital health platforms, and later, with Modern Edge Inc., collaborated with Samsung on designing online digital health platforms. I've also worked with primary insurance providers to create products for Massachusetts's 2006 health reforms, so I've experienced both sides of this federalism debate. The Massachusetts experience demonstrated how state innovation works: the Massachusetts model became the template for the Affordable Care Act. Having worked in AI and metadata management since 2015, I've watched the recent glut of state regulation bills emerge. Over the last two years at EPIRA Artificial Intelligence Solutions, we have had to address compliance regulations spanning GOVTECH and FINTECH, as well as privacy and data handling. The pace of legislative change is increasingly difficult to navigate, and much of it is of questionable value to the public. Scaling state solutions nationally also underscored for me the complexity of coordinating across jurisdictions.
Nevertheless, the case for state-led regulation is compelling. The Tenth Amendment reserves broad "police power" to the states for public health and safety. Justice Brandeis's "laboratories of democracy" concept has proven effective: state experiments in airline deregulation, welfare reform, and healthcare all became national policy. Today, states pioneer everything from climate laws to voting reforms. It makes sense for California, with its massive population and correspondingly large number of automobiles, to have different regulations from Wyoming, for example. Massachusetts proved healthcare reform was possible; other states are tackling AI harms like deepfake pornography and algorithmic discrimination that federal lawmakers have ignored.
However, with nearly 1,000 AI bills across states and more on the way, we are creating “rising legal uncertainty” for companies navigating conflicting rules. One of California’s proposed AI laws would have controlled national development, as most AI models originate there. Small companies can’t afford 50 different compliance regimes; only tech giants benefit from regulatory fragmentation. More troubling, well-meaning state AI laws often create costly compliance theater without meaningful protection—for example, requiring companies to provide “explainable AI” for unexplainable neural networks or mandating algorithmic audits that large companies can afford to game with superficial testing. Inevitably, the public bears the cost of regulatory compliance, fees, and fines, with little net effect or benefit.
History also shows the benefits of well-designed federal approaches. Federal coordination created the Interstate Highway System and enabled the internet economy through the establishment of uniform standards. The 1998 Internet Tax Freedom Act prevented states from "balkanizing" e-commerce, contributing to massive digital investment and economic growth during the late 1990s tech boom. Conversely, energy deregulation without proper federal oversight enabled Enron's manipulation schemes, costing California over $40 billion through artificial shortages and market gaming, a stark lesson in the consequences of removing safeguards without federal backstops.
Meanwhile, China graduates four STEM students for every one in America. While we debate jurisdictional authority, they’re building unified AI capabilities. A “time-limited moratorium” would create space for thoughtful federal standards while existing consumer protection laws still cover harms.
My experience with both state and federal regulatory strategies leads me to the opinion that neither pure state autonomy nor blanket federal control works in every case. Success requires knowing when each approach serves the public interest. For AI—a technology that crosses every border and affects interstate commerce—federal leadership makes sense.
Whether we can apply intelligent regulation to AI is a separate question. What we must decide now is whether America will lead with coherent national standards or fragment into 50 different experiments while our competitors pull ahead.