A Sensible Approach to State AI Policy

Neil Chilson and Adam Thierer

Two years ago, we warned that a growing patchwork of artificial intelligence (AI) laws and “algorithmic fairness” mandates could soon threaten digital technology innovators with layers of new regulatory red tape.

The threat is upon us. 

According to one legislative tracking system, states have passed 80 AI-related measures, and another 762 are pending in 45 states. That total does not include the many AI-related measures that cities and counties have advanced.

This represents an unprecedented level of policy interest in preemptively regulating a new information technology, and it stands in stark contrast to the approach the U.S. adopted for the Internet. Thanks to wise, bipartisan choices that policymakers made a quarter century ago, a robust national marketplace of digital speech and commerce developed, and American innovators became world leaders in online services and advanced computational systems.

But now the mother of all state regulatory patchworks threatens the next great technology revolution. The sheer volume of new mandates would burden new AI startups with confusing, costly compliance requirements that would discourage entrepreneurialism and investment in next-generation digital technology. To avoid that result, governments need a better policy vision to address concerns about new AI applications while also ensuring that the U.S. continues to produce cutting-edge innovations to compete against China and the rest of the world.

While Congress still needs to step up and craft a sensible national policy vision for AI, state policymakers will continue to want to “do something” to show leadership on this front. The American Legislative Exchange Council (ALEC) recently formulated a model state AI Act that lawmakers across the nation could use to engage constructively on this complex, fast-moving set of issues.

The ALEC model bill has three major components. First, it recommends that state lawmakers create a dedicated office to coordinate AI policy matters and, specifically, to conduct a thorough examination of existing laws and regulations that likely already cover the concerns being raised. This process is meant both to “identify regulatory barriers to AI development, deployment, and use” and to determine whether there are “regulatory gaps where existing law is insufficient to prevent” potential harms. The new state office could then reform or remove barriers to innovation and formulate new policies where needed to fill gaps.

Second, the ALEC model would require states to take two inventories. The first would have state agencies create and submit, within a specified timeframe, a detailed accounting of how they are using artificial intelligence technologies, including the benefits and risks of those uses. The second would catalog existing regulations that affect AI technologies, including any barriers that should be removed or gaps that need to be filled.

Third, the ALEC model bill encourages states to create an AI “Learning Laboratory” program that invites technology innovators to work with state government, through partnerships that mitigate regulatory risk, to foster new AI applications and inform regulatory approaches. This is similar to a provision in a law passed by Utah, where the state’s AI Learning Laboratory is already up and running.

As part of this process, the model legislation would allow innovators to apply for “regulatory mitigation agreements,” which would reduce certain requested regulatory burdens for a specified duration, scope, and number of users. This “sandbox” would help innovators reduce regulatory risk for experiments and help the state learn which regulations are and are not necessary.

The ALEC model represents a welcome alternative to the heavy-handed AI regulatory measures that have recently advanced in California and Colorado. In early September, the California legislature sent 21 bills to the Governor’s desk in a single week, most of them quite interventionist. The most controversial measure, SB-1047, would have regulated AI at the model level, requiring a range of reports and submissions before companies could even start training certain types of models. The bill went through several iterations that improved it relative to its authoritarian origins, eliminating the “Frontier Model Division” and reducing the risk of onerous pre-approval processes, among other changes. Still, the bill provided plenty of opportunities for burying the AI industry in red tape. Fortunately, Governor Newsom vetoed it. Its supporters, however, have pledged to bring it back in some form, perhaps even as a ballot initiative.

On the other side of the Rockies, Colorado Governor Jared Polis (D) signed the first generally applicable state AI law (SB24-205), although with serious reservations.  He worried that the measure would “create a complex compliance regime for all developers and deployers of AI” through “significant, affirmative reporting requirements.” Despite being “concerned about the impact this law may have on an industry that is fueling critical technological advancements,” Polis signed it. But he pleaded with Congress to craft “a needed cohesive federal approach” that would “limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies for consumers.”

Congress is unlikely to craft that federal approach any time soon, however. This leaves something like the ALEC model AI legislative framework as the best option for states to ensure those life-saving AI innovations can come about.

Neil Chilson is Head of AI Policy at the Abundance Institute, and Adam Thierer is a Senior Research Fellow at the R Street Institute.

The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the author(s). To join the debate, please email us at [email protected].
