
States should follow California’s lead in AI regulation

By Jeremy Straub - InsideSources.com | Dec 11, 2024

California Gov. Gavin Newsom recently vetoed a measure seen as a potential blueprint for national AI legislation. Newsom’s veto should serve as a national model for dealing with similar regulations.

With more than 450 pieces of AI-related legislation being tracked by the National Conference of State Legislatures, numerous states are facing, or will soon face, challenges similar to California's, which has 40 AI-related bills in its legislative pipeline.

That averages out to more than nine AI laws per state. These measures range from Alaska's failed legislation, which sought to establish an AI task force, to Rhode Island's, which aims to limit AI emissions. Other laws relate to deepfakes and fake voice use, elections, workforce considerations and numerous aspects of everyday life.

What is clear is that we are likely soon to face a national patchwork of AI-related laws with the potential to create herculean compliance challenges for even the largest AI developers and users. Firms, nonprofits, educational organizations and others may fall under regulations due to their presence in a jurisdiction. However, laws with long-arm provisions, such as those in the European Union's General Data Protection Regulation and the California Consumer Privacy Act, which were developed to regulate data privacy, not specifically AI, may make this even more complex by attempting to give states' laws extraterritorial application.

Newsom cited concerns “of curtailing the very innovation that fuels advancement in favor of the public good.” These concerns should resonate with every governor and every state legislature.

This is not to say that AI should be left unregulated. Prudent AI regulations will have several key characteristics:

First, they won’t duplicate existing laws to have an AI-related law in an area. Some laws, which may be written to apply only to a human person, may need to be changed to ensure applicability to AI systems and their users.

Second, AI regulations should be embodied within other regulations on similar topics. If an AI-specific employment regulation is needed, it should be included within employment regulations so that it is readily locatable there and can be updated in lockstep with similar rules.

Third, AI regulations should be written to apply only to the technology's use. Attempts to regulate development itself, or output irrespective of its use, will likely have freedom-of-speech implications and risk impairing technology development and driving firms out of the marketplace. European Union regulations, for example, have led firms to withhold new technologies from that marketplace.

Finally, laws should avoid extraterritorial reach. Extraterritorial laws can create confusion and compliance difficulties for firms facing conflicting laws regarding the same conduct. In the United States, such laws may also run afoul of the constitutional assignment of interstate commerce regulation to the federal government.

Newsom noted that “adaptability is critical as we race to regulate a technology still in its infancy” and acted to protect the “pioneers in one of the most significant technological advances in modern history” in his state. While other states may not house the “32 of the world’s 50 leading AI companies” that Newsom has identified within California, the need to avoid damaging this industry is evident. It is an area where Newsom’s actions show that there can be transparent, bipartisan cooperation to protect our national AI capabilities state by state.

Jeremy Straub is the director of the North Dakota State University Cybersecurity Institute, a Challey Institute senior faculty fellow, and an associate professor in the NDSU Department of Computer Science. He wrote this for InsideSources.com.