
(Bloomberg/Emily Birnbaum) — Tech companies including OpenAI, Meta Platforms Inc. and Alphabet Inc.’s Google are revving up to block US states from regulating their fast-moving and lucrative artificial intelligence businesses.
The tech giants are pushing both the AI-friendly White House and Republican-led Congress after five states, including Texas and California, passed significant AI-related laws.
“Lawmakers should be inviting innovation, not driving it away from the state,” said Kouri Marshall, director of state government relations with tech trade group Chamber of Progress, which counts Andreessen Horowitz, Google, Apple Inc. and Amazon.com Inc. among its members.
While few of the farthest-reaching state rules have gone into full effect, industry advocates want regulation to target how AI is used rather than how new AI is developed. The first companies to make AI breakthroughs potentially stand to gain trillions of dollars in market capitalization.
“My hope is the focus moves away from trying to regulate development to regulating use” by individual customers, said Matt Perault, head of artificial intelligence policy at venture capital firm Andreessen Horowitz.
Republicans in Congress tried and failed in June to attach a 10-year ban on enforcing state regulations onto President Donald Trump’s tax legislation.
Undeterred, the companies successfully pushed White House advisers to include a version of the federal moratorium on state regulation in Trump’s AI plan in July. The plan gives non-binding guidance to federal agencies, including a directive that AI-related federal funding shouldn’t go to states with “unduly restrictive” AI regulation.
Tech company representatives also are trying again to attach the 10-year ban to future legislation, said industry lobbyists who spoke on condition of anonymity.
As AI use grows, taking on roles such as assessing job applications, identifying criminal suspects, handling medical claims and creating images nearly impossible to distinguish from genuine photos or video, state lawmakers are eager to impose some rules of the road. Federal legislation has largely stalled amid partisan disagreements and pushback from tech-friendly lawmakers.
Some states are considering legislation that would require companies to conduct audits ensuring their systems don’t harm consumers, disclose when people are interacting with AI and bar the companies from copying artists’ creative work.
But tech companies and venture capital firms backing AI startups fear any regulation could slow their growth in a new sector. And if they have to be regulated, they’d prefer to avoid navigating 50 different state rules.
Hope Anderson, a privacy and AI lawyer with White & Case, said the speed of technological change makes it “tricky” for the law to keep up, and more so if states enact a “patchwork” of differing regulations.
State AI Laws
Some 500 laws are under consideration around the US that would affect private companies developing and deploying AI, according to Goli Mahdavi, an AI lawyer in San Francisco at Bryan Cave Leighton Paisner LLP.
Yet only five states have laws on the books that significantly affect how tech companies do business. Other states have passed more limited legislation regulating how businesses can use AI for specific purposes, such as employers screening job applications or health providers making medical diagnoses.
Colorado’s AI law goes the farthest, requiring developers and companies to provide extensive documentation and conduct tests to ensure their systems don’t discriminate against users based on protected characteristics like race or gender. Yet tech industry lobbyists persuaded Governor Jared Polis and state lawmakers to reopen the law to amendments during a special session scheduled to begin Thursday. Polis said in a statement he’s concerned about the law’s impact on “critical technological advancements.” Lawmakers are considering a total repeal of the law or reducing the number of AI systems it applies to.
California’s narrower law requires AI developers to publicly post information about the data used to train AI systems and notify users when posts, images or videos are AI-generated.
Texas’s law restricts the development and deployment of AI systems for behavioral manipulation, discrimination, or the creation and distribution of child pornography.
Tennessee’s “Elvis Act” prohibits the use of AI to mimic a person’s voice without their permission, while Utah requires developers of “high-risk” AI systems to disclose that individuals are interacting with generative AI, not a human.
“Most of the concern flows from the potential for more laws” rather than existing ones, said Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals.
States to Watch
The New York legislature this summer passed a public safety bill that would require the largest tech companies to reduce the risk of their products causing “critical harm.” It’s now with Governor Kathy Hochul, who has not said whether she plans to sign the legislation into law.
California Governor Gavin Newsom in 2024 vetoed the most comprehensive effort yet to regulate AI, which would have compelled companies to test whether their AI models could lead to mass death, endanger public infrastructure or enable cyberattacks. California lawmakers are considering narrower measures, including a requirement that companies notify individuals when automated systems make decisions about them and greater oversight of “high-risk” automated systems.
Filling Vacuum
The states are acting to fill the vacuum left by the federal government. “There’s a desire for states to regulate in this space as it’s increasingly clear that there will not be a federal omnibus AI law,” said Mahdavi.
And in some cases, the industry’s drive to use Congress to override state laws is stirring local lawmakers to move more quickly.
“I’ll be damned if the federal government is going to prevent me from protecting citizens within my state,” said South Carolina Representative Brandon Guffey, who chairs a state House subcommittee on AI. “People are scared they’re going to put in this moratorium so they’re trying to get bills passed before they do.”
©2025 Bloomberg L.P.