A California bill aimed at establishing new safety regulations for artificial intelligence (AI) has caused a rift in Silicon Valley and drawn prominent federal lawmakers into an unusual intervention in state politics.
California Senate Bill 1047, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would mandate that powerful AI models undergo safety testing before their public release and hold developers accountable for significant damages caused by their models.
The bill needs to be approved by the state legislature by the end of the week to be presented to California Governor Gavin Newsom.
Would the legislation hinder innovation?
Opinions are divided among major technology companies, AI startups, and researchers on whether the legislation would hinder innovation in this rapidly evolving field or establish necessary safeguards.
Billionaire tech entrepreneur Elon Musk, who owns an AI company named xAI, expressed support for SB 1047, describing it as a difficult decision and recognizing that his stance might upset some individuals. He suggested that California should likely pass the AI safety bill, citing his long-standing advocacy for AI regulation similar to other potentially risky products and technologies.
Conversely, former Speaker Nancy Pelosi and several other California legislators have opposed SB 1047. In a statement earlier this month, Pelosi noted that many in Congress perceive the legislation as well-intentioned but misguided. She emphasized that AI originates from California and advocated for legislation that sets a national and global standard, highlighting the need to empower small entrepreneurs and academia rather than large tech companies.
Democratic lawmakers urge Governor to veto the bill
Eight California Democrats—Reps. Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán, and J. Luis Correa—sent a letter to Governor Newsom urging him to veto the bill if it reaches his desk.
They mentioned that it is somewhat unusual for them, as sitting Members of Congress, to provide views on state legislation. However, they expressed serious concerns about SB 1047 and felt compelled to share these concerns with California state policymakers.
The lawmakers contended that the methods for understanding and mitigating AI risks are still in their infancy, highlighting that the National Institute of Standards and Technology (NIST) has not yet issued the guidance companies would be required to follow. They also noted that the bill’s compute thresholds would almost certainly be obsolete by the time it goes into effect.
They criticized the legislation for focusing too much on extreme misuse scenarios and hypothetical existential risks rather than addressing real-world risks and harms such as disinformation or deepfakes.
The lawmakers expressed concern about the potential negative impact of the legislation on California’s innovation economy without any clear public benefit or a solid evidentiary basis, stating that high-tech innovation is the economic engine driving California’s prosperity.
Bill properly fine-tuned?
California state Sen. Scott Wiener, who introduced the legislation, has argued that it would only affect the largest AI developers, not small startups, and requires them to perform safety tests they have already pledged to complete. In response to Speaker Emerita Nancy Pelosi’s opposition, Wiener stated that he has great respect for her but strongly disagrees with her statement.
He further remarked that when technology companies promise to conduct safety testing and then resist oversight, it calls into question whether industry self-regulation can be relied upon to protect humanity.
Wiener also pointed out that legislators have amended the bill in response to industry concerns, including limiting the California attorney general’s ability to sue developers before actual harm has occurred and setting a threshold for fine-tuned open-source models.
Under the amendments, open-source models fine-tuned at a cost of less than $10 million will not be covered by the legislation, a positive development for the open-source community.
Open-source community writes to Sen. Scott Wiener
The bill has undergone several significant changes since its initial proposal, influenced by strong industry feedback, including from AI companies like Anthropic.
The bill no longer includes criminal penalties, though civil penalties remain. The attorney general can only seek civil penalties after harm has occurred. Additionally, the proposed Frontier Model Division, a new regulatory body specifically for AI, has been removed. The legal standard for ensuring developer compliance has shifted from providing “reasonable assurance” of safety to exercising “reasonable care,” a lower threshold. The bill now covers models that cost at least $10 million to develop and fine-tune, which is a smaller group than initially included.
Mozilla, EleutherAI, and Hugging Face sent a letter to Wiener in early August expressing concerns about SB 1047’s potential impact on the open-source community, and the groups noted that not all of their concerns were addressed in the amendments.
Anthropic, whose feedback led to several amendments, also stated that the latest version of the legislation was a compromise between their suggested version and the original bill. According to a letter from Anthropic CEO Dario Amodei to Newsom, the new SB 1047 is significantly improved, to the extent that its benefits likely outweigh its costs. However, he mentioned that there are still some aspects of the bill that remain concerning or ambiguous.
OpenAI, Google and Meta oppose the bill
In contrast, OpenAI opposed the bill last week. Joining tech giants like Google and Meta in their opposition, OpenAI’s chief strategy officer Jason Kwon argued in a letter to Wiener that regulation on frontier AI models should come from the federal government and warned that the bill could stifle innovation and harm the U.S. AI ecosystem.
The bill has also divided key figures in the field. Stanford AI researcher Fei-Fei Li argued in an op-ed that the “well-meaning” legislation could have unintended consequences for the country. She wrote that SB 1047, if passed into law, would harm the emerging AI ecosystem, particularly sectors already disadvantaged compared to today’s tech giants, such as the public sector, academia, and smaller tech companies. Li emphasized that because California is a leading force in AI, the state’s actions would impact the rest of the country.