California's AI safety bill, which requires a 'kill switch' for AI models, is criticized for driving AI startups out of the state and harming open source models



Silicon Valley venture capitalists and tech workers are sounding the alarm that a proposed 'AI Safety Bill' under consideration in the California Legislature could force some of the biggest AI companies to change the way they do business.

Silicon Valley uproars over California AI safety bill

https://www.ft.com/content/eee08381-962f-4bdf-b000-eeff42234ee0

Tech Companies Challenge California AI Safety Legislation - WinBuzzer
https://winbuzzer.com/2024/06/08/tech-companies-challenge-california-ai-safety-legislation-xcxwbn/

Silicon Valley on Edge as New AI Bill Advances in California
https://www.pymnts.com/artificial-intelligence-2/2024/silicon-valley-on-edge-as-new-ai-regulation-bill-advances-in-california/

The misguided backlash against California's SB-1047
https://garymarcus.substack.com/p/the-misguided-backlash-against-californias

Uproar in California tech sector over proposed new bill
https://www.cryptopolitan.com/uproar-in-california-tech-firms-over-ai-bill/

Silicon Valley Is On Alert Over An AI Bill in California - Bloomberg
https://www.bloomberg.com/news/newsletters/2024-06-06/silicon-valley-is-on-alert-over-an-ai-bill-in-california

SB 1047, commonly known as the 'AI Safety Bill' and currently under debate in California, is an AI-related bill introduced by California State Senator Scott Wiener. The bill aims to establish 'common sense safety standards' for companies that develop large AI models exceeding certain size and cost thresholds. The bill passed the California Senate in May 2024 and is now before the State Assembly.

The AI Safety Bill requires AI developers to implement a 'kill switch' that can shut down AI systems to prevent models from causing significant harm. It also requires developers to disclose their compliance efforts to a 'Frontier Model Division' to be newly established within the California Department of Technology. Companies that fail to comply may be sued and face civil penalties.



At the time of writing, there is no federal AI law in the United States, and Bloomberg notes that states are increasingly advancing their own regulations. Since major AI companies such as OpenAI and Anthropic are headquartered in California, Bloomberg points out that if the AI Safety Bill is enacted, 'companies at the forefront of the AI industry could be directly affected.'

Wiener, the architect of the AI Safety bill, told Bloomberg, 'It would be great if Congress would step forward and pass reasonable, strong AI, innovation and safety legislation. This kind of legislation should be implemented at the federal level. But on data privacy, social media, net neutrality and even tech issues that have strong bipartisan support, it's been very difficult, and sometimes impossible, for Congress to act.'

The AI Safety Bill is supported by Geoffrey Hinton and Yoshua Bengio, who have been vocal about the possible existential threats of AI. Hinton praised the bill for 'taking a very sensible approach to balancing those concerns.' However, the bill has also attracted significant backlash.

Experts opposed to the AI Safety Bill argue that it could impose an unworkable burden on open source developers, who make their code available for anyone to review and modify, by requiring them to guarantee that their models are not misused by malicious actors. There are also concerns that the Frontier Model Division to be established under the bill could be granted excessive powers.



Rohan Pandey, founder of Reworkd AI, an open source AI startup, said of the AI Safety Bill, 'No one thought this would pass. It seems pretty ridiculous. Maybe the rules will make sense in a few years' time when we know the standards for determining whether an AI model is safe or unsafe. But GPT-4 only came out a year ago. It's way too early to jump into legislation.'

Martin Casado, a general partner at venture capital firm Andreessen Horowitz, said startup founders concerned about the AI Safety Bill have approached him asking whether they should leave California.

The startup community is focused on the question of whether AI developers should be held liable when people abuse their systems. At stake is Section 230 of the Communications Decency Act, enacted in 1996, a key legal shield that exempts platforms from liability for content created by users on their platforms. Whether this law applies to AI tools is the subject of heated debate.

Wiener, the architect of the AI Safety Bill, said he is amending the bill to address some of these concerns: revising the requirements for covered AI models, clarifying that open source developers are not liable if their technology is misused, and clarifying that the shutdown requirement does not apply to open source models. 'I'm a big supporter of AI. I'm a big supporter of open source. I'm not trying to stifle AI innovation. But I think it's important that people be mindful of safety as these developments happen,' Wiener said, indicating he is prepared to make further changes to the bill.

Wiener also argued that hardline AI advocates with large online followings have 'begun to make loud claims about the AI Safety Bill, sometimes spreading very inflammatory and inaccurate information.' In particular, Wiener said the bill contains no provision requiring companies to obtain government permission to train AI models, and emphasized that the liability risk it creates is 'extremely limited.'



Casado also criticized the drafting process behind the AI Safety Bill, arguing that it 'reflects the views of some "outsider" people who have pessimistic concerns about the long-term risks of AI to humanity, but does not represent the consensus of the technology industry.'

In response, Wiener noted that the AI Safety Bill grew out of dinners and meetings he has held with AI industry insiders over the past 18 months, including people from OpenAI, Meta, Google, Microsoft, and Andreessen Horowitz, where Casado works. He maintains that the bill was 'drafted in a very open environment.'

The AI Safety Bill is scheduled for a vote in the state legislature by the end of August 2024, and Wiener said, 'I'm optimistic that there is a path to submit it to the governor.' Meanwhile, California Governor Gavin Newsom has said of AI regulation, 'If we over-regulate, if we over-indulge, if we chase the shiny object, we could put ourselves in a perilous position,' signaling a cautious stance toward excessive regulation.

in Software, Posted by logu_ii