New York's first Democratic legislator with a computer science degree wants to revive some ideas from SB 1047, California's failed AI safety bill, in a new bill for his own state that would regulate the most sophisticated AI models: the RAISE Act, an acronym for "Responsible AI Safety and Education."
Assemblymember Alex Bores hopes that the bill, currently an unpublished draft that MIT Technology Review has seen and that is subject to change, will address many of the concerns that kept SB 1047 from becoming law.
At first glance, SB 1047 looked like a modest bill that would pass quietly; it moved through the California statehouse in record time and by a wide margin.
But before it even reached Governor Gavin Newsom's desk in September, the bill had sparked a fierce national debate. Google, Meta, and OpenAI came out against it, along with top congressional Democrats such as Nancy Pelosi and Zoe Lofgren, while Hollywood celebrities like Mark Hamill and Jane Fonda lent it their support.
Newsom ultimately vetoed SB 1047, effectively killing the regulation of so-called frontier AI models not only in California but, given the absence of federal law, anywhere else in the US where the most powerful systems are developed.
Bores now hopes to revive the battle. The RAISE Act's main provisions include a requirement that AI companies develop safety plans before training and deploying their models.
The bill also offers protections to whistleblowers at AI companies. It prohibits retaliation against an employee who shares information about an AI model in the belief that it may cause "critical harm," and it allows such whistleblowers to report directly to the New York attorney general. The bill defines critical harm as the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people.
Alternatively, a critical harm could be a model that causes 100 or more deaths, or at least $1 billion in damages, through an act performed with limited human oversight that would be a crime if committed by a human with intent, recklessness, or gross negligence.
The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to its models. They would also require testing of models to assess risk before and after training, along with detailed procedures for assessing the risks posed by post-training modifications. Some AI systems can be modified easily and cheaply by malicious actors, and a safety plan would need to address how the company intends to mitigate such changes.
The safety plans would be audited by third parties with the technical expertise to test AI models. If violations were found, the bill would allow the attorney general to issue fines and, if necessary, go to court to determine whether unsafe development should be halted.
Building on SB 1047
SB 1047 also included safety plans and external audits, but Bores wants to differentiate his bill. "We focused on what the feedback for 1047 was," he says. "Some of the criticism was in good faith, and the bill could be improved. We've made many changes."
The RAISE Act differs from SB 1047 on a few counts. SB 1047 would have created the Board of Frontier Models to approve updates to the regulations and definitions around these AI models; the RAISE Act would not create any new government body. Nor does the New York bill establish a public cloud computing cluster, as SB 1047 did with CalCompute, which was designed to support projects that develop AI for the public good.
The RAISE Act also lacks SB 1047's requirement that companies be able to halt the operation of their models, a capability sometimes referred to as a "kill switch." Some critics alleged that this shutdown provision would harm open-source developers, since they cannot shut down a model that someone else may already possess.
The RAISE Act sidesteps another fight entirely. SB 1047 described an "advanced persistent threat," a sophisticated bad actor who might try to steal information during model training; the RAISE Act drops that definition and focuses instead on addressing the critical harms caused by covered models.
Focusing on the wrong issue?
Bores' bill uses very specific definitions to make clear what it is and is not about. The RAISE Act does not address some of the risks posed by current AI models, such as bias, discrimination, and job displacement. Like SB 1047, it is squarely focused on the catastrophic risks posed by frontier AI models.
Some AI experts believe this focus is misguided. "We're generally supportive of any efforts to hold large models accountable," says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research. "But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses."
Bores has also worked on other bills addressing current harms caused by AI systems, such as discrimination and lack of transparency. He is clear, however, that this new bill is aimed at mitigating the catastrophic risks of more advanced models. "We're not talking about any model that exists right now," he says. "We're talking about truly frontier models, those at the edge of what we can build and what we know, and there is risk in that."
The bill would cover only models that pass a certain threshold of how many computations their training requires, typically measured in FLOPs (floating-point operations). A covered model is one that requires more than 10²⁶ FLOPs in training and costs over $100 million. For reference, GPT-4 is estimated to have required 10²⁵ FLOPs.
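For a rough sense of what those numbers imply, here is a minimal back-of-the-envelope sketch in Python. It assumes the widely used approximation that transformer training compute is about 6 × parameters × training tokens; the parameter and token counts below are illustrative guesses, not figures from the bill or from any lab.

```python
# Back-of-the-envelope check of whether a training run would cross the
# RAISE Act's 10^26-FLOP coverage threshold. Uses the common rule of thumb
# that transformer training compute is roughly 6 * parameters * tokens.
# All model sizes below are illustrative assumptions, not official figures.

THRESHOLD_FLOPS = 1e26  # compute threshold for a "covered model" in the draft


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute as 6 * N * D (a standard rule of thumb)."""
    return 6 * n_params * n_tokens


# A hypothetical GPT-4-scale run (~1.8T parameters, ~1T tokens) lands near
# the 10^25 FLOPs estimate cited above; a run with 10x the data crosses 10^26.
runs = {
    "GPT-4-scale run (assumed sizes)": training_flops(1.8e12, 1e12),
    "10x more training data": training_flops(1.8e12, 1e13),
}

for name, flops in runs.items():
    print(f"{name}: {flops:.2e} FLOPs -> covered: {flops > THRESHOLD_FLOPS}")
```

By this rough math, a run at GPT-4's estimated scale sits about an order of magnitude below the bill's threshold, which is consistent with Bores's point that the bill targets models beyond anything that exists today.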
This approach could draw fire from industry. "While we cannot comment on legislation that has not yet been made public, we believe effective regulation should focus on specific applications rather than broad model categories," said a spokesperson for Hugging Face, a company that opposed SB 1047.
Early days
The bill is still in its infancy, so it is likely to see many edits, and no opposition has yet emerged. But there may be lessons to learn from the fight over SB 1047. "There is a lot of disagreement in this space, but I believe debates around future legislation would benefit from more clarity about the severity, likelihood, and imminence of harms," says Scott Kohler, who tracked the development of SB 1047.
"I have no general objection to the idea," responded Edward Ra, a Republican assemblymember who hasn't yet seen a draft of the new bill. "We expect businesses to act as good corporate citizens, but sometimes you have to put that into writing."
Ra and Bores co-chair the New York Future Caucus, which brings together legislators aged 45 and under to tackle pressing issues that will affect future generations.
Scott Wiener, the California state senator who sponsored SB 1047, is pleased to see that even though his bill failed, it is inspiring new legislation and discourse. "The bill sparked a conversation about whether we should trust AI labs to make the right decisions," he says. "Some will, but we know from past experience that some won't. That's why some basic regulation of such a powerful technology is important."
He has plans to reignite the battle: "We're still not done in California. That work will continue next year. I'm optimistic about California being able to accomplish some good things."
Some believe the RAISE Act highlights a contradiction: many industry players insist they want regulation, yet when regulation is proposed, they fight it. "SB 1047 became a referendum on whether AI should be regulated at all," Brennan says. "If this bill comes out, we can expect to see a lot of what we saw with 1047: a massive lobbying response from industry against even the lightest regulation."
Wiener and Bores both want to see regulation at the national level, but in the absence of federal legislation they have taken the fight upon themselves. It may seem strange for states to take on such consequential reforms, but California is home to the top AI companies, and New York has the third-largest state economy in the US; both are well positioned to lead the conversation on regulation.
Kohler says there is uncertainty about the direction of federal policy, given the upcoming transition and the role of Congress: "It's likely that states will continue stepping up in this field."
What is Wiener’s advice to New York legislators who are entering the AI regulatory arena? “Buckle up and prepare.”