Last year was a busy one for lawmakers and lobbyists concerned about AI, especially in California, where Gavin Newsom vetoed high-profile AI legislation while signing 18 new AI laws. Mark Weatherford says that 2025 will bring a similar amount of activity, particularly at the state level. Weatherford has seen “the sausage making of policy and legislation” at both the state and federal levels: he has served as Chief Information Security Officer of the states of California and Colorado, and as Deputy Under Secretary for Cybersecurity under President Barack Obama.
He said that his job titles have changed over the years, but his role has always centered on figuring out “how do we elevate this conversation around security and privacy so that we can help influence how policy decisions are made?”
Last fall, he joined the synthetic data company Gretel.
I was eager to ask him what he thinks comes next in AI regulation, and why he believes states will lead the way. This interview has been edited for length and clarity.
That goal of elevating the conversation will probably resonate with many people in the tech industry who have watched congressional hearings on social media and related topics and winced at what some officials do and don’t know. How confident are you that legislators can get the context they need to make informed decisions about regulation?
Well, I’m very confident that they can get there. I’m less sure about the timeline. AI is changing constantly; it amazes me that issues we were talking about just a month ago have already evolved into something else. So I am confident that the government will get there, but they need people to help guide them, staff them, and educate them.
Just this week, for example, the US House of Representatives released a report from a task force it started about a year ago. It’s a 230-page report; I’m wading through it right now. [Weatherford and I first spoke in December.]
[When it comes to] the making of legislation and policy, you have two very partisan groups trying to work together and create something that makes everybody happy, which means everything gets watered down a little bit. It just takes a while, and as we move into a new administration, everything is up in the air.
Your view seems to be that in 2025, we will see more regulation at the state level than at the federal level. Is this correct?
Yes, I believe it. In California, I think Governor [Gavin] Newsom signed 12 pieces of legislation related to AI in just the last couple of months. [Again, it’s 18 by TechCrunch’s count.] He vetoed the big AI bill, which would have required AI companies to invest a lot more in testing and would have really slowed things down.
Yesterday I spoke at the California Cybersecurity Education Summit in Sacramento, and I talked about the AI legislation that is happening across the US, in all the states. I think more than 400 pieces of AI legislation have been introduced at the state level in the past 12 months. There’s a lot going on.
I think one of the biggest concerns, and it’s a concern in technology and cybersecurity generally, but we’re seeing it right now with artificial intelligence, is the need for harmonization. Harry Coker, [the National Cyber Director] at the [Biden] White House, has used the word “harmonization” to [refer to exactly this]: how do we harmonize these rules and regulations so that we don’t have this [situation] where everybody is doing their own thing? That drives companies crazy, because they have to figure out how to comply with all the different laws and regulations across different states.
There’s definitely going to be more activity on the state level, and hopefully there can be some harmonization of these regulations so that there isn’t such a diverse set of rules for companies to follow.
I hadn’t heard that term before, but harmonization was going to be my next question. I imagine most people would agree it’s a worthwhile goal, but what are the mechanisms that make it happen? What incentive do states have to make sure their laws and regulations align with one another?
Honestly, there isn’t much incentive for states to harmonize their regulations. But I can see that the same language is being used in different states, which tells me they are all looking at what the others are doing. Still, I don’t think full harmonization is going to happen.
Do you think other states will follow California’s lead in terms of general approach? A lot of people don’t want to hear it, but California does push the envelope [in tech legislation], which helps other states come along; it does a lot of the heavy lifting and the research behind some of those laws.
Governor Newsom’s 12 bills covered a lot of ground, from pornography to data-driven websites to a variety of other things. California has been very comprehensive in its approach.
My understanding of how it played out is that the legislature passed a number of specific, targeted measures, and then the larger regulation that received the most attention was the one the governor vetoed.
It was a tough decision, and I could see both sides of it. The privacy component was the driving force behind the bill initially, but you also have to consider the cost of these requirements and what they would demand of artificial intelligence companies trying to innovate. There’s a balance to strike.
My expectation is that [in 2025] California will pass something a bit more stringent than what they did [in 2024].
And your impression is that at the federal level there is interest, as with the House report you mentioned, but that it won’t necessarily be a big priority, and that major legislation is unlikely to pass [in 2025]?
I don’t really know. It depends on how much emphasis the [new] Congress puts on it. I think we’re going to see. You read what I read, and it sounds like the focus will be on less regulation. But in many ways, technology, especially in the areas of privacy and cybersecurity, is a bipartisan topic. It’s good for everybody.
I’m not a big fan of regulation; with so many different laws, there’s a lot of duplication and wasted resources. But when the safety and security of society are at stake, as with AI, there is definitely a place for more regulation.
You mentioned that it’s a bipartisan issue. When a vote does split, it’s usually not predictable; it isn’t simply all the Republican votes against all the Democratic votes.
That’s a good point. Geography matters, whether we want to admit it or not, and it’s why California has some of the most progressive legislation compared to other states.
Synthetic data is, of course, the field Gretel works in, but it sounds like you, or at least the company, believe that more regulation will push the industry toward more synthetic data.
Maybe. It’s one of the reasons I’m here: I believe synthetic data is the future of AI. Data is essential to AI, and as the pool of available data shrinks or gets used up, the quality of that data becomes more and more important. There will be a growing need for high-quality synthetic data that ensures privacy, eliminates bias, and takes care of all of those non-technical issues. We believe synthetic data is the answer to that. In fact, I’m 100% certain of it.
I’d love to hear more about how you came to that point of view. I think there are others who recognize the problems you’re describing but believe that synthetic data could amplify whatever biases or issues were present in the original data.
Yes, that’s the technical part of the conversation, and our customers believe we have solved it. There is also the concept of the flywheel: a way to validate that the data does not get worse as more of it is generated. Gretel has solved that problem.
Many Trump supporters in Silicon Valley have warned about AI “censorship”: the various weights, guardrails, and restrictions that companies place around the content generative AI produces. Do you think this will be regulated in the future? Should it be?
When it comes to concerns about AI censorship, the government has a number of administrative levers it can pull, and when there is a perceived risk to society, it will almost certainly take action.
Finding the sweet spot between reasonable moderation of content and restrictive censorship is going to be a challenge. The incoming administration is pretty clear that less regulation is better. We should expect some guidance, whether it comes in the form of formal legislation, executive orders, or other less formal methods such as [National Institute of Standards and Technology] frameworks and guidelines, or joint statements through interagency coordination.
Let’s go back to the question of what good AI regulation would look like. There’s a big spread in the way people talk about AI: it’s either the most amazing technology ever or it’s going to destroy the world. There are so many different opinions about the technology and its potential. How can a single piece, or even multiple pieces, of AI regulation encompass all of that?
We have to be very careful about how we manage the proliferation of AI. We’ve seen the negative effects of deepfakes, and it’s alarming to see kids in high school, or even younger, creating them. I think there is a place for legislation that controls how people can use artificial intelligence in ways that would violate existing laws; we could create new law that reinforces current law but adds the AI component to it.
We — those of us who have worked in the technology sector — have to remember that a lot of the things we consider second nature are not understood at all by my family and friends who don’t work in technology. We don’t want to give the impression that the government is overregulating, but we do want to make sure people can understand these topics.
On the other hand, as you can probably tell from this conversation, I’m giddy about the future of AI. I see so many good things coming. I think we’ll have a few bumpy years as people become more familiar with AI and understand it better. Legislation is going to be a part of that, both helping people understand what AI means for them and putting some guardrails up around it.