Amba Kak and Sarah Myers-West, Co-Executive Directors, AI Now Institute
The recent proposal for a sweeping moratorium on all state AI-related legislation and enforcement flies in the face of common sense: We can’t treat the industry’s worst players with kid gloves while leaving everyday people, workers, and children exposed to egregious forms of harm. The industry’s claim that state laws amount to a “burdensome patchwork” of unwieldy and complex rules is not grounded in fact.
What the record shows is that bipartisan state legislatures have passed or are considering reasonable, targeted, easily administrable rules aimed at AI applications that are patently unsafe and simply should not be allowed at all. Each of these rules has been hard fought: state lawmakers responding to egregious harms faced by their constituents have been opposed tooth and nail by armies of Big Tech lobbyists, who in many cases whittled the bills down to the bare minimum.
If anything, states are just tinkering at the edges of the problem; there is much more to do to go after the root causes. Two-thirds of US states have laws against AI-generated deepfake porn (most recently Montana, just ten days ago). Half of US states have laws targeting AI-generated deceptive election materials. At least eleven (from Arizona to Connecticut) have introduced bills regulating health insurance companies’ use of AI to deny claims. Tennessee and California have both enacted laws protecting artists against unauthorized use of their likeness. Other bills focus on baseline transparency: requiring disclosures to people affected by algorithmic decisions in areas including healthcare, employment, housing, and education, so that they have a fair understanding of when and how these tools affect their lives and livelihoods. Dozens more states are considering legislation along similar lines.
These aren’t onerous obligations; they’re the floor of what we should be looking for, not the ceiling.