We must remain vigilant against a scenario that's as harmful as no regulation at all: weak regulation that legitimizes the AI industry's behavior and lets business continue as usual. A federal law that imposes baseline transparency disclosures and then restricts states' ability to impose additional or stricter requirements could place us on a dangerous trajectory of inaction.
Given the monumental stakes, blind trust in the benevolence of AI firms is not an option. Now, more than ever, we cannot let AI firms write the rules of their own game. We need an independent, publicly led roadmap set by individuals and organizations on the ground, not by AI firms acting like kings of their own kingdom. In our 2025 landscape report, we lay out the safeguards that can effectively hold this industry to account. These include:
- Bright line rules against the worst AI abuses
- Ex ante validation and testing to ensure that AI systems work as intended and don’t cause ancillary harms throughout the full life cycle of AI deployment
- Data minimization requirements that constrain firms’ ability to collect and repurpose data about us and limit its secondary use for training AI models
- Strong antitrust rules and enforcement to tackle anti-competitive behavior and address the concentration of power within the AI market
Read more.