OpenAI presents its preferred blueprint for AI regulation

OpenAI on Monday published an “economic blueprint” for AI, a living document that lays out policies the company believes it can build on with the U.S. government and its allies. OpenAI’s vice president of global affairs, Chris Lehane, writes in the blueprint that the U.S. must act to attract billions of dollars in funding for the chips, data, and energy needed to “win” on AI. The blueprint describes the current situation as untenable: the federal government has left AI regulation largely to the states.

State lawmakers introduced nearly 700 AI-related bills in 2024, some of them conflicting with one another. Texas’ Responsible AI Governance Act, for instance, imposes onerous liability requirements on developers of open-source AI models. OpenAI CEO Sam Altman has also criticized existing federal laws such as the CHIPS Act, which was designed to revitalize the U.S. semiconductor industry by attracting investment from top chipmakers. In a recent Bloomberg interview, Altman said the CHIPS Act “[has not] been as effective as we hoped” and that there is “a real opportunity” for the Trump administration to “do something much better for a follow-up.” “Power plants, data centers, any of that sort of stuff,” he said. “I understand the bureaucratic cruft that builds up, but it’s not good for the country. It’s especially not helpful when you consider what the U.S. needs to do to lead AI. And the U.S. needs to be at the forefront of AI.”

OpenAI’s blueprint calls for “dramatically increased” federal spending on power and data transmission to fuel the data centers needed to develop and run AI. It also recommends a meaningful buildout of “new energy sources,” such as solar, wind farms, and nuclear. OpenAI, along with its AI rivals, has previously thrown its support behind nuclear power projects, arguing they are needed to meet the energy demands of next-generation server farms.

Tech titans Meta and AWS have run into problems with their nuclear projects, though for reasons that have nothing to do with nuclear power itself.

In the near term, OpenAI’s blueprint suggests the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national agencies, and develop export controls that allow models to be shared with allied nations while “limit[ing]” their export to “adversary countries.” “The federal government’s approach to frontier model safety and cybersecurity should streamline requirements,” the blueprint states. “Responsibly … exporting … models will help them set up their own AI ecosystems, including their own developer communities innovating and distributing AI’s benefits, while also building AI using U.S. tech, not technology funded by the Chinese Communist Party.” The company already has agreements with the Pentagon for cybersecurity and other related projects, and it has teamed up with defense startup Anduril to supply its AI technology to systems the U.S. military uses to counter drone attacks.

OpenAI’s blueprint also calls for the creation, on behalf of the U.S. private sector, of standards “recognized by other nations” and accepted by international bodies. The company does not, however, endorse mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with the government to define model assessments, test models, exchange information, and support the companies’ safeguards,” the blueprint states.

The Biden administration took a similar approach with its AI executive order, which sought to enact several high-level, voluntary AI safety and cybersecurity standards. The executive order created the U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems and has partnered with companies including OpenAI to evaluate the safety of their models. Trump and his allies have pledged to repeal Biden’s executive order, putting its codification, and the AISI along with it, at risk.

OpenAI’s blueprint also addresses copyright as it relates to AI, a hot-button issue. The company argues that AI developers should have the right to use “publicly accessible information,” including copyrighted material, to develop models.

OpenAI, like many other AI companies, trains its models on public data from across the web. The company has licensing deals with a number of platforms and publishers, and it offers limited options for creators to opt out of its model development. OpenAI has also said it would be “impossible to train AI models without copyrighted material,” and a number of creators have sued the company for allegedly training on their works without permission.

“[O]ther actors, such as developers in other countries, do not respect IP rights or engage with their owners,” the blueprint states. “If the U.S. and other nations that share the same views do not take sensible measures to help advance AI in the long term, the same content will still be used for AI training elsewhere, but to the benefit of other economies. [The government should ensure] AI can learn from universally available information, just like humans do, while protecting creators from unauthorized digital copies.”

It remains to be seen which parts of OpenAI’s blueprint, if any, will influence legislation. But the proposals signal that OpenAI is committed to pushing for a unified U.S. AI strategy.

OpenAI has ramped up its lobbying, spending $800,000 in the first half of 2024 compared with $260,000 in all of 2023. The company has also hired former government officials into its executive ranks, including former Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, previously chief economist at the Commerce Department under President Joe Biden. As it expands its global affairs division and makes these hires, OpenAI has become more vocal about the AI laws and regulations it prefers. For example, it has thrown its weight behind Senate bills that would establish a federal rulemaking body for AI and provide federal scholarships for AI R&D.

