Updated OpenAI wants the US government to ensure it has access to any data it wants in order to train GenAI models, and to stop foreign countries from trying to enforce copyright laws against it and other American AI companies.
Read more about the lawsuits writers filed against Anthropic after it fed 'stolen copyrighted' work into Claude.
The letter outlines the super-lab's view of how the White House could support the American AI industry. This includes putting in place a regulatory framework – but one that "ensures the freedom to innovate," naturally; an export strategy that lets America exert control over allies while locking out adversaries such as China; and adopting growth measures, including having federal agencies "set an example" on adoption.
Its suggestions on copyright are a bit haughty. The letter praises the "longstanding fair use doctrine" in American copyright law and claims it is "even more critical to continued American leadership on AI in the wake of recent events in the PRC," probably a reference to the interest generated by China's DeepSeek earlier this year. OpenAI claims America's fair use doctrine is what encourages AI development, while in other markets, such as the European Union, it notes rights holders are allowed to "opt out" of having their work used. The business claims it is not possible to build AI models of the highest quality without using copyrighted works. It suggests the US government "take steps to ensure that our copyright system continues to support American AI leadership," as well as shape international policy discussions around copyright and AI "to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress."
OpenAI is not satisfied with this, and wants the US to actively assess the amount of data available to American AI companies.

"Arguably, the most problematic issue with the proposal – legally, practically, and socially speaking – is copyright," Kolochenko told The Register.
"Paying a truly fair fee to all authors – whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors – will probably be economically unviable," Kolochenko claimed, as AI vendors "will never make profits."
- AI model hallucinates, and doctors are okay with that
- Nvidia was the AI training race winner, but inference remains anyone’s guess
- Brits end probe into Microsoft's $13B funding of OpenAI
- Does terrible code drive you crazy? Wait until you see what it does to OpenAI's GPT-4o
Advocating a special regime for AI technologies or a copyright exemption is a slippery path, he said, adding that US legislators should view OpenAI's proposals with caution, mindful of the long-lasting implications they may have for the American economy and legal system. OpenAI's stated goal, meanwhile, is "to encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage."
OpenAI talks of expanding market share in Tier I countries (US allies) through "American commercial diplomacy policy," and of banning the use of Chinese-made technology (think Huawei).
The ChatGPT lab also proposes the creation of "AI Economic Zones" in America by local, state, and federal governments together with industry, which sounds similar to the UK government's "AI Growth Zones."
These would be intended to "speed up the permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors," and would allow exclusions from the National Environmental Policy Act, which requires federal agencies to evaluate the environmental impact of their actions.

OpenAI also proposes that federal agencies "lead by example" on AI adoption. The Microsoft-backed lab says AI adoption across federal departments and agencies is "unacceptably low," and wants to see the "removal of known blockers to the adoption of AI tools, including outdated and lengthy accreditation processes, restrictive testing authorities, and inflexible procurement pathways."
Updated to add
Google also released [PDF] its response to the White House's action plan, arguing for fair use defenses and data-mining exemptions for AI training.