
How the Trump administration changed AI

Douglas Rissing/Getty Images

The year is more than halfway over, and it’s already been a full one for AI. Since President Donald Trump took office in January, the country and the industry have awaited a US AI policy — including what, if any, regulation it will bring to the technology.

As Trump indicated in his Jan. 23 executive order, the administration will release its AI Action Plan on Wednesday, July 23, along with rumored additional executive orders. The president will also deliver a speech at a summit hosted by The Hill and Valley Forum and The All-In Podcast, which will feature leaders of tech companies such as Hadrian, Palantir, and Y Combinator.

To date, the administration’s record has been dominated by progress and investment without much regard for safety or responsible behavior. Axios’s inside reporting last week confirmed the 20-page plan will focus on “promoting innovation, reducing regulatory burdens and overhauling permitting,” avoiding hotly contested topics such as copyright in training datasets.

Here is a list of all the AI-related actions the administration has taken so far, and what they mean for Wednesday's announcement.

On the first day of his second term, Trump overturned President Biden's executive order on AI, which had been signed in October 2023. Shortly afterward, Trump released his own executive order, which said very little in terms of policy — only that the US should "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."

Also: How much energy is AI really using? The answer is surprising and a bit complicated.

Trump's executive order did not include terms such as "safety," "consumer," "data," or "privacy," setting the tone for the initiatives that followed. The administration has framed AI progress and safety as antithetical. ZDNET reported at the time on "the Trump administration's willingness to overlook the potential dangers of AI," quoting Peter Slattery, a researcher in MIT's FutureTech group who leads its Risk Repository Project: "This could prove to be shortsighted: a high‑profile failure — what we might call a 'Chernobyl moment' — could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate."

The same week, the administration launched Project Stargate, a data center initiative formed in partnership with OpenAI and several foreign investors to fulfill Trump's goal of expanding AI in the US.

Disclosure: Ziff Davis (parent company of ZDNET) filed a lawsuit in April 2025 against OpenAI, alleging that it infringed Ziff Davis' copyrights in training and operating its AI systems.

Several companies, including Anthropic, quietly adjusted their safety language to reflect the priorities of the new administration.


Biden's executive order also established the US AI Safety Institute (AISI). In March, the Department of Government Efficiency cut staff at the US AISI and the National Science Foundation (NSF), including several AI researchers.

The cuts also affected grants administered to colleges, alarming experts about the future of the US AI talent pipeline. The Trump administration cut the funding despite the fact that the National Institute of Standards and Technology (NIST), which houses the US AISI, had been geared to focus on this emerging technology during Trump's first term.


To keep professionals competitive, private companies are increasingly offering AI-upskilling courses.

Also: Microsoft is laying off thousands and saving millions of dollars with AI – what next?

In late April, Trump joined the effort with two executive orders: one aimed at worker upskilling through apprenticeships, including those focused on AI, and another focused on AI in education. The latter encouraged "educators, industry leaders, and employers who rely on an AI‑skilled workforce" to "partner to create educational programs that equip students with essential AI skills and competencies across all learning pathways."

It also directed agencies to announce public-private partnerships by a certain date. "While AI education in kindergarten through 12th grade is critical, our nation must also make resources available for lifelong learners to develop new skills for a changing workforce," the order continued, referring to professional skills resources, which are becoming increasingly numerous.

Also: The great AI skill disconnect – and what to do about it

According to the Hechinger Report, the Department of Government Efficiency cut several education grants and studies dedicated to ramping up AI in education.

Danae Metaxa, a professor who works on an AI literacy initiative aimed at students, wrote in a post on Bluesky: "There is something especially offensive about this EO from April 23 about the need for AI education… Given the termination of my grant on exactly this topic on April 26."

There is no clear indication of the criteria that determined why some grants were withdrawn despite aligning with Trump's priorities. Meanwhile, other private companies and research projects are advancing AI efforts in schools.


Elizabeth Kelly, the former head of the US AISI who now oversees Beneficial Deployment at Anthropic, resigned in late February, shortly after Trump reversed Biden's order on his first day in office. The departure reflected the Trump administration's dismissal of Biden-era initiatives and its disdain for AI safety and responsibility efforts.

Notably, the Trump administration did not send members of the AISI to accompany Vice President JD Vance to France's AI Action Summit in February, where he advocated that the international community remove safety precautions.

The US Department of Commerce announced on June 3 that the AISI would become the "pro‑innovation, pro‑science US Center for AI Standards and Innovation (CAISI)." The release stated that the center will function as the AI industry's primary government contact, much as it did under its previous name; the shift in perspective appears to be primarily semantic.

Also: What 'OpenAI for Government' means for US AI policy

"For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards," Secretary Howard Lutnick wrote in the release. "CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards."

CAISI will develop model standards, conduct testing, and "represent US interests internationally to guard against burdensome and unnecessary regulation of American technologies by foreign governments," the release clarifies. It does not mention creating model red-teaming reporting or requiring companies to publish the results of certain tests; safety requirements of that kind are instead addressed by state laws like New York's RAISE Act.

With safety unlikely to be a policy priority, it is left to the AI community. Just last week, researchers from several major AI firms came together to advocate for preserving and monitoring chain of thought (CoT) — the practice of watching a reasoning model's intermediate CoT responses as a way of catching harmful intentions and other issues.

It's encouraging to see AI firms agree on a safety measure, but a voluntary consensus is not the same as government-enforced regulation. A formal policy could, for example, turn that recommendation into an obligation for companies releasing new models.


During negotiations over the recently passed "big, beautiful" tax bill, Congress added and then removed a provision that would have barred state-level AI legislation for a five-to-10-year period. At one point, broadband funding was withheld from states as collateral. Though the move was eventually dropped, it showed that Republicans are serious about concentrating AI regulation at the federal level, as OpenAI had requested in its policy advisory from March.

Also: I found 5 AI content detectors that correctly identify AI text 100% of the time

As of Wednesday, states still have the power to pass AI legislation. It’s not clear whether Trump will restrict this in his official policy, or if another limiting attempt by AI companies or Republicans is on the horizon.


Earlier this month, the Department of Defense (DOD) announced $200 million contracts with Google, OpenAI, xAI, and Anthropic, ushering in a new era of integration between AI companies and current military objectives.

The announcement was not a complete surprise. On June 5, Anthropic released Claude Gov, an updated version of its chatbot for government and cybersecurity applications. On June 16, OpenAI announced its umbrella government initiative, which combined its various contracts, including this one with the DOD.

Also: OpenAI tailored ChatGPT for government use – here's what this means

One could argue that the de-emphasis on safety regulations and transparency requirements, a priority for Biden's AISI, clears a path for AI tools to be approved for lucrative military contracts. Tailoring consumer models for government use could address a number of safety concerns, given the DOD's stringent requirements, but it's unclear whether it does without transparency into the model testing taking place as part of these contracts.

Trump's record on AI so far suggests that Wednesday's announcement will prioritize American AI leadership, primarily through private companies, as part of what has been framed in the media as a race with China.

What, if any, additional priorities will be included in the policy announcement? At a time when AI is replacing human workers, being used in risky ways, and raising concerns that it undermines critical thinking, there are many open questions.

Want to know more about AI?

Sign up for our weekly newsletter, AI Leaderboard.

www.aiobserver.co