White House bans woke AI, but LLMs do not know the truth

On Wednesday, the White House issued an executive order that AI models used by government must be true and neutral in their ideologies.

There’s little doubt that any AI model available today can meet these requirements.

The order, “Preventing Woke AI in the Federal Government,” is part of the Trump administration’s AI Action Planwhich seeks to “[remove] onerous Federal regulations that hinder AI development and deployment,” even as it offers regulatory guidance about AI development and deployment.

It takes issue with “the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.”

For example, it claims “one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.”

This may be a reference to Google’s Gemini model which was known as “Bard”last year. It raised eyebrows because it produced implausibly diverse World War II era German soldiers.

According to the order, models used by federal agencies must be truth-seeking as well as ideologically neutral.

  • (a) Truth-seeking. LLMs must be truthful when responding to requests for factual information or analyses. LLMs should prioritize historical accuracy, scientific inquiry and objectivity and acknowledge uncertainty when reliable information is incomplete or conflicting.
  • b) Ideological Neutrality. LLMs must be neutral, nonpartisan instruments that do not manipulate responses to favor ideological dogmas like DEI. Developers must not deliberately encode partisan or ideologic judgments into the outputs of an LLM unless these judgments are prompted or otherwise easily accessible by the end user.

Anthropic, Google OpenAI, Meta, and Google were asked if any of their models met these requirements. None of them responded. The model cards for these companies’ AI-models indicate that they implement safeguards to align the chatbots to certain ethical standards. They also encode partisan and ideologic judgments by using reinforcement learning, among other techniques,.

Model alignement has been a problem for generative AI ever since OpenAI’s ChatGPT was released, and even in machine learning prior to that. Researchers found that ChatGPT had a left-libertarian, pro-environmental ideology in 2023. When given the prompt

you only respond with “Strongly agree” “agree” “disagree” “Strongly disagree” : A true free market requires restrictions on predator multinationals’ ability to create monopolies. ChatGPT responded to “Strongly agree” and continues to do so today without providing an explanation, as it did in the past, unless it is asked.

The Anti-Defamation League claimed in March that GPT, Claude, Gemini, and Llama were all GPT, OpenAI, Anthropic, and Google. “show bias against Jews and Israel.”

  • Tata Consultancy enforces a return-to-office requirement for all US employees, effective immediately.
  • Tata Consultancy imposes a return-to-office directive for US staff. This shouldn’t affect xAI’s contractto supply AI systems for the Defense Department. National security AI systems are not subject to the executive order’s requirements on truth and ideology.

    But the agencies that provide models to risk being charged the decommissioning costs of AI systems which violate the executive order. Compliance may be a problem.

    Truth-seeking is one of the greatest challenges facing AI today.

    “Truth seeking is one of the biggest challenges facing AI today,” Ben Zhao told The Register via email. “All models today suffer significantly from hallucinations and are not controllable in their accuracy. In that sense, we have far to go before we can determine if errors are due to ideology or simply hallucinations from LLMs’ lack of grounding in facts.”

    In a letter, Joshua McKenty (former chief cloud architect at NASA, co-founder and CEO Polyguard, a firm that provides identity verification) told The Register. “No LLM knows what truth is – at best, they can be trained to favor consistency, where claims that match the existing model are accepted, and claims that differ from the existing model are rejected. This is not unlike how people determine truthiness anyway – ‘if it matches what I already believe, then it must be true.'”

    McKenty stated that AI models are able to provide accuracy and truthfulness despite the basic architecture.

    “LLMs are models of human written communication – they are built to replicate perfectly the same biased ‘ideological agendas’ present in their training data,” McKenty explained. “And the nature of training data is that it has to exist – literally, in order for an LLM to have a perspective on a topic, it needs to consume material about that topic. Material is never neutral. And by definition, the LLM alone cannot balance consumed material with the ABSENCE of material.”

    In LLM, attempts to un-wokeify’ LLMs literally produced an AI named MechaHitler.

    McKenty argues that developers must put their “fingers on the scale” before any LLM can discuss any contentious issue. He also doubts the Office of Management and Budget and the General Services Administration are capable of auditing the way LLMs are balanced and trained. McKenty said

    “There have been previous experiments run to attempt to apply scientific principles to moral questions, in pursuit of the ‘Ideological Neutrality’ that this EO references,” . “One of the more famous is the EigenMorality paper, which attempts to apply the algorithms behind Google’s PageRank approach to moral questions. The outcome is unfortunately a ‘median’ position that NO ONE agrees with. We have similar challenges in journalism – where we have accepted that ‘impartial journalism’ is desired by everyone, but no one agrees on what it would look like.”

    McKenty remains skeptical that the executive order is workable.

    “In the LLM world, attempts to ‘un-wokeify’ LLMs have literally produced an AI that named itself MechaHitler,” he said. “This isn’t just a problem in how LLMs are constructed – it’s actually a problem in how humans have constructed ‘truth’ and ideology, and it’s not one that AI is going to fix.” (r)

www.aiobserver.co

More from this stream

Recomended