Whodunit? Grok’s ‘unauthorized’ change made it blather about ‘White genocide.’

Elon Musk’s xAI has apologized for its Grok generative AI bot, which began spouting unfounded conspiracy theories about White genocide in response to unrelated queries.

On Wednesday, users of the LLM – accessible via X, aka Twitter – noticed that answers to their questions were being accompanied by screeds about claims of White genocide in South Africa, and references to the apartheid-era song Kill the Boer. As you can see, it didn’t take much to trigger the bot.

Grok’s mono-track mind… A screenshot of a typical chat with the bot on X.

The situation appears to have been resolved: the bot no longer rants about White genocide, though some users can still get Grok going by asking it to “jork it.” On Friday, xAI released a statement claiming the bot had been tampered with without permission:

“On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X,” it said.

“This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.”

xAI has now pledged to post Grok’s system prompts on GitHub for netizens to view and comment on, and has put additional controls in place to prevent employees from meddling with the code again. It has also created a 24/7 content moderation team to watch for future SNAFUs. The AI business is trying to portray the episode as the work of someone inside the company who modified the bot to emit conspiracies.

It is interesting that the system prompts include, among other instructions to the bot: “You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.”

The whole incident draws attention to the worldview of xAI’s billionaire boss, Elon Musk.

Musk has been exercised about the alleged “White genocide” being perpetrated against Afrikaners in South Africa, his birthplace, and posted about the topic repeatedly in March. He was then fact-checked by Grok and, in one instance, slapped down. Responding to a false post about the number of farm workers murdered in South Africa, the bot said:

Elon’s focus on South Africa stems from his upbringing there during apartheid, a system that deeply shaped his views. He is vocal about farm killings and often makes exaggerated “White genocide” statements, such as the crosses in the X post, which are debunked by data: SAPS reported only 62 farm killings in 2018, not a daily death toll, and that figure includes Black victims. His narrative aligns with that of AfriForum, a group that advocates for Afrikaner rights, but it ignores the broader context, such as systemic inequality after apartheid. No macabre agenda, just a billionaire amplifying a distorted perspective.

Could it be that Musk finally had enough of being snubbed by his own AI bot and ordered a staffer to make some changes, which were applied so haphazardly that they sent the bot into overdrive, biasing it too far the other way? Surely the genius behind xAI, which through some bizarre, shady financial maneuvering now owns X-slash-Twitter, would never sabotage his own business in this way. The change was more likely made by an employee trying to impress the boss, or perhaps a rebel who wanted to bring negative attention to Musk.

Meanwhile, in the real world…

Interestingly, the first White South African “refugees” arrived in the US on Monday, following an executive order by President Trump, who works closely with the Tesla tycoon on downsizing the federal government.

In January, South African President Cyril Ramaphosa signed a law allowing farmland owned primarily by White people to be expropriated without compensation where doing so is “just and equitable and in the public interest.” Many members of the government strongly objected and vowed not to accept it. Musk was enraged, and President Trump is on his side. The commander-in-chief has echoed Musk’s complaints about the treatment of White farmers in South Africa, and has reportedly instructed US agencies to stop all work related to the upcoming G20 Summit, due to be held in South Africa later this year, in protest.

Trump’s administration has suspended most refugee admissions from other countries, including many people who had previously been conditionally approved. It made an exception, however, for a group of Afrikaners, who were fast-tracked through a new pathway and are now arriving in America to start new lives.

Asked about this earlier in the week, Deputy Secretary of State Christopher Landau said the decision was based on several factors, including that “they can be assimilated easily into our country.” The move was decried as thinly veiled racism.

  • Musk’s xAI swallows Musk’s X in ego friendly, all-stock deal.
  • Grok 3 enters the AI wars with a ‘beta rollout.’
  • Ireland launches probe into Musk’s X for Grok’s AI data slurp.
  • Democrats fret over DOGE feeding confidential data into random AI.

As for the claims of White genocide: The New York Times reported that there were 225 farm murders between April 2020 and March 2024, and fewer than a quarter of the victims were farmers.

You can’t rely on bots

The Grok case shows why it is so hard to trust AI chatbots.

All LLMs are prone to “hallucinations” or, as they’re more commonly called when we’re not talking about AI, mistakes and errors. These blunders can be caused by a variety of things, from bad training data to limitations in a model’s design.

The Grok case, however, appears to be an example of someone intentionally modifying the system prompt to inject conspiracy-laced answers aligned with Musk’s views. Bruce Schneier, the cryptography and privacy guru, spoke on this topic in a keynote at the recent RSA Conference.

Schneier pointed out that corporate artificial intelligence cannot be trusted because it is designed to serve the commercial interests of its makers, not necessarily those of its users – for instance, recommending one product or service over another because of sponsorship. He called for the creation of open source AI models so that people can see any biases used to influence results.

The Grok incident is a good example. The Register asked Schneier about the current shenanigans, and his answer is telling: “Maybe it’s the model itself exhibiting some emergent behavior. Maybe it’s the corporate owners of the model deliberately altering its behavior. Whatever the explanation, inconsistency results in poor integrity – which means users can’t trust the models.” ®

