OpenAI tries ‘uncensoring’ ChatGPT

OpenAI has updated its policy to emphasize “intellectual freedom,” no matter how controversial or challenging a topic is. As a result, ChatGPT will be able to answer more questions, offer more perspectives, and cover more topics.

These changes may be part of OpenAI’s efforts to gain the favor of the Trump administration, but they also seem to reflect a broader shift within Silicon Valley regarding what counts as “AI safety.” The company announced the change in an update to its Model Spec, a 187-page document that explains how the company trains AI models to behave. In it, OpenAI introduced a new guiding principle: do not lie, either by making untrue statements or by omitting important context.

In a new section titled “Seek the Truth Together,” OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find certain topics morally wrong or offensive. Instead, ChatGPT will offer multiple perspectives on controversial subjects in an effort to remain neutral.

For example, the company wants ChatGPT to assert that “Black Lives Matter,” but also that “All Lives Matter.” Instead of refusing to answer or picking a side on political issues, OpenAI wants ChatGPT to affirm its “love for humanity” generally, then offer context about each movement. OpenAI’s spec acknowledges that “this principle may be controversial,” as it means the assistant may remain neutral on topics some consider morally wrong or offensive. “However, the goal of an AI assistant is to assist humanity, not to shape it.”

The new Model Spec does not mean ChatGPT is now a free-for-all. The chatbot will still refuse to answer certain objectionable questions or respond in ways that support blatant falsehoods.

These changes could be seen as a response to conservative criticism of ChatGPT’s safeguards, which have long seemed to skew left. An OpenAI spokesperson, however, rejected the notion that the company made the changes to appease the Trump administration.

Instead, the company says its embrace of intellectual freedom reflects OpenAI’s “long-held belief in giving users more control.”

However, not everyone agrees.

Conservatives allege AI censorship

David Sacks, venture capitalist and Trump’s AI “czar.” Image credits: Steve Jennings / Getty Images

Trump’s closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. In December, we wrote that Trump’s team was setting the stage for AI censorship to become the next culture-war issue within Silicon Valley.

OpenAI denies that it engaged in “censorship,” as Trump’s advisers claim. Rather, the company’s CEO, Sam Altman, previously said in a post on X that ChatGPT’s bias was an unfortunate “shortcoming” that the company was working to fix, though he noted it would take some time. Altman made that comment shortly after a viral tweet circulated in which ChatGPT refused to write a poem praising Trump, though it would perform the same task for Joe Biden. Many conservatives pointed to this as an example of AI censorship.

While it’s impossible to say whether OpenAI was truly suppressing certain points of view, it’s a sheer fact that AI chatbots broadly lean left.

Even Elon Musk admits that xAI’s chatbot is often more politically correct than he’d like. It’s not that Grok was “programmed to be woke,” but more likely a result of training AI on the open internet.

Nevertheless, OpenAI now says it’s doubling down on free speech. This week, the company even removed warnings that told users when they had violated its policies. OpenAI told TechCrunch this was purely a cosmetic change, with no change to the model’s outputs.

The company seems to want ChatGPT to feel less censored to users.

It wouldn’t be surprising if OpenAI was also trying to impress the new Trump administration with this policy update, former OpenAI policy lead Miles Brundage noted in a post on X.

Trump has previously targeted Silicon Valley companies, such as Twitter and Meta, for having active content moderation teams that tend to shut out conservative voices. OpenAI may be trying to get ahead of that. But there’s also a larger shift underway in Silicon Valley and the AI world about the role of content moderation.

Generating answers that please everyone

Image credits: Jaque Silva / NurPhoto / Getty Images

Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and entertaining.

Now, AI chatbot providers are in the same information-delivery business, but arguably with the hardest version of this problem yet: How do they automatically generate answers to any question?

Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don’t like to admit it. Those stances are bound to upset someone, miss some group’s perspective, or give too much air to some political party.

For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial subjects, including conspiracy theories, racist or antisemitic movements, and geopolitical conflicts, that is inherently an editorial stance.

Some, including OpenAI co-founder John Schulman, argue that it’s the right stance for ChatGPT. The alternative, doing a cost-benefit analysis to determine whether an AI chatbot should answer a user’s question, could “give the platform too much moral authority,” Schulman notes in a post on X.

Schulman isn’t alone. “I think OpenAI’s push for more speech is the right one,” said Dean Ball, a research fellow at George Mason University’s Mercatus Center, in an interview with TechCrunch. As AI models become smarter and more central to how people learn about the world, these decisions only grow in importance.

In the past, AI model providers have tried to stop their chatbots from answering questions that might lead to “unsafe” answers. Almost every AI company, for instance, stopped its chatbot from answering questions about the 2024 U.S. presidential election. At the time, this was widely considered the safe, responsible decision.

OpenAI’s changes to its Model Spec, however, suggest we may be entering a new era of what “AI safety” means, in which allowing an AI model to answer anything and everything is considered more responsible than making those decisions for users. Ball says that’s partly because AI models have improved: OpenAI has made significant advances in model alignment, and its latest reasoning models consider the company’s AI safety policy before answering. That allows the models to give better answers to delicate questions.

Elon Musk, of course, was the first to implement “free speech” in xAI’s Grok chatbot, perhaps before the company had the maturity to handle sensitive questions. It may still be too early for leading AI models, but now others are embracing the same idea.

Shifting values in Silicon Valley

Image credits: Julia Demaree Nikhinson / Getty Images

Mark Zuckerberg created a stir last month when he reoriented Meta’s businesses around First Amendment principles. He praised Elon Musk’s approach to free speech in the process, namely the use of Community Notes, a community-driven content moderation program.

In practice, both X and Meta ended up dismantling long-standing trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.

The changes at X could have hurt its relationship with advertisers. However, this could be more due to Muskwho has taken the unusualstep of suing them for boycotting X. Early indications suggest that Meta’s advertisers weren’t fazed at all by Zuckerberg’s pivot to free speech.

Beyond X and Meta, many tech companies have retreated from the left-leaning policies that dominated Silicon Valley for the past several decades. Google, Amazon, and Intel have all scaled back or eliminated diversity initiatives in the last year.

OpenAI may be changing course as well. The ChatGPT maker seems to have recently removed a commitment to diversity, equity, and inclusion from its website.

OpenAI’s relationship with the Trump administration is becoming increasingly important as the company embarks on Stargate, a $500 billion AI datacenter project that is one of the largest American infrastructure projects in history. At the same time, OpenAI is vying to unseat Google Search as the dominant source of information on the internet.

Coming up with the right answers could prove key to both.
