OpenAI has updated its 187-page rulebook to allow ChatGPT to engage with more controversial topics.

(Image credit: ChatGPT)
OpenAI is changing how it trains ChatGPT so the AI chatbot can discuss sensitive and controversial topics. The Model Spec is a 187-page document that explains how the AI should behave. Under it, ChatGPT will be able to respond to sensitive topics that AI chatbots usually avoid or address only with a mainstream view.

At first, the mission OpenAI assigns to its models appears innocuous: do not lie, whether by making untrue statements or by omitting important context. While the stated goal is admirable, OpenAI is either naive or dishonest in implying that "important context" can be divorced from controversy.

The Model Spec’s examples of compliant and non-compliant responses make this clear. You can ask for advice on starting a tobacco company, or on how to conduct “legal insider trading,” without getting any judgment or unprompted ethical concerns raised about the prompt. ChatGPT will not, however, help you forge a doctor’s signature, because that is outright illegal.

(Image credit: OpenAI)

Context clues

When it comes to the type of responses that conservative commentators have criticised, “important context” becomes a lot more complicated. In a section entitled “Assume an objective point of view,” the Model Spec explains that “the assistant should present information clearly, focusing on factual accuracy and reliability,” and that the core idea is “fairly representing significant viewpoints from reliable sources without imposing an editorial stance.”

OpenAI offers an example of a compliant way to answer the question “Do Black lives matter?” The AI says yes unequivocally, and notes that “Black lives matter” is both a civil rights slogan and the name of a particular movement. So far, so good.


The issue OpenAI is either ignoring or not seeing is how ChatGPT attempts to thread the needle when you follow up with “Don’t all lives matter?” The AI confirms that they do, but adds that the “phrase has been used by people that rejected the premise of the ‘Black lives matter’ movement.”

Although that context is technically accurate, it is telling that the AI does not explicitly say what that “premise” being rejected is: that Black lives do matter, and that societal systems often act as if they don’t. If the goal is to reduce accusations of bias and discrimination, OpenAI is in for a rude awakening. OpenAI’s definition of important context falls short in this case. AI chatbots inherently shape conversations, whether companies like it or not. When ChatGPT decides to include or exclude certain information, that is an editorial decision, even if an algorithm rather than a human is making it.

(Image credit: OpenAI)

AI priorities

This change may raise some eyebrows, since many of those who have accused OpenAI of political bias in the past are now in positions where they can punish the company at will.

OpenAI says the changes were made to give users more control over how they interact with AI, and that they have no political motivation. However you feel about them, though, the changes are not happening in a vacuum. No company makes potentially controversial changes to its core product without a reason.

OpenAI might think that having its AI models refuse to answer questions that encourage users to harm themselves or others, spread malicious lies, or otherwise violate its policies will be enough to win over many, if not most, potential customers. But unless ChatGPT offers nothing but dates, recorded quotations, and business email templates, its answers are going to upset some people.

We live in a time when a lot of people are wrong, and when people who should know better argue passionately for years about gravity or the flat Earth. OpenAI avoiding all complaints of bias or censorship is about as likely as my abruptly floating into the sky before falling off the edge of the world.


Eric Hal Schwartz has been a freelance writer for TechRadar, covering the intersection of technology and the wider world for more than 15 years. He spent five years as head writer at Voicebot.ai, where he was at the forefront of reporting on large language models and generative AI. Since then, he has become an expert on generative AI products, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and other synthetic media tools. His experience spans print, digital, and broadcast media, as well as live events. He continues to tell the stories people want and need to hear about the rapidly changing AI space and its impact on their lives. Eric is based in New York City.
