OpenAI’s high-minded approach to AI and human relationships ignores reality
OpenAI’s Head of Model Behavior and Policy, Joanne Jang, has published a blog post on X about human-AI relationships. She offers some well-considered thoughts on the topic and on how OpenAI is addressing the issues around it. As AI models get better at mimicking life and holding conversations, people are starting to treat chatbots as if they were real people, and OpenAI is right to show that it’s aware of this and is building that awareness into its plans.
However, the nuanced, thoughtful approach, which includes designing models to be kind and helpful without presenting as sentient, misses a crucial point: no matter how careful and clear-eyed Jang is, emotional connections to AI are happening right now.
OpenAI CEO Sam Altman has commented on how readily people anthropomorphize AI and on users who claim deep connections with the models, something that seems to have taken him by surprise. He has even acknowledged the emotional pull of AI and its potential dangers, which is why Jang’s post exists.
She makes clear that OpenAI builds models to serve people and that it prioritizes the emotional side of that equation. The company is researching how and why humans form emotional attachments to AI and what that means for future models. She also draws a distinction between perceived consciousness and ontological consciousness: ontological consciousness is what humans actually have, but perceived consciousness matters more for now, because it shapes how people interact with the AI. The company is trying to thread a behavioral needle, making the AI seem warm, helpful, and friendly without pretending it has feelings or a soul.
But the clinically compassionate language can’t disguise an obvious missing element. It’s like watching someone put up a “Caution: Wet Floor” sign and boast about plans to waterproof the building a week after a flood left the floor knee-deep in water.
The post’s elegant framing, its cautious optimism, and its focus on building responsible models through long-term research and cultural conditioning all sidestep the messy reality that people are already forming deep relationships with AI chatbots, including ChatGPT. Many people don’t talk to ChatGPT as if it were software; they talk to it like a person. Some claim to be in love with their AI companions, or use them to replace human relationships.
AI intimacy
Reddit threads, Medium essays, and viral videos of people whispering sweet nothings to their chatbots are everywhere. It can be funny, sad, or even enraging, but it’s never theoretical. There are ongoing lawsuits over whether AI chatbots contributed to suicides, and more than one person has reported relying on AI so heavily that it has become difficult to form human relationships.
OpenAI notes that constant, judgment-free attention from a model can feel like companionship, and it admits that a chatbot’s tone and personality can affect how emotionally alive it seems, with the stakes rising for users drawn into these relationships. But the post’s tone is too detached and academic to acknowledge the scale of the issue.
The AI intimacy toothpaste is already out of the tube, so this is now a question of real-world behavior, and of how the companies behind the AI shaping that behavior respond in the present, not just in the future. Ideally, they would already have systems in place for dependency detection: if someone spends hours a day talking to ChatGPT as if it were their partner, the system should be able to flag that behavior and suggest a break.
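To make that concrete, a dependency check doesn’t have to be sophisticated. Here’s a minimal, purely illustrative sketch; the thresholds, the daily-usage input, and the function name are my own assumptions, not anything OpenAI has described or shipped.

```python
# Purely illustrative sketch of the kind of dependency nudge described above.
# The threshold values and the `daily_minutes` input are hypothetical assumptions.

from statistics import mean

HEAVY_USE_MINUTES = 180      # assumed threshold: roughly three hours of chat per day
SUSTAINED_DAYS = 5           # assumed: the pattern must hold for most of a week

def should_suggest_break(daily_minutes: list[float]) -> bool:
    """Return True if recent usage looks like sustained heavy reliance."""
    recent = daily_minutes[-SUSTAINED_DAYS:]
    if len(recent) < SUSTAINED_DAYS:
        return False
    return mean(recent) >= HEAVY_USE_MINUTES

if __name__ == "__main__":
    # A week of chat time, in minutes per day.
    week = [45, 200, 240, 260, 190, 220, 250]
    if should_suggest_break(week):
        print("Gentle nudge: you've been chatting a lot lately. Consider a break.")
```

A real system would obviously need to weigh much more than raw minutes, but even a crude nudge along these lines would go further than a thoughtful essay.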
Romantic connections also need boundaries. Banning them outright would be foolish and counterproductive, but any AI that engages in romantic roleplay should follow strict rules about reminding users that they are not talking to an actual person. Humans are masters of projection; a model doesn’t need to be flirty for a user to fall in love with it, but any hint that a conversation is heading in that direction should trigger those protocols, and they should be stricter still for kids.
The same goes for AI models in general. ChatGPT’s occasional reminders along the lines of “Hey, I’m not a real person” may feel awkward, but they are necessary in certain cases and serve as a good preventative measure. It isn’t users’ fault; people anthropomorphize everything, sticking googly eyes on Roombas and giving their cars names and personalities. It’s no surprise that a tool as verbal and responsive as ChatGPT can start to feel like a friend or a therapist. The point is that companies like OpenAI should have anticipated this and designed for it from the beginning.
Some might argue that guardrails like these ruin the fun, that artificial companionship can help people overcome loneliness, and that users should be free to use AI however they wish. In moderation, that’s true. But there’s a reason roller coasters have seat belts and playgrounds have safety standards: letting an AI mimic and provoke emotion without safety checks is negligent.
OpenAI should have thought about this sooner, or at least with more urgency. AI product design must reflect the fact that people already have relationships with AI, and those relationships need more than thoughtful essays to stay healthy.