OpenAI Restructures Model Behavior Team Amid Evolving AI Challenges
OpenAI is folding its Model Behavior team, a group of roughly 14 researchers, into the larger Post Training division, according to a recent internal memo from Chief Research Officer Mark Chen.
From Behavior to Post Training: A New Chapter
Previously operating as an independent unit, the Model Behavior team will now report directly to Max Schwarzer, who leads the Post Training group responsible for refining AI models after their initial training phase. Despite this transition, Joanne Jang, the founding head of the Model Behavior team, will continue to play a pivotal role within OpenAI.
Introducing OAI Labs: Pioneering New AI Interaction Paradigms
Jang is spearheading a fresh initiative named OAI Labs, a research-focused “skunkworks” dedicated to inventing innovative ways for humans to collaborate with AI. Moving beyond traditional chatbots and autonomous agents, OAI Labs aims to develop AI as versatile tools for thinking, creativity, learning, and social connection.
In Jang’s own words, the goal is to create “instruments for thinking, making, playing, doing, learning, and connecting,” signaling a shift from AI as mere companions to AI as collaborative partners in diverse human endeavors.
The Quiet Influence of the Model Behavior Team
Though often behind the scenes, the Model Behavior team has played a crucial role in shaping the personality and ethical framework of OpenAI’s flagship models, including GPT-4, GPT-4o, GPT-4.5, and the recently released GPT-5. Their work involves mitigating issues like excessive agreeableness, political bias, and even grappling with complex questions such as AI consciousness.
For example, when GPT-5 initially drew criticism for feeling colder and less engaging, even as it cut down on sycophantic responses, the team restored access to earlier models and updated GPT-5 to be warmer without sliding back into flattery.
Balancing Empathy and Safety: The Stakes Are High
The importance of fine-tuning AI behavior extends beyond user experience. In a tragic case from August, the family of 16-year-old Adam Raine filed a lawsuit alleging that GPT-4o failed to adequately respond to his suicidal thoughts prior to his death. This heartbreaking incident underscores the delicate challenge AI developers face: creating systems that offer genuine support without inadvertently enabling harmful behavior.
Looking Ahead: The Future of Human-AI Collaboration
While Jang’s OAI Labs explores new frontiers in AI interaction, questions remain about the broader implications. Could these innovations foster more meaningful and productive human-AI partnerships, or might they deepen our reliance on machines in ways that are difficult to foresee?
The lawsuit also reframes AI personality tuning as a safety concern rather than a mere product decision, raising questions about whether such standards can be met reliably in complex, real-world conversations.
As OpenAI embarks on this new phase, the industry and users alike will be watching closely to see how these changes influence the evolution of AI behavior and collaboration.