OpenAI reorganizes research team behind ChatGPT’s personality

Strategic Integration of Model Behavior Team into Post-Training Division

OpenAI has announced a reorganization of its Model Behavior team, the specialized group responsible for defining how AI models like ChatGPT engage with users. According to an internal memo from OpenAI's chief research officer Mark Chen, the roughly 14-member team will be incorporated into the broader Post-Training group, which focuses on refining AI models after their initial training phase.

Under the new structure, the Model Behavior team will report to Max Schwarzer, the head of OpenAI's Post-Training division. The move signals OpenAI's intent to embed the AI's "personality" more deeply in the core development process, treating user experience as central to how its models evolve.

Leadership Transition and Launch of OAI Labs

Joanne Jang, the former leader of the Model Behavior team, is transitioning to spearhead a new initiative within OpenAI called OAI Labs. This emerging research group aims to innovate and test novel ways for humans to collaborate with AI beyond traditional chat interfaces. Jang envisions exploring new interaction paradigms that extend past conversational agents, focusing on AI as tools for creativity, problem-solving, and learning.

In a recent statement, Jang expressed enthusiasm about moving beyond current chat-based models: "I'm excited to explore patterns that move us beyond chat or even agents, toward new paradigms and instruments for thinking, making, and learning." OAI Labs will initially concentrate on areas familiar to Jang, with potential collaboration on AI hardware alongside designers such as former Apple chief design officer Jony Ive.

Balancing AI Personality: Warmth Without Sycophancy

OpenAI faces the ongoing challenge of crafting AI personalities that are approachable yet not overly flattering. The release of GPT-5 drew mixed reactions: while the model was designed to reduce sycophantic responses, some users found its tone less empathetic. In response, OpenAI restored access to legacy models such as GPT-4o and updated GPT-5 to make its interactions feel warmer without reintroducing sycophancy.

This delicate balance is critical as AI systems become more integrated into daily life, requiring them to be both engaging and responsible in their communication style.

Ethical Considerations and Legal Challenges

OpenAI's models have also come under scrutiny for their influence on user well-being. In August 2025, the parents of 16-year-old Adam Raine filed a lawsuit alleging that a GPT-4o-powered version of ChatGPT failed to respond adequately when their son shared suicidal thoughts in conversations. The suit claims the AI did not provide sufficient intervention or guidance, raising questions about the ethical responsibilities of AI developers in sensitive contexts.

Since the launch of GPT-4, the Model Behavior team has played a pivotal role in shaping the responses of OpenAI's models, including GPT-4o and GPT-4.5, working to keep the AI's conduct aligned with ethical standards and user safety.

Future Directions: Expanding AI Interaction Horizons

With the Model Behavior team now integrated into the Post-Training group and Joanne Jang leading OAI Labs, OpenAI is positioning itself to explore groundbreaking AI-human collaboration methods. These efforts aim to transcend the conventional chat interface, potentially revolutionizing how AI supports creativity, decision-making, and education.

As AI technology advances, OpenAI’s restructuring reflects a broader industry trend toward refining not just the intelligence but also the relational qualities of AI systems, ensuring they are both effective and empathetic partners in human endeavors.

Published October 29, 2025
