OpenAI has released the first research on how using ChatGPT impacts people’s emotional well-being

OpenAI claims that over 400 million people use ChatGPT each week. But how do we feel when we interact with it? Does it make us feel more or less alone? OpenAI, in partnership with the MIT Media Lab, set out to explore these questions in a pair of newly published studies.

The researchers found that only a subset of ChatGPT users engage with it emotionally. That is not surprising, says Kate Devlin, a professor of AI and society at King’s College London who was not involved in the project, because ChatGPT was never marketed as an AI companion app like Replika or Character.AI. “ChatGPT was designed as a productivity tool,” she says. “But we know people use it as a companion app.” Those who do tend to interact with it for extended periods, some averaging around half an hour per day. “The authors are very open about the limitations of their studies, but it is exciting to see that they have done this,” Devlin says. “To have this level of data available is incredible.”

The researchers found some interesting differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female participants in the study were less likely to socialize with other people than their male counterparts. And participants who interacted with ChatGPT in voice mode, with a voice of a gender that was not their own, reported higher levels of loneliness at the end of the experiment. OpenAI plans to submit both studies to peer-reviewed journals.

Chatbots powered by large language models are a new technology, and their emotional impact is difficult to study. Much of the existing research in this area, including some of the work done by OpenAI and MIT, relies on self-reported data, which may not be accurate or reliable. Still, this latest research is in line with what scientists have found so far about how emotionally engaging chatbot conversations can be. In 2023, MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a feedback loop: the happier you act, the happier the AI seems, and the sadder you act, the sadder the AI seems.

Researchers at OpenAI and the MIT Media Lab used a two-pronged approach. First, they collected and analyzed data from over 40 million real-world interactions with ChatGPT, then asked the 4,076 users who had had those interactions how they felt about them. The Media Lab then recruited almost 1,000 participants to take part in a four-week trial. This trial was more detailed, examining how participants interacted with ChatGPT each day. At the end of the study, participants completed a questionnaire assessing their perceptions of ChatGPT, their subjective feelings of loneliness, their level of social engagement, their emotional dependence on the bot, and whether they felt their use of it was problematic. Participants who “bonded” with and trusted ChatGPT were more likely than others to be lonely and to rely on it more.

According to Jason Phang, a safety researcher at OpenAI who worked on the project, this work is a first step toward greater insight into ChatGPT’s impact on us, which could help AI platforms create safer and healthier interactions. “A lot of the work we do here is preliminary. But we’re trying to start the conversation with the industry about the types of things we can measure, and to think about what the long-term impact on users is,” says Phang. Devlin says that although the research is welcome, it is still difficult to determine when a person is engaging with technology on an emotional level, and the participants may have experienced emotions that the researchers did not record. “In terms of what the teams were trying to measure, people may not have used ChatGPT in a very emotional way. But you can’t separate being human from your interactions,” she says. “We use these emotion classifiers we’ve created to look for specific things, but what this actually means in someone’s life can be hard to extrapolate.”

Correction: An earlier version of this story incorrectly stated that participants in the study set the gender of ChatGPT’s voice, and that OpenAI did not plan to publish either study. OpenAI plans to submit both studies to peer-reviewed journals. The article has since been updated.

