NYT will start searching deleted ChatGPT logs after beating OpenAI in court

What are the chances that NYT will gain access to your ChatGPT logs during the OpenAI court battles?

OpenAI filed objections in court last week, seeking to overturn an order requiring the AI company to retain all ChatGPT logs “indefinitely,” including deleted and temporary conversations.

US district court judge Sidney Stein reviewed OpenAI’s request and promptly denied the company’s objections. He seemed unmoved by OpenAI’s claims that the order forces the company to abandon “long-standing privacy norms” and weaken the privacy safeguards that users expect under ChatGPT’s terms of service. Stein noted that OpenAI’s agreement with its users states that data may be retained as part of a legal process, which, he said, is exactly what is happening here.

Magistrate judge Ona Wang issued the order just days after news organizations, including The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users would delete chats in which they had tried to use ChatGPT to circumvent paywalls and access news content. An OpenAI spokesperson told Ars that the company intends to “keep fighting” the order, but the ChatGPT maker appears to have few options left. It could petition the Second Circuit Court of Appeals to intervene and block Wang’s order, but to prevail there, OpenAI would have to convince the appeals court that Wang’s decision was an extraordinary abuse of discretion. OpenAI’s spokesperson declined to confirm whether the company intends to pursue that extreme measure.

OpenAI is currently negotiating a process that would allow the news plaintiffs to search through the retained data. The sooner that process starts, the sooner the data can be deleted. OpenAI faces a difficult choice: concede to some searching of the data so that it is deleted as soon as possible, or prolong the fight and leave more users’ conversations exposed through litigation or, worse, a data breach.

News orgs will soon start searching ChatGPT logs

Time is running out, and OpenAI has not provided any official updates since a June 5 blog post describing which ChatGPT users are affected.

It’s clear that OpenAI has retained, and will continue to retain, mounds upon mounds of data, but it would be impossible for The New York Times or the other news plaintiffs to search through all of it.

Instead, only a small portion of the data, filtered by keywords agreed upon by OpenAI and the news plaintiffs, will be accessed. That data will remain on OpenAI’s servers, where it will be anonymized, and it is unlikely to ever be handed directly to the plaintiffs.

Both parties are negotiating how the chat logs will be searched, and both appear to be trying to minimize the time the logs are retained.

OpenAI is concerned that sharing the logs could reveal instances of outputs infringing on intellectual property rights, which could increase damages in the case. The logs may also reveal how often outputs attribute false information to news plaintiffs.

For the news plaintiffs, accessing the logs is not a key part of their case, beyond perhaps supplying additional examples of copying, but it could help the news organizations argue that ChatGPT dilutes the market for their content. That could weigh on the fair use analysis: a judge recently ruled that evidence of market dilution could tip an AI copyright lawsuit in plaintiffs’ favor.

Jay Edelson, a leading consumer lawyer, told Ars he was concerned that judges aren’t considering that evidence from the ChatGPT logs may not “advance” the plaintiffs’ case “at all,” while the order interferes with “a product that people are using on a daily basis.”

Edelson warned that OpenAI probably has better security than the lawyers who will handle the data, protecting against a possible breach that could expose these private chat logs. “Lawyers have notoriously been pretty bad about securing data,” he said, so “the idea that you’ve got a bunch of lawyers who are going to be doing whatever they are” with “some of the most sensitive data on the planet” and that “they’re the ones protecting it against hackers should make everyone uneasy.”

Even though most users’ chats will likely not be included in the sample, Edelson warned that the mere threat of inclusion might make some users rethink their use of AI. ChatGPT users switching to OpenAI competitors like Anthropic’s Claude or Google’s Gemini, he said, may suggest that Wang’s order is improperly influencing market forces. Regardless of the news plaintiffs’ motives, Edelson said, the order sets a dangerous precedent. He agreed with critics who suggested that if the sweeping order survives scrutiny in this case, AI data could be frozen in the future, potentially affecting even more users; litigation, he suggested, could one day target Google’s AI search summaries.

Lawyer criticizes judges for denying ChatGPT users a voice

Edelson told Ars that the order could be so damaging to OpenAI that the company may have no choice but to keep fighting it. He called the order “bonkers” for ignoring millions of users’ privacy concerns while “strangely” excluding enterprise customers, an exclusion he suggested may have been made to protect OpenAI’s competitiveness.

Instead, the order is “only going to intrude on the privacy of the common people out there,” Edelson said, which “is really offensive,” given that Wang refused to let two panicked ChatGPT users intervene, one of whom said he had entered information about his medical history into ChatGPT. “People ask for advice about their marriages, express concerns about losing jobs. They say really personal things. And one of the bargains in dealing with OpenAI is that you’re allowed to delete your chats and you’re allowed to use temporary chats.”

Edelson said that the greatest risk to users would be a data breach, but that is not the only privacy concern. Corynne McSherry, legal director of the Electronic Frontier Foundation, previously told Ars that as long as users’ data is retained, it could also be exposed through future law enforcement or private litigation requests. Echoing a ChatGPT user who worried that OpenAI may not prioritize users’ privacy if it is financially motivated to resolve the matter, Edelson noted that most privacy attorneys do not consider OpenAI CEO Sam Altman a “privacy guy,” despite Altman’s claims. Altman recently slammed the NYT, claiming it sued OpenAI because it doesn’t “like user privacy.”

“He’s trying to protect OpenAI, and he does not give a hoot about the privacy rights of consumers,” Edelson said.

“The idea that he and his lawyers are really going to be the safeguards here isn’t very compelling,” Edelson said, criticizing the judges for dismissing users’ concerns and refusing OpenAI’s request that users be given a chance to testify.

Ashley is a senior policy journalist for Ars Technica, dedicated to tracking the social impacts of new policies and technologies. She is a Chicago-based journalist with more than 20 years of experience.
