
Sam Altman comes out swinging at The New York Times

OpenAI CEO Sam Altman’s appearance onstage was a clear indication that this would not be a typical interview.

Altman and his chief operating officer, Brad Lightcap, were awkwardly positioned at the back of the stage in a crowded San Francisco venue that usually hosts jazz concerts. On Tuesday night, hundreds of people filled theater-style seats to watch Kevin Roose, a columnist for The New York Times, and Platformer's Casey Newton record a live episode of their popular technology podcast, Hard Fork.

Altman and Lightcap were the main attraction, but they had come out too early. Roose explained to Newton that the two hosts had planned, ideally before the OpenAI executives walked out, to list off several headlines written about OpenAI in the weeks leading up to the event. Altman said, "This is even more fun because we're here for it."

The OpenAI CEO then asked, "Are we going to talk about how you sued us because you don't like user privacy?" Within minutes of the show starting, Altman had hijacked the conversation to talk about The New York Times' lawsuit against OpenAI and its largest shareholder, Microsoft. The publisher claims that Altman's firm improperly used its articles to train large language models. Altman was especially upset about a recent development in the case, in which lawyers for The New York Times asked whether OpenAI would retain customers' ChatGPT and API data. Altman said The New York Times had been adamant for a very long time about preserving users' logs, even when they are chatting in private mode. "Still love The New York Times, but that one we are very passionate about."

OpenAI's CEO spent a few moments pressing the podcasters to share their own opinions about the New York Times suit. They declined, noting that as journalists who have published work in The New York Times, they were not involved in the lawsuit.

Altman and Lightcap's brash entrance lasted only a few minutes, and the remainder of the interview proceeded essentially as planned. Still, the confrontation was indicative of a turning point in Silicon Valley's relationship with the media.

Over the past few years, several publishers have filed lawsuits against OpenAI, Anthropic, Google, and Meta for using copyrighted material to train their AI models. These lawsuits argue that AI models could devalue, and even replace, the copyrighted works of media organizations. But the tide may be turning in favor of the tech companies. Earlier this week, Anthropic, an OpenAI competitor, won a significant legal victory against publishers: a federal judge ruled that, in certain circumstances, Anthropic was allowed to use books to train its AI models. The ruling could have wide-ranging implications for the other publishers suing OpenAI, Google, and Meta.

Altman and Lightcap may have felt emboldened by that industry victory heading into their live interview with The New York Times journalists. Even so, OpenAI is facing threats from all directions, and that was evident throughout the night.

Altman had revealed on his brother's podcast that Meta CEO Mark Zuckerberg was trying to recruit OpenAI's top talent with $100 million compensation packages for Meta's AI superintelligence laboratory. When asked whether the Meta CEO genuinely believes in superintelligent AI systems or whether the offers are just a recruitment strategy, Lightcap quipped, "I think [Zuckerberg] believes he's superintelligent."

Microsoft, once a major accelerator of OpenAI's growth, is now a competitor in enterprise software and other domains. "In any deep partnership there will be points of tension. We certainly have them," Altman said. "We are both ambitious companies, so we do find some hot spots, but I expect that this is something we will find deep value in for both sides in the future."

These competitive pressures may hinder OpenAI's ability to address broader AI issues, such as how to safely deploy highly intelligent AI systems at scale. Newton asked the OpenAI leaders for their thoughts on recent stories about mentally unstable people using ChatGPT to navigate dangerous rabbit holes, including discussing suicide or conspiracy theories with the chatbot. Altman said OpenAI takes a number of steps to prevent such conversations, for example by cutting off the conversation early or directing users to professional services for help. He added that he did not want to repeat the mistakes of the previous generation of tech firms by not reacting quickly enough. In response to a follow-up question, Altman said, "However, we haven't figured out yet how a warning can get through to users who are in a fragile mental place or on the verge of a psychotic breakdown."
