Welcome to TechCrunch AI’s regular AI Newsletter. Sign up to receive this weekly newsletter in your inbox.
AI news didn’t slow down much this holiday season. Between OpenAI’s 12 days of “shipmas” and DeepSeek’s release of a major model on Christmas Day, you’d have been hard-pressed to keep up with the latest developments.
It’s not slowing now, either. OpenAI CEO Sam Altman wrote on his blog Sunday that OpenAI has mastered the art of building artificial general intelligence (AGI) and is now setting its sights on superintelligence.
AGI can be a vague term, but OpenAI defines it as “highly autonomous systems which outperform humans in most economically valuable tasks.” Altman believes that superintelligence could “massively speed up” innovation beyond what humans alone are capable of, writing: “[OpenAI continues] to believe that iteratively placing great tools in people’s hands leads to great, widely-distributed results.” Like Anthropic CEO Dario Amodei, Altman is optimistic that AGI and superintelligence can bring prosperity and wealth to everyone. But even if AGI and superintelligence are technically feasible without any new breakthroughs, how can we be certain they will benefit everyone?
One worrying recent data point is a study flagged by Wharton professor Ethan Mollick on X earlier this month. Researchers from the National University of Singapore, the University of Rochester, and Tsinghua University investigated the impact of OpenAI’s ChatGPT on freelancers in different labor markets. The study identified an “inflection point” for each job type. Before the inflection, AI was a major contributor to freelancers’ earnings; web developers, for example, saw a 65% rise. After the inflection, AI began replacing freelancers, and the number of translators dropped by approximately 30%.
According to the study, once AI starts replacing a job, it doesn’t reverse course. If more capable AI is indeed on the way, that should be cause for concern.
Altman wrote that he is “pretty sure” that “everyone will see the importance” of “maximizing broad benefits and empowerment” in an age of AGI and superintelligence. But what if Altman is wrong? What if AGI or superintelligence arrives, but only corporations are able to benefit from it?
That wouldn’t lead to a better world; it would only deepen inequality. If that ends up being AI’s legacy, it will be a deeply depressing one.
News
Silicon Valley silences doom: For years, technologists have rung alarm bells about the potential for AI to cause catastrophic harm. In 2024, those warnings were ignored.
OpenAI is losing money: OpenAI CEO Sam Altman said the company is currently losing money on its $200-per-month ChatGPT Pro plan because people are using it more than expected.

Record generative AI financing: Investments in generative AI, which encompasses a range of AI-powered tools, apps, and services that generate text, images, videos, speech, music, and more, reached new heights last year.

Microsoft increases data center spending: Microsoft will spend $80 billion in fiscal 2025 on data centers built to handle AI workloads.
Grok 3 is MIA: Grok 3, xAI’s next-generation AI model, didn’t arrive on time, continuing a trend in which flagship models miss their promised launch dates.
Research paper of the Week
AI may make a lot of mistakes. But it can also boost the work of experts.
That’s the conclusion, at least, of a team of researchers from the University of Chicago. In a recent study, they suggest that investors who use OpenAI’s GPT-4o to summarize earnings calls achieve higher returns than those who don’t.
The researchers recruited investors and used GPT-4o to generate AI summaries aligned with each investor’s level of expertise: the summaries created for sophisticated investors were more technical, while those created for novices were simpler.
The experienced investors saw a 9.6% increase in their one-year returns after using GPT-4o, while the less experienced investors saw a 1.7% boost. Not bad for human-AI cooperation, I’d say.
Model of the Week
Prime Intellect, a startup building infrastructure for decentralized AI system training, has released an AI model that it claims can help detect pathogens.
The model, called METAGENE-1, was trained on a dataset of over 1.5 trillion DNA and RNA base pairs sequenced from human wastewater samples. Created in partnership with the University of Southern California and SecureBio’s Nucleic Acid Observatory, METAGENE-1 can be used for various metagenomic applications, Prime Intellect said, like studying organisms.
“METAGENE-1 achieves state-of-the-art performance across various genomic benchmarks and new evaluations focused on human-pathogen detection,” Prime Intellect wrote in a series of posts on X. “After pretraining, this model is designed to aid in tasks in the areas of biosurveillance, pandemic monitoring, and pathogen detection.”
Grab bag
Anthropic, in response to legal action by major music publishers, has agreed to maintain guardrails preventing Claude, its AI-powered chatbot, from sharing copyrighted song lyrics.
Labels including Universal Music Group, Concord Music Group, and ABKCO sued Anthropic for copyright infringement in 2023, accusing the startup of using lyrics from 500 songs to train its AI systems. The lawsuit hasn’t been resolved, but Anthropic has agreed to keep Claude from supplying lyrics to songs owned by the publishers and from creating new lyrics based on that material.
Anthropic stated that it continues to “look forward” to proving that using potentially copyrighted material in the training of generative AI is quintessential fair use.