- In short, most of the coverage you read on artificial intelligence is led by companies that benefit financially from you thinking artificial intelligence is important, and by default, all of this coverage mentions OpenAI or ChatGPT.
So…yeah, of course ChatGPT has that many users. When you have hundreds of different reporters constantly spitting out stories about how important something may or may not be, and when that thing is available for free on a website, it’s going to get a bunch of people using it. This is predominantly the media’s doing!
But 300 million people is a lot!
- It sure is! But it doesn’t really prove anything other than that people are using the single most-talked-about product in the world. By comparison, billions of people use Facebook and Google. I don’t care about this number!
- User numbers alone tell you nothing about the sustainability or profitability of a business, or how those people use the product. It doesn’t delineate between daily users and those who occasionally (and shallowly) flirt with an app or a website. It doesn’t say how essential a product is for that person.
Also, uhm, Ed? It’s early days for ChatGPT-
- Shut the fuck up! There isn’t a single god damn startup in the history of anything – other than perhaps Facebook – that has had this level of coverage at such an early stage. Facebook also grew at a time when social media didn’t really exist (at least as a mainstream thing that virtually every demographic used), and thus the ability for something to “go viral” was a relatively new idea. By comparison, ChatGPT had the benefit of there being more media outlets, and of Altman himself having spent a decade glad-handing the media through his startup investments and crafting a real public persona.
- The weekly users number is really weird. Did it really go from 200 million to 300 million users in the space of three months? It was at 100 million weekly users in February 2023. You’re telling me that OpenAI took, what, over a year to go from 100 million to 200 million, but it took three months (August 29 2024 to December 4 2024) to hit 300 million?
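If you want to check my math on those two stretches, here’s the back-of-the-envelope version in Python. I’m approximating the dates to whole(ish) months, so treat the exact percentages loosely:

```python
# Implied compound monthly growth rate for each stretch of ChatGPT's
# reported weekly-user numbers.
# Approximations: Feb 2023 -> Aug 29 2024 is ~18 months;
# Aug 29 2024 -> Dec 4 2024 is ~3.2 months.

def monthly_growth(start_users, end_users, months):
    """Compound monthly growth rate implied by going from start to end."""
    return (end_users / start_users) ** (1 / months) - 1

slow_stretch = monthly_growth(100e6, 200e6, 18)   # ~3.9% a month
fast_stretch = monthly_growth(200e6, 300e6, 3.2)  # ~13.5% a month

# The claimed late-2024 growth rate is more than triple the earlier one.
print(f"{slow_stretch:.1%} vs {fast_stretch:.1%}")
```

In other words, for the 300 million figure to be real, ChatGPT’s growth rate would have had to more than triple overnight.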
- I don’t have any insider information to counter this, but I will ask – where was that growth from? OpenAI launched its o1 “reasoning” model (the previews, at least) on September 12 2024, but these were only available to ChatGPT Plus subscribers, with the “full” version released on December 5 2024. You’re telling me this company increased its free user base by 50% in less than three months based on nothing other than the availability of a product that wasn’t available to free users?
- This also doesn’t make a ton of sense based on data provided to me by Similarweb, a digital market intelligence company. ChatGPT’s monthly unique visitors were 212 million in September 2024, 233.1 million in October 2024, and 247.1 million in November 2024. I am not really sure how that translates to 300 million weekly users at all.
- Similarweb also provided me – albeit only for the last few weeks – data on ChatGPT.com’s weekly traffic. For the week beginning January 21 2025, it had just 126.1 million weekly visitors. For the week beginning February 11 2025, it had just 136.7 million. Is OpenAI being honest about its user numbers? I’ve reached out for comment, but OpenAI has never, ever replied to me.
- Sidenote: Yes, these are visitors versus users. However, one would assume users would be lower than visitors, because a visitor might not actually use the product. What gives?
- There could be users on its apps – but even then, I’m not really sure how you square this circle. An article from January 29 2025 says that the iOS ChatGPT app has been downloaded 353 million times in total. Based on even the most optimistic numbers, are you telling me that ChatGPT has over 100 million mobile-only users a week? And no, it isn’t Apple Intelligence. Cupertino didn’t launch that integration until December 11 2024.
- Here’s another question: why doesn’t OpenAI reveal monthly active users? Wouldn’t that number be higher? After all, a monthly active user is one that uses an app even once over a given month! Anyway, I hypothesize that the reason is probably that in September 2024 it came out that OpenAI had 11 million monthly paying subscribers, and though ChatGPT likely has quite a few more people that use it once a month, admitting to that number would mean that we’re able to see how absolutely abominable its conversion to paying users is. 300 million monthly active users would mean a conversion rate of less than 4%, which is pretty piss-poor, especially as subscription revenue for ChatGPT Plus (and other monthly subscriptions) makes up the majority of OpenAI’s revenue.
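To be clear about where that “less than 4%” comes from, the arithmetic is as simple as it gets:

```python
# Conversion rate implied by the reported 11 million paying subscribers,
# if the 300 million figure were treated as monthly actives.
paying = 11e6
monthly_active = 300e6  # assumption: reading the weekly figure as a monthly floor
conversion = paying / monthly_active
print(f"{conversion:.1%}")  # ~3.7%, i.e. less than 4%
```

And 300 million weeklies would imply an even larger monthly figure, which would make the conversion rate even worse.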
- Hey, wait a second. Are there any other generative AI products that reveal their user numbers? Anthropic doesn’t. AI-powered search product Perplexity claims to have 15 million monthly active users. These aren’t big numbers! They suggest these products aren’t popular! Google allegedly wants 500 million users of its Gemini chatbot by the end of the year, but there isn’t any information about how many it’s at right now.
- Similarweb data states that google.gemini.com had 47.3 million unique monthly visitors in January 2025, copilot.microsoft.com had 15.6 million, Perplexity.ai had 10.6 million, and claude.ai had 8.2 million. These aren’t great numbers! These numbers suggest that these products aren’t very popular at all!
- The combined unique monthly visitors in January 2025 to ChatGPT.com (246m), DeepSeek.com (79.9m), Gemini.Google.com (47.3m), Copilot.microsoft.com (15.6m), Perplexity.ai (10.6m), character.ai (8.4m), claude.ai (8.2m) and notebookLM.google.com (7.4m) was 423.4 million – or an astonishing 97.5 million if you remove ChatGPT and DeepSeek.
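Those sums are easy to reproduce, if you want to:

```python
# Reproducing the January 2025 Similarweb unique-monthly-visitor totals
# (in millions), as cited above.
visitors = {
    "ChatGPT.com": 246.0,
    "DeepSeek.com": 79.9,
    "Gemini.Google.com": 47.3,
    "Copilot.microsoft.com": 15.6,
    "Perplexity.ai": 10.6,
    "character.ai": 8.4,
    "claude.ai": 8.2,
    "notebookLM.google.com": 7.4,
}
total = sum(visitors.values())
without_leaders = total - visitors["ChatGPT.com"] - visitors["DeepSeek.com"]
print(round(total, 1), round(without_leaders, 1))  # 423.4 97.5
```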
- For context, the New York Times said in its 2023 annual report that it received 131 million unique monthly visitors globally, and CNN says it has more than 151 million unique monthly visitors.
- This isn’t the early days of shit. The “Attention Is All You Need” paper that started the whole transformer-based architecture movement was published in June 2017. We’re over two years into the ChatGPT era, hyperscalers have sunk over 200 billion dollars in capital expenditures into generative AI, AI startups took up a third of all venture capital investment in 2024, and almost every single talented artificial intelligence expert is laser-focused on Large Language Models. And even then, we still don’t have a killer app! There is no product that everybody loves, and there is no iPhone moment!
Well Ed, I think ChatGPT is the iPhone moment for generative AI, it’s the biggest software launch of all time-
- Didn’t we just talk about this? Fine, fine. Let’s get specific. The iPhone fundamentally redefined what a cellphone and a portable computer could be, as did the iPad, creating entirely new consumer and business use cases almost immediately. Cloud computing allowed us to run distinct applications in the cloud, which totally redefined how software was developed and deployed, creating entirely new use cases for software (as the compute requirements moved from the customer to the provider). The best generative AI has to show for itself, by comparison, is Microsoft’s claim of “$13 billion in annual run rate in revenue from its artificial intelligence products and services,” which amounts to just over a billion a month, or $3.25 billion a quarter.
- This is not profit. It’s revenue.
- There is no “artificial intelligence” part of Microsoft’s revenue or earnings. This is literally Microsoft taking anything with “AI” on it and saying “we made money!”
- $3.25 billion a quarter is absolutely pathetic. In its most recent quarter, Microsoft made $69.63 billion in revenue, with its Intelligent Cloud segment (which includes things like its Azure cloud computing services) making $25.54 billion, and it spent $15.80 billion in capital expenditures, excluding finance leases.
- In the last year, Microsoft has spent over $55 billion in capital expenditures to maybe make $13 billion (to be clear, the $13 billion run rate is a projection that uses current financial performance to predict future revenue). This is not a huge industry! These are not good numbers, especially considering the massive expenses!
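For the sake of showing my work, here’s that arithmetic (figures in billions, rounded as above):

```python
# Microsoft's claimed AI "run rate" versus its spending, in billions of dollars.
annual_run_rate = 13.0
per_month = annual_run_rate / 12   # ~1.08, i.e. "just over a billion a month"
per_quarter = annual_run_rate / 4  # 3.25
capex_last_year = 55.0             # approximate, per the figure above

# Microsoft is spending roughly four dollars in capex for every dollar
# of projected AI revenue.
print(round(per_month, 2), per_quarter, round(capex_last_year / annual_run_rate, 1))
```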
They’ll Work It Out!
- When? No, really, when?
- OpenAI burned more than $5 billion last year.
- According to The Information, Anthropic burned $5.6 billion. That may very well mean Anthropic somehow burned more money than OpenAI last year! These companies are absolutely atrocious at business! The reason I’m not certain is that The Information has, in the past, been a touch inconsistent with how it evaluates “costs.” I’ve seen it claim that OpenAI “burned just $340 million in the first half of 2024” – a number pulled from a piece from last year – followed by the statement that OpenAI’s losses “are steep due to the impact of major expenses, such as stock compensation and computing costs, that don’t flow through its cash statement.” To be clear, OpenAI burned approximately $5 billion on compute alone.
- The only “proof” that they are going to reverse this trend is The Information saying that “Anthropic’s management expects the company to stop burning cash in 2027.”
Sidebar: Hey, what is it with Dario Amodei of Anthropic and the year 2027? Stop printing it! Stop it!
While one could say “the costs will come down” – and that appears to be what The Information is claiming, noting that Anthropic said it would reduce its burn rate by “nearly half” in 2025 – the actual details are thin on the ground, and there’s no probing of whether that’s even feasible without radically changing its models. Huh? How? Anthropic’s burn has increased every single year! So has OpenAI’s!
The Information – which I do generally, and genuinely, respect – ran an astonishingly optimistic piece about Anthropic, estimating that it’d make $34.5 billion in revenue in 2027 (there’s that year again!), the very same year it’d stop burning cash. Its estimates are based on the premise that “leaders expected API revenue to hit $20 billion in 2027,” meaning people plugging Anthropic’s models into their own products. This is laughable on many levels, chief of which is that OpenAI, which made around twice as much revenue as Anthropic in 2024, barely made a billion dollars from API calls that same year.
It’s here where I’m going to choose to scream.
Anthropic, according to The Information, generated $908 million in revenue in 2024, and has projected that it will make $2.2 billion in revenue in 2025. Its “base case” – which The Information says is “the likeliest outcome” (???) – estimates that the company will generate $12 billion in revenue by 2027.
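A quick check on what that “base case” trajectory implies, assuming smooth compounding from the 2024 figure:

```python
# Anthropic's "base case": $908 million (2024) to $12 billion (2027)
# implies a compound annual growth rate of roughly 136%, three years running.
revenue_2024 = 908e6
revenue_2027 = 12e9
years = 3
cagr = (revenue_2027 / revenue_2024) ** (1 / years) - 1
print(f"{cagr:.0%}")
```

That is, Anthropic would have to more than double its revenue every single year, while also cutting its burn in half.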
It’s what happens when bubbles burst! Assets are overvalued due to a combination of hysteria and vibes!
Dario Amodei, like Sam Altman, is a liar and a crook. His promises are both ridiculous and offensive. The Information (which should do a better job of actually criticizing these people) justified Amodei’s and Anthropic’s obscene, fantastical revenue targets by citing Amodei’s blog, which at no point explains what “country of geniuses in a datacenter” means, what product it might be, or how he plans to increase revenues by over thirty billion dollars a year by 2027.
But, wait! The Information claims to have gotten a little bit more specific!
Anthropic claims its technology can transform office roles, such as automating software engineering and generating or reviewing legal documents. It cited legal search firm LexisNexis and code repository GitLab as examples of clients. Other major customers of Claude software include startups like Anysphere, which develops Cursor, a coding assistant designed for programmers.
To be clear, Anthropic’s big plan appears to be “sell more software to some people, maybe.”
Anthropic is currently raising $2 billion at a $60 billion valuation, mostly based on this fabricated marketing nonsense. Why are we humoring these idiots?
The Actual Work of These Oafs
If you ignore the hype and the anecdotes, generative AI has been stagnating for years, even by my most generous estimation. The only recent “big thing” has been “reasoning,” which makes Large Language Models “think” (they don’t have consciousness and they aren’t thinking; they’re spending more tokens on a given question and having several models check the work), making them more accurate, but at the cost of speed and expense. This became less exciting when DeepSeek’s open source “r1” model performed similarly to reasoning models from companies like Google and OpenAI. The idea that “reasoning” is the “killer app” – despite the fact that nobody can explain why it’s a big deal – has now been largely quashed.
The model companies are flailing a bit as a result. In a recent post on Twitter, Sam Altman gave an “updated roadmap for GPT-4.5 and GPT-5,” describing how OpenAI would be “simplifying” its product offerings, saying that GPT-4.5 would be OpenAI’s “last non-chain-of-thought model,” and that GPT-5 would be “a system that integrates a lot of our technology,” including o3, OpenAI’s “powerful” and “very expensive” reasoning model, which it… would also no longer release as a standalone model.
Altman describes his next model, GPT-4.5, as launching at an indeterminate date and doing something similar to GPT-4o. He seems to be saying that GPT-5 will not be a model at all, but a rat king of mediocre products, including o3, which he won’t let you use on its own.
So, that’s what the future holds for this company? OpenAI will release models, and uh… Uhh.
Uhhhhhhhh.
Wait! Wait! OpenAI has released a brand new product! Deep Research is a feature that lets you ask ChatGPT to browse the web and compile a report. This is almost a great idea. I sure hope it doesn’t cost a lot of money and make obvious mistakes!
Anyway, let’s go to Casey Newton for the review:
In general, I find that the more information you already have about something, the more useful deep research is. This may seem counterintuitive; perhaps you thought an AI agent would be a great way to get up to speed on an important topic you just happened to be working on. In my initial tests, I found the opposite to be true. Deep research is great for drilling down into topics you already know a lot about, letting you find specific pieces of data, types of analyses, or new ideas.
You may be able to do this better than I did. I think we will all get better at prompting these models over time, and the product should improve as well.

Needing to know enough about a subject to make sure the “researcher” didn’t mess something up is counter-productive to the entire purpose of research.
And: “I think all of us will get better at prompting-” Casey! We’re paying them. We pay them to do things for us!
I did look up one of Casey’s examples – a specific one about how the Fediverse can benefit publishers.
Let’s do some research.
Newton’s fawning praise is not backed up by the “deep research” in the report itself. The first and second citations are from a “news solutions” article about the fediverse by a company called Twipe, used to support a claim of “broad cross-platform reach.” The next three citations are posts from Hacker News, the web forum run by Y Combinator. What, exactly, is “deep” about this research?
This thing is not well-researched at all. Deep Research cites Digiday’s article eight times in the paragraphs that follow, before citing Twipe again. It also, hilariously, says that federated posting “can simultaneously publish to [a] website and as a toot on federated platforms like Mastodon and Threads” – “toot” being a term Mastodon retired about two years ago.
These two citations relate to Medium’s embrace of Mastodon, followed by another citation from the Digiday article. Deep Research then cites Reddit posts from two different users, the same Twipe post multiple times, and another forum post, then the support documentation for the Bluesky social network several more times.
You'll be surprised to learn that the research paper mostly cites Twipe, Hacker News, and Reddit.
Deep Research is currently only available in ChatGPT Pro, OpenAI's somehow-unprofitable $200-a-month subscription, though it's coming to ChatGPT Plus in a limited capacity.
Not impressed? Oh, and one more detail: the whole thing is on the very edge of being comprehensible.
Here is a bit about funding models:
“Memberships and Donations: A common monetization approach in the Fediverse (and across the open web) is voluntary support from the audience.”
No one talks like this! This is not how humans sound! I don't enjoy reading it! There is something deeply unpleasant about the way Deep Research reads. It's the uncanny valley, if its denizens were also a little dense and lazy. It's quintessential LLM copy: soulless, and almost, but not quite, right.
Ewww.
That's it, folks. OpenAI's next major innovation is the ability to produce a report you couldn't meaningfully use anywhere. While it can browse the internet, find things, and write a document, it sources material based on what the system thinks will confirm its argument, rather than on whether the source is reliable or valid. This system might have worked if Google cared about the quality of its search results, which it doesn't.
Sorry if I sound like a hater, but this shit does not impress me at all. Wow, you created a superficially-impressive research project that's really long and that cites a bunch of shit it found online and made little attempt to verify? The report took a long time to produce, can only be generated by paying OpenAI $200 a month, and burned a lot of expensive compute along the way?
Deep Research shares the same problem as every other generative AI product. These models don't know much, and everything they do, even "reading" or "browsing" the web, is limited by their training data and by a probabilistic model that can say "this is an article about a subject" without really understanding its contents. Deep Research citing SEO bait as a primary resource proves that even when these models are honed to the nth degree, they are mediocre and untrustworthy.
In addition, nothing about this product is geared towards OpenAI's profitability. If anything, it's the opposite. Deep Research uses OpenAI's o3 model, which can cost up to $1,000 per query, and while I'm sure these prompts aren't as costly, they are still significantly more expensive than a standard ChatGPT query.
The point of hiring a research assistant is to be able to rely on them, to have them do work that would otherwise take you hours. Deep Research is instead the same AI slop that's sweeping academia: low-quality, sloppy research for people who don't care about quality or substance.
If you're interested enough to pay $200 for an OpenAI subscription, and you know about Deep Research, then you must be able to distinguish between high-quality and low-quality content. If you were handed a document with citations this weak and repetitive, you would shred it. And if an intern had created it, you'd shred them too, or at least give them a stern talking-to.
I'll put it bluntly: we are now more than two years into the generative AI boom, and OpenAI's biggest and sexiest products are Deep Research, which dares to ask "what if you could spend a lot of compute to get a poorly-cited research paper," and Operator, a compute-intensive application that takes minutes to (rarely) complete tasks that would otherwise have taken you seconds.
SoftBank, the perennial loser that backed WeWork and Wirecard and lost more than $30 billion over the last few years, is trying to invest up to $25 billion in OpenAI.
It feels like I'm going insane.
The media tells you that OpenAI and its ilk are the future, that they're building "advanced artificial intelligence" that can take "human-like actions." But when you look into any of this shite for more than two seconds, it's clear that it isn't, and it can't.
Despite all the hype, the marketing, the tens of thousands of articles in the media, and the trillions of dollars in market capitalization, nothing feels real, or at least real enough to sustain this miserable, fictitious bubble. People like Marc Benioff claiming that "today's CEOs are the last to manage all-human workforces" are doing so to pump their stocks rather than to build anything approaching a real product. These men lie constantly to maintain hype. They never discuss the products they will actually sell in 2025, because doing so would mean admitting that the pitch amounts to "what if a chatbot, a thing you already have, was more expensive?"
The tech industry, and with it part of our economic system, is accelerating into a brick wall, driven by men like Sam Altman, Dario Amodei, Marc Benioff, and Larry Ellison, all of whom are incentivized for you to value their companies on something other than what their businesses actually sell.
This group delusion is the result of an economy run by people who do no labor other than sending and receiving emails and attending hours-long lunches. People with money don't care about people.
The narrative is built upon a mixture of hysteria and hype, and upon the cynical hopes of men who dream of automating jobs they would never do themselves. Altman wields the digital baba yaga to stir the hearts of weak-handed narcissists who would rather shoot a man than lose a single dollar, even if it meant making their product worse. Satya Nadella's job is one of the easiest in the world: he tells Microsoft CFO Amy Hood "we have to make sure that Bing contains generative AI" before jetting to Davos and yelling that he plans to spend more money on GPUs than ever.
Sam Altman thinks you're stupid. He thinks you're a moron who will eat whatever slop he serves you. Deep Research and Operator barely deliver on their intended purposes, yet the media applauds and screams for him like he's a gifted child who just tied his shoes.
Yes, I am a hater, a pessimist, and a cynic. But I need you to listen to me: everything I describe here is unfathomably hazardous, even before you consider the financial and environmental costs.
I’ll ask you this question: What is more likely?
That OpenAI, which has never made a profit and has yet to create a truly useful, meaningful product, somehow makes its products profitable, and then creates a truly autonomous AI?
Or that OpenAI, a company that has consistently burned billions of dollars, that has never shown any sign of making a profit, and that has in two years released a selection of increasingly questionable and obtuse products, actually runs out of money? How does this industry continue? Will hyperscalers keep spending hundreds of billions in capital expenditures with little measurable return?
And, fundamentally, when will everyone start accepting that what AI companies say has nothing to do with reality? When will the media stop treating each expensive, stupid, annoying, quasi-useless new product as magical and start demanding that these people show us what the future actually looks like?
I believe that generative AI is an ecological, financial, and social time bomb, and that it shines a blinding, glaring light on the disconnect between the powerful and the regular person. The fact that Sam Altman's mediocre software can get more attention and coverage than all the scientific breakthroughs of the last five years combined is a sign that our society is sick, that our media is broken, and that the tech industry believes we are all fucking idiots.
The entire bubble has been inflated by hype and outright lies from people like Sam Altman and Dario Amodei, and by a tech media incapable of describing what's happening right in front of it. Altman and Amodei have raised billions of dollars, and are burning our planet, on the belief that their mediocre software will automate our lives and then wake up.
The truth about generative AI is that it is both mediocre and destructive, and those pushing it as "the future," as the thing that "will change everything," show how much contempt they hold for the average person. They think they can shovel shit into our mouths and tell us it's prime rib, that these half-assed products will transform the world, and that as a consequence they need billions of dollars and license to harm our power grid.
This has been a rant-filled newsletter, but I am so tired of being encouraged to get excited about warmed-up dog shit. I'm tired of reading stories about Sam Altman saying we're only a year away from "everything changing," stories that exist to perpetuate a myth, because Silicon Valley doesn't care about anyone's problems except finding new growth markets for the tech industry.
I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic do not innovate, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed up as founders, cynical scam artists raising money for products that won't exist while peddling software that destroys our planet and diverts attention and capital away from things that could solve real problems.
The delusion is tiring. I'm tired of being forced to take these people seriously. I'm tired of being told by investors and the media that these men are creating the future when all they do is build mediocre things at high prices. There is no joy here, no mystery, nothing magical; no problems are solved, no people saved, and barely any lives changed.
None of this is impressive or powerful, except in the sense that it has become a remarkably effective scam. Look at the products and their actual outputs, and tell me whether you think this is the future. Isn't it strange that all the big, scary threats about AI taking our jobs never seem to translate into a product? Isn't it strange that, despite all their money and power, they haven't yet created anything truly useful?
I feel my heart darken, if only briefly, when I consider how cynical this all is. Reporters who want to believe the narratives, and in some cases actively promote them, are selling products that don't do much today but might, one day. The damage will be tens of thousands of people laid off, long-term infrastructural and environmental chaos, and a deep depression in Silicon Valley that I believe will dwarf the dot-com crash.
When this all falls down — and I think it will — the tech industry will have to face a public reckoning.