(This post is reblogged from my personal blog)
It’s funny that I work as an applied AI researcher, and yet here I am raging against AI. Generative AI, to be specific, although I will just use the term “AI” throughout this post. I am an applied researcher in that I take the AI algorithms developed by “pure researchers” and apply them to solve real-world problems for the company that employs me. In the course of my work, I may need to modify an algorithm (engage in a bit of “pure” research), or figure out a way to change the data or the architecture of the real-world setting to make the AI solution work. If I figure out some novel methodology, I will probably file for a patent, and may publish a paper. However, since our research is internally focused, figuring out how the AI will work for our company and our customers is the top priority.
I would like to distinguish between “traditional AI” and GenAI. I like traditional machine learning and AI very much; it has been the bulk of my research work. There are many useful and stable applications of traditional AI, like in image processing (computer vision), where an AI can detect abnormalities in medical imaging scans that even experts can’t see yet, or predict failures in equipment before they occur. These models do not cost billions of dollars to train (unless you are aiming for autonomous vehicles, a different story), and their performance is reliable even in mission-critical applications. My rage is against the GenAI machine.
Since ChatGPT launched, I have noted the hype around it with concern. Large language models are nothing but next-word prediction machines. They are not capable of reasoning; Apple researchers made that case in their paper “The Illusion of Thinking.” That does not mean they are not useful. They can summarize an article that you don’t want to read. Will they do so correctly? You won’t know until you read the article.
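“Next-word prediction” here is meant literally. As a minimal sketch of what that loop looks like (using the small, freely available GPT-2 model via the Hugging Face transformers library, purely for illustration):

```python
# A minimal sketch of "next-word prediction": the model scores every token
# in its vocabulary, we pick one, append it, and repeat. That is the whole
# generation mechanism.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The AI bubble will"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                      # generate 10 tokens, one at a time
        logits = model(input_ids).logits     # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()     # greedily take the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything an LLM produces, however fluent, comes out of this one mechanism: score the vocabulary, pick a token, append, repeat.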

Sure, LLMs bring some value in some domains, but are they worth the cost?
LLMs are an expensive way to generate text!
LLMs are very, very expensive to train and to maintain. I will share some numbers later in the post, so please stick around.
So we have a very expensive solution in search of a problem that can bring in enough money to match, and hopefully surpass, the sums invested.
If you are an individual user using an LLM for spelling and grammar checks, or as some sort of personal assistant, rest assured that even if you are paying for it, you are paying only a fraction of the actual cost of running that model. Individual users and students cheating on their assignments are not a “worthy” business model.
Possibly “worthy” applications could be in fields like materials science, where generative AI can aid in the discovery of new materials, or in drug discovery. However, the best results would come from training domain-specific generative models for these applications, not LLMs trained on 4chan and Reddit conversations and on copyrighted articles and books stolen from the internet.
Now the internet is filled with AI slop and broken searches that cost more computationally yet return confident-even-if-false results that you still need to verify. People are watching AI-generated videos on TikTok without a care, and using ChatGPT for every little task, even though Microsoft itself has admitted that it is making us dumber.
The true existential risk of GenAI is that it will succeed in being accepted, and by doing so will become essential.
The above is a quote from a comment on the post “When will the GenAI bubble burst” by Gary Marcus.
These companies want us to be reliant on AI, to stop verifying information, to lose our ability to think and our ability to search for information. That way, when they recommend a product, you’ll buy it without a second thought. When they serve you propaganda, you will accept it as fact. Who benefits from your compliance?
So, in summary, we have next-word predictors that:
- Can check your grammar and spelling, at a very high cost
- Can help you code, but will atrophy your skills while they’re at it; and the code may work, but is it quality code?
- Will make you dumber the more you use and rely on them
- Hallucinate sometimes – but you may never know they hallucinated, unless you already know! For example, if you don’t know that glue doesn’t go on pizza…
- Perform worse on tasks like speech transcription than existing non-LLM transcription models
- Are frankly dangerous for people who form unhealthy attachments, with people falling in love and having sex with chatbots. Here is an Internet Archive link if you don’t have a NY subscription.
- Cannot be trusted in mission critical or high-risk applications
We Are Clearly in an AI Bubble and the Question Is, When Will It Burst?
All these applications are not paying the bills for the billions poured into “AI”. So the LLM companies need to claim these products are the solution to everything, when in reality they are only helpful in a narrow set of circumstances – creative text generation, and even that is debatable.

So What Do the Numbers Say?
If you need convincing that we are in an AI Bubble, then please head over to Edward Zitron’s The Hater’s Guide To The AI Bubble. It is a comprehensive guide!
I will quote directly from his long and rambling essay:
The Magnificent 7 stocks — NVIDIA, Microsoft, Alphabet (Google), Apple, Meta, Tesla and Amazon — make up around 35% of the value of the US stock market, and of that, NVIDIA’s market value makes up about 19% of the Magnificent 7. This dominance is also why ordinary people ought to be deeply concerned about the AI bubble. The Magnificent 7 is almost certainly a big part of their retirement plans, even if they’re not directly invested.
In simpler terms, 35% of the US stock market is held up by five or six companies buying GPUs. If NVIDIA’s growth story stumbles, it will reverberate through the rest of the Magnificent 7, making them rely on their own AI trade stories.
The Magnificent 7’s AI Story Is Flawed, With $560 Billion of Capex between 2024 and 2025 Leading to $35 billion of Revenue, And No Profit
If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion in revenue.
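To put the quoted figures side by side, here is a quick back-of-the-envelope check; the numbers are Zitron’s, the arithmetic is mine:

```python
# Sanity check on the quoted figures (numbers from Zitron's essay, quoted above).
mag7_share_of_us_market = 0.35   # Magnificent 7 as a share of US stock market value
nvidia_share_of_mag7 = 0.19      # NVIDIA's share of the Magnificent 7

# NVIDIA alone underpins roughly 6-7% of the entire US stock market.
print(f"NVIDIA share of US market: ~{mag7_share_of_us_market * nvidia_share_of_mag7:.1%}")

capex_b = 560    # 2024-2025 AI capital expenditure, $ billions
revenue_b = 35   # AI revenue over the same period, $ billions

# Every dollar of AI capex is currently returning about six cents of revenue.
print(f"Revenue per capex dollar: ~${revenue_b / capex_b:.2f}")
```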
So When?
Last year (April 2024), Gary Marcus predicted that the bubble would burst in 2025, but it is still going strong. In fact, venture capital is still being poured into the field by the billions.
Just this month, it was reported that former OpenAI CTO Mira Murati raised $2 billion for new AI startup Thinking Machines Lab.
Thinking Machines will announce its first product “in the next couple months,” Murati said.
So they don’t even have a product but have attracted billions in funding. Amazing.
To quote from the Reuters article:
Murati, who started Thinking Machines after an abrupt exit from OpenAI last September, is among a growing list of former executives from the ChatGPT maker who have launched AI startups.
Another two, Dario Amodei’s Anthropic and Ilya Sutskever’s Safe Superintelligence, have attracted former OpenAI researchers and raised billions of dollars in funding.
Investor enthusiasm toward new AI startups has stayed strong, despite some questions about tech industry spending.
That helped U.S. startup funding surge nearly 76% to $162.8 billion in the first half of 2025, with AI accounting for about 64.1% of the total deal value, according to a PitchBook report.
According to this article, in 2024 about $50B of venture capital went into AI startups, for only about $3B coming back out. So far in 2025, more than twice that amount (approx. $104B, i.e. 64.1% of $162.8B) has already been invested, and we are just halfway into the year!
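A quick check of that figure, using the Reuters/PitchBook numbers quoted above:

```python
# Verifying the H1 2025 AI venture funding figure from the quoted Reuters article.
total_funding_b = 162.8   # total US startup funding, H1 2025 ($B, per PitchBook)
ai_share = 0.641          # AI's share of total deal value

ai_funding_b = total_funding_b * ai_share
print(f"AI venture funding, H1 2025: ~${ai_funding_b:.0f}B")        # ~$104B
print(f"Versus 2024's full-year ~$50B: {ai_funding_b / 50:.1f}x")   # ~2.1x, in half the time
```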
It seems there is still a lot more cash left to burn.
And as quoted above, between 2024 and the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560B in capital expenditures for around $35 billion in AI revenue.
So I think the bubble won’t burst until late 2026/early 2027. I am not an expert on capital markets, so take my prediction with a pinch of salt!
The bubble will burst because there is not much room left to scale LLMs, and because they are incapable of understanding or reasoning, we won’t reach AGI by scaling them anyway. Meta Chief AI Scientist Yann LeCun explains this in a YouTube interview.
Yet another article warning of the bubble.
However, All Is Not Lost
A lot will be lost. I cannot predict just how big the damage will be, but it will be big.
However, maybe something will be left behind.
As J.E. Van Clief writes in “The Great AI Paradox: Why This Bubble Will Burst—And Why It Doesn’t Matter”:
If you strip away the AI hype and marketing, you’ll find something more interesting—and more valuable—than most realize.
The internet bubble burst spectacularly, but the internet itself transformed our world.
The AI bubble will likely burst too, but the technology will still reshape industries in ways we’ve only begun to understand.
And in this blogpost, the author says:
Like the Dotcom Bubble Before It, the AI Bubble Will Leave Behind Tools.
The tower would fall but the blocks remain, ready for the next cycle of builders.
And in another blogpost by yet another person on the internet:
One thing that we’d have is a glut of GPUs. Not consumer-grade gaming GPUs, but heavy-duty H100s and B100s, designed to store giant sets of model weights in memory and serve LLM completions at massive parallelism. If we weren’t using these for AI, what would we use them for? Simulations and modeling, perhaps, or AI-adjacent fields like protein folding or drug discovery? There are probably a lot of fields which have some use-cases that are considered prohibitively GPU-expensive. Those use-cases might become surprisingly possible.
When the bubble bursts, we will still have “traditional AI”, and many versions of LLMs left behind, but will the general public still trust anything to do with AI?