Why the man behind 'The Hater's Guide to the AI Bubble' thinks Wall Street's hottest trade will go bust

Dow Jones
Jul 30

By Joseph Adinolfi

Ed Zitron has cultivated a following for a podcast and newsletter in which he tries to poke holes in what he calls the 'AI bubble'

Since OpenAI launched ChatGPT in November 2022, generative artificial-intelligence chatbots have attracted millions of users and provoked plenty of excitement.

Wall Street analysts have lauded the technology's potential. Media outlets have run story after story quoting corporate executives discussing the imminent threat to white-collar employment that the technology poses. AI believers have touted the technology's potential to boost productivity - and the global economy.

Yet some skeptics are asking: If this technology is destined to be as successful as its evangelists claim, why isn't it generating any profits?

Over the past couple of years, Ed Zitron, founder of a tech-focused public-relations firm in Nevada, has established himself as one of the loudest critics of the prevailing AI narrative.

Via a newsletter and podcast, as well as frequent social-media posts, Zitron has attempted to peel back the layers of hype that he says have been perpetuated by executives like OpenAI's Sam Altman. To hear Zitron tell it, the AI boom has been propped up by hopes and lofty expectations that have far outpaced the technology's capabilities.

He likes to remind his readers and listeners that leading private companies in the space, like OpenAI and Anthropic, have attained huge valuations yet remain deeply unprofitable. If these companies were charging enough to offset their costs, would anybody be willing to pay for their products?

If only a handful of Silicon Valley venture capitalists were financing the AI industry, the potential consequences of what Zitron describes as profound malinvestment would be limited. But as megacap companies like Nvidia Corp. (NVDA) have seen their valuations soar, accounting for a growing share of the overall value of U.S. stock-market indexes like the S&P 500 SPX, everyday investors saving for retirement could be affected if the trade unravels the way he expects it will.

Ahead of earnings reports from several of the major so-called AI hyperscalers due this week, MarketWatch spoke with Zitron to learn more about his perspective. Zitron elaborated on some of the points he made in a recent edition of his newsletter, "Where's Your Ed At," and explained why he believes the AI boom will eventually lead to a painful bust on par with the collapse of the dot-com bubble.

Investors might remember how the DeepSeek panic hammered shares of Nvidia back in January. According to Zitron, that was only a taste of what might be coming. With Anthropic and others recently raising prices for some customers, he said, cracks have started to show.

This interview has been edited for space and clarity.

MarketWatch: In your view, what are some of investors' most common misconceptions about generative AI and its feasibility as a business?

Zitron: It doesn't make any money or profit. Really, depending on the company, it is one or both. It's one of the strangest things I've ever seen. It's not like there are a few incumbents that are profitable but only making a little money. Even the two largest companies making the most revenues, OpenAI and Anthropic, are burning through billions of dollars a year.

Even the major enterprise [software-as-a-service] companies like Anysphere, which makes [AI code editor] Cursor - they're deeply unprofitable and they just had to increase their prices, and change their product materially, to the point where it's completely different and you can do so much less with it. They got to $500 million in [annual recurring revenue] selling a product that they can't actually sell. So you've got this giant industry where everybody's just saying "AI, AI, AI," and yet it doesn't make any money. Another thing is that the products themselves are not that useful. A lot of it is being sold on what people wish it would do rather than what it actually does.

[Editor's note: Anthropic, OpenAI and Anysphere did not respond to requests for comment from MarketWatch.]

MarketWatch: One line in the "Hater's Guide" is, "Don't watch the mouth, watch the hands." Can you elaborate on what that means?

Zitron: It's a very basic thing here. That phrase means don't listen to what people are saying, look at what they're doing, and that really is it with the AI boom. If you look at OpenAI, they've been saying "agents are imminent" for like a bloody year. Their actual agent can't even do the things - even in OpenAI's demo of their operator agent, with their pre-prepared example of a map of how to get to every baseball stadium within a season - there was a line that drew to the Gulf of Mexico. There is no baseball team in the Gulf of Mexico! It also seemed to miss Yankee Stadium, which I hear is a big ballpark.

So you have these fantastical things being said about what these things can do, but when you actually look at it, these companies don't actually talk about revenue, they don't generally talk about user numbers, they don't want to talk about these things, because when you actually look at the outcomes of these products, they're not there.

One more note on that. This is actually really important. The most egregious of these is agents. You're hearing everybody saying "agent, agent, agent." Salesforce is talking about being an agent-first company. But customers don't like Agentforce. The Information has reported that Agentforce has problems with hallucinations. Salesforce's own research says that agents start breaking down - they only achieve around a 58% success rate on single-step tasks. Everyone talks about these things like they exist, and they do not.

[Editor's note: A Salesforce Inc. (CRM) spokesperson shared a number of data points from the company's first-quarter earnings call that highlighted the growth in demand for Agentforce since it became available in October 2024, including that it has reached more than $100 million in annual recurring revenue. The spokesperson said customers are turning to Salesforce because they want a complete enterprise-grade platform, not just a large language model, and highlighted a couple of third-party reports showing strong customer return on investment and higher accuracy using Agentforce compared with do-it-yourself AI agent solutions. Salesforce has shared a number of customer-success stories about Agentforce on its website and has also used AI agents to handle 85% of inquiries through its own customer-support platform, the spokesperson said.]

MarketWatch: 2025 was supposed to be the year of agentic AI. How is that coming along?

Zitron: It isn't. It is not coming along. Nothing is happening. You'll notice that we don't have AI agents yet. Agentic AI is a phrase that exists to tell you that a company is building or has access to an autonomous AI. What agents actually are, depending on the product, is literally a chatbot. They are literally just calling anything they want agents. ServiceNow does it, for example - it's just a bloody chatbot. But they claim it's something more. So if you tell a ServiceNow agent, chatbot, whatever, to do something, it sometimes will successfully trigger something else. That is not autonomous. It's not completing a task. I have never seen anything like it in the public markets, or even the private ones. It is genuinely ludicrous. But everyone is saying this is the year of agentic AI because they don't have anything else to talk about. The reality behind the curtain is nothing is really happening. They are rebranding things as agentic when they aren't really doing anything new.

[Editor's note: A spokesperson for ServiceNow Inc. (NOW) shared the following statement with MarketWatch: "We'll let the results our agentic technology is delivering debunk the skeptics. Within ServiceNow alone, real agentic AI is automating 97% of software provisioning requests, reducing internal IT service desk volume by 40%, and resolving customer support cases 50% faster. As a native AI company, AI is infused throughout the entire ServiceNow AI Platform. Just last week during our Q2 earnings, we reaffirmed that we expect a Now Assist annual contract value contribution of $1 billion by the end of 2026. We're a real company doing real things, leading in AI. Any claim to the contrary is proof we're doing something worth talking about."]

MarketWatch: There has been a lot of talk about the potential for AGI - artificial general intelligence. How close are we to developing that?

Zitron: We are nowhere. We don't have proof it's even possible. We just don't. Even Meta, which is currently giving these egregious sums of money to AI scientists - their lead AI scientist said scaling up large language models isn't going to create AGI.

We do not know how human beings are conscious. We don't know how human thinking works. How are we going to simulate that in a computer? Furthermore, there's no proof that you can make a computer conscious, and right now, they can't even get agents right. How the hell are they meant to make a conscious or automated computer? These models have no concept of right or wrong, or rules, or really anything. They are just looking over a large corpus of data and generating, as they are probabilistic, the most likely thing that you may want it to. It is kind of crazy that they can do it, but what they are doing is not thinking. Reasoning models are not actually reasoning. They do not reason. They do not have human thought, or any thought. They are just large language models that just spit out answers based on what the user wants.

MarketWatch: In an earlier edition of your newsletter, you talked about what you called the "subprime AI crisis." Can you explain what you mean by that?


Zitron: Every AI startup that isn't Anthropic or OpenAI is connecting to their models and paying them using something called an API. You connect to them and that's how you run your software, using their models. Now, OpenAI and Anthropic burn billions and billions of dollars. I believe, and they have yet to suggest otherwise, that these companies are running at a massive loss. So those rates they're providing, the tokens - because customers pay to use their large language models per million tokens - they will eventually raise the price on them, making it impractical or impossible for a large AI startup to run their company.
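The per-million-token pricing Zitron describes can be illustrated with a quick back-of-the-envelope calculation. All figures here are hypothetical, chosen only to show how a rate increase by a model provider flows straight through to a startup's costs; they are not the actual prices charged by OpenAI or Anthropic.

```python
# Hypothetical illustration of per-million-token API pricing.
# Rates and usage volumes below are invented for the example.

def monthly_api_cost(tokens_used: int, price_per_million: float) -> float:
    """Dollar cost of a month's token volume at a per-million-token rate."""
    return tokens_used / 1_000_000 * price_per_million

# Suppose a startup's product pushes 2 billion tokens a month through
# a model priced at $3 per million tokens...
before = monthly_api_cost(2_000_000_000, 3.00)

# ...and the provider then doubles the rate to $6 per million tokens.
after = monthly_api_cost(2_000_000_000, 6.00)

print(f"before: ${before:,.0f}/mo, after: ${after:,.0f}/mo")
```

At these made-up numbers the monthly bill jumps from $6,000 to $12,000 with no change in the startup's own product or revenue, which is the squeeze Zitron argues flat-rate subscription businesses built on top of these APIs cannot absorb.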

We have already seen the beginning of this. Anysphere, which makes a product called Cursor, a popular AI coding app, in the middle of June had to change their pricing and add rate limits and reduce the amount of use that their customers were getting out of the product, because OpenAI and Anthropic both raised the rent on them.

This led Cursor to change their entire business model. The same thing happened with a company called Replit, which around the middle of June also added something called effort-based pricing, which changed from the general all-you-can-eat thing.

[Editor's note: Replit didn't respond to a request for comment from MarketWatch.]

Much like the variable-rate mortgages that were popular before the financial crisis, I believe the same thing is going to happen with AI products. The product people thought they were buying isn't what they had expected. In this case, the people with the variable-rate loans are the companies building startups on top of these AI models.

MarketWatch: People who push back on your criticisms like to cite Amazon Web Services as an example of a technology or product that took time to become profitable. How is AI different?

Zitron: First of all, AWS started with a use case. It existed for years before it was a publicly available service that Amazon created. Capital expenditures helped it grow as a business, but it was profitable pretty quickly. On top of that, the amount of money it was burning was minuscule. Nothing about AWS involved having to convince customers why they needed it.

[Editor's note: A representative for Amazon.com Inc. (AMZN) declined to comment.]

MarketWatch: If this does turn out to have been a bubble all along, what do you think will finally bring down the AI trade?

Zitron: I doubt there will be one Bear Stearns-style event where everything just suddenly falls apart. I don't think there is a true inflection point like that. I think the most obvious thing is if the market comes around to the idea that Nvidia will not show permanent growth, you will see a trip down on that stock. I think a big event is going to be one of the cloud providers - a Microsoft or an Amazon - showing negative growth or something approaching negative growth. Whatever causes the market to say to them, you need to bring down costs. It's going to be when they show really horrifying growth.

It's not like the market is sitting there and saying, 'Show us the money from AI.' Because if they did, they would not be happy. They're saying, 'Keep growing.' The market is conflating these two things: The company is growing and they're doing AI, so it must be AI itself that is growing. Once that growth goes away, so does the AI trade. So the question is, how long will these cloud-computing companies keep growing at a rate that will make investors overlook the fact that there is no growth from AI?

It is possible there could be a major event involving OpenAI or Anthropic, although I think it's more likely a big cloud provider misses hard on earnings, and then the market will say, didn't you say you're doing AI? How much? Where's the number? Show me the money, show me the profit.

At that point, things will start getting a little nasty.

-Joseph Adinolfi

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

 

(END) Dow Jones Newswires

July 29, 2025 16:58 ET (20:58 GMT)

Copyright (c) 2025 Dow Jones & Company, Inc.


