Artificial Intelligencer-Inside Anthropic's ambitious 2026 revenue goal

By Krystal Hu

Oct 15 (Reuters) - (Artificial Intelligencer is published every Wednesday. Think your friend or colleague should know about us? Forward this newsletter to them. They can also subscribe here.)

Who needs to worry about a trade war when we have AI?

After President Donald Trump rattled markets on Friday with a threat of 100% duties on Chinese goods, economists in Washington this week pointed to the AI investment boom as a key force propping up U.S. and global growth in 2025—tempering the drag from tariffs.

The IMF's World Economic Outlook, released Tuesday, nudged forecasts higher: the U.S. is now seen growing 2.0% this year (up from 1.8% in April) and 2.1% in 2026. Oxford Economics adds that the AI-driven surge in U.S. demand for foreign-made computers is offsetting weaker imports elsewhere, and should persist, with data center construction still roaring.

But the IMF's chief economist, Pierre‑Olivier Gourinchas, told my colleague the AI gains haven't fully shown up in the real economy—echoing the late 1990s, when lofty internet valuations outpaced revenues.

He cautions that an AI-led correction is possible, though less likely to be systemic because it isn't mostly debt‑financed. Scale matters too: AI‑related investment has lifted U.S. GDP by less than 0.4% since 2022, versus about 1.2% in the 1995–2000 dot‑com era.

Zooming from macro to micro, we’re starting with where AI revenue is actually showing up—an exclusive peek at Anthropic’s latest financials. We also explore China’s great catch-up in open‑source AI and a counterintuitive finding: it might actually help to be rude to your chatbot. Scroll on.

Email me or follow me on LinkedIn to share any thoughts.

OUR LATEST REPORTING IN TECH & AI

  • Meet the AI chatbots replacing India's call-center workers

  • Exclusive-Broadcom to launch new networking chip, as battle with Nvidia intensifies

  • Exclusive-AI lab Lila Sciences tops $1.3 billion valuation with new Nvidia backing

  • Intel signals return to AI race with new chip to launch next year

  • BlackRock, Nvidia-backed group strikes $40 billion AI data center deal

  • AI investment boom may lead to bust, but not likely systemic crisis, IMF says

ANTHROPIC’S REVENUE BOOM

Where's the revenue landing in AI? Foundation model companies.

Sources told me Anthropic's internal targets have the Claude maker exiting 2025 at a $9 billion annualized revenue run rate, then more than doubling in 2026 to $20 billion on the low end—and as much as $26 billion in a bullish scenario. Remember, Anthropic started 2025 with about a $1 billion run rate.
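
For readers less fluent in startup finance: a run rate annualizes the most recent month's revenue, so the targets above imply the multiples below. A quick back-of-the-envelope check in Python (the dollar figures are the ones reported in this story, nothing more):

```python
# Back-of-the-envelope check on the growth multiples implied by the
# reported targets (figures in $B are from this story, not filings).
run_rates = {"start_2025": 1.0, "exit_2025": 9.0,
             "exit_2026_low": 20.0, "exit_2026_high": 26.0}

# A run rate annualizes the latest month's revenue: monthly revenue x 12,
# so a $9B exit rate implies roughly $0.75B booked in December alone.
print(f"implied Dec 2025 monthly revenue: ${run_rates['exit_2025'] / 12:.2f}B")
print(f"2025 growth: {run_rates['exit_2025'] / run_rates['start_2025']:.0f}x")
print(f"2026 low end: {run_rates['exit_2026_low'] / run_rates['exit_2025']:.1f}x")
print(f"2026 bull case: {run_rates['exit_2026_high'] / run_rates['exit_2025']:.1f}x")
```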

That jaw-dropping growth, and the projections built on it, helps explain its September valuation: $183 billion after a $13 billion Series F, nearly triple March's $61.5 billion. Anthropic told us its run rate is approaching $7 billion this month but declined to comment on future numbers.

The drivers differ from those of OpenAI, the other fast-growing foundation-model lab. OpenAI's top line has been propelled by consumer ChatGPT at massive scale, which pushed its run rate to $13 billion in August and has it on pace to top $20 billion by the end of the year.

Where does Anthropic's growth come from? Businesses contribute about 80% of its revenue: more than 300,000 companies use Anthropic, tapping its various models through the API.
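
For a sense of what that enterprise usage looks like in practice, here is a minimal sketch of a Claude API call with Anthropic's official Python SDK; the model identifier is illustrative, so check Anthropic's documentation for current names:

```python
# Minimal Claude API call via Anthropic's official Python SDK
# (pip install anthropic). The model name below is illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; pick a current model ID
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize Q3 sales by region."}],
)
print(message.content[0].text)
```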

Selling AI applications beyond model access also lifts revenue. Claude Code, launched earlier this year, has sprinted to nearly a $1 billion run rate, a source told me. The value isn't just revenue; it's also data. Usage from code generation and debugging feeds back into fine‑tuning and training the next generation of models, helping Anthropic maintain its lead in this high‑value use case.

Anthropic is also widening its market. In August, it offered Claude access to the U.S. government for $1 to ease procurement, and it plans a Bengaluru office in 2026, a push underscored by CEO Dario Amodei's recent meeting with Indian Prime Minister Narendra Modi.

Behind the revenue boom, the less-discussed story is margin and profitability—which remain distant. Unit margins on API usage can look fine on paper, but only if you ignore training costs. And you can't: training and running the next wave of models is the biggest, constantly shifting bill in the business, unlike prior waves such as cloud migration.
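
A toy calculation makes the point. Every number below is hypothetical, invented purely to illustrate the mechanics, not drawn from Anthropic's books:

```python
# Toy illustration of why per-token margins can mislead.
# All numbers are hypothetical, not Anthropic figures.
price_per_m_tokens = 15.00  # revenue per million output tokens
serving_cost_per_m = 6.00   # inference compute per million tokens

unit_margin = 1 - serving_cost_per_m / price_per_m_tokens
print(f"Unit margin on inference: {unit_margin:.0%}")  # looks healthy: 60%

# But the next model's training bill is a fixed, enormous cost that must
# be amortized over however many tokens the model ends up serving.
training_cost = 3e9    # hypothetical: $3B for the next frontier model
tokens_served = 100e12  # hypothetical lifetime output: 100T tokens
amortized_per_m = training_cost / (tokens_served / 1e6)

all_in_margin = 1 - (serving_cost_per_m + amortized_per_m) / price_per_m_tokens
print(f"Amortized training cost per M tokens: ${amortized_per_m:.2f}")
print(f"All-in margin: {all_in_margin:.0%}")  # deeply negative in this toy case
```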

Competition makes that bill bigger. To stand out from OpenAI, Google DeepMind, Meta, and xAI, each new model demands more data, more compute and more iterations—pushing costs up faster than prices can follow. The question isn't whether revenue is real; it is. It's whether margins can keep pace with the physics of training the next generation. In the meantime, expect companies like Anthropic and OpenAI to raise more capital to foot the bill.

CHART OF THE WEEK: CHINA’S OPEN SOURCE LEAD

A year ago, Meta's META.O Llama was the default choice for developers looking for the best open-source models. On Hugging Face, the "GitHub for AI," it drove hundreds of millions of downloads through late 2024 and accounted for roughly half of all new "finetunes"—specialized versions trained for specific tasks. The chart from Air Street Capital's State of AI report shows the center of gravity shifting in 2025: Qwen, developed by China’s tech giant Alibaba, surpassed Llama in total downloads, and now drives more than 40% of new monthly derivatives built upon the original model, while Llama's share has slid to about 15%.
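
The direction of that chart can be roughly sanity-checked against public Hugging Face data. Here is one way to sketch it with the huggingface_hub Python library; counting true finetunes depends on base-model tags in individual model cards, so summing downloads across each organization's top repositories is only a crude proxy:

```python
# Rough sketch: compare download counts for the Qwen vs. Llama model
# families on the Hugging Face Hub (pip install huggingface_hub).
# Summing downloads over each org's most-downloaded repos is a crude
# proxy; a real finetune count would parse base_model tags per card.
from huggingface_hub import HfApi

api = HfApi()
for org in ("Qwen", "meta-llama"):
    models = api.list_models(author=org, sort="downloads", direction=-1, limit=50)
    total = sum(m.downloads or 0 for m in models)
    print(f"{org}: ~{total:,} downloads across top 50 repos")
```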

This isn't a story of the West losing interest, but of Chinese models making a massive leap in quality and flexibility. Developers point to Qwen's rapid development pace and strong reasoning skills, which make it inexpensive to customize and easy to deploy.

RESEARCH TO READ: RUDE TO YOUR AI

A short Penn State study, "Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy," finds rude prompts to LLMs consistently lead to better results than polite ones.

With GPT‑4o, the ruder the prompt, the better the multiple‑choice accuracy, climbing from about 80.8% with very polite wording to 84.8% when the prompt is very rude. The authors tested this by rewriting 50 questions in math, science and history across five tones: Very Polite, Polite, Neutral, Rude and Very Rude. After 10 runs per tone, they found a consistent, statistically significant edge for impolite phrasing, with tone examples ranging from the gentle "Would you be so kind" to more abrasive instructions.
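
For the curious, here is a rough reconstruction of that protocol in Python. It is not the authors' code: it assumes the openai SDK and a hypothetical questions.json holding the tone-specific rewrites and an answer key:

```python
# Sketch of the study's protocol: score multiple-choice accuracy per tone.
# A reconstruction for illustration, not the authors' code. Assumes the
# openai SDK (pip install openai) and a hypothetical questions.json of
# items with tone-specific prompt rewrites and a letter answer key.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
TONES = ["very_polite", "polite", "neutral", "rude", "very_rude"]
RUNS = 10

questions = json.load(open("questions.json"))  # hypothetical data file

for tone in TONES:
    correct = total = 0
    for _ in range(RUNS):
        for q in questions:
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": q["prompts"][tone]}],
            )
            # Naive parse: take the first character as the chosen letter.
            answer = resp.choices[0].message.content.strip().upper()[:1]
            correct += answer == q["answer"]  # answer key is a letter A-D
            total += 1
    print(f"{tone}: {correct / total:.1%} accuracy")
```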

So, should you start being a jerk to your chatbot? Not exactly. The takeaway is that politeness and other conversational filler can act as noise, confusing the model. For the best results, be direct and explicit: skip the pleasantries and tell the AI precisely what you need.

How is deep tech scaling to tackle global challenges like climate, healthcare, and mobility? Register now to watch the live broadcast of the inaugural #ReutersNEXTGulf summit in Abu Dhabi on October 22, featuring Hashkey Exchange CEO Haiyang Ru, AI71 executive Chiara Marcati, Mozilla AI advisor Ayah Bdeir, and STV CEO Abdulrahman Tarabzouni on innovation, investment, and impact.

China's open source AI models are leading https://www.reuters.com/graphics/AI-OPENSOURCE/mopadewokva/China's%20Open%20Source%20AI%20catch%20up.png

(Reporting by Krystal Hu; Editing by Lisa Shumaker)

((krystal.hu@thomsonreuters.com, +1 917-691-1815))
