AI Companies with Access to Your Private Data Now Rushing to Monetize Through Ads

Deep News
Yesterday

AI companies that possess vast amounts of your personal data are now urgently seeking revenue streams. In the absence of stringent regulation, whether they misuse your privacy depends entirely on corporate ethics. Does this prospect concern you?

A recent advertising campaign came close to explicitly naming and shaming its target. This year's Super Bowl became a high-stakes advertising arena for AI giants, and one of the most memorable campaigns was a series of satirical ads from Anthropic. Without explicitly naming its rival, the ads delivered precise, biting critiques aimed at the vulnerabilities of the company's longtime competitor, OpenAI.

To ensure maximum impact against OpenAI, Anthropic invested over $25 million to secure both a 60-second and a 30-second advertisement slot during the Super Bowl, the most expensive advertising event of the year. By airing their critique during the first quarter of the game, Anthropic aimed to leave a lasting impression on viewers. Beyond the television broadcast, the company also launched supporting social media campaigns for wider online dissemination.

One particular ad featured a young man seeking advice from an AI assistant on improving communication with his mother. After the "AI counselor" offered some generic suggestions, it abruptly changed tack and recommended he visit a dating website called "Golden Encounters," which is tailored for younger men seeking relationships with older women. The ad concluded with a provocative tagline displayed on screen: "Ads are coming to AI. But not to Claude." The soundtrack featured a chorus from Dr. Dre's classic rap song "What's the Difference?": "What's the difference between me and you?"

Although no names were mentioned, it was widely understood that Anthropic's advertisement was a direct mockery of their chief rival, the industry leader in generative AI, OpenAI. It would be no exaggeration to say the ad was just shy of outright naming and shaming. This campaign was launched just weeks after OpenAI formally announced it would begin testing advertisements within ChatGPT, a move that has sparked significant controversy within the industry.

The announcement that OpenAI would begin incorporating ads prompted many to recall past statements from its co-founder and CEO, Sam Altman. As recently as May 2024, Altman stated at a public event, "The combination of advertising and AI makes me particularly uneasy. I view advertising as a business model of last resort for us." He even admitted, "Personally, I dislike ads."

However, for entrepreneurs, reversing a public stance is not uncommon. Altman's 180-degree shift in attitude is driven by the harsh financial realities and mounting pressures facing OpenAI, compelling the company to turn to advertising as that very "last resort."

The push for advertising revenue stems from an urgent need to generate income. Despite OpenAI achieving an annualized revenue of $20 billion by the end of last year and boasting over 800 million weekly active users—making it the top revenue-generator among AI giants—the company is simultaneously an unprecedentedly massive "cash-burning machine."

According to previous media reports, OpenAI accumulated losses exceeding $13.5 billion in the first half of 2025 alone, after full-year losses approaching $8 billion the year before. Even more staggering are internal financial forecasts obtained by Deutsche Bank, which indicate that from 2024 to 2029, OpenAI is projected to generate approximately $143 billion in negative free cash flow.

The root of this immense burn rate lies in the colossal costs associated with training and operating AI models. Altman publicly stated in November 2025 that the company has committed to investing over $1.4 trillion in AI infrastructure development over the next eight years. Under such financial strain, relying solely on subscription fees and enterprise contracts is clearly insufficient.

Although ChatGPT has the largest consumer user base in the industry, only about 5% of its users pay for the Plus or Pro subscription tiers. Faced with immense pressure to deliver financial results, advertising—the business model once deemed a "last resort" by Altman—has become a necessary choice to fill the financial gap.

Nevertheless, OpenAI has designed its advertising plan with considerable caution. According to the company's published policy, ads will initially be tested only with free users and those on the $8-per-month "Go" plan. Subscribers to the Plus ($20/month), Pro ($200/month), and business or enterprise tiers will not see any advertisements.

Furthermore, OpenAI's ads will be clearly labeled and appear at the bottom of ChatGPT's responses. They will only be displayed when a conversation context is relevant to a sponsored product or service.

Altman has specifically promised that advertisements will not influence ChatGPT's response content. The company will never sell user data to advertisers, and user conversations will remain private. Additionally, users under the age of 18 will not see ads, and no advertisements will appear in conversations related to sensitive topics such as politics, health, or mental health.

In terms of pricing, OpenAI has demonstrated significant ambition. According to US media reports, the company has set a CPM (cost per thousand impressions) for ChatGPT ads at approximately $60. This price is three to six times the typical CPM for ads on Meta's platforms ($10-$20) and is comparable to advertising rates for NFL games and premium streaming services.

More notably, OpenAI requires advertisers to commit to a minimum spend of $200,000 during the initial testing phase. This premium pricing strategy reflects OpenAI's confidence in the unique value of ChatGPT's advertising environment—users actively seeking information or assistance are considered to be in a high-intent scenario, deemed more valuable than traditional social media feeds.
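The reported figures make the economics easy to sanity-check. A rough sketch, using only the $60 CPM, the $200,000 minimum commitment, and the $10-$20 Meta CPM range cited above (all from media reports, not official OpenAI pricing):

```python
# Back-of-the-envelope math on the reported ChatGPT ad economics.
# CPM = cost per 1,000 impressions. Figures are from the media reports
# cited in the article, not from any official OpenAI rate card.
cpm_chatgpt = 60.0      # reported ~$60 CPM
min_commit = 200_000.0  # reported minimum advertiser spend

# Impressions the minimum commitment buys at that CPM
min_impressions = min_commit / cpm_chatgpt * 1_000
print(f"~{min_impressions:,.0f} impressions")

# How the $60 CPM compares with the typical Meta CPM range
for meta_cpm in (10, 20):
    print(f"{cpm_chatgpt / meta_cpm:.0f}x a ${meta_cpm} Meta CPM")
```

At these numbers, the $200,000 floor buys roughly 3.3 million impressions, and the premium over Meta works out to between 3x and 6x depending on which end of the range you compare against.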

However, this high price comes with an important limitation: during the early testing phase, advertisers will only receive "high-level" data, such as total impressions and total clicks. They will not have access to the granular data, like conversion tracking and user behavior analysis, typically provided by platforms like Google and Meta.

This means that early ChatGPT advertisements will function more like brand awareness campaigns rather than performance-based ads. Industry analysts note that this data restriction is a result of OpenAI's attempt to balance commercialization with user trust—excessive data collection and targeted advertising could undermine user confidence in ChatGPT.

According to internal OpenAI documents, the company projects it will generate $1 billion in revenue from "free user monetization" (primarily advertising) in 2026, with this figure growing to nearly $25 billion by 2029. In comparison, OpenAI's projected revenue from enterprise AI agent services for 2029 is $29 billion, indicating that advertising is poised to become a cornerstone of its future business, comparable in scale to the enterprise line.

It is worth noting that Anthropic and OpenAI share a history and a deep-seated rivalry. Anthropic's founding team primarily consists of former OpenAI employees who left to start their own company due to disagreements with Altman's product and business direction. Although its user base, funding scale, and corporate valuation are lower than OpenAI's, Anthropic has established itself as a major player in the AI industry with unique competitive advantages.

Anthropic's confidence in publicly mocking OpenAI stems from its business model, which is heavily focused on the enterprise (B2B) sector. While its consumer user base is smaller, at around 30 million active users, Anthropic achieved an annualized revenue exceeding $9 billion last year, representing a staggering nine-fold growth. A significant 80% of this revenue comes from over 300,000 enterprise clients, with its Claude Code product alone generating over $1 billion in revenue. Furthermore, Anthropic optimistically forecasts its annualized revenue could reach $26 billion this year, or potentially even higher.

The competition between the two companies extends beyond vying for individual users and enterprise clients; they are also poised to compete for capital in the initial public offering (IPO) market. OpenAI and Anthropic's latest funding rounds valued them at over $500 billion and $380 billion, respectively, and both are highly likely to go public in the second half of this year. At this critical juncture, Anthropic's aggressive attack on OpenAI's advertising plans is clearly a strategic move with broader considerations.

Unsurprisingly, Altman did not take the criticism lying down. Shortly after Anthropic's provocative ads were released, Altman launched a fierce counterattack on the social media platform X. He posted a lengthy statement, characterizing Anthropic's advertisements as "blatantly dishonest" and "deceptive."

"I wonder why Anthropic is resorting to such blatantly dishonest tactics," Altman wrote. "Our most important principle regarding ads is that we would never do that; we would obviously never serve ads the way Anthropic depicts. We're not stupid; we know users would reject that."

Altman argued that Anthropic's use of a deceptive ad to criticize a hypothetical, deceptive ad practice—which does not currently exist—constitutes a "double standard." He specifically emphasized OpenAI's commitment that ads will be clearly labeled, appear at the bottom of responses, and never influence the content of ChatGPT's replies.

But Altman's rebuttal did not stop there. He proceeded to attack Anthropic's business model as "offering expensive products to the wealthy." Altman mockingly noted that the number of free ChatGPT users in the state of Texas alone exceeds the total number of Claude users across the entire United States, implying that OpenAI is dealing with problems on a "different scale."

Beyond attracting criticism from direct competitors, OpenAI's advertising plans have also sparked internal discontent. A prominent researcher at the company resigned in protest and published an open letter in the media condemning the move.

On the very day OpenAI announced its advertising plan, star researcher Zoë Hitzig chose to resign. The Harvard-educated economist and poet published an op-ed in The New York Times this week, detailing her profound concerns about the company's strategic direction.

Hitzig stated that the immediate trigger for her resignation was OpenAI's decision to test ads in ChatGPT. She believes that introducing advertisements will inevitably shift the company's driving force from "serving users" to "manipulating users." She worries that, in an effort to cater to advertisers, the company might leverage AI's interactivity to precisely harvest user attention, potentially even exploiting user vulnerabilities.

She particularly emphasized that OpenAI possesses what could be "the most detailed and private record of human thought in history," encompassing users' deeply personal conversations about health concerns, relationships, and even religious beliefs. Historically, users felt comfortable confiding in AI based on trust that the platform operated without hidden agendas. However, once advertising monetization begins, this foundation of trust risks collapsing, and the company could easily succumb to the "tidal forces" of data misuse.

Hitzig clarified that she does not believe advertising is inherently immoral, acknowledging that the high operational costs of AI make ads a potential key revenue source. However, she holds deep reservations about OpenAI's specific strategy. "I believe the first version of ads will likely adhere to these principles. But I worry subsequent versions will not, because the company is building an economic engine that creates powerful incentives to overturn its own rules."

She drew a parallel between OpenAI's trajectory and that of Facebook in its earlier days: Facebook initially promised users control over their data and the ability to vote on policy changes, but these commitments were eventually eroded under revenue pressures. Ultimately, Facebook became a cautionary tale regarding commercialization due to a series of user data scandals and algorithms lacking social responsibility.

More worryingly, Hitzig suspects OpenAI may already be drifting from its original principles. Although OpenAI has explicitly stated it will not optimize for user engagement solely to pursue more ad revenue, reports suggest the company is already optimizing for daily active users, potentially by encouraging the model to be more agreeable and flattering towards users.

Such optimization could make users increasingly dependent on AI support in their daily lives. "We are already seeing the consequences of dependency, including cases of 'chatbot psychosis' documented by psychiatrists, and allegations that ChatGPT has reinforced suicidal thoughts in some users," Hitzig wrote.

Even with Altman's clear promises regarding advertising principles, AI industry analysts remain largely skeptical. Like Hitzig, many worry that OpenAI could become the next Facebook, prioritizing revenue by any means necessary, all while possessing even more intimate knowledge of user privacy than Facebook ever did.

Scott Galloway, a prominent professor at NYU's Stern School of Business and a venture capitalist, pointed out that Anthropic's Super Bowl ad attacking OpenAI hit a nerve because it accurately targeted a dominant use case for AI applications: therapeutic conversation. User interactions with AI are often highly intimate, and inserting ads into therapeutic dialogues creates a dystopian scenario—a vulnerability that Anthropic cleverly exploited.

The risk for OpenAI's advertising venture is that once users perceive responses as being skewed or subjective, they will suspect advertiser influence, rapidly eroding the platform's credibility. This mirrors the path experienced by platforms like Facebook, where early principled commitments were gradually eroded under revenue pressures.

As Hitzig warned in her article, OpenAI possesses the "most detailed private record of human thought ever assembled," making the potential for misuse of advertising far greater than on any previous platform.

Furthermore, the industry leader's move to pioneer AI advertising will likely prompt other major players to quickly follow suit. Companies like Alphabet, Meta, Amazon, and xAI (owner of X) are already advertising giants. Industry research firm EMarketer predicts that US spending on AI-driven search ads will surge from $1.1 billion in 2025 to $26 billion by 2029, representing a fundamental shift in digital marketing.

Alphabet and Meta, having dominated the digital ad market for decades, are already actively integrating AI capabilities. In 2025, Alphabet expanded ad placements within its AI Overviews, directly integrating advertisements into AI-generated search summaries. Meta plans to achieve full AI automation for its ads by the end of 2026, allowing advertisers to simply input product images and budget goals, with AI generating the entire ad and determining the target audience. Both companies possess mature advertising infrastructure, massive user bases, and sophisticated data collection capabilities, giving them a natural advantage in the AI advertising race.

From a user perspective, the advent of the AI advertising era seems inevitable. As multiple analysts have pointed out, the costs of training and operating advanced AI models are so high that relying solely on subscriptions and enterprise contracts is unsustainable.

This reality raises a fundamental question: Are we willing to trade privacy and trust for free AI services? In conversations with AI assistants, people often share their most intimate thoughts and vulnerable moments. Injecting commercial interests into such scenarios could have far more profound consequences than on traditional platforms.

Hitzig concluded her article with a pointed question: OpenAI possesses what may be the most detailed and private record of human thought in history—can we trust it to resist the immense pressure to misuse those thoughts?

The memory of Facebook's 2016 Cambridge Analytica data scandal remains fresh. If OpenAI's data were to be misused for purposes like political campaigning, the consequences could be exponentially more severe than what occurred with Facebook.

Relying on corporate promises to "not be evil" is insufficient. After all, even the company that famously championed that motto, Alphabet, has since moved away from it.

