OpenAI is confronting a challenging reality as it prepares for a potential initial public offering, with CEO Sam Altman acknowledging the significant difficulties in data center construction. Speaking at the BlackRock U.S. Infrastructure Summit earlier this month, Altman highlighted that large-scale projects are prone to numerous complications. He cited an instance where extreme weather caused temporary service disruptions at a data center campus in Abilene, Texas—a flagship site for the $500 billion "Stargate" project involving OpenAI, Oracle, and SoftBank. Altman also noted ongoing supply chain challenges and intense pressure to meet tight deadlines.
As Altman works to transform OpenAI—a private-market darling valued at $73 billion in its latest funding round—into an asset attractive to more discerning public-market fund managers, the company faces increasing scrutiny. The shift requires scaling back some ambitious spending plans, shelving certain projects, and accepting a role as a large-scale purchaser of cloud computing rather than a builder of massive data centers. "OpenAI has recognized that the market may not endorse reckless growth and spending," said Daniel Newman, CEO of Futurum Group. "The market wants to see OpenAI's revenue growth align with its expenditure levels. This strategic shift appears aimed at demonstrating greater financial responsibility."
The new approach implies that OpenAI may have to accept a slower expansion pace while still competing fiercely with rivals like Anthropic, Google, and numerous other companies developing AI models, applications, and features. Training and running AI models demands immense computational resources, including chips, processing power, memory, and energy. Altman and other executives have long emphasized that computing power is the primary bottleneck for growth, driving the company to raise substantial capital, including a recent $110 billion round with Amazon contributing $50 billion.
Last November, Altman posted on X that due to severe computing constraints, OpenAI and other firms "have had to rate-limit our products and cannot launch new features and models." Throughout the year, a key focus was Altman's aggressive efforts to secure computing resources, with multi-billion-dollar infrastructure agreements signed with NVIDIA, AMD, and Broadcom. In his November post, Altman mentioned that the company was considering committing approximately $1.4 trillion over the next eight years. These deals stirred public markets, raising concerns about a potential AI bubble and leading many investors to question how OpenAI could manage such staggering commitments with annual revenue of just $13.1 billion.
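The scale mismatch investors questioned can be seen with a quick back-of-the-envelope calculation using only the figures cited above (roughly $1.4 trillion over eight years against $13.1 billion in annual revenue). The even spread of spending across the eight years is an assumption for illustration, not a detail from the reporting:

```python
# Back-of-the-envelope check on the reported commitments, using only
# the figures cited in the text. Assumes spending is spread evenly
# over the eight-year horizon, which is a simplification.

total_commitment = 1.4e12   # ~$1.4 trillion over the next eight years
years = 8
annual_revenue = 13.1e9     # ~$13.1 billion in annual revenue

annual_commitment = total_commitment / years          # implied yearly spend
coverage_ratio = annual_commitment / annual_revenue   # spend vs. revenue

print(f"Implied annual commitment: ${annual_commitment / 1e9:.0f}B")
print(f"Roughly {coverage_ratio:.0f}x current annual revenue")
```

On these assumptions, the implied commitment of about $175 billion per year is more than thirteen times the company's stated annual revenue, which is the gap driving investor skepticism.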
The most notable agreement was with NVIDIA. The world's highest-valued chipmaker agreed in September to invest up to $100 billion in OpenAI over several years, with funding tied to the deployment and use of NVIDIA's technology. OpenAI stated it planned to deploy at least 10 gigawatts of NVIDIA systems, with the first $10 billion investment due when the initial gigawatt—equivalent to the power consumption of a mid-sized city—was operational. Analysts at the time compared the deal to the vendor financing that fueled the late-1990s internet bubble.
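The reported structure implies a simple per-gigawatt schedule: $100 billion across 10 gigawatts is consistent with the $10 billion first tranche cited above. A minimal sketch of that implied schedule follows; the even per-gigawatt split is an assumption for illustration, since the actual contract terms were not public:

```python
# Implied tranche schedule for the reported NVIDIA deal: up to $100B
# tied to 10 GW of deployed systems. Assumes an even per-gigawatt split,
# which matches the cited $10B first tranche but is otherwise an
# illustrative simplification, not the disclosed contract terms.

total_investment = 100e9   # up to $100 billion
total_gigawatts = 10       # planned deployment of NVIDIA systems, in GW

per_gw = total_investment / total_gigawatts  # $10B per gigawatt

for gw in range(1, total_gigawatts + 1):
    released = per_gw * gw
    print(f"After {gw:2d} GW operational: ${released / 1e9:.0f}B invested")
```

The milestone-gated structure is what distinguished this deal from a conventional equity investment, and it is why analysts drew the comparison to 1990s-style vendor financing.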
Altman has repeatedly downplayed concerns about OpenAI's grand infrastructure plans, suggesting revenue could soar to hundreds of billions by 2030. However, in preparation for a potential IPO this year, OpenAI has recently tempered expectations and outlined a more cautious strategy. In February, the company informed investors that it now aims for total computing expenditures of about $600 billion by 2030, a figure intended to align more directly with anticipated revenue growth.
OpenAI is also emphasizing discipline in other areas of its business. In December, facing intensifying competition from Google and Anthropic, OpenAI declared a "code red," focusing on improving its ChatGPT chatbot. Brad Lightcap, CEO of OpenAI's applied product division, held an all-hands meeting earlier this month to discuss enterprise business, stating the company is "aggressively focusing" on high-productivity use cases. "What really matters for us right now is staying focused and executing excellently," Lightcap said, according to excerpts of the meeting notes. "This is a race."
According to people familiar with the matter, OpenAI currently owns no data centers and likely will not in the foreseeable future. Instead, the company is relying heavily on partners such as Oracle, Microsoft, and Amazon to aggregate as much computing power as possible. The situation was different a year ago. In January 2025, President Donald Trump joined Altman, SoftBank CEO Masayoshi Son, and Oracle Chairman Larry Ellison at a White House event to announce the Stargate project. The companies pledged to invest $500 billion over four years to build new AI infrastructure in the U.S. According to a blog post at the time, OpenAI would operate the project, with SoftBank handling financing; Oracle and NVIDIA were designated as key initial technology partners.
As Stargate launched, OpenAI initially planned to develop much of the project itself and intended to lease or own some data center campuses directly. However, after encountering practical construction challenges and difficulty securing lender support, the company changed course. Oracle is leasing the Stargate data center campus in Abilene and funding construction by taking on tens of billions in debt. In last September's announcement, OpenAI and NVIDIA stated that the first gigawatt of NVIDIA systems would be deployed in the second half of 2026. Experts say even under the best circumstances, this timeline is ambitious. Walid Saad, an engineering professor at Virginia Tech, noted that building a 1-gigawatt data center from scratch could take three to ten years, with challenges at every stage from site selection and permits to power access and hardware installation.
These obstacles have become very real for OpenAI, according to Gartner AI analyst Arun Chandrasekaran. "They are starting to realize, 'Let's get what we can from vendors who are willing to offer compute now,'" Chandrasekaran said. As part of the $110 billion funding round announced last month, OpenAI agreed to consume about 2 gigawatts of Trainium compute through Amazon Web Services infrastructure. Trainium is AWS's custom AI chip, with the latest version, Trainium3, released in December. NVIDIA also participated in the round, investing $30 billion. As part of the deal, OpenAI said it would expand its collaboration with NVIDIA and agreed to use 3 gigawatts of dedicated inference compute and 2 gigawatts of training compute on NVIDIA's upcoming Vera Rubin systems.
"OpenAI is doing what it must—acquiring compute resources at scale," said Futurum Group's Newman, adding that Meta, Anthropic, and Google are doing the same. "It's a race." NVIDIA's recent investment follows months of market speculation about the progress of the major infrastructure deal announced last September. The chipmaker disclosed in a November quarterly filing that the $100 billion transaction might not materialize, and January reports indicated the agreement had been "paused." In a February filing, NVIDIA noted it "cannot guarantee" that an "investment and collaboration agreement" with OpenAI will be reached or completed. At a conference earlier this month, NVIDIA CEO Jensen Huang further tempered expectations, suggesting the opportunity to invest $100 billion in OpenAI might be "off the table."
The latest investment is not tied to any deployment milestones and differs structurally from the deal promoted six months ago. Huang indicated this would "likely" be NVIDIA's "final" investment in OpenAI before its IPO. "To be fair, they have built an incredible growth story. But the path ahead won't be smooth," Newman remarked regarding OpenAI. "And because their cost structure is so high, every step toward profitability will be closely scrutinized."