The gap in enthusiasm for artificial intelligence between Silicon Valley and the rest of the world is vast.
Observing the packed scenes at NVIDIA's recent conference in San Jose, one might assume universal agreement that AI's promising future has fully arrived. The event was the company's annual GTC developer conference, often called the tech industry's Super Bowl, a comparison borne out by the merchandise: green knit sweaters bearing the endearing likeness of NVIDIA CEO Jensen Huang. Huang, now a top-tier celebrity in the tech world, drew crowds who queued for hours to hear his keynote. In it, he painted a picture of immense prosperity, forecasting that NVIDIA, the world's most valuable company, could reach $1 trillion in sales by 2027.
Yet, just days earlier at a lively Oscar party in Los Angeles, a prominent playwright publicly confronted OpenAI co-founder Sam Altman, loudly comparing him to Nazi propaganda minister Joseph Goebbels.
The playwright, Jeremy O. Harris, later conceded the analogy was inappropriate and apologized for his lack of precision. "It was late, I'd had too many martinis, and Goebbels was a misstatement," he told Page Six. "I should have said Friedrich Flick"—the German industrialist who supported Nazi Germany.
Anxiety about AI permeated the Oscars, with host Conan O'Brien setting the tone within minutes of opening the ceremony. "I'm honored to be the last human host of the Academy Awards," O'Brien introduced himself, adding, "Next year, it'll be a Waymo self-driving car in a tuxedo standing here."
This sentiment is not confined to Hollywood. On Thursday, major publishing group Hachette pulled the feminist horror novel "Shy Girl" from Amazon and physical bookstores in the UK, and canceled its upcoming US release, based solely on suspicion that parts of it were AI-generated. Consider that: Hachette was so wary of public backlash against AI that it axed the book's publication without even confirming whether the author had used AI.
Even the sweater-clad crowds cheering for NVIDIA occasionally show anti-AI sentiment. At GTC, NVIDIA unveiled an upgraded version of its Deep Learning Super Sampling (DLSS) technology, an AI tool for rendering video game graphics. NVIDIA surely didn't anticipate that this tool would prove more controversial than its trillion-dollar revenue forecast: gamers reacted to the demo footage with horror, as if witnessing a real-life version of Resident Evil's Raccoon City, and criticized the tool for making every game's visuals look as if they had been run through the same bland Instagram filter. Huang's attempt at reassurance was to tell gamers they were simply "completely wrong."
I highlight these examples because I'm not sure Silicon Valley is paying enough attention to what the general public actually thinks about AI. In my view, even the brightest minds in tech often stubbornly ignore the world outside their high-pressure bubble: once they latch onto an idea, they refuse to let go. After all, Meta Platforms still refuses to shut down its failed metaverse platform, Horizon Worlds, which now feels as desolate as outer space. And just days ago, the two buyers who paid $69.3 million for Beeple's "Everydays" artwork in 2021 finally settled their dispute over ownership of the NFT, long after anyone stopped mentioning NFTs at all.
Silicon Valley isn't completely unaware that average consumers rarely share its fervor for AI. This is precisely why, in early 2026, the industry is pushing AI products so hard to enterprise clients. A prime example is OpenAI, which heavily promotes its coding tool Codex and hired the developer of OpenClaw. Corporate clients are easier to persuade because businesses can view AI in purely quantitative terms: if a technology can cut costs by 10%, why not adopt it? Or at least, why not try it?
To be clear, I'm not suggesting AI is worthless; many AI programs today can significantly boost individual productivity. But most people do not enjoy living by purely quantitative metrics. Interests, leisure, and joy are inherently emotional and qualitative, which is why people are reluctant to rely on AI for everything, even if it saves 90% of costs or increases efficiency fivefold.
In other words, I believe Silicon Valley faces a protracted battle: it will be difficult to get consumers to pay enough for AI products to sustain their development. Persuading the public to accept and use AI will take considerable time and progress far more slowly than the tech industry expects; it will not happen overnight, and it may never become universal. The wisest venture capital in 2026 will approach the consumer AI space with at least some caution, and I sense some investors are already thinking this way. My view might change if the AI era produces an iPhone-like breakthrough: a must-have new product that fundamentally changes how we interact with technology. But it's hard to imagine a desktop-bound chatbot being the key to a brilliant future.
Another clear signal that Silicon Valley had bridged the enthusiasm gap would be someone devising a more appealing name for autonomous AI.
In Silicon Valley, some AI evangelists view the Bloomberg Terminal as outdated technology ripe for disruption; on Wall Street, it remains as indispensable as a sacred relic.
Discussing watches, Patek Philippe enthusiast Paul Graham noted: "Outdated technology isn't typically used to display wealth. So why are mechanical watches an exception? Because a watch is the perfect vehicle. What better way to display wealth than something worn on the wrist, visible to all? And what item is more suitable? You could wear a diamond ring or a gold chain, but for investment bankers, such items are socially awkward. They may be profit-driven, but they aren't gangsters."
After a Times of Israel reporter covered an Iranian missile attack, gamblers on the Polymarket prediction platform issued death threats in an attempt to force him to alter his reporting.
Former Uber self-driving head Raffi Krikorian, writing about his own Tesla crash experience, pinpointed the dilemma of AI automation. "We ask humans to supervise systems designed to make supervision pointless," he wrote. "A machine that frequently errs keeps humans alert; a perfectly functioning machine needs no oversight. But what about an almost-perfect machine? That's where the danger lies."
Several new search engines allow users to upload photos of others to find similar-looking OnlyFans creators. More disturbingly, developers market these sites as a social good, claiming they discourage people from creating deepfake pornography.
After posting so many pro-Trump tweets, Sam Bankman-Fried is surely headed for the same fate as Trevor Milton, isn't he?
As the founders of Blank Street Coffee discovered, venture capital and multiple espresso shots share a common trait: more isn't necessarily better.