By Mark Rosenberg
About the author: Mark Y. Rosenberg is a visiting scholar and lecturer with the UC Berkeley Department of Political Science and Computational Social Science. He is the founder of GeoQuant.
Last week, bad "science fiction" moved the market. Imagine what good science fiction will bring.
There are many reasons to be skeptical of Citrini Research's viral report published on Feb. 22 that projected massive, near-term economic fallout from AI. Its empirics are weak, its theory underspecified, and its narrative overwrought. The authors themselves framed the report as a worst-case scenario, caveated by a very high level of uncertainty.
Nonetheless, Citrini's analysis drove stocks to sell off, in large part because the market was already on edge about AI. The economic, social, and market impacts of AI are all "unknown unknowns," now exacerbated by geopolitical flux. This weekend's confluence of political conflict between the U.S. government and Anthropic and a new, AI-enabled war in the Middle East certainly won't help. Neither will the pending release of DeepSeek's latest model from China.
Under these conditions, sci-fi can provide us with valuable information. Indeed, prescient sci-fi has been modeling an "artificial general intelligence-machines-take-over-the-world" moment for many decades. At their best, sci-fi thought experiments isolate technological shocks, project them forward under different conditions of social and environmental stress, and ask: What breaks first? The physical environment? Economic and social institutions? The human psyche?
At the very least, these models are useful heuristics amid extreme uncertainty -- and right now, we need all the good heuristics we can get. Take it from someone who built a business measuring geopolitical risk.
Which brings me to Star Trek, perhaps the most prescient of the major sci-fi franchises. According to Star Trek canon -- faithfully tracked and documented in online fan-fiction -- advances in AI and biotechnology generate sustained socio-economic and political instability in the mid-2020s, fomenting the rise of hellscape-ish "sanctuary districts" -- urban developments for the destitute erected amid massive wealth, typified by downtown San Francisco, circa 2024. Riots and social unrest ensue. By 2026, Americans are fighting a second civil war.
The broader geopolitical system fractures rapidly, accelerating global conflict and nuclear proliferation and exacerbating conflict between the two largest blocs: a U.S.-led "New United Nations" and a China-led "Eastern Coalition." The result is a nuclear World War III, which kills hundreds of millions and precipitates an ecological disaster that lasts well into the next century.
Scanning this weekend's headlines, these plot devices feel more like diagnosis than fantasy. AI is starting to shape financial markets, military planning, policing, and political competition. Tech executives continue to warn that AI will drive massive economic disruption, outpacing governments' ability to respond and amplifying social instability. Biotechnology is advancing faster than regulation. Generative AI has made misinformation far cheaper than truth. Political violence around democratic processes is rising alongside social polarization.
Geopolitical risk is surging as the great power rivalry between the U.S. and China hardens into competition across technology supply chains, advanced computing, and space. The race to AGI is a primary accelerant of this struggle.
That race is driving a rapid realignment and concentration of technological and political power, just as a fractious U.S. approaches the high-stakes midterm elections in November.
Justifying their posture as a defense of American hegemony against Chinese competition, most leading tech companies have embraced the Trump administration's personalistic and transactional style of governance, with little regard for the democratic, capitalist, or national values ostensibly being defended. Those that push back are punished.
It is hard to imagine this ecosystem won't exacerbate perceptions of fraud, collusion, and manipulation in the midterms, not to mention the 2028 general election. A contested election may well trigger a dramatic bout of domestic social unrest. Either way, it won't be good for the market.
The path from these relatively reasonable speculations to nuclear apocalypse is, to say the least, another matter. Reality won't replicate a specific dystopian script, Star Trek's or otherwise. But the conditions that make such catastrophic outcomes conceivable -- growing technological, geopolitical, social, and environmental instability -- are aligning in ways that warrant concern.
These earthly dynamics are already part of the risk calculus for markets. If AI has introduced enough uncertainty that a Substack post titled "2028 GLOBAL INTELLIGENCE CRISIS" can trigger a legitimate risk-off day, consider the much fatter tails of future AI-enabled social and geopolitical crises. The surging price of gold is almost certainly pricing in these tail risks.
If we do follow a Star Trek-ian course, our story will end in utopia. By the end of the 22nd century, death and destruction caused by AI give way to an Earth free of poverty, hunger, disease, and war. As a united planet, we turn our attention to exploring the galaxy in partnership with other, more evolved worlds. Live long and prosper, et cetera.
The prospect of such an egalitarian future has its detractors, especially among the leading characters pushing the technology and politics of AI acceleration, such as Peter Thiel. Other tech leaders hope we can all just skip to the good part; others may not quite understand Star Trek at all. But all of their sci-fi libraries carry the same warnings as mine. Let's hope they pay more attention.
And the economists, too.
Guest commentaries like this one are written by authors outside the Barron's newsroom. They reflect the perspective and opinions of the authors. Submit feedback and commentary pitches to ideas@barrons.com .
This content was created by Barron's, which is operated by Dow Jones & Co. Barron's is published independently from Dow Jones Newswires and The Wall Street Journal.
March 02, 2026 11:45 ET (16:45 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.