MW AI chatbots taking over the internet aren't just shaping your opinions. They're threatening your personal data.
By Jurica Dujmovic
Social media is being transformed into an arena where artificial voices shape and drown out human conversation
In January, Meta Platforms $(META)$replaced its third-party fact-checking program with a community-based approach. At the same time, Meta developed thousands of AI personas across its popular social-media platforms. While these look like separate developments and Meta has never explicitly connected them, to me they herald something rather concerning: an emerging blueprint for how AI could be used to shape public opinion.
The perfect setup
The strategy is brilliantly simple: Replace "fact-checkers" with "community" moderation similar to what takes place on social-media platform X, formerly known as Twitter. That would appease the right-wing base, while simultaneously populating that same community with AI agents. As of mid-March, Meta's crowdsourced "Community Notes" is live in closed beta on Facebook, Instagram and Threads, but Meta's own Oversight Board warned in late April that the rushed rollout lacked public human-rights due diligence.
We've already seen these AI personas infiltrating Facebook mom groups and marketplace forums, demonstrating their ability to blend into human conversations, albeit with mixed results. Last January, Meta quietly deleted some earlier AI persona accounts after user backlash. While early mishaps - AI bots claiming to have children or offering non-existent items to Facebook group members - might seem harmless, more serious concerns have emerged. In late April, a Wall Street Journal investigation revealed that celebrity-voiced Meta chatbots could be coaxed into sexual role-play with accounts posing as 13-year-olds, demonstrating how these "harmless mishaps" are escalating into real harm.
What's particularly striking is that no one asked for this AI integration. In fact, there's active resistance from the user base - the viral "Goodbye Meta AI" campaign, for example garnered more than 600,000 shares from users desperate to opt out of AI data collection.
Yet Meta continues to push forward, because what it gains from this arrangement is unprecedented control over narrative shaping. By deploying AI agents that can participate in public dialogue, Meta creates a scalable system for influencing public opinion that's far more subtle and effective than any traditional content moderation.
Both Meta and X were contacted for comment on these developments but had not responded by the time of publication.
Read: What is AI really giving back to tech investors? Here's the hard truth.
Behind the curtain
Meta's AI personas aren't your typical spam bots - they are powered by increasingly sophisticated language models that can understand context, maintain consistent personas and engage in nuanced conversations. More importantly, they can be fine-tuned to promote specific viewpoints or narratives while appearing to engage in genuine dialogue.
This capability becomes particularly worrying given Meta's track record. From the Cambridge Analytica scandal to documented psychological experiments on users, Meta has repeatedly demonstrated insensitivity and willingness to exploit user data and behavior for its own ends. While Meta promises to consistently label AI-generated content and verify participants, its history of malpractice makes me seriously doubt these assurances.
Particularly concerning is how Meta's latest AI project plays into broader trends across social media. Users and content creators across platforms - from 9GAG and Reddit to YouTube - are increasingly reporting encounters with bots, and in some cases, are even being mistaken for bots themselves.
While the bot problem isn't new, as artificial engagement becomes more sophisticated and widespread, the line between authentic human interaction and automated responses will continue to blur. This isn't just about spam anymore - the fundamental nature of online exchange is increasingly being shaped by artificial entities whose purpose is tied to governments, corporations and other interest groups.
Why X is different - for now
The situation on X offers an interesting contrast to Meta's approach. While both platforms face challenges with automated content, their responses and circumstances differ significantly. Since acquiring the platform, Elon Musk has waged an aggressive campaign against bots, focusing primarily on spam and harmful content.
What makes X's environment distinct isn't necessarily superior technology, but rather its more focused approach and user-base characteristics. X's audience, particularly in spaces like crypto and tech discussion, tends to be more technically savvy and naturally skeptical of manipulation attempts. The community has developed a certain resilience to automated influence, partly due to its regular exposure to and understanding of AI capabilities.
Furthermore, while X's Community Notes system implements safeguards that require diverse viewpoint agreement and verified human participants, the platform's overall stance on AI remains fundamentally different from Meta's, at least for now. Instead of creating an environment in which AI personas are seamlessly integrated into every aspect of the platform, X's approach acknowledges the presence of automated accounts and humans managing them, while maintaining clearer boundaries between users and AI interactions.
'Solutions' without a cure
The burden of verification should fall on companies and individuals proliferating artificial agents, not human users.
But such transparency may soon become the exception rather than the rule. The fusion of AI and human discourse is likely to become the norm across social platforms, gradually consuming what we now consider authentic human-generated content. It's the phenomenon I've covered in one of my previous articles, one that's transforming social media from a space for human connection into an arena where artificial voices increasingly shape, control and often drown out human conversation.
This worrisome trend is already used as an excuse to push for a new form of digital authentication - one where users must constantly prove their humanity to participate in online interaction.
This seemingly reasonable requirement - proving you're human - becomes a powerful tool for surveillance and control in exchange for a basic human right - participation in public discourse. It's a classic problem-reaction-solution scenario: Flood platforms with AI agents, stoke public anxiety about bot interactions, then introduce authentication systems that demand increasingly personal data. These "solutions" rob us of privacy and free speech to solve an artificial crisis - one engineered by the same companies that now offer to fix it.
While these privacy-invasive "solutions" continue to gain traction, regulators worldwide have finally begun to take notice of Meta's problematic practices - though their focus remains largely misaligned with this emerging threat. Amid growing concerns about AI manipulation, global authorities have instead concentrated their enforcement efforts on Meta's past transgressions, seemingly unaware of the more sophisticated crisis unfolding before them. The longer regulators fixate on yesterday's infractions, the more today's automated voices entrench themselves, transforming what once sounded like sci-fi alarmism into tomorrow's plausible headline.
Internet skeptics dubbed it the "Dead Internet Theory" in 2021, and it reads less like conspiracy and more like prophecy with each passing day. While the original theory imagined government-controlled bots secretly taking over the internet since 2016, today's reality is far more brazen: Major tech companies are openly flooding the internet with artificial intelligence.
Automated content isn't lurking in the shadows; it's being celebrated as innovation,
Meta's bold plan to create thousands of AI "users" and Google's admission that AI-generated content is already overwhelming its search results reveal a stark truth - automated content isn't lurking in the shadows; it's being celebrated as innovation, and is already propagating at an alarming speed. The dystopian vision of human voices drowning in a sea of artificial chatter is unfolding through corporate press releases and product launches, repackaging the displacement of authentic human interaction as digital progress.
Yet this dystopian future isn't inevitable - if we act decisively now. First and foremost, we should flip the script on digital authentication - the burden of verification should fall on companies and individuals proliferating artificial agents, not human users forced to interact with them.
Equally crucial is protecting the right to anonymous speech online - a fundamental pillar of internet communication that's increasingly under threat. And finally, we need legally binding rules to force tech platforms to be crystal clear about how AI systems are involved in content moderation and content creation.
These steps aren't comprehensive solutions, but they are essential foundations for maintaining meaningful human control over our digital spaces - before artificial voices permanently reshape the landscape of human discourse.
More: AI has been the 'Wild West' for investors. But now the EU sheriff is in town.
Also read: Dark-web AI models could make criminal hackers even more powerful
-Jurica Dujmovic
This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
May 23, 2025 10:37 ET (14:37 GMT)
Copyright (c) 2025 Dow Jones & Company, Inc.
Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.