AI isn't the solution to financial fraud - it actually might be the biggest problem
By Hansen Rada
AI-generated scams are becoming more widespread
When it comes to preventing financial fraud, many experts optimistically predicted that AI would be the "silver bullet" that could halt bad actors seeking to exploit financial institutions and businesses alike. The inclination is understandable: computers could outsmart and catch the humans running scams, without the perpetual risk of human error.
But that prediction hasn't come to pass. If anything, the explosion of AI has only raised more questions about the future of tackling fraud - especially as machine-learning-fueled scams emerge that confuse consumers and fool financial institutions once thought to be impenetrable.
AI won't just be the next big thing in solving fraud; it might be the biggest part of the problem.
Fraud is rising, and the stakes are high
Fraud-prevention professionals need to confront this reality now because businesses (especially small businesses) are struggling and are more vulnerable to fraud than ever.
Businesses are hurting. Small-business bankruptcies rose 28% in 2024 from the prior year, and 2025 has only thrown new challenges at owners - forcing them to navigate the whims of a new presidential administration, compounded by the pressure of rising inflation.
Fraud, meanwhile, is becoming more frequent. A recent report shows a double-digit increase this year in fraud targeting banks' small-business lending programs, with bad actors misrepresenting themselves to obtain loans. And the problem is expected to keep growing exponentially: a report by Deloitte found that U.S. fraud losses could grow at an annual rate of 32%, reaching $40 billion by 2027.
Unsurprisingly, experts have been quick to propose AI as a solution. Everywhere you look, headlines suggest that artificial intelligence is the answer.
But generative AI could actually drive that predicted increase in fraud. Every time the technology makes us more efficient and automates our work, it does the same for bad actors, equipping them with greater speed and more ways to commit fraud.
A study by Nationwide found that 25% of American small-business owners have been targeted by generative-AI scams over the past year. Over 50% admitted to being deceived by a deepfake image or video, and 90% of respondents believe that generative-AI scams are becoming more sophisticated.
AI is a particularly powerful tool in the hands of fraudsters. Its ability to learn is just as useful to bad actors as it is to businesses. OpenAI's safety testing of GPT-4 showed that the model could trick a human into helping it pass a Captcha test. Many of the AI-enabled deepfake programs used by cybercriminals deploy "learning" routines that probe security measures and adapt accordingly - and these programs are already popular and readily available on the dark web.
It can be simple, too. Scammers can use generative-AI tools to produce realistic fake invoices that mimic a company's real vendors, or to fabricate convincing identities to secure loans or contracts.
Machine learning's broad accessibility puts everyone at risk of fraud - and puts bad actors in a prime position to commit it ruthlessly - unless we reintroduce humans into the detection and prevention mix.
AI can be used to prevent AI fraud - but AI alone is not enough.
AI has immense potential to improve our world, our work and our everyday lives. And when used thoughtfully, it can help combat fraud: it has several promising applications in fraud detection, including bank-transaction monitoring, spam filtering, harmful-content blocking and malware detection, per PwC.
AI is good at recognizing itself, and professionals can use AI to identify chatbots, deepfakes and voice clones. However, false positives - legitimate behavior flagged as fraud - are a recurring problem for fraud detection in the financial sector; one study found they can account for as many as 90% of alerts. Most of all, fraud is a fundamentally human challenge. As a Deloitte study puts it, fraud powered by generative AI is "only limited by the criminal's imagination."
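The false-positive problem is partly a matter of base rates: when genuine fraud is rare, even a fairly accurate detector will flag mostly legitimate activity. The arithmetic below is a minimal sketch using assumed, purely illustrative numbers - a 0.1% fraud rate, 95% sensitivity and a 3% false-positive rate, none of which come from the studies cited in this article.

    # Illustrative base-rate arithmetic with assumed, hypothetical numbers --
    # not figures drawn from the studies cited in this article.
    fraud_rate = 0.001          # assume 1 in 1,000 transactions is fraudulent
    sensitivity = 0.95          # assume the detector catches 95% of real fraud
    false_positive_rate = 0.03  # assume it wrongly flags 3% of legitimate activity

    transactions = 1_000_000
    fraudulent = transactions * fraud_rate                            # 1,000 real frauds
    true_alerts = fraudulent * sensitivity                            # 950 correct flags
    false_alerts = (transactions - fraudulent) * false_positive_rate  # 29,970 wrong flags

    precision = true_alerts / (true_alerts + false_alerts)
    print(f"Alerts that are actual fraud: {precision:.1%}")           # about 3.1%
    print(f"Alerts that are false positives: {1 - precision:.1%}")    # about 96.9%

Under those assumed rates, roughly 97 out of every 100 alerts would point at legitimate customers - which is exactly why human review remains an indispensable part of the mix.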
Counterintuitively, many older systems that rely on human expertise still provide reliable safeguards. AI maturity also varies widely across industries, leaving some companies more vulnerable than others depending on where they are with the tech. There is no one-size-fits-all solution for fraud prevention - and therefore no master algorithm that can address every fraud case.
Balancing old and new approaches will be essential to staying resilient - and to rebuilding protections - against AI-powered threats. We'll have to adapt to an AI-rich world without discarding legacy tools that still work well.
AI can help us reduce fraud at both the individual and the institutional level. But to get there faster, we need to acknowledge its limitations as much as we embrace its utility.
Hansen Rada is CEO of Tax Guard, which provides tax compliance investigation and analysis.