How AI Is Changing Fraud: 10 Ways Scammers Are Using Artificial Intelligence in 2026

There was a time when you could spot a scam by its grammar. A spelling error, an odd phrase, a request that didn't quite fit. Those signals are gone.
UK fraud hit a record 444,000 cases in 2025 - more than 1,200 incidents every single day. Fraud now accounts for over 40% of all crime in the UK, and AI is the primary accelerant. Not because it has invented new crimes, but because it has made old ones dramatically faster, cheaper, and more convincing.
Here are 10 ways scammers are using AI to steal your data and money.
1. AI-Generated Phishing Emails That People Actually Click
Over 82% of phishing emails now contain AI-generated content, and it works. Microsoft's 2025 Digital Defense Report found people are 4.5 times more likely to click an AI-written phishing email, with a 54% click-through rate versus 12% for messages written manually. A campaign that previously took a team sixteen hours to build now takes five minutes and five prompts.
2. Voice Cloning: When the Voice Sounds Like Someone You Trust
AI voice cloning tools can produce an 85% voice match from just three seconds of audio, and more than a dozen are freely available online.
In July 2025, a woman in Florida received a call from what sounded exactly like her daughter's voice - crying, distressed, claiming she had been in an accident and needed money immediately. She transferred the funds before she could reach her real daughter. The voice had been cloned from audio scraped from social media.
3. Deepfake Video Calls: When Seeing Someone on Screen No Longer Means They Are Real
Real-time deepfake technology has made it possible to impersonate anyone on a video call, not with a static image but with a live, interactive AI-generated face and voice. According to the Wall Street Journal, AI-generated executive impersonations caused over $200 million in losses in Q1 2025 alone. Fortune reported in March 2026 that deepfake fraud drained $1.1 billion from US corporate accounts in 2025, tripling from the year before.
Professor Hany Farid, a digital forensics specialist at UC Berkeley, told Bloomberg: "As a general rule, this idea that you're on a video call with somebody, so you can trust that, is over." No passwords stolen. No systems breached. The attack surface is the video call itself.
4. Synthetic Identity Fraud: Building a Criminal From Thin Air
Synthetic identity fraud combines real and fabricated personal data to create new "people" who do not exist. These fake identities are used to open accounts, apply for credit, and build a financial footprint before the fraud is committed at scale.
AI has made this dramatically easier. Generative AI tools produce convincing documents, simulate digital behaviour, and generate the consistency that basic verification systems check for. Identity fraud accounted for 72% of all UK fraud cases recorded in 2025, with synthetic identities growing as a proportion of those.
5. Fraud as a Service: Crime Now Runs on Subscription
Criminal organisations have started packaging their capabilities and selling access to others. Mentions and sales of AI-powered fraud tools on the dark web surged over 200% between 2024 and 2025. Dark web marketplaces now resemble full-service storefronts, offering fraud kits for beginners, phishing tools for the technically capable, and full subscription services for those running operations at scale.
6. Hyper-Personalised Spear-Phishing Using Your Own Data Against You
Classic phishing was broadcast and generic. Spear-phishing is targeted and specific, but it used to require significant human effort. AI has made it scalable.
Agentic AI systems can now research a target using public and stolen data and compose a personalised phishing message autonomously. Your job title, your manager's name, your recent project - all of it available online, and all of it usable against you. Breached personal data surged 186% in Q1 2025, giving criminals more raw material than ever.
7. AI Romance and Investment Scams Running Across Thousands of Targets at Once
Romance scams once depended on time: building trust with one person over weeks. AI removes that constraint entirely. LLMs can now sustain dozens of simultaneous "relationships", adapting tone and personality to each target individually. Experian's 2026 Future of Fraud Forecast identifies AI-powered, emotionally intelligent bots as a top emerging threat.
Chainalysis reported $17 billion in crypto scam losses in 2025, with AI-enabled scams proving 4.5 times more profitable than traditional fraud. A significant share of these are romance-based, where victims are drawn into fake investment platforms over weeks before being persuaded to invest serious sums.
8. Executive Impersonation and Authorised Push Payment (APP) Fraud
UK Finance's 2025 Annual Fraud Report recorded over £1.17 billion stolen in 2024, with 70% of authorised push payment cases enabled online. The dominant mechanism is executive impersonation: a finance team receives a convincing request that appears to come from a senior leader, with instructions to pay quickly and bypass the usual process.
9. AI-Powered Scam Call Centres Operating at Industrial Scale
Industrial-scale scam call centres run scripted operations and, increasingly, deploy AI-generated callers. Criminal developers now offer AI-powered call centre platforms built explicitly for fraud. Major retailers report receiving more than 1,000 AI-generated scam calls per day. Real-time voice modulation means one operator can run multiple simultaneous calls in different voices. AI manages the conversation; the human only steps in to close the transaction.
10. Agentic AI: The Shift to Fully Automated Multi-Step Fraud
Each of the threats above still requires human involvement at some stage. Agentic AI is beginning to change that. Autonomous AI systems can now plan and execute multi-step fraud operations, researching a target, drafting and sending a message, holding a conversation, and guiding a payment, without a human directing each step.
UK Finance's 2025 report specifically identifies agentic AI as an emerging threat, warning that criminals will soon be able to automate fraud in ways that existing detection systems are not designed to handle. This is not a future concern. The capability already exists.
What Connects All of These AI Fraud Threats
Every attack above shares one mechanism: it exploits the moment of trust. The moment you decide a voice is real. The moment a face confirms a request. The moment you act, because everything looks right.
The only reliable protection is confirming that the right person genuinely made the request, which is what UnDoubt is built to do. UnDoubt provides real-time human verification between two parties at the moment a high-risk action is about to be taken, whether that is a payment, an access request, or a sensitive communication. Both sides confirm who they are and what they are authorising, before anything happens.
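To make that control flow concrete, here is a minimal sketch in Python. Every name in it (PaymentRequest, Verifier, handle) is an illustrative placeholder rather than UnDoubt's actual API; the pattern it shows is simply that the channel a request arrives on is never allowed to verify itself, and a high-risk action proceeds only after explicit confirmation on a separate, trusted channel.

    """A minimal sketch of out-of-band confirmation before a high-risk action.

    All names here (PaymentRequest, Verifier, handle) are illustrative
    placeholders, not UnDoubt's actual API. The point is the control flow:
    the channel a request arrives on never verifies itself."""

    from dataclasses import dataclass


    @dataclass
    class PaymentRequest:
        claimed_sender: str  # who the message says it is from
        payee: str
        amount: float
        channel: str         # where the request arrived: email, call, video...


    class Verifier:
        """Stands in for any independent, pre-registered confirmation channel."""

        def confirm(self, person: str, action: str) -> bool:
            # A real system would push a prompt to the person's own verified
            # device and wait for an explicit approval. Here we simulate a
            # refusal so the blocking path is visible.
            print(f"Asking {person} to confirm: {action}")
            return False


    def handle(request: PaymentRequest, verifier: Verifier) -> str:
        action = f"pay {request.amount:,.2f} to {request.payee}"
        # Never trust the inbound channel: confirm on a separate, trusted one.
        if not verifier.confirm(request.claimed_sender, action):
            return "Blocked: the real person never confirmed this request."
        return "Approved: confirmed out of band."


    if __name__ == "__main__":
        req = PaymentRequest("the CFO", "Acme Ltd", 250_000.0, channel="video call")
        print(handle(req, Verifier()))

In practice the confirmation step is bound to a known person and their own verified device, never a reply on the same email thread, phone number, or video call that carried the request.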
→ Download the UnDoubt app now and protect your money and data from impersonators.
→ Interested in an enterprise solution? Get in touch. undoubt@lastingasset.com
Frequently Asked Questions About AI Scams and Fraud
What is AI-powered fraud?
Fraud that uses artificial intelligence tools, including large language models, voice cloning, deepfake generation, and autonomous agents, to deceive people at a scale and level of believability previously impossible. Most AI fraud targets human trust rather than technical systems.
Are AI scams increasing in the UK?
Yes. UK fraud reached a record 444,000 cases in 2025, with Cifas warning that AI is enabling more convincing impersonation, faster synthetic identity creation, and automated attacks at scale.
How can you tell if a voice or video call is AI-generated?
In most cases, you cannot. Verification through a separate, trusted channel is the only reliable protection, which is exactly what UnDoubt is built for. Rather than trying to detect what is fake, UnDoubt confirms what is real: both parties verify each other in real time, before any action is taken.
What is the best way to protect yourself from AI impersonation fraud?
Pause before acting on any request involving money, access, or sensitive information. Verify through a channel you already trust, not the one the request came through. Treat familiarity (a voice, a face, a name) as insufficient confirmation on its own.
Sources
[1] OCCRP. AI Accelerates UK Fraud Cases to a Record 444,000 in 2025. https://www.occrp.org/en/news/ai-accelerates-uk-fraud-cases-to-a-record-444000-in-2025
[2] Sift. Q2 2025 Digital Trust Index. https://sift.com/index-reports-ai-fraud-q2-2025/
[3] Microsoft. Digital Defense Report 2025. https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Microsoft-Digital-Defense-Report-2025.pdf#page=1
[4] McAfee. AI Voice Cloning Research. https://www.mcafee.com/ai/news/ai-voice-scam/
[5] American Bar Association. The Rise of the AI-Cloned Voice Scam. https://www.americanbar.org/groups/senior_lawyers/resources/voice-of-experience/2025-september/ai-cloned-voice-scam/
[6] Wall Street Journal. AI Drives Rise in CEO Impersonator Scams. https://www.wsj.com/articles/ai-drives-rise-in-ceo-impersonator-scams-2bd675c4
[7] Fortune. Boards Aren't Ready for the AI Age: What Happens When Your CEO Gets Deepfaked. https://fortune.com/2026/03/03/boards-arent-ready-for-the-ai-age-what-happens-when-your-ceo-gets-deepfaked/
[8] Bloomberg. Deepfakes and Chatbots Have Web Users Struggling to Prove Their Humanity. https://www.bloomberg.com/features/2025-ai-deepfakes-chatbots-human/
[9] Cifas. Fraudscape 2026. https://www.cifas.org.uk/newsroom/fraudscape2026
[10] Telefonica Tech. A Dangerous Alliance: The New Dark Web and AI Marketplace. https://telefonicatech.com/en/blog/a-dangerous-alliance-how-ai-is-reshaping-the-dark-web-economy
[11] LexisNexis Risk Solutions. Fraud for Sale: Dark Web Research 2026. https://risk.lexisnexis.com/global/en/about-us/press-room/press-release/20260210-dark-web
[12] Malwarebytes. How AI Made Scams More Convincing in 2025. https://www.malwarebytes.com/blog/news/2026/01/how-ai-made-scams-more-convincing-in-2025
[13] Vectra AI. AI Scams in 2026: How They Work and How to Detect Them. https://www.vectra.ai/topics/ai-scams
[14] Experian. 2026 Future of Fraud Forecast. https://www.experian.com/content/dam/marketing/na/thought-leadership/business/documents/2026-future-of-fraud-forecast-infographic.pdf
[15] Chainalysis. Crypto Scams 2026. https://www.chainalysis.com/blog/crypto-scams-2026/
[16] UK Finance. Annual Fraud Report 2025. https://www.ukfinance.org.uk/system/files/2025-05/UK%20Finance%20Annual%20Fraud%20report%202025.pdf
[17] Group-IB. From Deepfakes to Dark LLMs: How AI Is Powering Cybercrime. https://www.group-ib.com/blog/ai-cybercrime-usecases/
[18] Fortune. 2026 Deepfakes Outlook and Forecast. https://fortune.com/2025/12/27/2026-deepfakes-outlook-forecast/

