AI Impersonation Attacks: How Deepfake Fraud Works, Real Examples, and How to Stop It

In early 2024, a finance worker at the global engineering firm Arup joined what appeared to be a routine video call. The CFO was there. So were several colleagues. The conversation made sense. The faces were familiar. The voices matched. Every other person on that call was a deepfake.

By the time the fraud was discovered, £19.8 million had been transferred across 15 transactions to five bank accounts. Not because systems were hacked or passwords were weak. Because a person trusted what they saw and heard, and acted on it.

That's where fraud lives now.

What is an AI impersonation attack? An AI impersonation attack is a form of fraud in which artificial intelligence tools, including voice cloning, deepfake video, and generative text, are used to convincingly impersonate a trusted person or organisation. The goal is to persuade the target to transfer money, share sensitive data, or grant access, without them ever realising that the person they are dealing with is not who they claim to be.

AI Impersonation Fraud Statistics: What the 2024 Data Shows

The numbers reflect a turning point.

In 2024, UK fraud losses totalled £1.17 billion across 3.31 million reported cases, a 12% rise in volume on the year before, according to UK Finance. Authorised push payment (APP) fraud, the category where impersonation is most prevalent, accounted for £450.7 million of that total. When wider scam losses beyond banking are included, the Global Anti-Scam Alliance estimates that UK individuals lost £11.4 billion in the same period, up £4 billion on the previous year.

These figures only capture what gets reported. The real scale is larger.

By the first half of 2025, the picture had already worsened. UK Finance data shows criminals stole £629.3 million in the first six months of the year alone, a 3% increase on the same period in 2024, with confirmed fraud cases rising by 17% to more than 2.09 million incidents.

Fraud now accounts for almost 50% of all crime in the UK. The Crime Survey for England and Wales recorded a 31% rise in fraud incidents in the year ending March 2025, reaching an estimated 4.2 million incidents: the highest figure recorded since fraud data collection began.

What's changed isn't just the volume. It's the method. AI has made it cheap, fast, and convincing to be someone else. Voice cloning tools require only seconds of sample audio. Deepfake video can be generated in under an hour. Personalised phishing messages now read like they were written by someone who knows you, because attackers feed scraped details of your life, your posts, your job title, your colleagues' names, into generative models.

Traditional security wasn't built for this. It was built to stop systems from being broken into, not to stop people from being persuaded.

10 Types of AI Impersonation Attack: Real Examples and How Each One Works

1. CEO Fraud and Business Email Compromise (BEC): When the Request Comes from the Top

What is CEO impersonation? An attacker impersonates a senior executive, usually via email or voice, and instructs a finance team to urgently transfer funds. The request feels real because the name, tone, and timing have been researched in advance.

In early 2025, a coordinated wave of deepfake attacks targeted Italy's corporate elite. Criminals impersonated Italian defence minister Guido Crosetto, contacting prominent business figures, including associates of fashion house Giorgio Armani, and claiming they needed help to free journalists held in the Middle East. At least one victim transferred €1 million to a Hong Kong-based account, convinced the Bank of Italy would reimburse them. The voice was convincing. The request was urgent. The context was entirely fabricated.

The global scale of this threat is reflected in the numbers. The FBI's 2024 Internet Crime Report recorded 21,442 BEC complaints in the United States alone, with losses totalling nearly $2.8 billion for the year. In the UK, CEO fraud remained the APP scam type with the highest average case value in the first half of 2025, at just over £20,000 per confirmed incident.

What makes it work: Urgency, authority, and familiarity. The request doesn't need to look unusual if the person making it sounds like someone you trust.

2. IT Helpdesk Impersonation: How a Single Phone Call Can Breach an Entire Organisation

What is IT helpdesk impersonation? An attacker calls a company's IT support team posing as an employee who has been locked out. They use publicly available information to sound convincing, persuade the agent to reset the account's credentials, and then use those credentials to move through the company's systems.

In April 2025, Marks & Spencer suffered one of the costliest cyberattacks in UK retail history. Attackers, believed to be the Scattered Spider collective, impersonated a legitimate M&S employee and called the helpdesk run by the company's third-party IT provider. The agent carried out a password reset. With those credentials, the attackers extracted M&S's entire Active Directory database, deployed ransomware across the company's estate, and encrypted critical systems. The attack directly cost M&S around £136 million in incident response, recovery, and specialist professional support, with statutory pre-tax profits for the half-year nearly wiped out, falling from £391.9 million to just £3.4 million.

What makes it work: IT helpdesks are trained to help. That instinct becomes a vulnerability when the person asking for help is not who they claim to be.

3. Vendor Impersonation and Invoice Fraud: How Attackers Intercept Business Payments

What is vendor impersonation and invoice fraud? Attackers monitor email correspondence between companies and their suppliers, then strike at payment time, sending fraudulent invoices with altered bank details. In more advanced cases, they compromise a real supplier's email account and insert fraudulent payment instructions mid-conversation, making the deception almost impossible to detect.

According to the FBI's 2024 Internet Crime Report, business email compromise, which includes vendor impersonation and payment diversion, caused $2.77 billion in reported losses in the United States in 2024 alone. A survey by the Association for Financial Professionals found that 79% of organisations experienced actual or attempted payment fraud in 2024, with BEC cited as the most common attack vector by 63% of respondents.

In 2025, the City of Baltimore lost over $803,000 after a fraudster gained access to its supplier platform, changed a genuine vendor's bank details, and intercepted two electronic fund transfer payments before the fraud was detected. One transfer was retrieved. The other was not.

What makes it work: Routine transactions don't trigger scrutiny. When the paperwork looks right and the context fits, payment happens automatically.

4. AI Voice Cloning Romance Scams: How Fraudsters Exploit Emotional Trust

What is a romance scam? Attackers build genuine-feeling relationships over weeks or months, then request money, citing an emergency. AI voice cloning adds a layer of authenticity that makes the manipulation harder to question.

In December 2024, BBC Scotland reported the case of Nikki MacLeod, a 77-year-old retired lecturer from Edinburgh, who lost £17,000 to a romance scammer using AI-generated deepfake videos. She had met someone online she knew as Alla Morgan; when she grew suspicious and asked for a live video call, she received recorded video messages instead: a woman on an oil rig, bad weather in the background, reassuring her by name. She was completely convinced. The documents looked real. The videos looked real. The bank details looked real. Over several months, she sent gift cards, bank transfers, and PayPal payments. Her own bank eventually stopped a transaction and alerted her to the fraud. Around £7,000 was recovered. The rest was not.

Nikki's case is not isolated. UK Finance recorded £30.5 million in romance scam losses in 2024, with losses rising a further 35% in the first half of 2025. The average amount lost per victim is significantly higher than in most other fraud categories, reflecting how much emotional groundwork has been laid before any request is made.

What makes it work: Emotional investment overrides rational checks. By the time the request arrives, the relationship has already done the work.

5. Government and Tax Authority Impersonation Scams: Exploiting Fear of the Law

What is government and tax authority impersonation? Scammers pose as HMRC, the police, the Home Office, or other official bodies and threaten immediate legal consequences unless payment is made. Spoofed caller IDs make the calls appear legitimate.

HMRC impersonation is one of the most reported fraud types in the UK. Victims receive calls or texts claiming they owe unpaid tax, face arrest, or risk deportation, designed specifically to prevent calm thinking. The pressure is the point: act now, before you have time to check.

What makes it work: Fear of authority produces fast, unverified action. Victims don't check because the emotional stakes feel too high.

6. Fake Tech Support Scams: How Attackers Impersonate Trusted Technology Brands

What are fake tech support scams? Attackers contact individuals claiming to be from a well-known tech company, warning of a virus or security issue. They convince the victim to grant remote access to their device, then steal data or install malware.

According to Action Fraud, computer software service fraud, the category covering fake tech support, is consistently one of the most reported fraud types in the UK, with losses running into tens of millions annually. Older victims are disproportionately targeted: Cifas data shows those aged 61 and over account for 25% of identity fraud cases and are the most commonly targeted group across impersonation-led attacks. 

What makes it work: The framing is protective, not threatening. Victims believe they are being helped.

7. Deepfake Cryptocurrency and Investment Scams: When the Endorsement Is Fabricated

What are deepfake cryptocurrency and investment scams? Fraudsters impersonate financial advisors or create fake investment platforms using deepfakes of trusted figures to add credibility. Victims send money expecting returns.

Deepfake videos of well-known figures have been used repeatedly across 2024 and into 2025 to promote fraudulent cryptocurrency schemes. In one widespread 2024 campaign, deepfake videos of Elon Musk convinced victims to send funds in expectation of doubled returns. These scams have reached British savers too: UK Finance recorded £144.4 million in investment fraud losses in 2024, a 34% increase on the year before, making it the single biggest category of APP fraud loss in the UK.

What makes it work: Credibility borrowed from recognisable figures lowers defences. When the person endorsing something looks and sounds real, the product feels legitimate.

8. AI Voice Cloning Family Emergency Scams: How Fraudsters Use Cloned Voices to Exploit Families

What are AI voice cloning family emergency scams? Attackers use cloned voices to contact family members, typically posing as a grandchild or relative in urgent trouble. The emotional intensity of the call prevents the victim from pausing to verify.

These scams are active in the UK. Action Fraud regularly receives reports of calls impersonating relatives in crisis situations, often requesting payment via bank transfer, gift card, or cash. Cifas data shows those aged 61 and over are the most frequently targeted group across all impersonation fraud types, accounting for a quarter of all identity fraud cases in 2024.

What makes it work: Caring for someone we love is instinctive. Scammers exploit that instinct by making the threat feel immediate and real.

9. Legal and Debt Collection Impersonation Fraud: How Fake Authority Triggers Fast Payment

What is legal and debt collection impersonation fraud? Scammers impersonate lawyers or debt collectors, threatening lawsuits unless immediate payment is made. They use official language, fake case numbers, and spoofed phone numbers belonging to real law firms.

In the UK, impersonation of the police, HMRC, and official bodies remains a significant and persistent fraud category. UK Finance recorded losses across bank and police impersonation scams throughout 2024, and while awareness campaigns have driven some reduction, the tactic continues to claim victims, particularly among those who feel unable to question an authority figure.

What makes it work: Legal threats feel unignorable. The combination of official language and time pressure produces payment before verification.

10. Social Media Account Takeover and Impersonation: When the Face Is Real but the Person Is Not

What is social media account impersonation? Attackers compromise a real social media account, or create a convincing duplicate, then message the account holder's contacts asking for money or personal information under the guise of an emergency.

Instagram and Facebook accounts are regularly hijacked for exactly this purpose. The trust is inherited: because the message appears to come from someone you know, the request feels safe.

What makes it work: We assume the person behind a familiar account is who they say they are. That assumption is now regularly wrong.

Why Traditional Cybersecurity Fails Against AI Impersonation Fraud

Caller ID can be spoofed. Email domains can be impersonated. Voices can be cloned. Faces can be fabricated. Security awareness training tells people to stay alert, but it can't hold the line against a live deepfake video call.

The problem isn't that people are careless. The problem is that the signals we rely on to make trust decisions, how someone looks, how they sound, and whether the context feels right, can now be manufactured at scale.

Security tools can verify identities and authenticate devices. What they can't do is confirm whether a specific request, at a specific moment, has been genuinely made by the person who appears to be making it.

That's the gap. That's where the loss happens.

How to Prevent AI Impersonation Attacks: Real-Time Mutual Verification

Most verification systems check one side of an interaction. UnDoubt verifies both.

When a high-risk request is made, whether that's a payment, a sensitive data transfer, or access approval, both parties confirm it in real time. The confirmation happens through cryptographic authentication, not through a voice, a face, or a message that could be fabricated.

A deepfake can reproduce a CFO's appearance and voice with near-perfect accuracy. It cannot forge a cryptographic signature.
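To make that concrete, here is a minimal sketch of why a signature resists impersonation when a cloned face or voice does not. The scheme shown (Ed25519, via Python's cryptography library) and the payload format are our illustration, not a description of UnDoubt's internals:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The CFO's key pair. The private key never leaves their device;
# the finance team holds the matching public key.
cfo_key = Ed25519PrivateKey.generate()
cfo_public = cfo_key.public_key()

# A genuine payment instruction, signed by the CFO.
request = b"PAY GBP 250,000 to sort 12-34-56 acct 00112233 ref INV-2041"
signature = cfo_key.sign(request)

# Verification succeeds only for the exact request the CFO signed.
cfo_public.verify(signature, request)  # no exception raised: genuine

# An attacker who mimics the CFO's face and voice perfectly still
# cannot mint a valid signature, and cannot reuse this one after
# changing so much as a single character of the request.
forged = b"PAY GBP 250,000 to sort 98-76-54 acct 99887766 ref INV-2041"
try:
    cfo_public.verify(signature, forged)
except InvalidSignature:
    print("Forged request rejected")
```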

Here's how it works:

  • Two-way verification. Both the person making the request and the person receiving it confirm the interaction. One-sided authentication is where most attacks succeed.
  • Person-first, not device-first. UnDoubt verifies the human behind the action is real, not only the device or account being used.
  • Real-time confirmation. Verification happens during the interaction, before anything is acted on.
  • A cryptographic record. Every confirmed interaction creates an undeniable record of who authorised what, and when.
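As an illustration of the two-way step, both parties can be pictured as signing the same canonical request record, with neither signature alone being enough to release the action. Everything below, the record fields, the key handling, the function name, is a hypothetical sketch, not UnDoubt's published protocol:

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical keys for illustration: each party keeps their private
# key; public keys are exchanged and pinned in advance, out of band.
requester_key = Ed25519PrivateKey.generate()  # e.g. the CFO
approver_key = Ed25519PrivateKey.generate()   # e.g. the finance worker

# One canonical record describing this specific request, at this moment.
record = json.dumps({
    "action": "payment",
    "amount": "250000.00 GBP",
    "beneficiary": "12-34-56 00112233",
    "timestamp": int(time.time()),
}, sort_keys=True).encode()

# Both sides confirm the same bytes. A deepfake on the call can produce
# neither signature; an altered record invalidates both.
confirmation = {
    "record": record,
    "requester_sig": requester_key.sign(record),
    "approver_sig": approver_key.sign(record),
}

def both_confirmed(conf, requester_pub, approver_pub) -> bool:
    """Release the action only if both signatures verify."""
    try:
        requester_pub.verify(conf["requester_sig"], conf["record"])
        approver_pub.verify(conf["approver_sig"], conf["record"])
        return True
    except InvalidSignature:
        return False

print(both_confirmed(confirmation,
                     requester_key.public_key(),
                     approver_key.public_key()))  # True
```

A production system would add device binding, key distribution, and a tamper-evident log, but the core property is the one that matters here: the approval rests on keys the parties actually hold, not on how a call looks or sounds.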

The Arup employee had every reason to trust what they saw. The faces were right. The voices were right. The meeting felt completely normal. What was missing was a single layer of verification that couldn't be faked: confirmation that the CFO had actually authorised the request at that moment.

That's what UnDoubt provides.

Verify Before You Act: Stop Impersonation Before It Becomes a Loss

Impersonation succeeds when people trust what they see and hear, and act without checking. The technology to fake that trust is now freely available and improving every month. The answer isn't to trust nothing. It's to verify the moments that matter.

Download the UnDoubt app and be among the first to protect your communications before trust becomes a liability.

Enterprise teams: contact us to discuss a pilot programme for your organisation.

References

[1] UK Finance – Annual Fraud Report 2025
https://www.ukfinance.org.uk/news-and-insight/press-release/fraud-report-2025-press-release

[2] UK Finance – Half Year Fraud Report 2025 (Full Report)
https://www.ukfinance.org.uk/system/files/2025-10/Half%20Year%20Fraud%20Report%202025_0.pdf

[3] Office for National Statistics – Crime in England and Wales: Year Ending March 2025
https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/crimeinenglandandwales/yearendingmarch2025

[4] FBI Internet Crime Complaint Center – 2024 Internet Crime Report
https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf

[5] Association for Financial Professionals – 2025 AFP Payments Fraud and Control Survey
https://www.afponline.org/publications-data-tools/reports/survey-research-economic-data/details/2025-AFP-Payments-Fraud-and-Control-Survey-Report

[6] PYMNTS – Baltimore Loses Over $803,000 to Vendor Payment Fraud (2025)
https://www.pymnts.com/news/security-and-risk/2025/baltimore-loses-over-803000-to-fraud-involving-vendor-payments

[7] Cifas – Fraudscape: Identity Fraud Data
https://www.cifas.org.uk/insight/fraudscape

[8] Sky News – M&S Reveals Cost of Cyber Attack as Profit Almost Wiped Out (November 2025)
https://news.sky.com/story/mands-reveals-cost-of-cyber-attack-as-profit-almost-wiped-out-13464171