AI and Fraud: Protecting Your Money and Identity from Deepfakes
Tech Companies Fail to Combat AI-Powered Fraud: Your Guide to Staying Safe Online
A Call for Consumer Protection: Consumer advocacy group "Which?" is urging the government to impose stricter regulations on tech giants to better protect users from sophisticated scams that leverage artificial intelligence. These schemes, known as AI-powered fraud, use AI tools to create fake messages, images, and videos that are harder to detect and let scammers target far more users (Experian, 2024-04-05).

Fake Celebrity Videos: Investigations by "Which?" found highly convincing fake videos of renowned financial journalist Martin Lewis, as well as UK Prime Minister Keir Starmer, circulating widely. The videos show these public figures urging the public to invest in fraudulent schemes, lending the scams a false air of official endorsement alongside bogus claims that they are "risk-free."

Surge in AI Fraud: 2025 saw a dramatic rise in AI-powered identity fraud, and the rapidly evolving technology makes such fraud increasingly difficult to detect. "Which?" has now called out major tech companies like YouTube, X (formerly Twitter), and Meta, accusing them of a "clear unwillingness to remove dangerous and misleading content." The organization also demands that the government "ensure its upcoming anti-fraud strategy includes tough measures that oblige major tech companies to take responsibility."
Common Types of AI-Powered Fraud and How They Work
- AI Voice Impersonation Scams: Mimicking voices to request information or money.
- Deepfake Scams: Fake videos of celebrities or officials.
- Romance Scams: Building fake emotional relationships to solicit money.
- Phishing Messages: Convincing emails designed to steal data.
- AI Chatbots: Simulating human conversation to manipulate victims.
- Fake Websites: Mimicking legitimate sites to collect information.
- Investment Scams: Fake investment schemes, often cryptocurrency-related.
Exploiting AI Tools: Scammers leverage powerful AI tools like ChatGPT, Gemini, and Microsoft Copilot to create hard-to-detect scams and scale them easily (CanIPhish, 2025-01-10). Some of the most common types include:
- AI Voice Impersonation Scams (Vishing): Scammers use AI technology to accurately mimic voices, sometimes needing only 3 seconds of audio to clone a voice. They can then make calls that appear to be from loved ones or trusted officials to request personal information or urgent funds. In 2023, an Arizona mother received a call from her "kidnapped daughter," with scammers demanding a $1 million ransom, highlighting the convincing nature of this technique (Norton, 2025-09-16).
- Deepfake Scams: Scammers create highly convincing videos or live streams that mimic people's appearance and voices almost perfectly, including celebrities, colleagues, or even family members. These deepfakes can be used to pressure victims into transferring money or handing over sensitive information. Deepfakes have increased by over 2,100% since the advent of generative AI in 2022 and are expected to cause losses exceeding $40 billion in the next few years (Norton, 2025-09-16). In one prominent case, a Hong Kong finance employee transferred more than $25 million after being tricked by deepfakes of several senior executives on a video call (CanIPhish, 2025-01-10).
- AI-Powered Romance Scams: Scammers use AI to create fake identities, photos, videos, and voices of non-existent people, then interact with victims online to build false romantic relationships. Generative AI models allow them to manage multiple conversations simultaneously and build deep emotional bonds, often leading to requests for financial assistance. An example is "pig butchering" scams, where trust is slowly built before scammers request significant investments in fraudulent schemes (CanIPhish, 2025-01-10).
- AI-Generated Phishing Emails: Using natural language processing, AI can craft highly realistic and grammatically flawless phishing emails, increasing the likelihood of them being opened and interacted with. These messages aim to trick victims into visiting fake websites, downloading malware, or revealing sensitive information like passwords or bank account details (Norton, 2025-09-16).
- AI-Powered Chatbots: AI chatbots can skillfully mimic human conversations, posing as bank technical support or even a potential romantic partner on dating sites. They are programmed to manipulate victims into taking actions such as buying fraudulent products or revealing personal information (Norton, 2025-09-16).
- AI-Generated Fake Websites: Scammers use AI to create fake websites that closely imitate legitimate company sites, government agencies, or news outlets, often complete with fake images and customer reviews. Victims are directed to these sites through phishing attacks and enter their personal or financial information believing they are interacting with the real site (Norton, 2025-09-16). A simple look-alike-domain check is sketched after this list.
- AI-Powered Investment Scams: These schemes exploit AI algorithms to mislead investors on a large scale, particularly targeting cryptocurrencies and stock trading. Scammers create fake social media profiles and websites to spread misinformation about investment opportunities, or even manipulate stock prices through tactics like "astroturfing," where thousands of fake accounts are created to generate artificial buzz or fear about a particular asset (CanIPhish, 2025-01-10).
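
To make the fake-website and phishing tactics above more concrete, here is a minimal Python sketch of a look-alike-domain check. It is an illustration only, not any vendor's detection method: the TRUSTED allowlist and the 0.8 similarity threshold are assumptions chosen for the example, and real scam domains use many tricks (such as visually similar Unicode characters) that this does not cover.

```python
# Minimal sketch: flag hostnames that merely resemble trusted sites.
# The TRUSTED list and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = ["which.co.uk", "bbc.co.uk"]  # hypothetical allowlist


def classify_link(url: str) -> str:
    """Return 'trusted', 'look-alike', or 'unknown' for a URL's hostname."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")

    for good in TRUSTED:
        # Exact match, or a genuine subdomain of a trusted site.
        if host == good or host.endswith("." + good):
            return "trusted"

    for good in TRUSTED:
        # Trusted name buried inside a longer, unrelated domain,
        # e.g. "bbc.co.uk.account-verify.example".
        if good in host:
            return "look-alike"
        # Small spelling tweaks, e.g. "wh1ch.co.uk".
        if SequenceMatcher(None, host, good).ratio() > 0.8:
            return "look-alike"

    return "unknown"


if __name__ == "__main__":
    for u in ["https://www.bbc.co.uk/news",
              "https://bbc.co.uk.account-verify.example/login",
              "https://wh1ch.co.uk/offer"]:
        print(u, "->", classify_link(u))
```

Running it labels the genuine BBC address "trusted" and the two impostor-style URLs "look-alike"; that small inconsistency in the address bar is exactly the kind of detail worth pausing over before entering any information.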

Warning About Financial Influencers: The Financial Conduct Authority has issued general guidance warning against trusting advice from unvetted financial influencers. Even so, about one in five (20%) investors trust online influencers when making investment decisions.
Importance of Content Review: Fake clips that appear to come from genuine, trusted sources are a different matter entirely, so it is crucial to scrutinize the content you are watching: check that it comes from the official channel, that links are secure, and that the website is legitimate.
Fake Websites: Complicating matters further, criminals can use AI to quickly produce convincing fake websites that impersonate reputable news outlets such as "Which?" and the BBC.

Difficulty Distinguishing Fact from Fiction: Rocio Concha, Director of Policy and Advocacy at "Which?", notes that "AI makes it much harder to detect what is real and what isn't."
Scammer Exploitation and Platform Inaction: "Scammers know this - and exploit it relentlessly. Meanwhile, the major tech platforms that many of us use daily are not doing enough to prevent scammers from operating on their sites, putting their users at risk."
Call for an Anti-Fraud Strategy: "To properly protect British citizens from fraud, the government must implement an action-oriented anti-fraud strategy that is tough on major tech companies and other weak links that enable scammers to thrive online."
YouTube's Cloning Reporting Tool: On a more positive note, YouTube recently introduced a tool that lets creators report AI-generated cloning of their likeness. It is not aimed specifically at financial deepfake fraud, but it is a step in the right direction for identifying fake videos.
How to Protect Yourself from AI-Powered Fraud?
Essential Protection Steps: While AI-powered fraud can be complex, there are essential steps you can take to minimize risks (Norton, 2025-09-16):
- Do Not Click Suspicious Links: Avoid clicking links you receive via chat, email, or text messages, especially if they come from an unknown source. These links can lead to fake websites or expose your device to malware. As illustrated in the sketch after this list, the text displayed for a link can differ from the address it actually opens.
- Be Skeptical of Unsolicited Messages: Unexpected messages are often from scammers. AI helps scammers create and respond to more of these messages than ever before. It's best to avoid responding to them.
- Limit Sharing Personal Information Online: Do not share your personal information on social media, messaging apps, AI tools, or anywhere else online (except for trusted financial or government accounts). Scammers use AI to collect personal information, which they exploit to commit identity theft.
- Be Wary of Urgent or Unusual Requests: If someone sends you an urgent request or a threat, they may be a scammer trying to manipulate you. Take a step back and wait until you can think clearly before taking any action.
- Verify Caller/Sender Identity: Always double-check the identity of the caller or sender when you receive a message asking for personal information or money. Verify their identity by contacting them through a different communication channel (using a phone number you know, not the one in the suspicious message).
- Invest in AI Fraud Protection: AI fraud detection tools, such as Norton Genie, can identify risks and alert you before it's too late, with over 90% accuracy in identifying scams (Norton, 2025-09-16).
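
As a concrete illustration of the advice on suspicious links, the short Python sketch below parses an HTML snippet (such as an email body) and flags links whose visible text names one domain while the underlying href points to another. This is a simplified, hypothetical example rather than how any mail provider actually filters messages; the sample email_body and the bare-domain heuristic are assumptions made for the demo.

```python
# Illustrative sketch: flag links whose visible text claims one domain
# but whose href actually opens a different one.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (href, visible_text) pairs from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href") or ""
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def domain(text: str) -> str:
    """Best-effort hostname from a URL or bare 'example.com' style text."""
    if "://" not in text:
        text = "http://" + text
    return (urlparse(text).hostname or "").lower().removeprefix("www.")


def mismatched_links(html: str):
    auditor = LinkAuditor()
    auditor.feed(html)
    for href, text in auditor.links:
        shown, real = domain(text), domain(href)
        # Only meaningful when the visible text itself looks like an address.
        if shown and real and "." in shown and shown != real:
            yield text, href


if __name__ == "__main__":
    # Hypothetical phishing-style email body for the demo.
    email_body = ('<p>Update your details at '
                  '<a href="http://secure-login.example/bank">www.yourbank.co.uk</a></p>')
    for text, href in mismatched_links(email_body):
        print(f"Shown as {text!r} but actually opens {href!r}")
```

The mismatch it prints, a link displayed as www.yourbank.co.uk that really opens secure-login.example, is a classic phishing tell; hovering over a link (or long-pressing on mobile) reveals the same information without any code.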
What to Do if You Fall Victim to AI-Powered Fraud?
- Secure Compromised Accounts: Change passwords, close affected accounts, and notify financial institutions.
- Report Fraud: Report it to federal authorities at reportfraud.ftc.gov to aid tracking.
- File a Police Report: Contact your local police department; a report is often required by insurers and banks.
- Document Everything: Keep precise records of financial statements, police reports, and screenshots.
- Monitor Your Financial Accounts: Watch for unfamiliar charges or unusual activity.
- Protect Your Credit: Set up fraud alerts or freeze your credit with the credit reporting agencies.
- Scan Your Devices for Malware: Use antivirus software to identify viruses and malicious activity.
Immediate Actions After Fraud: If you have fallen victim to AI-powered fraud, it is crucial to take immediate actions to protect yourself and minimize damages (Norton, 2025-09-16):
- Secure any Compromised Accounts: Change passwords or close affected accounts. If they are financial accounts, contact the institution and set up fraud alerts.
- Report Fraud: Visit reportfraud.ftc.gov to report the fraud. This will help federal authorities track fraud and prevent others from falling victim.
- File a Police Report: Contact your local police department to file a report. Do this immediately, as insurance companies and banks may require a police report before they can assist you with reimbursements and disputed charges.
- Document Everything: Keep a record and document every way you are affected by the fraud. Print financial statements and police reports, and take screenshots of any communication with the scammer. Clear documentation is key to recovering your money and helping investigators hold perpetrators accountable.
- Monitor Your Financial Accounts: Monitor your accounts for unfamiliar charges or any other unusual activity.
- Protect Your Credit: Contact the major credit reporting agencies (Equifax, Experian, and TransUnion) to set up fraud alerts or freeze your credit. A credit freeze prevents anyone (including you) from opening new lines of credit, such as new credit cards. Fraud alerts require lenders to request verification before extending credit.
- Scan Your Devices for Malware: Use antivirus software to scan your internet-connected devices (phone, tablet, computer, etc.) for malware. This software will identify hidden viruses and other malicious activity occurring in the background.
Best Antivirus Software for All Budgets:
- Best Overall: Bitdefender Total Security.
- Best for Families: Norton 360 with LifeLock.
- Best for Mobile: McAfee Mobile Security.