FBI Warns of USD 262M in Account Takeover Losses as AI Scams Rise

AI-generated Image (Credit: Jacky Lee)

Cybercriminals have siphoned more than USD 262 million from US bank accounts so far in 2025 through account takeover (ATO) scams, the Federal Bureau of Investigation has warned, as retailers head into Black Friday and the peak online shopping season.

According to a public service announcement from the FBI’s Internet Crime Complaint Center (IC3), more than 5,100 victims reported ATO incidents between January and July 2025, with average losses of just over USD 51,000 per case. Criminals typically impersonate bank or card staff via calls, texts or emails, pressure victims into sharing one-time passcodes or login credentials, then move funds into mule accounts or cryptocurrency platforms, making recovery difficult once the money is in motion.
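
The headline figures are internally consistent; a quick back-of-the-envelope check of the per-case average:

```python
# Sanity check on the FBI's reported figures: total ATO losses divided
# by the complaint count should match the "just over USD 51,000"
# average loss per case cited in the alert.
total_losses_usd = 262_000_000   # reported losses, Jan-Jul 2025
complaints = 5_100               # reported ATO complaints, same period

average_loss = total_losses_usd / complaints
print(f"Average loss per complaint: USD {average_loss:,.0f}")
# 262,000,000 / 5,100 ≈ 51,373, i.e. "just over USD 51,000"
```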

Security researchers say the spread of generative AI is amplifying those risks. Large language models can generate convincing messages in seconds, and deepfake audio tools now allow fraudsters to imitate call-centre agents or family members with far fewer of the spelling mistakes or awkward phrasing that once alerted wary users. Regulators and law-enforcement agencies in the US and Europe have separately warned that criminals are experimenting with AI to scale phishing, social-engineering and other fraud schemes, even though the FBI’s ATO alert itself does not single out AI.

From Long-Running Phishing to AI-Polished Scams

Account takeover fraud has been a feature of online banking for years, but the latest figures suggest a shift in both scale and sophistication. The FBI’s 25 November 2025 alert draws on complaints from sectors including finance, healthcare and retail, and stresses that once criminals gain access to online banking or card portals, they often move quickly to initiate large transfers or add new payees before victims or banks can intervene.

Holiday shopping periods are a particular flashpoint. In a 2023 analysis of US e-commerce fraud during the “Cyber Five” period (Thanksgiving through Cyber Monday), credit bureau TransUnion found that the share of online transactions flagged as suspected fraud was:

  • 18% higher during Cyber Five 2023 than during the same five-day stretch in 2022, and

  • 12% higher than the average level for the rest of 2023 (1 January to 22 November).

That pattern – elevated fraud attempts condensed into a short, high-volume window – has since become a recurring theme in vendor reporting around major sales events.

At the same time, criminals have begun systematically abusing large language models and related tools. Security firms have documented “phishing kits” and playbooks sold on dark-web markets that use AI to customise emails, chat messages or SMS lures at scale, often inserting a target’s name, bank or retailer into templates. Blockchain-analytics providers such as Chainalysis estimate that around USD 2.2 billion in cryptocurrency was stolen in 2024 through a combination of hacks and scams, many of which started with credential theft or social-engineering stages similar to ATO compromises.

Mobile, Retail and Deepfakes in Focus

Researchers say the convergence of ATO and AI-assisted fraud is most visible on mobile devices and shopping platforms:

  • Mobile phishing: Zimperium’s recent mobile threat research concludes that a large majority of phishing pages are now designed primarily for phones, with one widely cited figure indicating that around four in five phishing URLs target mobile users. Attackers frequently send SMS (“smishing”) messages that impersonate delivery firms, banks or shopping sites and direct recipients to spoofed login pages optimised for small screens.

  • Email-borne attacks: Darktrace’s 2025 mid-year review reports more than 12.6 million malicious emails detected between January and May 2025, including campaigns that impersonated major retailers and financial brands. The company argues that many of these messages show hallmarks of AI assistance, such as longer, more coherent text and rapid variation in content designed to evade traditional filters.

  • Malicious holiday domains: Fortinet’s FortiGuard Labs has highlighted tens of thousands of holiday-themed domains spun up around large sales events, with a subset flagged as high-risk or clearly malicious. Many mimic legitimate e-commerce brands, including emerging marketplaces, by copying logos, layouts and product imagery to capture credentials or payment card details.

  • Fake shopping sites at scale: NordVPN data for 2025 points to a 250% increase in detected fake shopping websites ahead of Black Friday compared with earlier in the year, and a 232% rise in fake Amazon sites in October relative to September. These spoofed sites increasingly rely on AI-driven website templates and image tools to appear credible to casual buyers.

  • Industrial credential-stuffing: Bot-mitigation provider Kasada reports that large-scale credential-stuffing and ATO campaigns are now run as organised operations. Its 2025 research describes infiltration of 22 credential-stuffing crews targeting more than 1,000 major organisations and compromising millions of customer accounts; in one month alone, Kasada observed more than 1,100 credential-stuffing incidents across over 100 retailers, affecting hundreds of thousands of accounts.
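
Defences against industrial credential-stuffing commonly start with velocity checks: a single source cycling through many usernames with repeated failures is a strong signal. A minimal illustrative sketch (the thresholds, event format and function name are hypothetical, not Kasada’s method):

```python
from collections import defaultdict

# Sketch of a velocity-based credential-stuffing heuristic: flag source
# IPs that generate many failed logins across many distinct usernames.
# Thresholds are illustrative; real systems also use time windows,
# device fingerprints and behavioural signals.
MAX_FAILURES = 10      # failed attempts per IP before flagging
MAX_USERNAMES = 5      # distinct usernames per IP before flagging

def flag_stuffing_ips(login_events):
    """login_events: iterable of (ip, username, success) tuples."""
    failures = defaultdict(int)
    usernames = defaultdict(set)
    for ip, user, success in login_events:
        if not success:
            failures[ip] += 1
            usernames[ip].add(user)
    return {ip for ip in failures
            if failures[ip] > MAX_FAILURES or len(usernames[ip]) > MAX_USERNAMES}

# Example: one IP replaying a leaked credential list stands out,
# while a single legitimate login does not.
events = [("203.0.113.9", f"user{i}", False) for i in range(20)]
events += [("198.51.100.4", "alice", True)]
print(flag_stuffing_ips(events))  # {'203.0.113.9'}
```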

Search-engine poisoning adds another layer of risk. Investigations by security companies and specialist outlets have found fraudulent ads and cloned shopping sites appearing above legitimate results when users search for Black Friday-style deals on Amazon or other brands, funnelling traffic into payment pages controlled by attackers.

Deepfake-enhanced fraud is also beginning to touch retailers and consumers. Voice-security firms report that a growing share of fraud attempts against some large retail contact centres now involve AI-generated audio, with thousands of automated calls per day at major brands during peak periods. These calls typically seek to reset passwords, change delivery details or trick staff into processing refunds to new accounts.

High Dollar Losses and Growing Fatigue

The financial impact of ATO is concentrated but severe. With an average loss per complaint above USD 51,000 in the FBI’s latest figures, individual incidents can wipe out years of savings, especially where attackers quickly move funds through multiple accounts or convert them to cryptocurrency before banks can intervene.

Survey data suggests that consumers feel increasingly exposed:

  • McAfee’s 2025 holiday-shopping research finds that around 46% of surveyed Americans say they have encountered scams that used AI-generated or AI-altered content while shopping online. Earlier McAfee work indicates that roughly one in five respondents know someone who has been targeted by a deepfake-related scam, such as manipulated product videos or fake customer-service calls.

  • A recent Bitdefender consumer cybersecurity survey reports that around 14% of respondents globally said they had fallen victim to an online scam in the previous year, with phishing emails and fake websites among the most common vectors.

  • In a report published in July 2025, based on survey work carried out in April 2025, Pew Research Center found that 73% of US adults say they have experienced at least one type of online scam or security issue, such as credit-card fraud, an online shopping scam or an attempted account compromise. The same study notes that a majority of Americans receive scam calls, emails or texts at least weekly, and that some groups – including Hispanic and Black adults – are more likely than White adults to report online shopping scams specifically.

For businesses, the costs extend beyond direct losses. ATO incidents can trigger chargebacks, customer-support surges and reputational damage if victims associate fraud with weak security on a merchant’s site. Holiday-season threat reports from firms such as Kasada, F5 and Adobe all point to spikes in automated attacks immediately before and during major sales, when transaction volumes increase and manual checks become harder to sustain.

Regulators and law-enforcement agencies are trying to strengthen response mechanisms, but the cross-border nature of many ATO campaigns slows recoveries. The FBI urges victims to contact their bank immediately, request recall of funds where possible, and file a complaint via ic3.gov, but acknowledges that only a fraction of stolen money is recovered once it has been moved through multiple accounts or converted to cryptocurrency.

Defences and Vendor Landscape: AI on the Defensive Side

A growing number of security vendors are using AI to detect and block ATO and related fraud, with different areas of emphasis:

  • Darktrace applies self-learning models to network and email traffic, looking for behavioural anomalies such as unusual login locations, sudden changes in typical transfer patterns or atypical device fingerprints. Its annual threat reports highlight the role of Malware-as-a-Service and automated phishing in enabling ATO and stress the value of autonomous response tools that can slow or block suspicious sessions in real time.

  • Fortinet, through products such as FortiNDR and its wider security fabric, combines behavioural analytics with firewall and secure-web-gateway controls. FortiGuard Labs threat briefings around holiday periods frequently focus on malicious domains that spoof major retailers and payment providers, and on traffic patterns that indicate scripted credential-stuffing or card-testing activity.

  • Zimperium specialises in mobile-focused defence, using on-device machine-learning models to flag malicious apps, risky configuration changes and phishing URLs opened on smartphones. Its global mobile-threat reporting emphasises the shift of phishing campaigns toward mobile browsers and messaging apps, where users may be less cautious and URL bars are harder to inspect.
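
The behavioural-anomaly approach these vendors describe generally amounts to scoring each login against a per-user baseline and escalating when the score crosses a threshold. A toy sketch under assumed fields and weights (not any vendor’s actual model):

```python
# Toy sketch of behavioural login scoring: compare a new login's
# attributes against a per-user history and add risk for anything
# unseen. Fields, weights and the step-up threshold are illustrative.
def risk_score(login, history):
    """login: dict with 'country', 'device', 'hour'; history: past logins."""
    score = 0
    if login["country"] not in {h["country"] for h in history}:
        score += 50   # never-before-seen country
    if login["device"] not in {h["device"] for h in history}:
        score += 30   # unrecognised device fingerprint
    if login["hour"] not in {h["hour"] for h in history}:
        score += 10   # login outside the user's usual hours
    return score

history = [{"country": "US", "device": "iphone-abc", "hour": h} for h in (8, 9, 19)]
suspicious = {"country": "RO", "device": "linux-xyz", "hour": 3}
print(risk_score(suspicious, history))  # 90 -> would trigger step-up auth
```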

Other vendors emphasise different detection strategies. Vectra AI markets its “attack signal” approach as a way to correlate events across cloud, identity and network systems while reducing alert volume, and ExtraHop focuses on analysing patterns in encrypted network traffic, using metadata and behavioural baselines to surface anomalies at scale without relying solely on payload inspection.

Endpoint-centric tools such as CrowdStrike Falcon and broader extended-detection-and-response (XDR) platforms aim to combine signals from endpoints, identities, networks and cloud workloads to capture the cross-channel activity that often characterises ATO schemes. Industry analysts generally advise organisations to choose combinations of tools that align with their risk profile, existing infrastructure and in-house expertise, rather than assume a single AI product can address all phases of fraud.

AI Arms Race and Regulatory Response

Experts expect an ongoing arms race between AI-assisted attackers and AI-powered defences. Fraud and threat-intelligence firms report that criminals are testing “agentic” AI systems that can chain tasks such as harvesting leaked credentials, testing logins, drafting personalised messages and adjusting tactics based on early results, potentially automating more of the ATO lifecycle.

Financial institutions, meanwhile, are expanding their use of AI for anti-money-laundering and fraud detection. Surveys from professional associations and consultancies suggest that a large majority of banks are either piloting or deploying AI-based systems for transaction monitoring and network analysis, with adoption rates commonly reported in the 60–90% range depending on region and bank size.

On the policy side, the EU AI Act includes transparency obligations for AI systems that generate or manipulate content, including deepfakes, requiring clear labelling in many contexts once the rules take effect. Regulators and standards bodies hope that provenance and labelling measures, combined with watermarking and cryptographic signatures, will make AI-generated deception easier to flag and trace over time.

For individual users, however, the core advice remains relatively simple and technology-neutral. Law-enforcement agencies and consumer-protection bodies recommend:

  • Accessing banking and shopping accounts by typing web addresses directly or using bookmarks, rather than clicking links in unsolicited emails or texts.

  • Enabling multi-factor authentication wherever possible, preferably with app-based authenticators or hardware keys when supported.

  • Treating unexpected messages about urgent payments, refunds or security alerts with caution, and verifying them using official contact channels from bank cards or trusted websites.

  • Acting quickly if an account takeover is suspected by contacting the bank, changing passwords where they are reused, and filing reports with IC3 or the relevant national reporting body.
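
On the second point, app-based authenticators are generally preferred because their one-time codes are computed locally from a shared secret and the current time (TOTP, RFC 6238) rather than delivered over interceptable channels such as SMS. A minimal sketch of the computation, checked against the RFC’s published test vector:

```python
import base64
import hmac
import struct
import time

# Sketch of how app-based authenticators compute one-time codes
# (RFC 6238 TOTP over RFC 4226 HOTP). The secret below is the RFC's
# published test key, not a real credential.
def totp(secret_b32, at=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59s -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # 94287082
```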

TheDayAfterAI News
