Deepfakes and AI Scams Drive US$4.6 Billion in Global Crypto Losses

Image Credit: Jacky Lee

A fresh wave of artificial-intelligence-enabled fraud is hitting creative professionals and cryptocurrency users in late 2025, mixing cheap AI tools with old-fashioned deception. From scammers tricking illustrators into handing over free sketches to deepfake videos fronting fake crypto giveaways, these schemes show how AI is shifting from a background “helper” to a central driver of financial crime. Police in India, for example, recently dismantled a “digital arrest” and crypto-laundering network in Gujarat that allegedly moved roughly ₹200 crore (about US$24 million) through mule accounts and USDT, underscoring how global and organised these scams have become.

How “Proof Sketches” Become AI Training Fuel

Freelance illustrators and character artists are reporting a specific pattern:

  1. A “client” reaches out by email or social media with a seemingly legitimate commission.

  2. They often write from plausible-looking but unverifiable domains; community posts have flagged examples such as unityvanguardschool.org in illustration job offers.

  3. They insist on unwatermarked sketches “to prove your skill” before paying a deposit.

  4. As soon as the artist delivers a draft, the client claims the work looks “AI generated”, demands a refund, and then disappears.

Artist-advocacy groups and blogs have documented variants of this scam for years, but platforms like Hireillo say the pattern has accelerated as generative image tools have improved. Darren Di Lieto, who runs Hireillo, has told reporters that what used to be simple “free sketch shopping” is now being supercharged by AI: scammers don’t just get unpaid artwork; they get clean line work they can feed into image-to-image systems.

Why the Scam Works So Well

  • Low-friction tools. Open-source models such as Stable Diffusion and user-friendly services built on similar technology make it trivial to refine or re-style an existing sketch into a polished illustration.

  • Stylistic mimicry. Once scammers have a few samples of an artist’s work, they can train or fine-tune models to imitate that recognisable visual style, essentially automating the artist’s “brand”.

  • Confusion about authorship. Since AI-generated art often looks similar to digital illustration, accusing the artist of using AI muddies the waters and makes it harder for them to push back, especially when working with new clients.

Artists like Kelly McKernan, who is part of the ongoing lawsuit against Stability AI, Midjourney, and DeviantArt for training on copyrighted works without consent, see these scams as part of the same continuum: art being treated as cheap training data instead of paid labour.

Other creators, including illustrator and blogger Julia Bausenhardt, have warned that when artists share layered PSD or Procreate files to “prove” they didn’t use AI, they may actually be handing over even richer material for future model training.

Practical Defences for Artists

Contract and workflow

  • Never send unwatermarked work without a clear paper trail. Use basic contracts or at least written terms specifying that preliminary sketches are billable and non-refundable.

  • Invoice structure. Take a non-refundable booking fee before starting, with milestones for sketches, line art, and final rendering.

  • Proof images only. For early stages, send low-resolution, heavily watermarked previews that are hard to reuse for AI training (a minimal preview-generation sketch follows this list).
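
As one way to implement the “proof images only” rule, the sketch below uses the Pillow imaging library to downscale a work-in-progress file and tile a semi-transparent label across it. The file names, sizes and label text are placeholder assumptions; dedicated watermarking tools will do a more robust job.

```python
# Minimal preview generator: downscale a work-in-progress image and tile a
# semi-transparent label across it, so the preview is hard to reuse as clean
# AI training data. File names and label text are placeholders.
from PIL import Image, ImageDraw, ImageFont

def make_preview(src_path, dst_path, label="PREVIEW - NOT LICENSED"):
    img = Image.open(src_path).convert("RGBA")
    img.thumbnail((600, 600))  # cap the longest edge so fine line work is lost

    # Tile the label on a transparent overlay, then composite it onto the art.
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    for y in range(0, img.height, 80):
        for x in range(0, img.width, 200):
            draw.text((x, y), label, fill=(255, 255, 255, 110), font=font)

    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path, "JPEG", quality=70)

make_preview("sketch.png", "sketch_preview.jpg")
```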

Verification and screening

  • Check domains and email headers. If the domain in a “school” or company email doesn’t match the one on the organisation’s official website, treat it as suspicious (see the header-screening sketch after this list).

  • Search the exact email text. Scam commission requests are often reused word-for-word across targets.

  • Use platforms with escrow. Marketplaces that hold funds until delivery make it harder for clients to ghost you after receiving drafts.
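
For the domain and header checks above, here is a minimal screening sketch using Python’s standard email module; the expected domain and file name are illustrative assumptions, and a flagged mismatch is a red flag rather than proof of fraud.

```python
# Minimal screening helper: parse a saved email (.eml) and flag mismatches
# between the From: domain, the Reply-To: domain, and the domain of the
# organisation's official website. "example-school.org" is illustrative.
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

def screen_message(raw_bytes, expected_domain):
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    warnings = []

    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()

    if from_domain != expected_domain.lower():
        warnings.append(f"From: domain {from_domain!r} is not {expected_domain!r}")
    if reply_domain and reply_domain != from_domain:
        warnings.append(f"Reply-To: domain {reply_domain!r} differs from From:")
    if "Authentication-Results" not in msg:
        warnings.append("no Authentication-Results header (SPF/DKIM unverified)")
    return warnings

with open("commission_offer.eml", "rb") as f:
    for w in screen_message(f.read(), "example-school.org"):
        print("WARNING:", w)
```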

Data-minimisation

  • Avoid sharing layered source files unless contractually required and fairly paid.

  • Embed visible and invisible watermarks in important works to help prove authorship if disputes arise (a toy invisible-watermark sketch follows this list).
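
To illustrate the invisible-watermark idea, here is a toy least-significant-bit embedding sketch using Pillow. It is a sketch under strong assumptions, not a production solution: the hidden string survives lossless PNG saves but not JPEG recompression or resizing, and the file names are placeholders.

```python
# Toy invisible watermark: hide a short authorship string in the least
# significant bit of each pixel's red channel. The mark survives lossless
# PNG saves but NOT JPEG recompression or resizing.
from PIL import Image

def embed(src, dst, text):
    img = Image.open(src).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in text.encode()) + "0" * 8  # NUL end
    px = img.load()
    width = img.width
    for i, bit in enumerate(bits):  # assumes the image has >= len(bits) pixels
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)
    img.save(dst, "PNG")

def extract(path):
    img = Image.open(path).convert("RGB")
    px = img.load()
    width, height = img.size
    out = bytearray()
    for i in range(0, width * height - 7, 8):
        byte = 0
        for j in range(8):
            x, y = (i + j) % width, (i + j) // width
            byte = (byte << 1) | (px[x, y][0] & 1)
        if byte == 0:  # hit the NUL terminator
            break
        out.append(byte)
    return out.decode(errors="replace")

embed("final.png", "final_marked.png", "(c) 2025 Artist Name")
print(extract("final_marked.png"))
```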

These scams mirror broader trends in AI-assisted fraud: blockchain-analytics firm TRM Labs reports that suspicious activity referencing generative AI across its client base rose several-fold year-on-year into 2024, as criminals began systematically using image, video and text models as part of their toolkits.

From YouTube Giveaways to High-Value Scam Rings

In crypto, AI-generated faces and voices have moved from curiosities to core infrastructure for large-scale scams.

A 2025 anti-scam report from Bitget, produced with blockchain security firms SlowMist and Elliptic, found that:

  • Global crypto-scam losses reached US$4.6 billion in 2024, a 24% increase over 2023.

  • AI deepfakes accounted for about 40% of “high-value” crypto frauds, often impersonating celebrities, founders or officials.

Other analysis, summarised by Yahoo Finance drawing on Chainalysis data, suggests that roughly 60% of deposits into known scam wallets now involve AI-amplified schemes, whether via deepfake videos, AI-written romance scripts or automated phishing.

The Elon Musk Livestream Case

One widely cited example involves YouTube deepfake livestreams of Elon Musk promising “giveaways” if viewers send crypto first:

  • Fraud-intelligence write-ups show that, across a series of streams running from March 2024 to January 2025, wallets promoted in these fake Musk videos collected over US$5 million in deposits.

  • Follow-up analysis by TRM Labs traced some of the funds to accounts at the MEXC exchange, illustrating how such scams are integrated into the broader crypto ecosystem rather than existing as isolated one-off events.

This pattern fits a longer history of Musk-themed crypto scams documented by regulators and consumer-protection agencies since at least 2018.

Beyond Celebrity Face Swaps

Deepfakes are no longer limited to pre-recorded promo videos:

  • Real-time Zoom or video-call impersonations have been used to trick both individuals and project founders. In 2025, for example, a THORChain co-founder lost about US$1.35 million after attackers used a hacked Telegram account and a fake Zoom meeting to gain access to wallet keys.

  • Exchange KYC and liveness checks are also under pressure. MEXC has publicly reported thousands of “fraudulent liveness” attempts in a single month, attributing a double-digit percentage increase to deepfake and spoofing attacks.

  • Identity-verification firm Veriff says its own data show global fraud attempts up 21% year-on-year, with deepfakes now responsible for about one in every twenty failed identity checks.

The result is an ecosystem where pig-butchering romance scams, high-pressure boiler-room operations and fake airdrops increasingly use AI-generated faces and voices as standard tools. Investigations into call-centre scams targeting retail investors have linked deepfake video promos to thousands of victims and tens of millions of dollars in losses in a single ring.

Gujarat “Digital Arrest” Bust: AI, Fear and Crypto Laundering

On 4 November 2025, Gujarat’s Crime Investigation Department announced the arrest of six men from Morbi, Surendranagar and Surat for their alleged role in a nationwide “digital arrest” racket.

According to the CID:

  • The network is suspected of laundering around ₹200 crore (roughly US$24 million), linked to 386 cyber fraud cases across India, including 29 in Gujarat.

  • Local “handlers” allegedly worked under the direction of contacts in Dubai.

  • The operation used over 100 bank accounts and multiple crypto wallets, routing part of the funds through USDT on exchanges including Bitget, alongside traditional hawala channels.

How the “Digital Arrest” Scam Works

Victims, often older or less tech-savvy people, receive calls from fraudsters posing as police, tax officers or central-agency investigators:

  1. Spoofed caller IDs and AI-cloned voices make the calls sound legitimate.

  2. The target is accused of involvement in crimes such as money laundering or drug trafficking and told they face imminent arrest.

  3. They are instructed to remain on a video call, effectively a “virtual detention”, while the caller guides them through transferring money to “safe accounts” to prove innocence.

  4. Part of the proceeds is quickly converted to crypto and moved offshore.

Media reports on the Gujarat case describe fixed monthly payments to account holders plus per-transaction commissions, a structure that mirrors other professionalised scam-centre operations across Asia.

This bust comes against the backdrop of a much wider “digital arrest” wave in India, where national-level reporting has counted tens of thousands of complaints since 2024. The Gujarat network is relatively small compared with the tens of billions of dollars in crypto-linked scam flows estimated globally, but it shows how regional hubs plug into the same AI-driven machinery used elsewhere.

AI as a Force Multiplier for Fraud

Across these examples, several common patterns emerge:

  1. Professionalised tooling. Platforms like Huione Guarantee, a Telegram-based marketplace recently described by Elliptic and the UN as one of the largest illicit markets ever recorded, have sold everything from stolen personal data and “pig-butchering” scripts to AI voice-cloning and face-swap tools.

  2. Software-style scaling. Chainalysis has highlighted how vendors selling AI “services” to scam operations have seen revenue jump by well over 1,000%, turning what used to be bespoke attacks into repeatable products.

  3. Broader fraud ecosystem. NASAA’s 2025 threat list ranks digital-asset and social-media-driven scams — many now featuring AI-generated personas — among the top risks for retail investors in North America.

  4. Deepfakes as a mainstream fraud vector. Fraud-intelligence and ID-verification firms estimate that deepfakes now account for a noticeable share of global fraud attempts; one industry survey pegs them at around 7% of detected fraud, with steep year-on-year growth.

Law-enforcement and policy bodies are sounding the alarm. Europol’s 2025 Serious and Organised Crime Threat Assessment warns that AI is “turbocharging” everything from cyber-fraud to human trafficking. In the US, the American Bankers Association and FBI have launched joint campaigns warning that deepfake-based scams are rising quickly, with fraud losses of over US$50 billion reported since 2020 across all categories — an increasing slice of which now involves manipulated media.

FBI Criminal Investigative Division assistant director Jose Perez has stressed that the bureau is seeing a “troubling rise” in deepfake-related scam reports and that public education is now central to mitigation efforts.

Future Trends and Countermeasures

Looking ahead, regulators and analysts broadly expect AI-enhanced scams to keep rising in both number and complexity:

  • Investor-protection agencies anticipate that AI-powered crypto schemes, romance scams and “AI stock-pick” promotions will remain among the top threats into 2026, particularly on platforms such as Telegram, WhatsApp, TikTok and X.

  • Fraud-tech firms project that deepfake-enabled vishing and identity theft could cause tens of billions of dollars in losses by 2027 if left unchecked.

  • Enforcement actions, from the takedown of Huione-linked marketplaces on Telegram to regional busts like the Gujarat case, show that coordinated crackdowns can significantly disrupt infrastructure, even if scammers eventually migrate to new platforms.

On the defensive side, several strategies are emerging:

  1. Multi-factor and out-of-band verification. For both commissions and investments, insist on verification via separate channels, for example confirming a video caller’s identity via a known phone number or a pre-agreed code word (a minimal rolling-code sketch follows this list).

  2. AI-against-AI detection. Exchanges, banks and ID-verification providers are investing in liveness checks, behavioural biometrics and model-based deepfake detectors to flag synthetic identities, sometimes in real time.

  3. Better reporting pipelines. Chainalysis and similar firms encourage rapid reporting to trace funds while they are still moving; their on-chain visibility can support asset freezes or law-enforcement referrals when platforms cooperate.

  4. Policy and standards. Investor-protection bodies, central banks and industry groups (including content-authenticity initiatives) are experimenting with provenance metadata, disclosure rules for synthetic media and clearer liability frameworks for platforms that profit from hosting scams.
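
To make the code-word idea in point 1 concrete, here is a minimal sketch of a TOTP-style rolling code (borrowing the truncation step from RFC 4226’s HOTP) that two parties who share a secret can compare over a video call. The secret, window length and code length are placeholder choices, not a vetted protocol.

```python
# Minimal out-of-band check: both parties derive a short code from a shared
# secret and the current five-minute window (TOTP-style; the truncation step
# mirrors RFC 4226's HOTP). The secret below is a placeholder that would be
# agreed in person or over an already-trusted channel.
import hashlib
import hmac
import struct
import time

SHARED_SECRET = b"agree-this-in-person"  # placeholder secret
WINDOW_SECONDS = 300                     # each code is valid for 5 minutes

def challenge_code(now=None):
    counter = int((time.time() if now is None else now) // WINDOW_SECONDS)
    digest = hmac.new(SHARED_SECRET, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"                          # six-digit code

# Each side computes the code independently and compares it out loud.
print("Current code:", challenge_code())
```

A deepfaked caller can imitate a face and a voice, but without the shared secret they cannot produce the current code.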

For now, though, individual vigilance remains critical:

  • Artists need to treat every “test sketch” request with suspicion, use contracts, and limit unprotected file sharing.

  • Crypto users should treat every unsolicited investment pitch, even from a familiar face on video, as untrustworthy until verified elsewhere.

  • Anyone receiving a threatening call or video about “digital arrest”, urgent fines or frozen accounts should hang up, independently contact the relevant institution, and report the incident.

