NAB Blocks AUD 100k Loss in 'Kevin Costner' AI Deepfake Scam

AI-generated Image for Illustration Only (Credit: Jacky Lee)

National Australia Bank says a customer, given the pseudonym “Sue”, nearly transferred about A$100,000 after believing she was dealing with US actor Kevin Costner via a convincing AI deepfake video call. NAB says staff at its Bourke Street branch intervened after spotting red flags when Sue came in to open a new account to receive overseas funds.

NAB frames the incident as part of a broader shift in scam tactics, where impersonation is no longer limited to emails or phone calls. The bank says deepfakes and synthetic voices are being used more often to increase believability and apply pressure at the point where victims are asked to move money.

How the Scam Worked

Based on NAB’s account, the approach followed a familiar pattern seen in high-loss impersonation scams, but with more convincing media:

  • Relationship and trust building online: Sue believed she was communicating with a well-known celebrity.

  • A plausible pretext tied to the victim’s background: She was told “Costner” wanted help setting up an office in Australia and buying commercial property, something that aligned with her property management experience.

  • Payment setup and escalation: She planned to open an Australian bank account to receive a claimed US$200,000 transfer and also intended to move A$100,000 of her own savings into the account, expecting repayment with interest.

NAB staff also flagged the risk that she could have been used as a money mule, that is, an intermediary whose account is used to move scam proceeds, often leaving that person exposed to financial loss and potential legal complications.

Why AI Makes This Category of Scam Harder to Detect

eSafety defines a deepfake as a digital photo, video or sound file of a real person edited to create an extremely realistic but false depiction of them doing or saying something they did not do. This matters because it targets a person’s instinct to trust what they can see and hear, especially when the content appears to be live.

The Australian Cyber Security Centre’s guidance on social engineering notes that recent advances in AI have amplified the effectiveness of scams by weaponising empathy, urgency and trust. It also warns that criminals may use voice cloning or deepfake technology and may spoof caller IDs to make contact look legitimate.

In NAB’s description of this case, the deepfake element is not just a novelty. It is used to reduce hesitation right before a major transfer, when traditional warnings like “celebrities do not message people for money” can be overridden by perceived direct contact.

How This Compares with Other Scam Patterns in Australia

NAB’s example sits alongside a wider category of scams where celebrities or public figures are used to lend credibility. ASIC has warned that investment scam ads and websites increasingly misuse images of well-known Australians, and says it shut down more than 330 investment scam websites using celebrity images in 2025 to date, a 25 percent increase compared with the same period a year earlier.

ASIC also points to scaling tactics that mirror what banks describe in impersonation scams: cloned websites, professional-looking templates, fake news-style articles, and “AI trading bot” claims used to funnel people toward fraudulent platforms.

Separately, the Australian Banking Association has urged Australians to stay alert to AI voice cloning, deepfake celebrity videos, and more personalised AI-generated phishing that mimics trusted brands or banks.

Bank Controls That Target the Payment Moment

While the NAB story focuses on frontline intervention and fraud team escalation, other banks have been rolling out controls designed to interrupt scams at the decision point:

  • ANZ Digital Padlock: ANZ describes this as a feature aimed at giving customers real-time control to lock scammers out, supported by specialist response teams.

  • Westpac SaferPay: Westpac says it prompts customers with questions when a payment is assessed as higher risk of being a scam, as an extra friction layer before funds leave the account.

These approaches are consistent with the broader direction in scam mitigation: add verification steps and delays at the point of transfer, because once funds move through fast payment rails or are converted (for example into crypto), recovery can become difficult.
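The friction-at-the-point-of-transfer idea can be sketched in a few lines. This is a purely illustrative heuristic, not any bank’s actual system: the `Payment` fields, thresholds, and decision tiers are all assumptions chosen to show the pattern of escalating friction as risk signals accumulate.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount_aud: float
    new_payee: bool        # first transfer to this recipient?
    overseas: bool         # funds leaving Australia?
    account_age_days: int  # how recently the sending account was opened

def risk_flags(p: Payment) -> list[str]:
    """Collect simple risk signals; real systems use far richer data."""
    flags = []
    if p.amount_aud >= 10_000:
        flags.append("large transfer")
    if p.new_payee:
        flags.append("first payment to this payee")
    if p.overseas:
        flags.append("overseas destination")
    if p.account_age_days < 30:
        flags.append("recently opened account")
    return flags

def decide(p: Payment) -> str:
    """Apply friction proportional to risk: release, question, or hold."""
    n = len(risk_flags(p))
    if n >= 3:
        return "hold for manual review"
    if n >= 1:
        return "prompt customer with scam-check questions"
    return "release payment"

# A profile resembling the NAB case trips enough flags to be held.
print(decide(Payment(amount_aud=100_000, new_payee=True,
                     overseas=True, account_age_days=5)))
```

The design point is that no single signal blocks a payment; it is the combination of a large amount, a new payee, and an overseas destination that triggers a hold before funds move irreversibly.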

Policy Direction and What to Watch Next

At the policy level, the Australian Government is consulting on designating banks, telecommunications providers, and some digital platforms as the first sectors to be regulated by mid-2026 under the Scams Prevention Framework, with mandatory sector codes intended to lift baseline obligations. The consultation is open until 5 January 2026.

For AI-enabled scams, that matters because impersonation campaigns frequently rely on a chain of services across social platforms, messaging, telcos, and banking. A sector-based framework is designed to reduce the gaps scammers exploit between those domains.

Practical Takeaways for Readers

From an IT security perspective, the NAB case reinforces a few operational rules that still hold even when audio and video look real:

  • Treat identity as untrusted until verified through a separate channel you control (a known number, official website login, or in-branch confirmation).

  • Be cautious of the combination of urgency, secrecy, and a large transfer request, especially when the story leans on status, romance, or “once-in-a-lifetime” opportunities.

  • For investments, ASIC recommends a “Stop, Check, Protect” approach, including checking licensing and being sceptical of outsized or guaranteed returns.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
