CommBank Study: 58% of Australians Fail to Spot Deepfake Scams
Image Credit: Zanyar Ibrahim | Splash
A growing share of online scams now uses AI-generated media to impersonate real people, including synthetic faces, cloned voices and convincing-looking documents. New research backed by Commonwealth Bank of Australia suggests many Australians still treat “seeing and hearing” as proof, even though generative AI is eroding those cues as a reliable security check.
89% of Australians Believe They Can Spot a Deepfake Scam — But Can They?
CommBank reports that 89 per cent of Australians believe they can spot a deepfake scam, but when tested on distinguishing real from AI-generated images, only 42 per cent were accurate. The research was based on a national survey of 1,988 Australians conducted in September 2025 and included an image recognition test.
The results also suggest this is not solely an issue for older Australians: CommBank reported that respondents aged over 65 were only 6 percentage points less accurate in the test than younger respondents.
In terms of exposure, 27 per cent of respondents said they had witnessed a deepfake scam in the past year. Among those reports, the most common categories were investment scams, business email compromise (BEC) style payment redirection scams, and relationship scams.
Why People Struggle Against Modern Deepfakes
Deepfakes work because they target everyday trust shortcuts: a familiar face on a video call, a voice message that sounds like a colleague, or an urgent request that fits a believable scenario. CommBank’s article links this to a confidence gap, where people assume they can detect fakes visually, but do not consistently use verification steps that are harder for AI to imitate.
The study also highlights a social factor that affects security outcomes. Most respondents had not discussed AI-generated scams with friends or family, which reduces the chance that warning signs and tactics get shared early.
Where Deepfakes Are Turning Up in the Scam Pipeline
CommBank’s breakdown points to three practical risk zones.
First is investment fraud, where scammers use synthetic media to make an offer look credible and to create urgency. Australian regulator guidance has separately warned that fake celebrity endorsement scams can include AI deepfakes and can be distributed through social platforms, video sites, and fake news style pages that link to scam trading websites.
Second is business email compromise. CommBank reports that small businesses most often encountered deepfake attempts by email, a channel where AI can be used to tailor language, mimic a known sender, and package invoices or payment change requests in a believable way.
Third is relationship and trust scams, where impersonation is not always about technical access, but about persuasion. AI makes it easier to generate consistent messaging, realistic profile imagery, and voice notes that reinforce a false identity.
What the Small Business Numbers Imply for Controls
CommBank’s small business findings point to a common weak spot: payment detail changes. Even when staff suspect something is off, the “last mile” of verification is often missing, for example, confirming bank details via a known phone number or a trusted directory rather than replying to the same email thread.
The same pattern shows up in the consumer findings. Many people agree that they should set up a family safe word or similar verification routine, but far fewer have actually done it. In security terms, the gap is between knowing a control exists and operationalising it as a habit.
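The “last mile” check described above can be sketched as a simple decision rule. This is an illustrative sketch only, not CommBank guidance: the supplier names, directory structure, and function are all hypothetical, and a real accounts-payable control would sit inside payment software and policy, not a script.

```python
# Hypothetical sketch: treat a payment-detail change as a verification problem.
# A change request is only accepted after out-of-band confirmation against a
# trusted record, never by replying to the same email thread.

# Supplier name -> (BSB, account number) as previously verified on file.
# These entries are invented for illustration.
TRUSTED_DIRECTORY = {
    "Acme Supplies": ("062-000", "12345678"),
}

def requires_out_of_band_check(supplier: str, bsb: str, account: str) -> bool:
    """Return True if the requested details differ from the trusted record
    (or the payee is unknown), meaning staff must confirm via a known phone
    number or trusted directory before paying."""
    on_file = TRUSTED_DIRECTORY.get(supplier)
    if on_file is None:
        return True  # unknown payee: always verify independently
    return (bsb, account) != on_file

# A request matching the record on file needs no escalation...
assert requires_out_of_band_check("Acme Supplies", "062-000", "12345678") is False
# ...but a "please update our bank details" email triggers verification.
assert requires_out_of_band_check("Acme Supplies", "062-000", "99999999") is True
```

The point of the sketch is the default: any mismatch or unknown payee escalates to an independent channel, which is exactly the step AI-generated emails cannot easily fake.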
Legal and Accountability Context in Australia
Australia is also moving toward stronger whole-of-ecosystem obligations rather than putting the burden solely on individuals.
The ACCC has welcomed passage of the Scams Prevention Framework, which is designed to require regulated entities in key sectors to take reasonable steps to prevent, detect, disrupt, respond to, and report scams. The ACCC notes that penalties for failing to meet obligations can be up to $50 million, and that banks, certain digital platforms and telecommunications providers are expected to be early designated sectors.
Treasury’s public pages currently show mixed timelines: the Treasury beta consultation page lists the draft law package and position paper consultation as open until 19 January 2026, while an older consultation portal page shows a closing date of 5 January 2026. At the time of writing, the beta site still presents the consultation as open.
Transparency and Provenance: An Adjacent Trend That Supports Security
While anti-scam rules focus on responsibilities across banks, telcos and platforms, another policy strand is emerging around content transparency. The National AI Centre has released guidance for businesses on being clear about AI-generated content, describing three mechanisms: labelling, watermarking, and metadata recording. These tools are not a complete defence against fraud, but they reflect a broader shift toward provenance signals that can support user trust when paired with verification and enforcement.
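To make “metadata recording” concrete, here is a minimal sketch of attaching a provenance record to a piece of content. The record format is invented for this illustration; real provenance schemes (such as C2PA) are far richer and cryptographically signed, which this sketch does not attempt.

```python
# Hypothetical sketch of metadata recording as a provenance signal:
# a JSON sidecar noting how content was produced, with a content hash
# so later edits are detectable. Not a real or complete standard.
import hashlib
import json

def make_provenance_record(content: bytes, generator: str, ai_generated: bool) -> str:
    """Build a JSON record describing the content's origin."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,          # e.g. the tool or model used
        "ai_generated": ai_generated,    # the disclosure the guidance asks for
    }
    return json.dumps(record)

def matches_content(record_json: str, content: bytes) -> bool:
    """Check the content has not changed since the record was made."""
    record = json.loads(record_json)
    return record["sha256"] == hashlib.sha256(content).hexdigest()

rec = make_provenance_record(b"example image bytes", "demo-model", True)
assert matches_content(rec, b"example image bytes")       # untouched content
assert not matches_content(rec, b"tampered image bytes")  # edit is detectable
```

Even in this toy form, the limitation the article notes is visible: a record like this supports trust only if recipients actually check it, which is why the guidance pairs such signals with verification and enforcement.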
Practical Takeaway: Replace “Can I Tell” with “Can I Verify”
The key lesson from the CommBank results is that human judgement alone is not a dependable control in an AI-driven scam environment. For consumers and small businesses, the more resilient approach is to treat identity and payment requests as verification problems, not perception problems.
That means using independent confirmation channels for money transfers, resisting urgency cues, and checking regulator resources when an offer is investment related. It also means assuming that a believable face, voice, or screenshot is no longer sufficient evidence by itself.
