AI Deepfake Scam Uses Fake Anthony Bolton Video to Target Investors on Instagram

Image Source: YouTube

A fraudulent scheme powered by artificial intelligence has surfaced, using a deepfake video of former Fidelity International fund manager Anthony Bolton to deceive retail investors. The 25-second clip, shared on Instagram in early May 2025, falsely portrays Bolton promoting a WhatsApp group offering daily stock tips and investment news. This scam highlights the growing threat of AI-generated forgeries in financial fraud.


Scam Details

The deepfake video employs AI to replicate Bolton’s voice and appearance, creating a convincing impersonation that urges viewers to join a WhatsApp group for investment advice. Scammers likely used publicly available footage of Bolton, who retired from Fidelity International in 2014, to craft the forgery. Fidelity issued a statement confirming the video is fake and unaffiliated with the firm. The scam aims to exploit Bolton’s reputation to mislead investors, though no specific financial losses have been reported.


Growing Trend of AI-Powered Fraud

The Bolton deepfake is part of a broader rise in AI-driven scams on platforms like Instagram and Facebook. These schemes use advanced technology to mimic trusted figures, making fraudulent content appear authentic. A 2024 warning from the New York State Attorney General highlighted similar deepfake videos impersonating celebrities and business leaders to promote fake investment schemes. Social media’s rapid reach enables such scams to spread quickly, often outpacing platforms’ efforts to remove them.


Threat to Retail Investors

Retail investors, who may trust familiar figures like Bolton, are prime targets for deepfake scams. The fraudulent WhatsApp group could lure victims into sharing personal information or investing in deceptive schemes. By misusing a respected name, the scam risks eroding confidence in online financial advice and institutions, complicating efforts to distinguish genuine content from fraud.


Protective Measures

Experts advise verifying investment offers, especially those promoted via social media or messaging apps. Checking official company statements, avoiding unverified groups, and reporting suspicious content to platforms or authorities are essential steps. Tech companies are developing tools to detect deepfakes, and regulators are exploring measures to curb AI misuse, but public vigilance remains crucial.



Sources: The Times, Financial News, Yahoo! News

