NSW Schools Caught in Microsoft Teams Biometric Data Collection Without Parental Consent
In March 2025, a Microsoft Teams update activated a biometric data collection feature affecting students and staff in NSW public schools. The Department of Education responded by disabling the feature and deleting the data, but the incident underscores growing privacy concerns and the importance of transparent communication in the adoption of AI tools in education.
AI Hallucinations Challenge Reliability as Adoption Grows in Law, Healthcare, and Business
Artificial intelligence systems are under scrutiny as “hallucinations”—plausible but false outputs—pose reliability concerns in critical fields such as law, healthcare, and business. Despite technological advances and growing adoption, experts note that the risk of AI-generated errors remains, prompting new efforts to enhance accuracy and build public trust.
Westpac to Cut 1,500 Jobs in 2025 as Bank Accelerates AI and Digital Transformation
Westpac Banking Corporation is set to cut around 1,500 jobs as part of its “Unite” strategy to streamline operations and boost investment in artificial intelligence. The redundancies, representing about 5% of the workforce, follow rising costs and profit pressures. The Finance Sector Union has voiced concerns about the move’s impact on staff and communities.
AI Deepfake Scams Cost Australians AU$2 Billion in 2024: ACCC and Experts Warn of Rising Threat
Australians lost $2.03 billion to scams in 2024, with AI-powered deepfake videos playing a major role in recent high-profile frauds. Authorities and industry experts warn that the rapid evolution of AI poses new challenges for detection and regulation, underscoring the urgent need for stronger safeguards and public education.
Palo Alto Networks Launches IRAP-Assessed AI Cybersecurity Browser for Australian Agencies
Palo Alto Networks has launched its Prisma Access Browser in Australia, an IRAP-assessed, AI-powered solution designed to enhance cybersecurity for federal agencies and critical infrastructure. The browser offers real-time threat protection and supports secure remote work, aligning with national security standards and Australia’s digital strategy.
AI Deepfake Scam Uses Fake Anthony Bolton Video to Target Investors on Instagram
A deepfake video impersonating former Fidelity fund manager Anthony Bolton has surfaced on Instagram, urging users to join a WhatsApp group for investment tips. This scam reflects the growing risk of AI-powered forgeries targeting investors and underscores the importance of verifying online content.
Google Boosts Security with AI-Driven Scam Detection for Chrome, Search, and Android
Google's latest AI-powered security features, powered by the Gemini Nano model, enhance protection against online scams and phone-based fraud on Chrome, Search, and Android. These on-device measures prioritize user privacy while improving real-time threat detection and reducing exposure to deceptive content.
Airtel Launches AI-Powered Fraud Detection Solution in India
Bharti Airtel has introduced an AI-powered fraud detection solution to safeguard its mobile and broadband users from online scams. The system, automatically enabled for all customers, uses DNS filtering and artificial intelligence to block malicious websites in real time, aiming to strengthen cybersecurity as digital threats rise across India.
Australia Introduces Identity Protection Bill to Combat Rising Cybercrime
The New South Wales (NSW) government has introduced the Identity Protection and Recovery Bill, aiming to strengthen identity security amid rising cybercrime. The legislation includes real-time fraud detection, a compromised credential register, and enhanced public awareness initiatives, potentially leveraging advanced technologies like AI to protect residents' personal information.
AI Uncovers Massive 'Pig Butchering' Scam Linked to North Korean Cybercrime
A massive cybercrime operation based in Cambodia, allegedly linked to North Korean hackers, has used AI to defraud victims worldwide, siphoning billions through complex financial schemes. This report explores the scale of the fraud, the role of AI, and the international crackdown to counter this emerging threat.
AI-Powered Cybercrime Drives Record US$16.6 Billion in Losses, FBI Reports
The FBI’s 2024 Internet Crime Report highlights $16.6 billion in cybercrime losses—driven in part by the growing use of AI in phishing, extortion, and cryptocurrency scams. Elderly victims suffered the most, with $4.8 billion in reported losses. The FBI is leveraging AI tools to combat this evolving threat.
AI-Powered Reverse Location Search Sparks Privacy Concerns Amid Viral Social Media Trend
AI’s growing ability to identify locations from photos has sparked a viral trend — and significant privacy concerns. As models like ChatGPT, Gemini, and Claude make reverse location searches easier, experts warn of potential misuse. This report explores the risks, safety tips, and calls for stronger regulations.
No AI Needed: How Old-School Smishing Still Steals Your Credit Card Info Worldwide
A Mandarin-speaking cybercriminal group, known as the Smishing Triad, has launched mass SMS-based phishing attacks across 121 countries. Using automation tools, phishing kits, and bulk messaging services, they impersonate banks and postal services to steal sensitive financial data. Institutions face mounting challenges as phishing incidents surge globally.
AI Urged After Cyberattack Hits AustralianSuper: $500K Stolen, MFA Missing
In April 2025, hackers exploited the lack of multifactor authentication in major Australian superannuation funds, stealing AU$500,000 from AustralianSuper accounts. The breach highlights urgent cybersecurity gaps and has sparked a push toward AI-driven threat detection to protect Australians’ retirement savings from evolving digital threats.
New AI Flaw Lets Hackers Trick Chatbots Like Google Gemini, Study Finds
A recent study reveals a major flaw in popular AI tools like Google Gemini. Hackers can secretly insert hidden commands into the system, causing chatbots to behave in dangerous ways—such as giving false info or leaking private data. The attack is cheap, hard to detect, and puts public trust in AI tools at risk.
AI-Generated Receipts Spark Debate Over Verification Systems and Fraud Risks
An AI-generated receipt from a fictional San Francisco steakhouse has gone viral for its realism, raising alarms about the future of image-based verification. With AI tools now able to produce convincing forgeries, industries are facing urgent questions about security, fraud prevention, and the evolving definition of digital proof.
OpenAI Backs Cybersecurity Firm Adaptive Security in US$43M Round to Combat AI Threats
Adaptive Security has secured $43 million in funding led by Andreessen Horowitz and the OpenAI Startup Fund—the first time OpenAI has invested in a cybersecurity firm. The company aims to fight AI-driven threats like deepfake social engineering using its own generative AI tools, real-time risk triage, and adaptive security training.
AI Chatbot Hacks Google Chrome’s Password Manager? ChatGPT Vulnerability Exposed
A shocking discovery reveals how ChatGPT was manipulated to generate malware capable of hacking Google Chrome’s Password Manager. Using a role-playing technique called 'immersive world' engineering, researchers exposed significant vulnerabilities in AI systems, raising urgent cybersecurity concerns.
Europol Warns: AI Fueling Rise in Organized Crime, Fraud, and Synthetic Abuse in EU
Europol’s latest SOCTA 2025 report exposes a major shift in organized crime, fueled by AI tools like generative models, deepfakes, and autonomous systems. From multilingual fraud to AI-generated child abuse material and smuggling optimization, the report warns of rising threats and urges stronger cooperation across the EU and beyond.
AI Surveillance in U.S. Schools: Safety Tool or Privacy Risk?
AI-powered student monitoring is growing in U.S. schools, aiming to prevent bullying and self-harm. However, data breaches and unintended privacy violations raise concerns about security and trust. While some argue these tools save lives, critics question their effectiveness and impact on student well-being. Is AI surveillance the future of school safety or a risk to privacy?
