OpenAI Cuts Ties With Mixpanel After Nov 8 Breach Exposes API User Data
OpenAI has terminated its relationship with third-party analytics provider Mixpanel after a targeted phishing campaign exposed limited personal information linked to API users. While OpenAI confirms that ChatGPT consumer data and passwords were not affected, the incident has prompted a broader security review of the company's vendor ecosystem to address supply chain risks.
FBI Warns of USD 262M in Account Takeover Losses as AI Scams Rise
The Federal Bureau of Investigation (FBI) has reported that cybercriminals have stolen more than USD 262 million from US bank accounts in 2025 through account takeover (ATO) fraud. With more than 5,100 victims reporting incidents this year, the average loss per case has exceeded USD 51,000. Authorities warn that attackers are increasingly leveraging generative AI and deepfake technology to bypass security measures and impersonate trusted contacts.
Deepfakes and Scams Drive USD 4.6 Billion in Global Losses
In late 2025, artificial intelligence has shifted from a background tool to a central driver of financial crime. With global crypto scam losses reaching US$4.6 billion and deepfakes accounting for 40% of high-value fraud, this report analyzes the mechanics behind artist "proof sketch" theft, celebrity deepfakes, and "digital arrest" schemes, while outlining practical defenses for creative professionals and investors.
Cloudflare Outage: 6-Hour Failure Disrupts ChatGPT, Claude and Millions of Websites Worldwide
Cloudflare suffered a major outage on 18 November 2025, triggering global 5xx errors and disrupting AI platforms such as ChatGPT and Claude, along with many mainstream websites. The incident, caused by a configuration error in Cloudflare’s bot-management system, lasted nearly six hours before full recovery.
ChatGPT Atlas Launch Triggers Security Concerns After 7-Day Prompt Injection Findings
OpenAI’s ChatGPT Atlas browser has drawn early criticism after researchers identified prompt injection vulnerabilities just seven days after launch. While the AI-powered browser promises faster, more intuitive web navigation, experts caution that its agentic features may introduce new security and privacy challenges for everyday users.
AI Firewalls Gain Momentum as Breach Costs Hit USD 4–5M and Zero-Day Risks Stay Low
AI-driven firewalls are becoming more widely used as organisations seek faster threat detection and stronger network resilience. With global breach costs averaging USD 4–5 million and most incidents linked to stolen credentials or known vulnerabilities, companies are adopting AI tools to manage growing data volumes and evolving attack techniques. This report examines the technology’s capabilities, business impact and emerging trends.
OpenAI’s Safety Router Sparks Debate as 1M Weekly Chats Trigger Emotional Distress Flags
OpenAI’s introduction of a safety routing feature in ChatGPT has sparked widespread debate among users, professionals, and digital rights advocates. Supporters view the change as a protective measure for individuals in distress, while critics argue it reduces user control and lacks transparency. The controversy highlights broader tensions in how AI systems balance safety, autonomy, and trust.
85% of Americans Fear AI Is Making Fraud Harder to Detect, Survey Finds
A new survey of more than 2,000 American adults reveals widespread concern that artificial intelligence is enabling more convincing and harder-to-detect scams. With emotional stress and financial losses rising across age groups, consumers increasingly expect banks to strengthen security measures while maintaining fast and convenient services.
Solidus AI Tech Launches NOVA AI Browser Tool to Counter US$2 Billion Web3 Hack Losses
Solidus AI Tech has introduced NOVA AI, an AI-powered browser extension that aims to improve security across Web3 platforms. The tool identifies phishing risks, scans smart contracts for vulnerabilities, and monitors multi-chain activities in real time, offering an added layer of protection for crypto users and developers.
Australia Issues First Sanction Over AI-Generated False Legal Citations
An Australian solicitor has faced professional sanctions for relying on AI-generated legal citations that proved false in a family law case. The Victorian Legal Services Board varied his practising certificate, restricting him to supervised work. The decision sets a precedent for regulating AI use in the legal profession and highlights the risks of unverified reliance on emerging technology.
Anthropic Report Highlights AI Misuse in Cyber Extortion, Fraud and Ransomware
Anthropic released its August 2025 Threat Intelligence Report, documenting cases where threat actors exploited its Claude AI model in cyber extortion, fraudulent remote employment schemes, ransomware development, and phishing attempts. The findings illustrate how AI lowers technical barriers for malicious operations, while also emphasizing the company’s detection measures and collaboration with authorities.
Kite AI Details Security Vulnerabilities in 'Agentic Internet'
Kite AI has released an analysis of security vulnerabilities in the agentic internet, where autonomous AI agents operate with memory and identity. The company identifies risks such as memory tampering, identity spoofing, and data poisoning, while proposing cryptographic and blockchain-based defenses. These insights come alongside new funding, product integrations, and forecasts of strong market growth.
OpenAI Tightens Security Measures Amid Espionage and DeepSeek Allegations
OpenAI has stepped up its internal security protocols to safeguard sensitive AI technology, introducing biometric fingerprint access, air-gapped systems, and restricted employee access to critical algorithms. The changes come as U.S. officials raise concerns over foreign espionage and as Chinese startup DeepSeek faces allegations of intellectual property misuse, highlighting intensifying competition in the global AI sector.
European Parliament Study Advocates Strict Liability for High-Risk AI Systems
A new study commissioned by the European Parliament calls for a dedicated strict liability regime for high-risk AI systems. The report argues existing EU rules, including the revised Product Liability Directive, remain insufficient to address AI’s unique risks. Without harmonized liability, it warns of national divergences that could impact accountability, innovation, and public trust.
Google’s AI ‘Big Sleep’ Stops Active SQLite Exploit, Marks First AI-Driven Cyber Defense Win
Google's AI-powered security agent, Big Sleep, has successfully detected and mitigated a previously unknown vulnerability in SQLite, preventing potential real-world exploitation. This marks a significant advancement in the use of artificial intelligence for proactive cybersecurity measures.
Qantas Data Breach Exposes 5.7M Customers in Cyber Attack
Australian airline Qantas revealed a data breach affecting up to 5.7 million customers, stemming from a social engineering attack on a third-party platform. Hackers used phone impersonation tactics, potentially enhanced by AI, to access personal details like names and emails. Experts link it to Scattered Spider, though unconfirmed. Analysis covers implications and future cyber trends.
Australia Ranked 10th in the 2024 Global Index on Responsible AI
Australia’s artificial intelligence sector is expanding rapidly, driven by a rise in startups, academic research, and skilled job opportunities. A comprehensive 2025 report by the National Artificial Intelligence Centre outlines the current state of the ecosystem, highlighting strengths in innovation and adoption, while also noting key challenges in commercialization, infrastructure, and digital sovereignty.
AI Challenges Shake Global Digital ID Systems: Study Highlights Privacy and Trust Risks
A new international study finds that national digital identity systems are increasingly vulnerable to fraud and privacy risks due to advancements in artificial intelligence. The research, covering seven countries, urges stronger safeguards to maintain public trust.
Dell, Trend Micro & NVIDIA Unite to Launch AI-Powered Secure Infrastructure Solutions
Dell Technologies has joined forces with Trend Micro and NVIDIA to introduce AI-powered infrastructure solutions designed for secure, scalable deployment across enterprise environments. Revealed during a Trend Micro event on June 18, 2025, the collaboration combines Dell’s PowerFlex storage, Trend Micro’s cybersecurity platform, and NVIDIA’s AI frameworks to support sectors like finance and healthcare.
OpenAI Report Reveals Surge in Global AI Misuse, Urges Stronger Industry Action
A new OpenAI report outlines a sharp increase in malicious use of AI tools, from deepfake-driven scams to social media manipulation and cyberattacks. The June 2025 document emphasizes the urgent need for industry-wide cooperation to counter AI-enabled threats while balancing innovation with responsible deployment.
