
Digital Security | TheDayAfterAI News

No AI Needed: How Old-School Smishing Still Steals Your Credit Card Info Worldwide

A Mandarin-speaking cybercriminal group, known as the Smishing Triad, has launched mass SMS-based phishing attacks across 121 countries. Using automation tools, phishing kits, and bulk messaging services, they impersonate banks and postal services to steal sensitive financial data. Institutions face mounting challenges as phishing incidents surge globally.
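As a hedged illustration only (this is not the Triad's tooling, and the keyword list, weights, and threshold are assumptions), a naive filter for delivery-scam texts might look like the sketch below; it also hints at why carefully worded messages slip past simple heuristics.

```python
import re

# Illustrative heuristics only; the keywords and scoring are assumptions,
# not a description of any real carrier or bank filter.
SUSPECT_KEYWORDS = {"parcel", "customs", "account locked", "verify", "toll"}
URL_PATTERN = re.compile(r"https?://\S+")

def smishing_score(sms_text: str) -> int:
    """Return a rough risk score for an incoming SMS (higher = more suspicious)."""
    text = sms_text.lower()
    score = 0
    # Links in unsolicited texts are the main lure in postal/bank smishing.
    if URL_PATTERN.search(text):
        score += 2
    # Impersonation bait words commonly seen in delivery and banking scams.
    score += sum(1 for kw in SUSPECT_KEYWORDS if kw in text)
    # Requests for card details or one-time codes are a strong signal.
    if re.search(r"(card number|cvv|one[- ]time code|otp)", text):
        score += 3
    return score

if __name__ == "__main__":
    msg = "Your parcel is held at customs. Pay the fee here: http://example.com/pay"
    print(smishing_score(msg))
```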


AI Urged After Cyberattack Hits AustralianSuper: $500K Stolen, MFA Missing

In April 2025, hackers exploited the lack of multifactor authentication in major Australian superannuation funds, stealing AU$500,000 from AustralianSuper accounts. The breach highlights urgent cybersecurity gaps and has sparked a push toward AI-driven threat detection to protect Australians’ retirement savings from evolving digital threats.
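Since the incident turns on the absence of multifactor authentication, the following minimal sketch shows a time-based one-time password (TOTP) check using the pyotp library; the function names and login flow are hypothetical and do not describe any fund's real systems.

```python
import pyotp

# Hypothetical sketch of a TOTP second factor; not AustralianSuper's
# implementation, just an illustration of the missing control.

def enroll_user() -> str:
    """Generate a per-user TOTP secret (secure storage omitted here)."""
    return pyotp.random_base32()

def verify_login(secret: str, password_ok: bool, submitted_code: str) -> bool:
    """Allow access only when both the password and the TOTP code check out."""
    if not password_ok:
        return False
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_user()
    print("Provisioning URI:", pyotp.TOTP(secret).provisioning_uri(
        name="member@example.com", issuer_name="DemoFund"))
    print("Accepted:", verify_login(secret, True, pyotp.TOTP(secret).now()))
```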


AI-Generated Receipts Spark Debate Over Verification Systems and Fraud Risks

An AI-generated receipt from a fictional San Francisco steakhouse has gone viral for its realism, raising alarms about the future of image-based verification. With AI tools now able to produce convincing forgeries, industries are facing urgent questions about security, fraud prevention, and the evolving definition of digital proof.


OpenAI Backs Cybersecurity Firm Adaptive Security in US$43M Round to Combat AI Threats

Adaptive Security has secured $43 million in funding led by Andreessen Horowitz and the OpenAI Startup Fund—the first time OpenAI has invested in a cybersecurity firm. The company aims to fight AI-driven threats like deepfake social engineering using its own generative AI tools, real-time risk triage, and adaptive security training.


Europol Warns: AI Fueling Rise in Organized Crime, Fraud, and Synthetic Abuse in EU

Europol’s latest SOCTA 2025 report exposes a major shift in organized crime, fueled by AI tools like generative models, deepfakes, and autonomous systems. From multilingual fraud to AI-generated child abuse material and smuggling optimization, the report warns of rising threats and urges stronger cooperation across the EU and beyond.


AI Surveillance in U.S. Schools: Safety Tool or Privacy Risk?

AI-powered student monitoring is growing in U.S. schools, aiming to prevent bullying and self-harm. However, data breaches and unintended privacy violations raise concerns about security and trust. While some argue these tools save lives, critics question their effectiveness and impact on student well-being. Is AI surveillance the future of school safety or a risk to privacy?


10 Ways to Protect Your Privacy While Using DeepSeek

As DeepSeek AI faces scrutiny for collecting user data like keystrokes and storing it in China, users seek privacy solutions. This report outlines 10 actionable methods—VPNs, local use, and more—detailing how they work, their benefits, and limitations to help you balance convenience and security amid growing AI privacy concerns.
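One commonly cited method, running the model locally so prompts never leave your machine, can be sketched as follows. The sketch assumes a local Ollama server on its default port 11434 and a DeepSeek-derived model tag such as deepseek-r1:7b; both are assumptions about your setup rather than details from the article.

```python
import json
import urllib.request

# Assumes a local Ollama server (default port 11434) with a DeepSeek-derived
# model already pulled, e.g. `ollama pull deepseek-r1:7b`; the model tag may
# differ on your install.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_locally(prompt: str, model: str = "deepseek-r1:7b") -> str:
    """Send the prompt to a locally hosted model so it never leaves this machine."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_locally("Summarise why local inference limits data exposure."))
```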


South Korea Bans DeepSeek AI Chatbot Over Privacy Concerns, Following Italy’s Lead

South Korea has halted new downloads of China’s DeepSeek AI chatbot, citing privacy and national security concerns. This follows Italy’s earlier ban, with other countries imposing restrictions. Despite its cost-effective AI model, DeepSeek faces scrutiny over data collection and compliance with global privacy laws. Will it adapt to regain access?


Does DeepSeek Track Your Keyboard Input? A Serious Privacy Concern

South Korea’s National Intelligence Service (NIS) has issued a security warning about DeepSeek, a Chinese AI chatbot accused of excessive data collection and keylogging. The agency claims the chatbot tracks users' keyboard input patterns and transfers data to Chinese servers, raising concerns about privacy, government surveillance, and potential security threats. Some countries have already banned the app.


DeepSeek AI Among the Least Reliable Chatbots in Fact-Checking, Audit Reveals

DeepSeek AI, a newly launched Chinese chatbot, has raised concerns after scoring an 83% fail rate in a NewsGuard audit on news accuracy. The chatbot frequently failed to debunk misinformation, inserted Chinese government narratives, and struggled with outdated information. Despite its rapid adoption, DeepSeek's reliability remains in question, highlighting risks in the AI-driven news landscape.


Exploring Methods to Bypass DeepSeek's Censorship: An AI Perspective

DeepSeek AI applies strict censorship, limiting discussions on sensitive topics. However, users have found ways to bypass these restrictions using AI-driven techniques such as character substitution, role-playing, and running models locally. This report explores these methods, their effectiveness, and the ethical concerns surrounding censorship circumvention.


DeepSeek AI Chatbot Exposed: 1M Sensitive Records Leaked, Misinformation Raises Concerns

A major security lapse in DeepSeek AI has exposed over 1 million sensitive records, raising serious privacy concerns. Cybersecurity firm Wiz discovered an unprotected database containing user chat histories, API keys, and backend details. Meanwhile, a study found the chatbot had an 83% misinformation rate, making it the least reliable among 11 tested AI models.


DeepSeek AI Faces Security and Privacy Backlash Amid OpenAI Data Theft Allegations

DeepSeek AI, a rising competitor to OpenAI, is under fire for alleged data theft, security vulnerabilities, and privacy concerns. Investigations suggest the company may have used OpenAI’s API to develop its models, while its data collection practices in China raise national security alarms. With governments and experts voicing concerns, the AI industry faces tough questions about innovation, ethics, and data protection.
