AI Browsers Exposed: 4 Major Prompt-Injection Flaws Hit ChatGPT Atlas and Perplexity Comet

Image Credit: Jacky Lee

Security researchers have confirmed that Perplexity AI’s Comet browser shares significant prompt-injection vulnerabilities with OpenAI’s newly released ChatGPT Atlas, underscoring what experts describe as a systemic challenge across the entire category of agentic AI browsers. Both products embed large language models (LLMs) directly into the browsing experience, allowing an AI agent to read pages, generate summaries and, in more advanced modes, perform actions on a user’s behalf. This convenience comes with a trade-off: the boundary between trusted user instructions and untrusted web data becomes blurred, creating persistent openings for indirect prompt injection, where malicious instructions are embedded within webpages, images or other inputs.

A Design Shift That Breaks Traditional Browser Assumptions

Prompt injection has been a core concern since generative AI systems rose to prominence in late 2022. Traditional web browsers enforce strict isolation mechanisms such as the same-origin policy, ensuring one site cannot freely access another’s content. Agentic browsers shift this model by design: they ingest and interpret arbitrary page content through LLMs, often treating everything, from visible text to hidden metadata, as potential instructions.

Throughout 2025, security teams from Brave Software, LayerX, NeuralTrust and Guardio collectively highlighted multiple attack vectors affecting AI browsers. Their findings included:

  • Invisible or hidden text, such as CSS-hidden instructions or HTML comments, capable of silently steering the AI agent.

  • Prompt injections embedded in images or screenshots, revealed once OCR processing occurs.

  • Malformed or misleading URLs that the AI interprets as trusted natural-language instructions.

  • Tainted agent memory, where malicious instructions can be written into stored “memories” and resurface in later interactions.

While each firm documented different aspects of the threat landscape, all reached similar conclusions: the underlying architecture of AI browsers makes it inherently difficult for an LLM to distinguish between legitimate user commands and hostile content supplied by a webpage.

ChatGPT Atlas: Early Flaws in a Unified Omnibox

OpenAI released ChatGPT Atlas, a Chromium-based browser for macOS, on 21 October 2025. Shortly after launch, security researchers, including NeuralTrust, demonstrated that Atlas’s unified “omnibox”, which accepts both URLs and natural-language commands, could be abused so that malicious prompts disguised as URL-like strings are treated as instructions by the AI agent.

Researchers also showed how indirect prompt injection via webpage content, copied text or manipulated clipboard entries could cause Atlas’s agent to reveal sensitive data or perform unintended actions. Several analyses illustrated how hidden webpage instructions could silently guide the agent to follow malicious links, navigate to sensitive sites, alter settings or attempt to extract private information from logged-in services.

OpenAI has publicly acknowledged that prompt injection remains a difficult and unsolved security challenge. Since launch, the company has deployed several security-focused updates to Atlas, including an isolated clipboard mode, stricter confirmation prompts for higher-risk actions and additional controls around agent mode and authenticated sessions. These changes aim to reduce the impact of prompt injection, but do not eliminate the underlying architectural risk.

Perplexity Comet: Rapid Adoption Paired With Recurring Injection Risks

Perplexity AI launched Comet in July 2025 for premium subscribers before expanding access to all users in October. Soon after its release, Brave Software demonstrated that even simple features such as webpage summaries could allow hidden instructions on a page to hijack the agent and exfiltrate sensitive information from logged-in accounts, such as email or other web services.

In October, researchers at LayerX detailed an attack they named “CometJacking”, where malicious links could redirect the agent into executing embedded instructions and interacting with sensitive sites. Around the same time, Brave documented a separate class of “unseeable” injections concealed inside screenshots or images — triggered as soon as Comet’s OCR extracted the hidden text and passed it to the model.

Following these disclosures, Perplexity has implemented a defense-in-depth mitigation framework, including real-time injection detection, layered controls across the task lifecycle and expanded confirmation prompts for high-risk actions. Updates released in November 2025 added further safeguards around multi-tab workflows and require more explicit user approval for actions involving authenticated sites or sensitive data. The company continues to patch vulnerabilities as they are reported but emphasises that complete prevention of indirect prompt injection remains an industry-wide challenge.

A Category-Wide Architectural Problem

Both Atlas and Comet exhibit similar weaknesses because they rely on similar architectural principles: an LLM is tasked with interpreting a mix of trusted user input and untrusted webpage content, then taking action based on that interpretation. Security researchers consistently describe this pattern as the root cause behind the shared vulnerabilities.

Brave Software has described prompt-injection flaws as a “systemic challenge facing the entire category of AI-powered browsers”, a sentiment echoed by independent reporting from outlets such as The Register, Fortune and TechCrunch. These reports highlight that LLM-driven browsers inherently expand the attack surface by turning page content into actionable commands.

Growing Risks as Browsers Become Autonomous

As developers push AI browsers toward more complex automation — handling tasks such as travel bookings, email management, shopping or financial workflows — the risks associated with prompt injection intensify. Cybersecurity analysts warn that without major architectural advances, attackers could exploit indirect prompt injection to orchestrate phishing campaigns, leak confidential data or trigger unauthorised operations within enterprise environments.

Experts argue that safer AI browsing may require:

  • Stricter separation between data the agent reads and commands it executes, potentially using different models or isolated contexts.

  • Dedicated control or policy models that specialise in secure decision-making, instead of relying solely on general-purpose language prediction.

  • Clearer consent and audit trails, especially for actions performed on logged-in accounts, so that users and organisations can understand what the agent did and why.

Until such approaches mature, users are urged to enable confirmation prompts, avoid fully automated agents for sensitive tasks and operate in logged-out or sandboxed modes when possible.

3% Cover the Fee
TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.

Previous
Previous

Cloudflare Outage: 6-Hour Failure Disrupts ChatGPT, Claude and Millions of Websites Worldwide

Next
Next

Google Launches Gemini 3 Pro: 37.5% HLE Score and Instant Search Integration