What Is Chatly AI? Features, Models and Security Risks Explained
ChatlyAI.app is the public website for Chatly, an AI assistant positioned as an all-in-one conversational platform that combines chat, document analysis and an AI-search-style experience. Its help centre describes Chatly as a multi-model assistant where users can choose among different large language models depending on the task.
On the Apple App Store in Australia, the app listing for “Chatly AI: AI Smart Assistant” shows the developer as “vyro.ai pvt ltd” and describes the product as “built on DeepSeek, GPT, Claude”, with features such as PDF analysis and image generation.
Who Is Behind It
Public materials link Chatly to Vyro AI. The Chatly models page presents the product as part of Vyro AI’s offering, and Cybernews reported in September 2025 that Vyro AI was behind Chatly alongside other apps in its portfolio.
Model Aggregation Is the Core Design
Chatly’s website maintains a “Models” page that lists “30+ AI models” and shows options from multiple AI labs in one interface. The list includes model names such as GPT, Claude, Gemini, Grok and others as selectable choices.
A practical detail for readers: the Google Play store listing for Chatly AI (package id com.vyro.chatly) highlights a smaller set of models and older model names than the website list, describing “triple AI power” using GPT-4o, Gemini 2.5 Flash and Claude 3.7 Sonnet.
That mismatch does not automatically mean either page is “wrong”. Store listings often lag product updates, and they can simplify model branding for general audiences.
What It Does
Chatly’s product pages and help centre describe several AI-driven workflows:
AI chat with model switching: the help centre positions Chatly as a unified chat interface with multiple model options.
AI search engine mode: the website presents an “AI Search Engine” feature focused on answering questions using web style retrieval and summarisation.
AI Docs: the product describes document oriented features, including summarising and extracting insights from uploaded documents.
Image features: the Apple App Store listing describes both text-to-image generation and image understanding.
Data Handling and Privacy
Chatly’s Terms of Service include a broad licence allowing the service to use user inputs to provide and improve the service, and they explicitly state that user inputs may be used to train or refine models unless otherwise agreed. The terms also state that user input is not confidential and that users should not submit confidential information.
Its Privacy Policy lists a range of third-party services used for operating the product, including analytics and payment processing providers, which is common for consumer apps but relevant when user content may include sensitive material.
The Google Play “Data safety” section for Chatly AI says “No data collected” and “No data shared with third parties”. Google’s own documentation explains that “data safety” declarations have specific definitions and can exclude some situations, such as ephemeral processing or certain flows where data is provided directly to another service.
For users, the practical point is that store labels should be read alongside the app’s own legal terms and privacy policy, especially for AI products where prompts and uploads can be highly sensitive.
Chatly Has Appeared in Breach Reporting
Cybernews reported in September 2025 that an unsecured Elasticsearch instance linked to Vyro AI exposed 116GB of live logs from three products including Chatly, and that exposed data included user prompts and bearer authentication tokens. Cybernews also published a disclosure timeline noting discovery in April 2025 and initial disclosure in July 2025. Dark Reading covered the incident as an example of weak security hygiene and the broader risk of users placing sensitive information into generative AI prompts.
It is not possible to confirm from public reporting whether any specific individual user was harmed, but the categories of exposed material reported (prompts plus access tokens) are widely recognised as high risk, because they can enable account takeover and expose private content.
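To see why leaked bearer tokens are so dangerous, consider how they are used. A minimal, hypothetical sketch (the function name and token value are illustrative, not taken from Chatly's API): a bearer token is simply a string placed in an HTTP `Authorization` header, so a server cannot distinguish a request replayed by an attacker from one sent by the legitimate user.

```python
# Hypothetical sketch of bearer-token authentication, illustrating
# why a leaked token is high risk: possession of the string alone
# is sufficient to impersonate the account it belongs to.

def build_auth_header(token: str) -> dict:
    """Construct the HTTP header a typical API client would send."""
    return {"Authorization": f"Bearer {token}"}

# Placeholder value, not a real credential.
leaked_token = "example-token-123"

# The real client and an attacker replaying the leaked token
# produce byte-identical headers; the server sees no difference.
user_request = build_auth_header(leaked_token)
attacker_request = build_auth_header(leaked_token)
assert user_request == attacker_request
```

This is why breach reporting treats exposed tokens as an account-takeover risk in their own right, separate from the sensitivity of the prompt content that leaked alongside them.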
How Chatly Compares with Similar Products in 2026
Chatly sits in the same broad category as “AI aggregators”, but it differs from major platforms in governance, data controls, and product goals.
ChatGPT: OpenAI documents that Temporary Chats are not used to improve models, and its Data Controls materials describe options intended to limit training use in certain modes. This is a more explicit training control story than many smaller aggregators, although the exact options vary by product tier and mode.
Google Gemini with Deep Research: Google’s help documentation describes Deep Research as a structured research workflow inside Gemini that gathers and organises information with citations and a report style output. This is more “research workflow” than “model marketplace”.
Perplexity: Perplexity states that agreements with third party model providers prohibit using Perplexity data to train those external models. Perplexity’s privacy materials also describe how data can be used depending on settings and product context, so the key distinction is often about external provider training and enterprise controls rather than “no data use at all”.
Claude Projects: Anthropic’s Claude Projects are positioned as self contained workspaces with their own chat history and a project knowledge base, which is a different organisational model than Chatly’s multi model switching.
Poe: Poe’s privacy centre states that conversations with third party bots made by independent creators may be used by those creators to improve their bots, highlighting that aggregator style platforms can involve multiple data handling regimes within the same app experience.
What to Watch If You Are Evaluating Chatly
Treat the model list as a product claim, not a guarantee. Aggregators can rename, swap, or retire models quickly. Cross check the in app model picker and the current website list.
Read the training and confidentiality language before pasting sensitive text or uploading documents, especially for work, health, legal, or client data.
Use the 2025 breach reporting as a reminder that prompt data can be highly sensitive, and smaller app ecosystems may not match the security maturity of large platform vendors.
