Dam Secure Raises $6.1M Seed Round to Secure AI-Generated Code

Image Credit: Jacky Lee

Australia-based AI security startup Dam Secure has raised $6.1 million in seed funding to address security risks created by AI-generated code entering production at scale. The round was led by Paladin Capital Group, a Washington, DC-based cybersecurity and AI investor, and was described as oversubscribed, according to SmartCompany.

The company was founded by Patrick Collins and Simon Harloff, both former executives at Zip Co and Secure Code Warrior. Paladin Capital managing director Mourad Yesayan will join the board, SmartCompany reported.

Who Backed the Round

SmartCompany reported the seed round also attracted backing from several local industry figures, including Pieter Danhieux (CEO of Secure Code Warrior), Anthony Woodward (CEO of RecordPoint), Phaedon Stough (founder of Innovation Bay) and Steen Andersson (chief product officer at Tyro Payments).

Dam Secure said it will use the funding to grow its Australian research and development team to 13 people and build a US-based sales and marketing presence, ahead of a broader commercial rollout planned for 2026.

AI Coding Is Creating a Verification Bottleneck

Dam Secure’s funding lands amid a broader shift in software delivery: AI coding assistants and AI agents are accelerating code output, but are also increasing the burden on teams to validate correctness and security.

Sonar’s 2026 State of Code Developer Survey found that AI now accounts for 42 percent of committed code, and developers expect that figure to rise to 65 percent by 2027. The same survey highlights a trust gap: 96 percent of developers say they do not fully trust AI-generated code to be functionally correct, yet only 48 percent say they always check AI-assisted code before committing it. Sonar describes this growing review burden as a new bottleneck at the verification stage of development and points to “verification debt”, a term used by Werner Vogels, CTO of Amazon Web Services.

For security teams, that combination of higher velocity and incomplete verification can mean more opportunities for logic errors and missed controls to ship into production, especially when existing scanning tools produce large volumes of alerts that are difficult to triage in fast-moving workflows.

Rules for Logic Flaws, Not Just Known Signatures

SmartCompany reported Dam Secure is positioning its platform to catch logic gaps: flaws where code works as intended from a functional perspective but still violates a basic security expectation, such as missing controls around authentication, authorisation, or abuse prevention.

Instead of relying only on scanners that look for known vulnerability patterns, Dam Secure says organisations can define security requirements in plain English, and the platform automatically enforces those rules across large codebases during development. The company positions the product as complementary to existing application security tooling, focusing on logic level flaws rather than known vulnerability signatures.
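One way to picture the plain-English-rules idea is to pair each natural-language requirement with a machine-checkable predicate evaluated against metadata extracted from a codebase. The sketch below is purely illustrative: the rule texts, `Endpoint` fields, and `audit` helper are invented here and say nothing about Dam Secure's actual design.

```python
# Hypothetical sketch of plain-language security rules backed by
# machine-checkable predicates. All names are illustrative; this is
# not Dam Secure's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Endpoint:
    """Minimal metadata a scanner might extract for one API route."""
    path: str
    requires_auth: bool = False
    rate_limited: bool = False

@dataclass
class Rule:
    text: str                        # the plain-English requirement
    check: Callable[[Endpoint], bool]  # True when the endpoint complies

RULES = [
    Rule("All authentication endpoints must be rate limited",
         lambda e: not e.path.startswith("/auth") or e.rate_limited),
    Rule("Every endpoint under /account must require authentication",
         lambda e: not e.path.startswith("/account") or e.requires_auth),
]

def audit(endpoints):
    """Return (rule text, endpoint path) for every violation found."""
    return [(r.text, e.path)
            for e in endpoints for r in RULES if not r.check(e)]

endpoints = [
    Endpoint("/auth/login", rate_limited=False),       # violates rule 1
    Endpoint("/account/profile", requires_auth=True),  # compliant
]
violations = audit(endpoints)
# violations flags the unthrottled /auth/login route
```

The point of the sketch is the separation of concerns: the human-readable rule text travels with the predicate, so a violation report can quote the requirement verbatim.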

SmartCompany reported the platform builds a proprietary Security Knowledge Graph for each codebase, mapping relationships, data flows and logic paths across the system so it can reason about behaviour in context rather than scanning files in isolation.
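To make the graph idea concrete, here is a toy version of the kind of question such a structure can answer: does any flow from a request handler reach a sensitive sink without passing through an auth check? The graph, node names, and `unguarded_paths` helper below are invented for illustration; Dam Secure's proprietary Security Knowledge Graph is, by SmartCompany's description, far richer than this.

```python
# Illustrative "knowledge graph" style check: model code relationships
# as a directed graph and search for paths that reach a sensitive sink
# while bypassing a guard node. All node names are hypothetical.
from collections import deque

# Directed edges: caller -> callees / data-flow targets.
GRAPH = {
    "http_handler": ["parse_request"],
    "parse_request": ["get_vehicle_location", "check_auth"],
    "check_auth": ["get_vehicle_location"],
}

def unguarded_paths(graph, source, sink, guard):
    """BFS for paths from source to sink that never visit the guard node."""
    found, queue = [], deque([(source, [source])])
    while queue:
        node, path = queue.popleft()
        if node == sink:
            found.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt != guard and nxt not in path:  # skip guard, avoid cycles
                queue.append((nxt, path + [nxt]))
    return found

bad = unguarded_paths(GRAPH, "http_handler", "get_vehicle_location", "check_auth")
# Any non-empty result flags a flow that bypasses authentication.
```

Reasoning over paths like this is what distinguishes a graph approach from file-by-file scanning: the flaw is not in any single function, but in a route through several of them.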

On developer workflow, SmartCompany said the platform can operate as a security wrapper around popular AI coding tools such as GitHub Copilot, Claude and Cursor, regardless of the underlying model. It currently supports Java, C#, TypeScript, JavaScript, Python and Go.

The Volkswagen API Example

In SmartCompany’s reporting, Collins pointed to a logic flaw disclosed in May 2025 in Volkswagen’s connected car APIs as an example of the kind of control a rule-based approach aims to enforce. The example centred on a missing safeguard (rate limiting) on an authentication-style endpoint involving a four-digit access code, which could allow brute-force attempts and expose vehicle location data.

SecurityWeek separately reported that Volkswagen patched vulnerabilities in its My Volkswagen application after a researcher published a technical write up. SecurityWeek said the flaws could have allowed attackers to obtain other users’ information, including vehicle location and personal data. Volkswagen told SecurityWeek the issue only impacted the app used in India and said there was no evidence of exploitation in the wild.

The lesson for the broader market is that high-impact incidents do not always hinge on exotic bugs. They can stem from missing guardrails that are easy to state as policy, such as requiring rate limiting on authentication endpoints, yet easy to overlook when teams ship quickly.
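The missing control in the Volkswagen example is also one of the simplest to write down. Below is a minimal fixed-window rate limiter for an access-code endpoint, included purely as a sketch: a production limiter would live in shared storage such as Redis and be combined with lockouts, and none of these names come from the reported incident.

```python
# Minimal fixed-window rate limiter for an access-code check.
# Illustrative only: production systems would use shared storage
# (e.g. Redis) plus lockouts, not an in-process dict.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5   # a four-digit code has only 10,000 combinations,
                   # so unthrottled guessing is trivial to brute-force

_attempts = defaultdict(list)  # client id -> timestamps of recent attempts

def allow_attempt(client_id, now=None):
    """Return True if this client may try another code in the current window."""
    now = time.monotonic() if now is None else now
    window = [t for t in _attempts[client_id] if now - t < WINDOW_SECONDS]
    if len(window) >= MAX_ATTEMPTS:
        _attempts[client_id] = window
        return False
    window.append(now)
    _attempts[client_id] = window
    return True
```

A requirement like "authentication endpoints must be rate limited" reduces, mechanically, to checking that every code-entry path calls something like `allow_attempt` before comparing the code, which is exactly the kind of guardrail that is easy to state and easy to forget.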

Developer-First Security Is Attracting Capital

Dam Secure’s angle sits within a larger, investor-backed push toward developer-first security controls that fit modern build pipelines and AI-assisted coding.

On 14 January 2026, Reuters reported Belgian startup Aikido Security reached a $1 billion valuation after raising $60 million, positioning its tools as guardrails for developers, especially as AI influences software development. Reuters quoted Aikido’s CEO Willem Delbare and DST Global managing partner Tom Stafford on the need for new approaches as AI changes software delivery.

While Dam Secure and Aikido are different products, the shared theme is that security tooling is shifting closer to where code is written and merged. In practice, the market is moving from post-release detection toward earlier-stage prevention, partly because AI increases both the amount of code written and the speed at which it moves into production.

What to Watch Next

For Dam Secure, the next set of signals will matter more than the size of the seed round.

  1. Independent proof points: SmartCompany reported Dam Secure is deployed with six major technology organisations on a private, invite-only basis, and that Collins said early results show false positives below 10 percent, compared with around 50 percent as an “industry average”. These are company statements reported by SmartCompany and have not been independently verified.

  2. Scaling plain English rules without policy sprawl: Natural language policy sounds accessible, but large engineering organisations may need strong governance to avoid conflicting rules, unclear ownership, and a growing library of requirements that developers cannot easily interpret.

  3. Integration into real delivery pipelines: The practical impact will depend on how well the platform integrates with developer tools, CI pipelines, existing application security platforms, and incident response workflows, without slowing teams down or creating workarounds.

  4. Measurable reduction in production issues: The most meaningful metric is not only fewer alerts, but fewer real incidents and fewer logic level security defects escaping into production.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
