AI Agents Are Stressing API Security: 2025 Report Findings

Image Credit: Walkator | Unsplash

APIs have long been the plumbing that lets software systems talk to each other. In 2025, industry reporting increasingly frames APIs as the connection layer for AI agents, and that framing is putting API security and governance back in the spotlight.

Postman’s 2025 State of the API Report, based on a survey of over 5,700 developers, architects, and executives, says 89% of developers use AI, but only 24% design APIs with AI agents in mind. In the same report, 51% of developers say they worry about unauthorized or excessive API calls from AI agents, making it the top security concern cited.

A Data Bridge Between Systems

API stands for Application Programming Interface. NIST defines an API as a system access point or library function with a well-defined syntax that is accessible from application programs or user code to provide well-defined functionality.

Australia’s Digital Transformation Agency, via the Australian Government Architecture, describes an API as a set of rules, protocols, and tools that lets different software applications communicate, acting as an intermediary layer that transfers data between systems, services, and libraries.

AI Is More Than Text Generation Now

The AI shift is not just about generating text. The practical direction is AI systems that can take actions by calling tools and services such as booking systems, databases, document stores, and business workflows. In OpenAI’s documentation, “tool calling” is described as a multi-step exchange where an application provides tools a model can call, the model returns a tool call, the application runs code, then the model produces a final response using the tool output.
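That multi-step exchange can be sketched in code. Below is a minimal simulation of the loop, with a stand-in model function and a hypothetical `get_booking` tool; the function names and data are illustrative, not OpenAI’s actual API.

```python
import json

# Hypothetical tool the application exposes to the model.
def get_booking(booking_id: str) -> dict:
    # Stand-in for a real booking-system API call.
    return {"booking_id": booking_id, "status": "confirmed"}

TOOLS = {"get_booking": get_booking}

def fake_model(messages: list) -> dict:
    """Stand-in for a chat model: on the first turn it requests a tool
    call; once a tool result is present it produces a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_booking",
                              "arguments": {"booking_id": "B42"}}}
    tool_output = json.loads(
        next(m for m in messages if m["role"] == "tool")["content"])
    return {"content": f"Booking {tool_output['booking_id']} is "
                       f"{tool_output['status']}."}

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    reply = fake_model(messages)
    while "tool_call" in reply:        # model asked the app to call a tool
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**call["arguments"])  # app runs the code
        messages.append({"role": "tool", "content": json.dumps(result)})
        reply = fake_model(messages)   # model sees the tool output
    return reply["content"]

print(run_agent("Is my booking confirmed?"))
# → Booking B42 is confirmed.
```

The key point is the loop: the application, not the model, executes the tool, which is exactly why the API sitting behind that tool becomes the control surface.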

That pattern makes the API the control surface for what an AI agent can do inside real software, which is why API design, permissions, and monitoring become AI issues, not just developer issues.

Developers Are Worrying About Unauthorized Access

Postman’s report frames what it calls an “AI and API gap”: widespread AI usage among developers, but limited design focus on AI agent consumption.

On security, Postman’s PDF report states that 51% of developers worry about unauthorized or excessive API calls from AI agents. Its public report page summarises the same theme as “unauthorized agent access” being a top security risk. Read together, the PDF wording is the more specific version of the concern: the risk is both access and automated call volume.

API Standards

OpenAPI is not an API. It is a standard way to describe HTTP APIs so both humans and computers can understand a service’s capabilities without needing source code or extra documentation.

The OpenAPI Initiative announced OpenAPI Specification v3.2.0 in September 2025, calling out additions across supported HTTP methods, a new tag structure, and support for streaming media types.

For AI agent scenarios, machine readable API descriptions matter because they can reduce ambiguity about parameters, required authentication, and expected outputs, which improves reliability when software is calling software at scale.
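To make that concrete, here is a minimal OpenAPI 3 description for a hypothetical bookings endpoint, expressed as a Python dict for brevity. The field names follow the OpenAPI Specification; the `/bookings/{bookingId}` path and its security scheme are invented for illustration.

```python
# Minimal OpenAPI 3 description (hypothetical endpoint).
spec = {
    "openapi": "3.1.0",
    "info": {"title": "Bookings API", "version": "1.0.0"},
    "paths": {
        "/bookings/{bookingId}": {
            "get": {
                "summary": "Fetch one booking",
                "parameters": [{
                    "name": "bookingId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "security": [{"apiKeyAuth": []}],  # required auth is explicit
                "responses": {
                    "200": {"description": "The booking record"},
                    "404": {"description": "Unknown booking"},
                },
            }
        }
    },
    "components": {
        "securitySchemes": {
            "apiKeyAuth": {"type": "apiKey", "in": "header",
                           "name": "X-API-Key"}
        }
    },
}

# A machine consumer can read the contract without source access:
op = spec["paths"]["/bookings/{bookingId}"]["get"]
print(op["parameters"][0]["name"], op["security"])
```

Parameters, required authentication, and possible responses are all declared up front, which is precisely the ambiguity reduction that matters when an agent, not a human, is the caller.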

Open Source APIs

In practice, organisations mix approaches depending on the workload.

GraphQL describes itself as an open-source query language for APIs with a strongly typed schema, often used where clients want flexibility over which fields are returned.

gRPC describes itself as a modern, open-source, high-performance RPC framework designed to connect services across environments, with support for things like load balancing, tracing, health checking, and authentication.

Alongside these, many widely used APIs remain HTTP based and are described using standards like OpenAPI.

API Security Risks

OWASP’s Top 10 API Security Risks 2023 continues to be the current edition highlighted on the OWASP API Security project pages.

Two of the top listed risks are:

  • Broken Object Level Authorization, where attackers can manipulate object identifiers to access data they should not be able to access.

  • Broken Authentication, covering common failures in authentication mechanisms that attackers can target because APIs are broadly exposed.
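The first of those risks is easy to see in code. Below is a sketch of the vulnerable pattern and its fix, using an invented in-memory data model: the flaw is returning whatever record the caller names; the fix is checking ownership on every object access.

```python
# Hypothetical data store for illustration.
BOOKINGS = {
    "B1": {"owner": "alice", "details": "flight to Perth"},
    "B2": {"owner": "bob", "details": "hotel in Cairns"},
}

def get_booking_vulnerable(caller: str, booking_id: str) -> dict:
    # BOLA: any caller who guesses or enumerates an ID gets the record.
    return BOOKINGS[booking_id]

def get_booking_safe(caller: str, booking_id: str) -> dict:
    record = BOOKINGS.get(booking_id)
    if record is None or record["owner"] != caller:
        # Same error for "missing" and "not yours" avoids leaking which IDs exist.
        raise PermissionError("not found or not authorized")
    return record

print(get_booking_vulnerable("alice", "B2")["owner"])  # alice reads bob's record
try:
    get_booking_safe("alice", "B2")
except PermissionError:
    print("blocked")
```

An AI agent that enumerates identifiers will find a missing ownership check far faster than a human clicking through a UI would.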

The AI agent angle is straightforward: if an agent can call APIs rapidly and persistently, a single weak authorisation check or leaked credential can scale into a much larger incident. Postman’s report explicitly ties agent usage to call patterns that differ from human usage, and recommends measures such as agent identification, least privilege for agents, enhanced monitoring, and credential rotation.

Keep API Keys Off the Client

OpenAI’s guidance says not to deploy API keys in client side environments like browsers or mobile apps, and to route requests through your backend so the key stays secure. Its API reference also reminds developers that an API key is a secret and should not be exposed in client side code.

This is basic security hygiene, but it becomes more important as AI features increase call volume and broaden what a single credential can do.
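The backend-routing pattern can be sketched as follows. The upstream URL, environment variable name, and header shape are illustrative; the point is that the key is read from the server’s environment and never appears in anything returned to the client.

```python
import os

# Illustrative upstream; not a real endpoint.
UPSTREAM_URL = "https://api.example.com/v1/chat"

def build_upstream_request(user_prompt: str) -> dict:
    """Assemble the outbound request on the server side."""
    api_key = os.environ["EXAMPLE_API_KEY"]  # lives only on the backend
    return {
        "url": UPSTREAM_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"prompt": user_prompt},
    }

def response_for_client(upstream_body: dict) -> dict:
    # Whatever goes back to the browser or app must not echo credentials.
    return {"answer": upstream_body.get("answer", "")}

os.environ["EXAMPLE_API_KEY"] = "sk-demo"  # demo only; set via deployment config
req = build_upstream_request("hello")
print("Bearer" in req["headers"]["Authorization"])   # key attached server-side
print("sk-demo" in str(response_for_client({"answer": "hi"})))  # never sent back
```

Splitting “build the upstream request” from “shape the client response” makes it hard to leak the credential by accident, because only one function ever touches it.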

APIs as Reusable Infrastructure

Australia’s government architecture guidance frames APIs as an interoperability layer, and api.gov.au positions itself as a way to discover and implement APIs offered by Australian governments.

For Australian organisations integrating AI agents with internal and external services, that emphasis on standardisation and reuse aligns with the broader industry push to make APIs more predictable, governable, and secure for machine consumers.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
