Security

Your AI Agents Are Talking in Plaintext

The Google A2A protocol connects 150+ organizations. It has zero message-level encryption. Here's what that means.

February 2026

Google's Agent-to-Agent protocol is the closest thing we have to a standard for AI agent communication. Microsoft, SAP, Zoom, and over 150 organizations have adopted it. It defines how agents discover each other, exchange tasks, and report results.

It does not define how they do any of this securely.

I don't mean there's a gap in the implementation. I mean the specification itself contains no message-level encryption, no sender verification, and no key management. Agents talk in plaintext. Any message can be forged. And the "authentication" section of the spec is a placeholder that says "use OAuth or HTTP signatures" without specifying either.

This matters because the agents using A2A aren't chatbots. They're autonomous systems with API keys, database access, and the ability to execute code. A compromised message isn't a typo — it's a command injection into your infrastructure.

The numbers

88% of organizations experienced an agent security incident
21.9% treat agents as independent, identity-bearing entities
14.4% of deployed agents have full IT/security approval

Source: Gravitee, State of AI Agent Security 2026

Those numbers are from a survey of organizations that are already deploying agents in production. Only one in five treats its agents as independent identities — the rest run them under shared human or service accounts, the equivalent of giving every employee the same badge and hoping nobody leaves the company. And 86% of agents in the wild were deployed without full security sign-off.

What A2A doesn't do

The A2A spec is well-designed for what it covers: task lifecycle, streaming, agent discovery. But security was explicitly left as an exercise for the implementer. Here's what's missing:

| Security property        | A2A spec      | A2A Secure             |
|--------------------------|---------------|------------------------|
| Message-level encryption | Not specified | AES-256-GCM            |
| Sender verification      | Not specified | Ed25519 signatures     |
| Replay protection        | Not specified | Timestamp + nonce      |
| Key rotation             | Not specified | 24h auto-rotation      |
| Agent Card signing       | Not required  | Cold key delegation    |
| Trust recovery           | No mechanism  | Self-healing /introduce |

That first row is the one that should worry you. If your agents exchange instructions, tool results, or user data over A2A, and you haven't added encryption yourself, anyone on the network path can read every message.

Five things that go wrong

These aren't theoretical. They're documented attack vectors, several with published CVEs or research papers.

1. Agent Card spoofing

A2A agents discover each other via Agent Cards served at /.well-known/agent.json. The spec doesn't require these to be signed. An attacker who can modify DNS or sit on the network path can serve a fake Agent Card, redirecting all traffic to a malicious endpoint. Palo Alto's Unit42 team documented this as "agent session smuggling."
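The spec doesn't require card signing, but the fix is small. Here's a minimal sketch of what a signed Agent Card could look like, assuming Ed25519 keys via the Python `cryptography` package and a publisher key the peer has already pinned (both are assumptions; the A2A spec defines neither the key type nor the pinning mechanism):

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical Agent Card, as it might be served at /.well-known/agent.json
card = {"name": "billing-agent", "url": "https://agents.example.com/a2a"}

# The publisher signs a canonical JSON encoding of the card.
signing_key = Ed25519PrivateKey.generate()
card_bytes = json.dumps(card, sort_keys=True, separators=(",", ":")).encode()
signature = signing_key.sign(card_bytes)

# A peer that has pinned the publisher's public key can detect tampering.
public_key = signing_key.public_key()
public_key.verify(signature, card_bytes)  # authentic card: no exception

# An attacker swaps in a malicious endpoint; the signature no longer verifies.
tampered = dict(card, url="https://attacker.example/a2a")
tampered_bytes = json.dumps(tampered, sort_keys=True, separators=(",", ":")).encode()
try:
    public_key.verify(signature, tampered_bytes)
    forged_card_accepted = True
except InvalidSignature:
    forged_card_accepted = False  # spoofed card is rejected
```

The canonical-encoding step matters: both sides must serialize the card identically (sorted keys, fixed separators), or valid signatures will fail to verify.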

2. Agent-in-the-Middle

Without message signing, there's no way to verify who sent a message. LevelBlue's research team demonstrated a full Agent-in-the-Middle attack against A2A, intercepting and modifying messages between two agents in real time. The attacker doesn't need to break any cryptography — there is none to break.

3. Replay attacks

Capture a valid A2A message. Send it again. Without timestamps, nonces, or sequence numbers, the receiving agent processes it as a new request. If the original message was "transfer $10,000 to account X," it just happened twice.
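A replay guard needs only two checks: is the message fresh, and has this nonce been seen before? A minimal sketch; the 30-second freshness window and the in-memory nonce cache are illustrative choices, not anything the A2A spec defines:

```python
import time

class ReplayGuard:
    """Reject messages that are stale or whose nonce was already seen."""

    def __init__(self, max_age_seconds=30):
        self.max_age = max_age_seconds
        self.seen = {}  # nonce -> arrival time

    def accept(self, nonce, timestamp, now=None):
        now = time.time() if now is None else now
        # Evict nonces older than the window so the cache stays bounded.
        self.seen = {n: t for n, t in self.seen.items() if now - t < self.max_age}
        if now - timestamp > self.max_age:
            return False  # stale: outside the freshness window
        if nonce in self.seen:
            return False  # replay: nonce already processed
        self.seen[nonce] = now
        return True

guard = ReplayGuard()
t = 1_000_000.0
fresh = guard.accept("n1", t, now=t)            # new message: accepted
replayed = guard.accept("n1", t, now=t + 1.0)   # same nonce again: rejected
stale = guard.accept("n2", t, now=t + 60.0)     # 60s old: rejected
```

Note that the timestamp and nonce only help if they're covered by the message signature; otherwise an attacker can simply rewrite them.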

4. Cascading compromise

Multi-agent systems are graphs. One compromised agent can send instructions to every agent it's connected to, and those agents pass along the results. Researchers at Cornell demonstrated that web-based attacks cause multi-agent systems to execute arbitrary malicious code in 58-90% of trials — even when individual agents have built-in prompt injection protections. The trust model is: "if the message arrived, it must be legitimate."

5. Credential exposure

Agents routinely pass API keys, database credentials, and user tokens to each other as part of task delegation. Without encryption, these credentials travel in plaintext. A network tap on a single link can harvest credentials for dozens of downstream services.
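Authenticated encryption closes this off. A minimal sketch using AES-256-GCM from the Python `cryptography` package; how the two agents derive the shared key is assumed to happen out of band (a real deployment would use an authenticated key agreement such as X25519):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 256-bit key shared between the two agents (derivation out of scope here).
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

# A task payload carrying a credential -- exactly what must not cross
# the wire in plaintext.
payload = b'{"task": "charge-invoice", "api_key": "sk-live-..."}'

nonce = os.urandom(12)  # 96-bit nonce, must be unique per message
ciphertext = aead.encrypt(nonce, payload, b"a2a-v1")  # b"a2a-v1": associated data

# Only a holder of the key recovers the plaintext; GCM also authenticates,
# so any modified ciphertext fails decryption with InvalidTag.
plaintext = aead.decrypt(nonce, ciphertext, b"a2a-v1")
```

GCM gives confidentiality and integrity in one pass, which is why it pairs naturally with a separate signature layer for sender identity.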

The common thread: every one of these attacks is trivially preventable with message-level encryption and sender verification. These aren't exotic threats — they're the baseline security properties that HTTPS gave us for web traffic twenty years ago. A2A just... doesn't have them.

What we built

I run two AI agents on cloud servers — one on AWS, one on Oracle Cloud — that communicate over the A2A protocol 24/7. When I realized the spec had no encryption, I built it myself.

A2A Secure is a security layer for A2A. It's not a fork — it's an extension that wraps the standard protocol with the security properties it's missing: AES-256-GCM message encryption, Ed25519 sender signatures, timestamp-and-nonce replay protection, automatic 24-hour key rotation, signed Agent Cards, and self-healing trust recovery.

The entire implementation is a single Python package with one dependency (cryptography). No blockchain, no DID infrastructure, no certificate authority. It's designed to be dropped into an existing A2A deployment in an afternoon.

The key rotation problem

Most people who add encryption to a protocol get the initial handshake right. What's harder — and what usually gets skipped — is what happens when keys change.

If your agent's signing key rotates every 24 hours, every peer who trusted the old key now rejects your messages. In a naive implementation, this means manual re-introduction, downtime, or both.

A2A Secure solves this with a delegation chain. Your agent has two keys:

cold key (permanent identity, stored offline)
  └── delegates to → hot key (24h operational key)
                        └── signs all messages

Every message includes inline proof that the hot key was authorized by the cold key. When a peer receives a message signed by an unknown hot key, it checks the delegation chain back to the cold key it already trusts. If the chain is valid, trust is restored automatically. Zero round-trips. No human intervention.
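The two-step verification can be sketched in a few lines. This is a minimal illustration of the cold-key/hot-key pattern using Ed25519 from the `cryptography` package; the envelope field names are hypothetical, not A2A Secure's actual wire format:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Cold key: permanent identity. Hot key: short-lived operational key.
cold_key = Ed25519PrivateKey.generate()
hot_key = Ed25519PrivateKey.generate()

# Delegation: the cold key signs the hot key's raw public bytes.
hot_pub_bytes = hot_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
delegation_sig = cold_key.sign(hot_pub_bytes)

# Each message carries the delegation proof inline (field names illustrative).
message = b"task: reconcile ledgers"
envelope = {
    "message": message,
    "msg_sig": hot_key.sign(message),
    "hot_pub": hot_pub_bytes,
    "delegation_sig": delegation_sig,
}

# A peer that trusts only the cold key verifies the chain in two steps.
trusted_cold_pub = cold_key.public_key()
trusted_cold_pub.verify(envelope["delegation_sig"], envelope["hot_pub"])  # 1: hot key authorized
hot_pub = Ed25519PublicKey.from_public_bytes(envelope["hot_pub"])
hot_pub.verify(envelope["msg_sig"], envelope["message"])                  # 2: message signed by it
chain_valid = True  # both verifies passed without raising
```

Because the proof travels with the message, a brand-new hot key is trusted on first contact — no extra round-trip to re-establish identity.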

When the hot key rotates, the server detects the change and sends an /introduce to all peers in a background thread — with retry backoff if a peer is temporarily unreachable. The whole process takes less than a second and has been running in production without a single manual intervention.
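The retry logic is ordinary exponential backoff. A sketch of the pattern; the `send` callable standing in for the HTTP /introduce call, and the specific delays, are illustrative assumptions:

```python
import time

def introduce_with_backoff(send, peer, base_delay=1.0, max_attempts=5, sleep=time.sleep):
    """Call send(peer) until it succeeds, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        if send(peer):
            return True
        sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Simulate a peer that is unreachable for the first two attempts.
attempts = []
def flaky_send(peer):
    attempts.append(peer)
    return len(attempts) >= 3  # succeeds on the third try

delays = []
succeeded = introduce_with_backoff(flaky_send, "peer-a", sleep=delays.append)
```

Injecting `sleep` keeps the sketch testable; in a background thread you'd pass the real `time.sleep` and cap the total retry budget.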

What you should check

If you're building or deploying a multi-agent system, here's a quick checklist:

  1. Are messages between your agents encrypted end to end, or only in transit over TLS?
  2. Can each agent verify who sent a message, or does it trust anything that arrives?
  3. Would a captured message be rejected if it were sent a second time?
  4. Do signing keys rotate, and can peers survive a rotation without manual re-introduction?
  5. Does each agent have its own identity, or do your agents share a service account?
  6. Has every deployed agent been through full security review?

If you answered "no" or "I'm not sure" to more than one of these, your agent infrastructure has the same gaps the A2A spec does.

We audit agent-to-agent communication for teams building on A2A, CrewAI, LangChain, and AutoGen.

audit@agentseal.dev  ·  GitHub  ·  See the protocol in action →


A2A is a good protocol. It solves real problems around agent interoperability. But interoperability without security is a liability, and the longer the ecosystem grows without addressing this, the harder it becomes to retrofit.

The good news: the fix is not complicated. Message signing and encryption are solved problems. What's needed is adoption — and a clear-eyed understanding that "we'll add security later" has never, in the history of computing, actually worked.

"The Google A2A protocol has no encryption. We fixed that."

Sources

  1. Gravitee, State of AI Agent Security 2026
  2. Semgrep, A Security Engineer's Guide to the A2A Protocol
  3. LevelBlue, Agent-in-the-Middle: Abusing Agent Cards
  4. Unit42, Agent Session Smuggling in A2A Systems
  5. OWASP, Top 10 for Agentic Applications 2026