
The Linux Foundation announced at the Open Source Summit in Denver that it will now host the Agent2Agent (A2A) protocol. Initially developed by Google and now supported by more than 100 leading technology companies, A2A is a crucial new open standard for secure and interoperable communication between AI agents.
In his keynote presentation, Mike Smith, a Google staff software engineer, told the conference that the A2A protocol has evolved to make it easier to add custom extensions to the core specification. Additionally, the A2A community is working on making it easier to assign unique identities to AI agents, thereby improving governance and security.
The A2A protocol is designed to solve one of AI's most pressing challenges: enabling autonomous agents -- software entities capable of independent action and decision-making -- to discover each other, securely exchange information, and collaborate across disparate platforms, vendors, and frameworks. Under the hood, A2A handles that discovery through an AgentCard.
An AgentCard is a JavaScript Object Notation (JSON) metadata document that describes an agent's purpose and explains how to reach it at a web URL. A2A also leverages widely adopted web standards, such as HTTP, JSON-RPC, and Server-Sent Events (SSE), to ensure broad compatibility and ease of integration. By providing a standardized, vendor-neutral communication layer, A2A breaks down the silos that have historically limited the potential of multi-agent systems.
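What that looks like in practice: a client agent fetches another agent's card from a well-known web address and reads its metadata before starting a conversation. Here is a rough, illustrative sketch in Python; the URL, path, and field names are assumptions modeled on the A2A specification's pattern rather than a verified excerpt of it.

```python
# Illustrative sketch only: fetch and inspect a hypothetical agent's AgentCard.
# The URL, well-known path, and field names are assumptions based on the A2A
# spec's pattern of a JSON card served at a public web address.
import json
from urllib.request import urlopen

AGENT_CARD_URL = "https://travel-agent.example.com/.well-known/agent.json"

with urlopen(AGENT_CARD_URL) as response:   # plain HTTPS GET
    card = json.load(response)              # the card is ordinary JSON

# Typical metadata a client agent would read before talking to the other agent.
print(card.get("name"))           # human-readable agent name
print(card.get("description"))    # what the agent claims it can do
print(card.get("url"))            # endpoint where A2A requests are sent
for skill in card.get("skills", []):
    print("-", skill.get("id"), skill.get("description"))
```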
For security, A2A comes with enterprise-grade authentication and authorization built in, including support for JSON Web Tokens (JWTs), OpenID Connect (OIDC), and Transport Layer Security (TLS). This approach ensures that only authorized agents can participate in workflows, protecting sensitive data and agent identities. While the security foundations are in place, developers at the conference acknowledged that integrating them, particularly authenticating agents, will be a hard slog.
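In concrete terms, an authorized call to another agent looks much like any other authenticated web request: a JSON payload sent over TLS with a bearer token attached. The sketch below is illustrative only; the endpoint, token, and JSON-RPC method name are placeholders, not the exact A2A wire format.

```python
# Illustrative sketch only: how an authorized client might call another agent
# over TLS with a JWT bearer token. Endpoint, token, and the JSON-RPC method
# name are placeholders invented for this example.
import json
from urllib.request import Request, urlopen

AGENT_ENDPOINT = "https://hotel-agent.example.com/a2a"   # hypothetical agent URL
JWT_TOKEN = "eyJhbGciOi..."                              # in practice, issued via OIDC

payload = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",   # assumed method name, for illustration only
    "params": {"message": {"role": "user",
                           "parts": [{"text": "Find a hotel in Copenhagen"}]}},
}

request = Request(
    AGENT_ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {JWT_TOKEN}",  # only agents with a valid token get in
    },
)

# urlopen verifies the server's TLS certificate by default.
with urlopen(request) as response:
    print(json.load(response))
```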
A new era of productivity
So, what does the adoption of A2A mean for IT professionals? In an interview, Antje Barth, an Amazon Web Services (AWS) principal developer advocate for generative AI, explained, "Say you want to book a train ride to Copenhagen, then a hotel there, and look maybe for a fancy restaurant, right? You have inputs and individual tasks, and A2A adds more agents to this conversation, with one agent specializing in hotel bookings, another in restaurants, and so on. A2A enables agents to communicate with each other, hand off tasks, and finally brings the feedback to the end user."
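Stripped of the travel details, Barth is describing an orchestration pattern: one agent farms out subtasks to specialists and stitches the results back together for the user. A toy sketch of that hand-off pattern, with invented agent names and no real network calls, might look like this:

```python
# Toy sketch of the hand-off pattern Barth describes: one orchestrator, several
# specialist agents, results merged for the user. Agent names and logic are
# invented; a real system would route each call over A2A to a remote agent.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skill: str

    def handle(self, task: str) -> str:
        # Stand-in for a remote A2A call to this specialist.
        return f"{self.name} handled '{task}'"

SPECIALISTS = {
    "train":      Agent("rail-agent", "train bookings"),
    "hotel":      Agent("hotel-agent", "hotel bookings"),
    "restaurant": Agent("dining-agent", "restaurant reservations"),
}

def plan_trip(destination: str) -> list[str]:
    tasks = {
        "train":      f"book a train to {destination}",
        "hotel":      f"book a hotel in {destination}",
        "restaurant": f"reserve a fancy restaurant in {destination}",
    }
    # The orchestrator delegates each subtask and collects the feedback.
    return [SPECIALISTS[kind].handle(task) for kind, task in tasks.items()]

print(plan_trip("Copenhagen"))
```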
As Jim Zemlin, executive director of the Linux Foundation, said in his keynote speech: "By joining the Linux Foundation, A2A is ensuring the long-term neutrality, collaboration, and governance that will unlock the next era of agent-to-agent powered productivity." Zemlin expects A2A to become a cornerstone for building interoperable, multi-agent AI systems.
The AI companies agree. A coalition of industry giants supports the A2A project. In addition to Google and AWS, Cisco, Microsoft, Salesforce, SAP, and ServiceNow also back the protocol.
A2A is expected to work hand-in-glove with Anthropic's Model Context Protocol (MCP). While both are open standards that enable robust and interoperable AI agent ecosystems, they have different aims.
MCP provides a standardized way for an individual AI agent to access external tools, application programming interfaces (APIs), and data sources. It acts as a universal interface, a "USB-C port for AI applications," enabling AI agents to augment their capabilities by connecting to services such as search engines, databases, or third-party APIs.
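To make that concrete, an MCP interaction is essentially a JSON-RPC request asking a tool server to run one of its tools. The sketch below shows the general shape only; the server, tool name, and arguments are invented, and the method and parameter names approximate the MCP specification rather than quoting it.

```python
# Illustrative sketch only: the shape of an MCP-style JSON-RPC request an agent
# might send to a tool server. The tool name and arguments are invented; the
# method and parameter names approximate the MCP spec and should be checked
# against it.
import json

mcp_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",                  # ask the server to run one of its tools
    "params": {
        "name": "search_restaurants",        # hypothetical tool exposed by the server
        "arguments": {"city": "Copenhagen", "cuisine": "nordic"},
    },
}

# In a real integration this would be sent to an MCP server over stdio or HTTP.
print(json.dumps(mcp_request, indent=2))
```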
On the other hand, A2A focuses on enabling communication, collaboration, and task delegation between multiple AI agents, regardless of their vendor, framework, or underlying technology.
As a blog post from the AI company Clarifai explained, "These two protocols solve different parts of the communication problem: A2A focuses on how agents communicate with each other (horizontally), while MCP focuses on how a single agent connects to tools or memory (vertically)."
Putting protocols into practice
Industry experts also expect MCP to be used in vertical integrations. For example, agents will use MCP to gather or process the information their tasks require, such as retrieving user data from an upstream customer relationship management (CRM) system or triggering a cloud service.
A2A, in turn, handles horizontal integration: when a task requires multiple agents with specialized expertise, it lets those agents discover each other, share context, delegate subtasks, and coordinate to solve complex problems.
In practice, as the AI company Aisera spelled out, when an A2A-enabled IT support agent receives a ticket about slow application performance, it uses MCP to gather diagnostic data from monitoring tools. When an issue spans multiple domains, such as the network, database, or application, the agent can use A2A to delegate parts of the investigation to other specialized agents, each of which may use MCP to access its own relevant tools.
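A rough sketch of that flow, with every function stubbed out for illustration (none of this is a real MCP or A2A client), might look like this:

```python
# Toy sketch of the Aisera-style flow: an MCP call fetches diagnostics, A2A
# hand-offs delegate domain-specific work. Every function here is a stub
# invented for illustration.
def mcp_fetch_diagnostics(system: str) -> dict:
    # Stand-in for an MCP tool call to a monitoring system.
    return {"system": system, "latency_ms": 850, "healthy": False}

def a2a_delegate(agent: str, task: str) -> str:
    # Stand-in for an A2A task hand-off to a specialist agent.
    return f"{agent} is investigating: {task}"

def handle_ticket(description: str) -> list[str]:
    steps = []
    diagnostics = mcp_fetch_diagnostics("app-frontend")        # vertical: agent -> tool
    steps.append(f"Diagnostics collected: {diagnostics}")
    if not diagnostics["healthy"]:
        for agent, task in [("network-agent", "check link saturation"),
                            ("database-agent", "check slow queries")]:
            steps.append(a2a_delegate(agent, task))            # horizontal: agent -> agent
    return steps

for step in handle_ticket("Slow application performance"):
    print(step)
```

The point of the split is visible in the two helpers: the MCP-style call reaches down into tooling, while the A2A-style call reaches sideways to a peer agent.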
But, as good as A2A sounds, Barth warns, "We're still in early stages from a protocol and standardization perspective. And I don't think we're yet there to call; there's just going to be one protocol to tackle all the use cases. MCP came out initially connecting models with tools, and there's a huge community that's excited and building around it. We're similarly excited about A2A, but there's some overlap."
Working out exactly where A2A and MCP start and end will be challenging. Besides A2A, there's another agent-to-agent protocol: IBM's Agent Communication Protocol (ACP). This open-source protocol is built on the popular Representational State Transfer (REST) architectural style and also has Linux Foundation backing.
In short, let the best protocol win. However, what's already certain is that all this effort will pay dividends in the long term. As Barth concluded, "Gartner says a third of all applications and enterprises will be powered by Agentic AI by 2028. So, there's never been a better time for developers to start building, start learning."