Today Google released its open source agent-to-agent protocol, imaginatively named A2A, or Agent to Agent. Since we already see a lot of momentum behind Anthropic's MCP (Model Context Protocol), Google claimed that A2A is complementary to MCP. They even used a heart emoji to drive home the point.

I'm not so sure, so I decided to take a deeper look and check what A2A's position in the agentic universe will be. We will cover how A2A works in real life, along with a comparison with MCP.

Test drive A2A

Using A2A is surprisingly similar to MCP. You can run a few A2A agents/servers, and then the A2A client can connect to all of them. The good news is that, typically, you do not need to run the A2A agents alongside the A2A client.

Running A2A agents (servers)

I spun up all three example agents locally:

- Google ADK agent that can submit expense reports for you
- CrewAI agent that can generate an image
- LangGraph agent that can find the latest foreign exchange rate

The way that an A2A server lets the world know its capabilities is through an "Agent Card" in JSON format.
As an example, the agent card for the Google ADK agent looks like this:

{
  "name": "Reimbursement Agent",
  "description": "This agent handles the reimbursement process for the employees given the amount and purpose of the reimbursement.",
  "url": "http://localhost:10002/",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false,
    "stateTransitionHistory": false
  },
  "defaultInputModes": ["text", "text/plain"],
  "defaultOutputModes": ["text", "text/plain"],
  "skills": [
    {
      "id": "process_reimbursement",
      "name": "Process Reimbursement Tool",
      "description": "Helps with the reimbursement process for users given the amount and purpose of the reimbursement.",
      "tags": ["reimbursement"],
      "examples": ["Can you reimburse me $20 for my lunch with the clients?"]
    }
  ]
}

Launch A2A Client demo app

Let's continue with the client. The instructions to get the demo web app working are here: https://github.com/google/A2A/tree/main/demo

Once the web app is running, you can access it from your browser. The client looks a bit like the Gemini AI Studio, with signature Google Material design.
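Given a card like the one above, discovery boils down to fetching a well-known path and reading the JSON. Here is a minimal sketch; the helper names are mine for illustration, not part of any A2A SDK:

```python
# A2A agents publish their Agent Card at this well-known path.
WELL_KNOWN_PATH = "/.well-known/agent.json"

def agent_card_url(base_url: str) -> str:
    """Build the discovery URL for an agent's card."""
    return base_url.rstrip("/") + WELL_KNOWN_PATH

def summarize_card(card: dict) -> str:
    """One-line summary of an Agent Card dict."""
    skills = ", ".join(s["name"] for s in card.get("skills", []))
    return f'{card["name"]} at {card["url"]} | skills: {skills}'

# Try it against the Reimbursement Agent card shown above (abbreviated):
card = {
    "name": "Reimbursement Agent",
    "url": "http://localhost:10002/",
    "skills": [{"id": "process_reimbursement",
                "name": "Process Reimbursement Tool"}],
}
print(agent_card_url("http://localhost:10002"))
print(summarize_card(card))
```

In a real client you would fetch that URL over HTTP; the point is that the card is the entire handshake — no server-specific configuration beyond the base URL.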
URL: localhost:12000

First things first, we need to add all the agents to the client by specifying their base URLs. Since I ran all the agents locally, their base URLs were:

- Google ADK: localhost:10002
- CrewAI: localhost:10001
- LangGraph: localhost:10000

Side note: within the protocol, the final URL looks like this: https://localhost:10002/.well-known/agent.json

Now you can see all three agents that are connected. You can also browse the chat history, the event list, and the task list. Settings is quite basic.

Test Google ADK agent for expense claim

Test LangGraph for forex rate

Test CrewAI agent for image generation

A combo test for multiple agents

I wanted to see if the A2A client can use multiple agents to achieve a single goal, so I tested whether it could combine the expense claim agent with the forex rate agent. And it did work.

My task was to "claim an expense for a beer in Germany while on a business trip, 5 euros, April 4 2025". The conversation went through a few rounds of back and forth, and eventually the right amount in US dollars landed in the expense claim form.

Initial Observations of A2A

I like that A2A is a pure client-server model where both sides can be run and hosted remotely. The client is not burdened with specifying and launching the agents/servers. The agent configuration is fairly simple: you just specify the base URL, and the "Agent Card" takes care of the context exchange. You can also add and remove agents after the client is already launched.
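Under the hood, the client talks to each agent over JSON-RPC. The following is a rough sketch of what a task submission looks like, based on the early A2A draft spec — treat the method name and field names as illustrative rather than authoritative:

```python
import json
import uuid

def build_send_task(text: str) -> dict:
    """Sketch of an A2A 'tasks/send' JSON-RPC request.

    Method and message shape follow the early A2A draft spec;
    field names here are illustrative, not authoritative.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),      # JSON-RPC request id
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),  # task id chosen by the client
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }

req = build_send_task("Can you reimburse me $20 for my lunch with the clients?")
print(json.dumps(req, indent=2))
```

The client POSTs a request like this to the agent's base URL; the agent replies with a task object whose state and artifacts you can then poll or stream.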
In its current demo form, it is a bit difficult to see how agents communicate with each other to accomplish complex tasks. The client calls each agent separately for different tasks, which feels very much like multiple tool calling.

Compare A2A with MCP

Now that I have tried out A2A, it is time to compare it with MCP, which I wrote about earlier in this article.

While both A2A and MCP aim to improve AI agent system development, in theory they address distinct needs. A2A operates at the agent-to-agent level, focusing on interaction between independent entities, whereas MCP operates at the LLM level, focusing on enriching the context and capabilities of individual language models. To give a glimpse of their main similarities and differences according to their protocol documentation:

| Feature | A2A | MCP |
| --- | --- | --- |
| Primary Use Case | Agent-to-agent communication and collaboration | Providing context and tools (external API/SDK) to LLMs |
| Core Architecture | Client-server (agent-to-agent) | Client-host-server (application-LLM-external resource) |
| Standard Interface | JSON specification, Agent Card, Tasks, Messages, Artifacts | JSON-RPC 2.0, Resources, Tools, Memory, Prompts |
| Key Features | Multimodal, dynamic, secure collaboration, task management, capability discovery | Modularity, security boundaries, reusability of connectors, SDKs, tool discovery |
| Communication Protocol | HTTP, JSON-RPC, SSE | JSON-RPC 2.0 over stdio, HTTP with SSE (or streamable HTTP) |
| Performance Focus | Asynchronous communication for load handling | Efficient context management, parallel processing, caching for high throughput |
| Adoption & Community | Good initial industry support, nascent ecosystem | Substantial adoption across the industry, fast-growing community |

Conclusions

Even though Google made it sound like A2A is a complementary protocol to MCP, my first test shows they are overwhelmingly overlapping in purpose and features. Both address the needs of AI application developers who want to use multiple agents and tools to achieve complex goals. Right now, both lack a good mechanism to register and discover other agents and tools without manual configuration.
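To make the discovery contrast concrete: an A2A client fetches a known Agent Card URL, while an MCP client issues a tools/list JSON-RPC call to a server it was already configured with. A minimal sketch of the MCP side — the tool name in the sample response is hypothetical, and the response shape is abbreviated from the spec:

```python
import json

def tools_list_request(request_id: int = 1) -> dict:
    """Build the JSON-RPC request MCP defines for listing a server's tools."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def tool_names(response: dict) -> list:
    """Pull tool names out of a tools/list response (shape abbreviated)."""
    return [t["name"] for t in response.get("result", {}).get("tools", [])]

# Example response an MCP server might return (hypothetical tool name):
resp = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "get_exchange_rate",
                          "description": "Latest foreign exchange rate"}]},
}
print(json.dumps(tools_list_request(), indent=2))
print(tool_names(resp))  # ['get_exchange_rate']
```

In both protocols, the client has to be pointed at each server by hand first; neither offers a registry where agents or tools announce themselves.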
MCP had an early start and has already garnered tremendous support from both the developer community and large enterprises. A2A is very young, but it already boasts strong initial support from many Google Cloud enterprise customers. I believe this is great news for developers, since they will have more choices among open, standard agent-to-agent protocols. Only time will tell which will reign supreme, or whether they might even merge into a single standard.