An AI Agent is the combination of a Large Language Model and a set of tools that the model uses to execute queries or accomplish tasks in response to requests coming from users or other agents. AI agents have the potential to scale, automate, and improve business processes across various workplace functions and to significantly boost personal productivity.

There is broad consensus that a "one agent fits all" approach is not suitable for the complexity of the tasks that agents are expected to accomplish. The solution to this problem lies in Agentic Workflows: networks of autonomous AI Agents that make decisions, take actions and coordinate tasks with minimal human intervention (IBM).

Google's Proposal for Agent Interoperability: Agent2Agent Protocol (A2A)

On April 9, 2025, Google announced the launch of the Agent2Agent (A2A) protocol, designed to enable AI Agents to communicate with one another, securely exchanging information and automating complex business workflows through interaction with enterprise and third-party platforms and applications.

The A2A protocol has been developed by Google in collaboration with more than 50 industry partners who share a common vision of the future of AI Agent collaboration: independent of the underlying technologies and based on open and secure standards.
A2A Design Principles

As stated in the announcement, during the design of the A2A protocol, Google and its partners adhered to a few key principles:

- A2A must be an open, vendor-agnostic protocol whose specifications are made publicly available;
- A2A must allow agents to collaborate in their natural, unstructured modalities, hence drawing a clear distinction between Agents and Tools and distinguishing itself from the Model Context Protocol (see below);
- The protocol must be built on existing standards, such as HTTP, SSE, and JSON-RPC, in order to be easy to integrate with existing IT stacks;
- A2A must be secure by default, supporting enterprise-grade authentication and authorization;
- A2A must be agnostic to the data modality, being able to handle text as well as images, audio and video streaming.
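Because the protocol leans on familiar standards, a client needs nothing more exotic than an HTTP POST carrying a JSON-RPC 2.0 envelope to start a task. The sketch below builds such an envelope in Python; the payload shape follows the published A2A specification (`tasks/send`, a Message composed of Parts), but the user text is illustrative and, in real use, the request would be POSTed to the agent's endpoint URL with the authentication declared in its Agent Card.

```python
import json
import uuid

def build_task_request(user_text: str) -> dict:
    """Build a JSON-RPC 2.0 envelope for the A2A `tasks/send` method.

    A Task is created (or continued) by sending a Message made of Parts;
    here a single text Part carries the user's request.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),      # request id, echoed back in the response
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),  # task id, chosen by the client
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": user_text}],
            },
        },
    }

request = build_task_request("Summarise last quarter's sales figures.")
print(json.dumps(request, indent=2))
```

In practice this body would be sent with any HTTP client (e.g. `requests.post(url, json=request)`); nothing about the envelope is specific to a particular vendor's stack, which is exactly the point of the "built on existing standards" principle.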
What A2A Provides

A2A provides the following functionalities out of the box:

- Capability discovery: The protocol allows agents to advertise their capabilities, helping clients identify which agent is best suited for a task;
- Task and state management: The interaction between a client and an agent is based on the completion of Tasks, objects defined by the protocol that have a lifecycle. The output of a task is called an Artefact;
- Secure collaboration: Agents can send each other messages to communicate context, replies, artefacts, or user instructions;
- User experience negotiation: Each message includes "parts", fully formed pieces of content such as a generated image. Each part has a specified content type, allowing client and remote agents to negotiate the correct format needed and explicitly negotiate the user's UI capabilities, e.g. iframes, video, web forms, and more.
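The "parts" mechanism is what makes modality negotiation concrete: a single message can mix text, files and structured data, each part declaring what it carries so the receiver can pick the formats it supports. A minimal sketch, with field names following the A2A specification but the content and URI purely illustrative:

```python
# A multi-part A2A Message: each part declares its type ("text", "file"
# or "data") so client and remote agent can negotiate rendering.
message = {
    "role": "agent",
    "parts": [
        {"type": "text", "text": "Here is the chart you asked for."},
        {
            "type": "file",
            "file": {
                "name": "sales-q3.png",       # illustrative artefact
                "mimeType": "image/png",
                "uri": "https://example.com/artifacts/sales-q3.png",
            },
        },
        {"type": "data", "data": {"total": 1_250_000, "currency": "EUR"}},
    ],
}

# A client that cannot render images keeps only the part types it supports:
supported = {"text", "data"}
renderable = [p for p in message["parts"] if p["type"] in supported]
```

The same filtering idea extends to UI capabilities: a client that advertises support for, say, web forms can accept richer parts, while a plain-text client falls back to the textual ones.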
The most intriguing aspects are the Capability Discovery and User Experience Negotiation features, as they facilitate the establishment of Agent Marketplaces, where suppliers can publish agents and clients can select the most suitable agent to perform specific tasks. While this concept is undoubtedly promising and may be essential for the growth of the AI Agents market, achieving it will require much more than the definition of an interaction protocol.

Agent2Agent Protocol Concepts

The protocol is based on a number of concepts, some of which are already familiar to those developing AI Agents:

- Agent Card: A public metadata file describing the agent's capabilities, skills, endpoint URL, and authentication requirements, used in the discovery phase to select the agent and understand how to interact with it;
- Server: An agent that implements the A2A protocol methods, as defined in the JSON specification;
- Client: An application or another agent that consumes A2A services;
- Task: The main unit of work for the agent. It is initiated by the client and performed by the server, going through various states;
- Message: Represents a communication turn between the client and the agent. Each Message has a role and is composed of Parts;
- Part: The fundamental content unit within a Message or an Artefact. A Part can be text, a file or structured data;
- Artefact: Represents the outputs generated by the agent during the accomplishment of a Task. Artefacts, just like Messages, contain Parts;
- Streaming: The protocol supports streaming, so that the server can update the client about the state of long-running tasks.
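The Task lifecycle mentioned above can be made tangible with a small state machine. The state names below are the ones defined in the A2A specification (`submitted`, `working`, `input-required`, `completed`, `canceled`, `failed`); the transition table itself is a simplified illustration, not a normative part of the protocol:

```python
# Terminal states: once a task reaches one of these, it is finished.
TERMINAL_STATES = {"completed", "canceled", "failed"}

# Simplified view of which transitions make sense in the lifecycle.
ALLOWED_TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "canceled", "failed"},
    "input-required": {"working", "canceled"},  # client supplies more input
}

def advance(state: str, new_state: str) -> str:
    """Move a task to `new_state`, enforcing the simplified lifecycle."""
    if state in TERMINAL_STATES:
        raise ValueError(f"task already finished in state {state!r}")
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state!r} to {new_state!r}")
    return new_state

# A typical multi-turn interaction: the agent pauses to ask for input.
state = "submitted"
for step in ("working", "input-required", "working", "completed"):
    state = advance(state, step)
```

The `input-required` state is what enables multi-turn collaboration: the server parks the task, the client sends another Message on the same task id, and work resumes. With streaming enabled, these state changes are pushed to the client as server-sent events instead of being polled.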
State of the Art of the Agent2Agent Project

A2A has just been announced to the public, and its specifications are now available on GitHub. At present, there is no official roadmap or production-ready implementation of the protocol, although Google is collaborating with partners to launch a production-ready version later this year (2025).

The A2A GitHub repository contains a number of code samples in both TypeScript and Python, as well as a fairly comprehensive demo application. This application demonstrates the interaction between agents developed with different Agent Development Kits (ADK), as illustrated in the image below.
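Capability discovery starts from the Agent Card, which the specification serves as a public JSON document (conventionally at `/.well-known/agent.json` on the agent's domain). The card below is illustrative: the field names follow the A2A specification, but the agent, its URL and its skills are invented for the example.

```python
# A hypothetical Agent Card, as a client would see it after fetching
# https://agents.example.com/.well-known/agent.json (fields per the A2A spec).
agent_card = {
    "name": "Expense Agent",
    "description": "Files and tracks employee expense reports.",
    "url": "https://agents.example.com/expense",  # A2A endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["bearer"]},
    "skills": [
        {
            "id": "file-expense",
            "name": "File an expense report",
            "tags": ["finance", "expenses"],
        }
    ],
}

def agent_supports(card: dict, tag: str) -> bool:
    """Return True if any advertised skill carries the given tag.

    A real marketplace would rank candidate agents with richer matching;
    this tag lookup is the simplest possible selection criterion.
    """
    return any(tag in skill.get("tags", []) for skill in card["skills"])
```

A client would fetch cards from several agents, use something like `agent_supports` to shortlist candidates for the task at hand, then read `url`, `authentication` and `capabilities` to know how to open the conversation.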
Architecture of the A2A demo application (GitHub)

This is enough to get started and experiment with the protocol, but before it is adopted in mission-critical projects, A2A needs to be integrated into the ecosystem of frameworks and tools built for adopting Agentic Workflows. Given the support of a large number of big names working with Google on the protocol definition (interestingly, none of the companies that provide foundation models are present), there is little doubt that the tools will arrive soon and that A2A will be integrated into the leading agent frameworks.

List of the partners contributing to the Agent2Agent protocol (Google)

Will A2A Replace the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is a protocol designed by Anthropic that enables applications to provide context to Large Language Models. Anthropic describes MCP as the "USB-C port for AI applications": it provides a standardised way to connect LLMs to data sources and tools, in the same fashion that USB allows connecting heterogeneous peripherals to devices.

As Google explains, the A2A protocol is not meant to replace MCP. In fact, there is no overlap between the two: they solve different problems and work at different levels of abstraction. A2A is designed to allow agents to interact with other agents, while MCP is designed to connect Large Language Models to tools, which in turn connect them to services and data, as shown in the following image.

Conclusions

Agentic workflows have the potential to be a very interesting tool for solving complex problems, and open protocols such as A2A and MCP are certainly key enablers for the adoption of this technology.
A2A is intended to become the protocol of choice for interaction among agents and could be the basis for the development of marketplaces where agents can be advertised and made available to users. Large-scale adoption of agentic workflows in enterprise-grade, mission-critical applications necessitates a multivendor ecosystem of tools and frameworks for the development, deployment, monitoring and tracing of multi-agent workflows. There are clear signals that the industry is moving in this direction, and A2A and MCP are an important part of this revolution.

Links

- What are Agentic Workflows?
- Announcing the Agent2Agent Protocol (A2A)
- A2A documentation on GitHub
- A2A examples and demos
- A2A JSON specification
- Model Context Protocol documentation

If you found this article helpful, I'd love to connect with you on LinkedIn.