Hello Reader,

The last couple of weeks have been action-packed for Gen AI! Two announcements were at the forefront: MCP (Model Context Protocol) and A2A (Agent to Agent). In today's edition, we will learn the similarities and differences between the two, and answer which parts YOU need to know for the job and for interviews.

MCP

MCP was released by Anthropic. Before we dig into MCP, let's understand the existing challenges. Say you send a prompt to an app: "What's the weather in Tokyo?" The LLM in the app doesn't know the current weather, so it invokes an agent that runs some code. This code reaches out to an external weather tool, which sends weather data back to the agent as JSON. The agent passes it to the LLM, and the LLM formats the data into nice natural language and sends it to the user.

The question is: how does the agent code interact with the weather tool? Via an API. To do that, as shown below, the agent code needs to know the API URL, the required headers, and the payload. This works, but there are some challenges too:
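To make the challenges concrete, here is a minimal sketch of that per-tool glue code. The URL, API key, and payload shape are all made-up assumptions; every tool provider defines its own, which is exactly the pain point:

```python
import json
import urllib.request

# Hypothetical endpoint -- the URL, auth header, and payload shape below are
# assumptions; each tool provider invents its own, so none of this is reusable.
WEATHER_API_URL = "https://api.example-weather.com/v1/current"
API_KEY = "sk-demo"  # provider-specific credential

def build_weather_request(city: str) -> urllib.request.Request:
    """Hand-written, tool-specific glue code the agent must carry."""
    payload = json.dumps({"city": city}).encode("utf-8")
    return urllib.request.Request(
        WEATHER_API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",  # auth scheme varies per tool
            "Content-Type": "application/json",
        },
    )

# Sending it with urllib.request.urlopen(...) returns JSON whose schema ALSO
# varies per tool -- yet more one-off parsing code on the agent side.
```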
Hence, MCP was born! MCP standardizes the communication between the agent code and tools (and local data sources, though tools are the most widely used part). What does this mean?
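It means the agent talks to every tool server through one shared contract. The sketch below is a dependency-free illustration of that idea, not the official MCP SDK: a server self-describes its tools (`list_tools`) and executes them (`call_tool`), so the agent-side code is identical no matter which server it talks to. Tool names and schemas here are illustrative:

```python
class WeatherToolServer:
    """Stands in for an MCP-style tool server (illustrative, not the real SDK)."""

    def list_tools(self) -> list[dict]:
        # Tools self-describe with a name, description, and input schema,
        # so the agent can discover them instead of reading custom docs.
        return [{
            "name": "get_weather",
            "description": "Current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]

    def call_tool(self, name: str, arguments: dict) -> dict:
        if name == "get_weather":
            # A real server would hit a weather API; canned data for the sketch.
            return {"city": arguments["city"], "temp_c": 18, "sky": "clear"}
        raise ValueError(f"unknown tool: {name}")


def agent_invoke(server, tool_name: str, arguments: dict) -> dict:
    """Generic agent-side call: the SAME code works against any such server."""
    available = {t["name"] for t in server.list_tools()}
    if tool_name not in available:
        raise ValueError(f"server does not expose {tool_name}")
    return server.call_tool(tool_name, arguments)
```

The key design point: the agent no longer hard-codes URLs, headers, or payload shapes per tool; it only speaks the discover-and-call contract.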
Okay, so MCP standardizes the interaction between the agent and the underlying tools. But what is this new A2A then? Let's find out.

A2A

MCP handles the communication between the agent and its tools (and local data sources). But what about agent-to-agent communication? Let's look at the diagram below. Agent B has the logic to get things done with tools/data sources using MCP, and this part we understood from above. Now, Agent A needs to call Agent B. How did this happen BEFORE A2A was in the picture? Similar to any other API call, Agent A would invoke Agent B's API URL and pass AuthN/Z parameters and a payload. This brings similar challenges as above:
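A quick sketch makes the challenge visible. The endpoints, auth styles, and payload shapes below are assumptions; the point is that every agent team invents its own, so Agent A needs custom code for each agent it calls:

```python
# Before A2A, calling another agent was just another bespoke API call.

AGENT_B_URL = "https://agent-b.example.com/invoke"  # hypothetical endpoint

def build_call_to_agent_b(task: str, token: str) -> dict:
    # Agent B's team happened to pick a "prompt" field and token-in-body auth.
    return {"url": AGENT_B_URL,
            "body": {"prompt": task, "auth_token": token}}

def build_call_to_agent_c(task: str, token: str) -> dict:
    # Agent C expects "input" plus a bearer header: nothing carries over,
    # and there is no way to discover either contract programmatically.
    return {"url": "https://agent-c.example.com/run",
            "headers": {"Authorization": f"Bearer {token}"},
            "body": {"input": task}}
```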
A2A (or Agent2Agent) standardizes the communication between agents. What does this mean?

A2A + MCP Flow
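As a rough sketch of what A2A adds: each agent publishes an "Agent Card" (a JSON document served at `/.well-known/agent.json`) describing who it is and which skills it offers, and callers discover and invoke agents through that one shared contract. The field names and task envelope below only loosely follow the spec and should be treated as illustrative:

```python
# Simplified A2A sketch: discovery via an Agent Card, then a uniform task
# envelope. Endpoint URL, skill ids, and exact shapes are assumptions.
AGENT_B_CARD = {
    "name": "weather-agent",
    "description": "Answers weather questions using its own MCP tools",
    "url": "https://agent-b.example.com/a2a",  # hypothetical endpoint
    "skills": [
        {"id": "current-weather", "description": "Current weather for a city"},
    ],
}

def has_skill(card: dict, skill_id: str) -> bool:
    """Agent A reads the card instead of reading Agent B's custom API docs."""
    return any(s["id"] == skill_id for s in card["skills"])

def build_task(card: dict, skill_id: str, text: str) -> dict:
    """Uniform task envelope: the same shape no matter which agent we call."""
    if not has_skill(card, skill_id):
        raise ValueError(f"{card['name']} does not advertise {skill_id}")
    return {"to": card["url"],
            "skill": skill_id,
            "message": {"role": "user", "parts": [{"text": text}]}}
```

So in the combined flow, Agent A finds Agent B through its card and sends it a task (A2A), and Agent B internally uses its tools over MCP to fulfill it.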
In summary, MCP standardizes the connection between LLM agents and tools, and A2A standardizes the connection between two agents. They work hand in hand: they complement each other rather than compete. This is a pretty detailed subject, and if you want a more detailed explanation with code snippets, check out the video below:

If you have found this newsletter helpful and want to support me 🙏:

Check out my bestselling courses on AWS, System Design, Kubernetes, DevOps, and more: Max discounted links

AWS SA Bootcamp with Live Classes, Mock Interviews, Hands-On, Resume Improvement and more: https://www.sabootcamp.com/
Keep learning and keep rocking 🚀,
Raj