Hello Reader,

The last couple of weeks have been action-packed for Gen AI! Two specific announcements were at the forefront: MCP (Model Context Protocol) and A2A (Agent to Agent). In today's edition, we will learn the similarities and differences between the two, and answer which parts YOU need to know for the job and for interviews.

MCP

MCP was released by Anthropic. Before we dig into MCP, let's understand the existing challenges. Say you send a prompt to an app: "What's the weather in Tokyo?" The LLM in the app doesn't know the current weather, so it invokes an agent that runs some code. This code reaches out to an external weather tool, which sends weather data in JSON back to the agent. The agent passes it to the LLM, and the LLM formats the data into nice natural language and sends it to the user.

The question is: how does the agent code interact with the weather tool? Via an API. And to do that, as shown below, the agentic code needs to know the API URL, the required header information, and the payload. This works, but there are some challenges too:
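To make the challenge concrete, here is a minimal sketch of the glue code an agent needs for one tool. Everything here (the URL, the auth header, the payload fields) is hypothetical, invented for illustration; the point is that the agent must hard-code all of it in advance:

```python
# Hypothetical example: the agent is hard-wired to one specific weather API.
# The URL, auth scheme, and payload shape below are all made up.

def build_weather_request(city: str) -> dict:
    """Assemble the raw HTTP call the agent must hand-craft for this one tool."""
    return {
        "url": "https://api.example-weather.com/v1/current",  # must be known in advance
        "headers": {
            "Authorization": "Bearer WEATHER_API_KEY",  # auth scheme is tool-specific
            "Content-Type": "application/json",
        },
        "payload": {"city": city, "units": "metric"},  # payload shape is tool-specific
    }

request = build_weather_request("Tokyo")
# Every tool the agent integrates needs its own version of this glue code.
```

If the weather provider changes its URL or payload schema, this code breaks, and every new tool means writing (and maintaining) another bespoke block like this.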
Hence, MCP was born! MCP standardizes the communication between the agentic code and tools (and local data sources, though tools are the most common use). What does this mean?
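The following is a conceptual sketch of what that standardization buys you. This is NOT the official MCP SDK; it only mimics the core idea: every tool server exposes the same uniform operations (list the tools, call a tool by name), so the agent needs no tool-specific glue code:

```python
# Conceptual sketch of MCP-style standardization (not the real MCP SDK).
# Every server answers the same two questions the same way:
#   1) what tools do you have?  2) call this tool with these arguments.

class WeatherToolServer:
    """A stand-in for a server wrapping a weather tool."""

    def list_tools(self) -> list[dict]:
        # The agent discovers tools and their input schemas at runtime,
        # instead of hard-coding URLs and payload shapes.
        return [{
            "name": "get_weather",
            "description": "Get current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        }]

    def call_tool(self, name: str, arguments: dict) -> dict:
        if name == "get_weather":
            # A real server would call the actual weather API here;
            # this sketch returns canned data.
            return {"city": arguments["city"], "temp_c": 21, "condition": "Sunny"}
        raise ValueError(f"unknown tool: {name}")

server = WeatherToolServer()
tools = server.list_tools()                                  # discovery
result = server.call_tool("get_weather", {"city": "Tokyo"})  # uniform invocation
```

Because discovery and invocation follow one shape, swapping in a different tool server requires no changes to the agent's calling code.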
Okay, so MCP standardizes the interaction between the agent and the underlying tools. But what is this new A2A then? Let's find out.

A2A

MCP handles the communication between an agent and its tools (and local data sources). But what about agent-to-agent communication? Let's look at the diagram below. Agent B has the logic to get the work done with tools/data sources using MCP, and that part we understood above. Now, Agent A needs to call Agent B. How did this happen BEFORE A2A was in the picture? Similar to any other API call, Agent A would invoke the API URL of Agent B, passing AuthN/Z parameters and a payload. This brings similar challenges as above:
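A hypothetical sketch of the pre-A2A world (all names, URLs, and schemas here are invented): each remote agent exposes its own bespoke API, so Agent A needs separate glue code per agent.

```python
# Before A2A: every remote agent has its own URL, auth scheme, and payload
# shape, so the calling agent carries one bespoke integration per peer.

def call_weather_agent(question: str) -> dict:
    # This agent expects a "query" field and API-key auth.
    return {"url": "https://weather-agent.example.com/ask",
            "headers": {"X-Api-Key": "SECRET_1"},
            "payload": {"query": question}}

def call_booking_agent(question: str) -> dict:
    # This agent expects an "instruction" field and bearer-token auth.
    return {"url": "https://booking-agent.example.com/v2/tasks",
            "headers": {"Authorization": "Bearer SECRET_2"},
            "payload": {"instruction": question, "priority": "normal"}}

# Different URL, auth, and payload shape per agent:
# N agents to talk to means N separate integrations to maintain.
```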
A2A (or Agent2Agent) standardizes the communication between agents. What does this mean?

A2A + MCP Flow
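Here is a conceptual sketch of that standardization (this is NOT the official A2A SDK; the class and field names are invented). The core idea: every agent publishes a uniform, discoverable description of itself and accepts tasks in one common envelope, so a caller can talk to any agent the same way:

```python
# Conceptual sketch of A2A-style standardization (not the real A2A SDK).
# One discovery format + one task envelope = no per-agent glue code.

class A2AAgent:
    def __init__(self, name: str, skills: list[str]):
        self.name = name
        self.skills = skills

    def agent_card(self) -> dict:
        # A uniform, machine-readable description of this agent's capabilities,
        # which callers can discover instead of reading bespoke API docs.
        return {"name": self.name, "skills": self.skills, "protocol": "a2a-sketch"}

    def handle_task(self, task: dict) -> dict:
        # The same task envelope works for every agent.
        return {"task_id": task["id"],
                "status": "completed",
                "output": f"{self.name} handled: {task['message']}"}

weather = A2AAgent("weather-agent", ["get_weather"])
booking = A2AAgent("booking-agent", ["book_flight"])

# Agent A can discover and invoke ANY agent the same way:
results = []
for agent in (weather, booking):
    card = agent.agent_card()
    results.append(agent.handle_task({"id": "t-1", "message": "Weather in Tokyo?"}))
```

In the combined flow, Agent A reaches Agent B over A2A, and Agent B then reaches its tools over MCP; each protocol standardizes one hop.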
In summary, MCP standardizes the connection between LLM agents and tools, and A2A standardizes the connection between two agents. They work hand in hand because they complement each other rather than compete with each other. This is a pretty detailed subject, and if you want a more detailed explanation with code snippets, check out the video below: If you have found this newsletter helpful and want to support me 🙏: Check out my bestselling courses on AWS, System Design, Kubernetes, DevOps, and more: Max discounted links AWS SA Bootcamp with Live Classes, Mock Interviews, Hands-On, Resume Improvement and more: https://www.sabootcamp.com/
Keep learning and keep rocking 🚀, Raj