Hello Reader, EDA (Event Driven Architecture) has become increasingly popular in recent times. In this newsletter edition, we will explore what EDA is, what its benefits are, and then some advanced patterns of EDA, including with Kubernetes! Let's get started:

An event-driven architecture decouples the producer from the processor. In this example, the producer (a human) invokes an API and sends information in a JSON payload. API Gateway puts it into an event store (SQS), and the processor (Lambda) picks it up and processes it. Note that API Gateway and Lambda can scale (and be managed and deployed) independently.

Benefits of an event-driven architecture:
- Decoupling: producers and processors can be built, deployed, and scaled independently.
- Buffering: the event store (SQS) absorbs traffic spikes, so the processor works at its own pace instead of dropping requests.
- Resilience: if the processor is down, messages wait in the queue rather than being lost.
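To make the processor side concrete, here is a minimal sketch of the Lambda consumer, assuming the function is wired to the queue with an SQS event source mapping (the payload fields and business logic are hypothetical):

```python
import json

def handler(event, context):
    """Lambda entry point. With an SQS event source mapping, Lambda
    polls the queue for you and delivers messages in batches under
    event["Records"]."""
    for record in event["Records"]:
        # The body is the JSON the producer sent through API Gateway.
        payload = json.loads(record["body"])
        process(payload)
    # Returning normally tells Lambda the whole batch succeeded,
    # so the messages are deleted from the queue.

def process(payload):
    # Hypothetical business logic; replace with your own processing.
    print(f"processing order {payload.get('orderId')}")
```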
Now that we understand the EDA basics, let's take a look at a couple of advanced EDA patterns:

API Gateway + EventBridge Pattern

In this pattern, API Gateway accepts the request and publishes it as an event to an EventBridge bus. EventBridge rules then route each event to the right target (Lambda, Step Functions, SQS, and more) based on its source and content, so new consumers can be added without touching the producer.
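Here is a minimal sketch of the publishing side with boto3; the bus name, source, and detail-type are assumptions for illustration:

```python
import json
import boto3

events = boto3.client("events")

def publish_order_event(order: dict) -> None:
    """Publish a domain event to an EventBridge bus. Rules on the
    bus match on source/detail-type/detail and route the event to
    its targets."""
    events.put_events(
        Entries=[{
            "EventBusName": "orders-bus",   # assumed custom bus name
            "Source": "app.orders",         # assumed source name
            "DetailType": "OrderPlaced",
            "Detail": json.dumps(order),
        }]
    )

publish_order_event({"orderId": "1234", "amount": 42})
```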
SNS + SQS Pattern

This pattern is similar to the previous one but has some differences: instead of rule-based routing on an event bus, SNS fans messages out to the SQS queues subscribed to the topic, and each subscription can carry a filter policy so a queue receives only the messages it cares about.
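A minimal sketch of that wiring with boto3, using an attribute-based filter policy (the ARNs and the "orderType" attribute are assumptions):

```python
import json
import boto3

sns = boto3.client("sns")

# Assumed ARNs for illustration.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders-topic"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:big-orders-queue"

# Subscribe the queue to the topic with a filter policy, so only
# messages whose "orderType" attribute is "wholesale" are delivered.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=QUEUE_ARN,
    Attributes={
        "FilterPolicy": json.dumps({"orderType": ["wholesale"]}),
    },
)

# Publish with message attributes for the filter policy to match on.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"orderId": "1234"}),
    MessageAttributes={
        "orderType": {"DataType": "String", "StringValue": "wholesale"},
    },
)
```

Note that the SQS queue also needs an access policy that allows the SNS topic to send messages to it; that part is omitted here for brevity.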
Finally, let's look at an advanced pattern that is popular with enterprises, combining the superpowers of Serverless and Kubernetes.

Is EDA (Event Driven Architecture) possible only with Serverless? No, EDA is quite popular with Kubernetes as well. Many customers want to keep their business logic in containers. In today's edition, we will look at EDA on Kubernetes with SNS and SQS.

Use SNS payload-based filtering to send messages to different queues. Note that we are NOT sending the same message to both queues as in the traditional one-to-many fan-out pattern; each distinct message (signified by the different colored JSON icons) goes to a separate SQS queue. (A payload-based filtering sketch follows below.)

SQS1 messages are processed by Kubernetes. Use Kubernetes Event Driven Autoscaling (KEDA) with Karpenter to implement event-driven workloads. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events waiting to be processed. One popular implementation is to scale worker nodes up to accommodate the pods that process messages arriving on a queue (see the KEDA sketch after this list):

1. KEDA monitors queue depth and scales the application's HPA (Horizontal Pod Autoscaler) to process the messages.
2. The HPA increases the number of pods.
3. Assuming no capacity is available to schedule those pods, Karpenter provisions new nodes.
4. kube-scheduler places the pods on the new VMs.
5. The pods process the messages from the queue.
6. Once processing is done, the number of pods goes back to zero.
7. Karpenter can scale the VMs down to zero for maximum cost efficiency.

SQS2 messages are processed by Lambda, which writes the results to DynamoDB (a sketch of that consumer follows as well).
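First, the payload-based filtering. Here is a minimal sketch with boto3: setting FilterPolicyScope to "MessageBody" makes SNS match the filter policy against fields inside the JSON message body itself rather than against message attributes. The topic/queue ARNs and the "eventType" field are assumptions:

```python
import json
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders-topic"  # assumed

def route_by_payload(queue_arn: str, event_type: str) -> None:
    """Subscribe a queue with a payload-based filter policy, so the
    queue receives only messages whose body matches."""
    sns.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={
            # Hypothetical body field used for routing.
            "FilterPolicy": json.dumps({"eventType": [event_type]}),
            "FilterPolicyScope": "MessageBody",
        },
    )

# Each queue gets only its own kind of message -- not a copy of every message.
route_by_payload("arn:aws:sqs:us-east-1:123456789012:sqs1", "inventory")
route_by_payload("arn:aws:sqs:us-east-1:123456789012:sqs2", "billing")
```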
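Next, the KEDA side. Here is a minimal sketch of a ScaledObject for the SQS1 consumer, created with the official kubernetes Python client (the Deployment name, queue URL, and thresholds are assumptions; in practice this is usually applied as a YAML manifest):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

# A KEDA ScaledObject, expressed as the Python-client equivalent of
# the usual YAML manifest. KEDA manages an HPA for the target under
# the hood, scaling it on SQS queue depth.
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "sqs-consumer-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "sqs-consumer"},  # Deployment running the workers
        "minReplicaCount": 0,    # scale to zero when the queue is empty
        "maxReplicaCount": 50,
        "triggers": [{
            "type": "aws-sqs-queue",
            "metadata": {
                "queueURL": "https://sqs.us-east-1.amazonaws.com/123456789012/sqs1",
                "queueLength": "5",   # target messages per replica
                "awsRegion": "us-east-1",
            },
        }],
    },
}

api.create_namespaced_custom_object(
    group="keda.sh", version="v1alpha1", namespace="default",
    plural="scaledobjects", body=scaled_object,
)
```

With minReplicaCount set to 0, KEDA scales the workload to zero when the queue drains, and Karpenter can then remove the now-empty nodes, which is where the cost efficiency comes from.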
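Finally, the SQS2 path: a minimal sketch of the Lambda consumer writing to DynamoDB, assuming an SQS event source mapping and a hypothetical "orders" table whose key attributes are present in the message body:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # assumed table name

def handler(event, context):
    """Consume SQS2 messages delivered via an event source mapping
    and persist each one to DynamoDB."""
    for record in event["Records"]:
        item = json.loads(record["body"])
        table.put_item(Item=item)
```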
Now, you are equipped to handle even tough interview questions on EDA! Make sure to practice, and crush your cloud interview 🙌

If you have found this newsletter helpful and want to support me 🙏: check out my bestselling courses on AWS, System Design, Kubernetes, DevOps, and more: Max discounted links

Keep learning and keep rocking 🚀,
Raj