Traditional CI/CD vs. GitOps vs. Argo Workflows


Hello Reader,

What is the difference between traditional CI/CD and GitOps? This topic comes up frequently in real project meetings and in interviews. Let us understand it with the diagram below:

Traditional DevOps

Step 1: Developers check in code, a Dockerfile, and manifest YAMLs to an application repository. A CI tool (e.g., Jenkins) kicks off, builds the container image, and pushes the image to a container registry such as Amazon ECR.
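For illustration, here is roughly what the CI stage runs under the hood. This is a minimal sketch: the repo name my-app, the account ID, and the region are placeholders, and GIT_COMMIT stands in for whatever build identifier your CI tool provides.

```bash
# Build the image from the Dockerfile checked into the app repo
docker build -t my-app:"${GIT_COMMIT}" .

# Log in to ECR (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image to the ECR repository
docker tag my-app:"${GIT_COMMIT}" 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:"${GIT_COMMIT}"
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:"${GIT_COMMIT}"
```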

Step 2: The CD tool (e.g., Jenkins) updates the deployment manifest files with the new container image tag.

Step 3: The CD tool (e.g., Jenkins) runs the command to deploy the manifest files into the cluster, which, in turn, deploys the newly built container in the Amazon EKS cluster.
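Steps 2 and 3 together boil down to something like this. Again a sketch, assuming the manifests live in a k8s/ folder of the repo and the CD tool has kubectl access to the EKS cluster:

```bash
# Step 2: stamp the freshly built image tag into the deployment manifest
# (a naive in-place replace; real pipelines often use Kustomize or Helm)
IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:${GIT_COMMIT}"
sed -i "s|image: .*|image: ${IMAGE}|" k8s/deployment.yaml

# Step 3: push the manifests from the CD tool straight into the cluster
kubectl apply -f k8s/
```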

Conclusion - Traditional CI/CD is a push-based model where manifest files are pushed from a Git repo to the cluster by a CD tool. If a sneaky SRE changes a YAML file directly in the cluster (e.g., changes the number of replicas, or even the container image itself!), the resources running in the cluster will deviate from what's defined in the YAML in Git. Worst case, this change breaks something, and the DevOps team needs to rerun part of the CI/CD process to push the intended YAMLs back to the cluster.
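You can see this drift for yourself. A sketch, assuming a Deployment named my-app that was deployed from k8s/deployment.yaml:

```bash
# The sneaky SRE changes the live cluster directly, bypassing Git
kubectl scale deployment my-app --replicas=10

# kubectl diff now shows the live cluster deviating from the YAML in the
# repo; nothing reconciles it until someone reruns the CI/CD pipeline
kubectl diff -f k8s/deployment.yaml
```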

GitOps

Step A: Developers check in code, a Dockerfile, and manifest YAMLs to an application repository. A CI tool (e.g., Jenkins) kicks off, builds the container image, and pushes the image to a container registry such as Amazon ECR.

Step B: The CD tool (e.g., Jenkins) updates the deployment manifest files with the new container image tag and commits the change back to the Git repo.

Step C: With GitOps, Git becomes the single source of truth. You install a GitOps tool like Argo CD inside the cluster and point it to a Git repo. Argo CD keeps checking whether there is a new or changed file, or whether the resources in the cluster drift from the ones in Git. As soon as the YAML is updated with the new container image, there is a drift between what's running in the cluster and what's in Git. Argo CD pulls in the updated YAML file and deploys the new container.
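Here is what "pointing Argo CD at a Git repo" looks like in practice, as an Application manifest. A minimal sketch: the repo URL, path, and app name are placeholders.

```bash
# Register the app repo with Argo CD; from now on the k8s/ folder in Git
# is the source of truth for this application
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git   # placeholder repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc           # the local cluster
    namespace: default
EOF
```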

Conclusion - GitOps does NOT replace DevOps. As you can see, GitOps only replaces part of the CD process. If we revisit the scenario where the sneaky SRE directly changes the YAML in the cluster, Argo CD will detect the mismatch between the live resource and the manifest in Git. Since there is a difference, it will pull the file from Git into the cluster, and the Kubernetes controllers will act on it and bring the Kubernetes resources back to their intended state. And don't worry, Argo CD can also send a message to the sneaky SRE's manager ;).
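One nuance: out of the box, Argo CD only marks the app as OutOfSync when it detects drift. To get the automatic revert described above, you enable automated sync with self-heal, for example:

```bash
# Turn on automated sync so Argo CD continuously reconciles the cluster
# back to Git: --self-heal reverts live edits, --auto-prune deletes
# resources that were removed from the repo
argocd app set my-app --sync-policy automated --self-heal --auto-prune
```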

Argo Workflows

Recently, Argo Workflows has been growing in popularity, and the difference between Argo CD and Argo Workflows can be confusing. Think of Argo Workflows as AWS Step Functions running inside the cluster. That is, Argo Workflows can create a sequence of steps that run in parallel or one after another (or as a DAG, for those of you familiar with the term). Each step runs inside a container, and each step can run a bash script, your code, or your own container image. Let's take an example: you want to migrate workloads from Cluster Autoscaler to Karpenter, and this requires a series of steps. An Argo Workflow can be used to do this, as in the sketch below. If you are just deploying into the cluster, Argo CD is recommended. Not only is it simpler, but its UI is also built for deployments. If you need to execute a series of steps (e.g., batch processing or other complex multi-step processes), go with Argo Workflows. Check out my talk from KubeCon Paris on Argo Workflows with a demo: Video Link.
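To make the contrast concrete, here is a toy two-step Workflow. A sketch only: it assumes Argo Workflows is installed in the cluster, and the step names and commands are placeholders standing in for real migration steps.

```bash
# Submit a sequential two-step workflow; each step runs in its own container
# (kubectl create, not apply, because the manifest uses generateName)
kubectl create -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: migrate-to-karpenter-   # hypothetical migration workflow
spec:
  entrypoint: main
  templates:
  - name: main
    steps:                               # outer list runs in sequence;
    - - name: cordon-old-nodes           # items in one inner list run in parallel
        template: run-cmd
        arguments:
          parameters:
          - name: cmd
            value: "echo cordon the Cluster Autoscaler nodes"    # placeholder
    - - name: drain-old-nodes
        template: run-cmd
        arguments:
          parameters:
          - name: cmd
            value: "echo drain nodes and let Karpenter reschedule"  # placeholder
  - name: run-cmd
    inputs:
      parameters:
      - name: cmd
    container:
      image: alpine:3.19
      command: [sh, -c]
      args: ["{{inputs.parameters.cmd}}"]
EOF
```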

So let me ask you this - are you camp traditional DevOps or camp GitOps, and why?

If you have found this newsletter helpful and want to support me 🙏:

Check out my bestselling courses on AWS, System Design, Kubernetes, DevOps, and more: Max discounted links

AWS SA Bootcamp with Live Classes, Mock Interviews, Hands-On, Resume Improvement and more: https://www.sabootcamp.com/

Keep learning and keep rocking 🚀,

Raj

