Traditional CI/CD vs. GitOps vs. Argo Workflows


Hello Reader,

What is the difference between traditional CI/CD and GitOps? This topic comes up frequently in actual project meetings and interviews. Let us understand it with the diagram below:

Traditional DevOps

Step 1: Developers check in code, a Dockerfile, and manifest YAMLs to an application repository. CI tools (e.g., Jenkins) kick off, build the container image, and push the image to a container registry such as Amazon ECR.
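In Jenkins terms, Step 1 might look roughly like the declarative pipeline sketch below. This is illustrative only — the account ID, region, and image name are placeholders:

```groovy
// Sketch of Step 1: build the image and push it to Amazon ECR.
// Registry URL and image name are placeholders.
pipeline {
    agent any
    environment {
        REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
        IMAGE    = "${REGISTRY}/my-app:${env.GIT_COMMIT}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t $IMAGE .'
            }
        }
        stage('Push') {
            steps {
                // Authenticate to ECR, then push the freshly built image
                sh 'aws ecr get-login-password | docker login --username AWS --password-stdin $REGISTRY'
                sh 'docker push $IMAGE'
            }
        }
    }
}
```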

Step 2: CD tools (e.g., Jenkins) update the deployment manifest files with the tag of the new container image.
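Concretely, the "update" in Step 2 usually means rewriting one line — the image tag — in a Deployment manifest (names and registry URL below are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # The CD tool rewrites this tag on every build,
          # e.g. with sed, yq, or kustomize edit set image
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.2.3
```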

Step 3: CD tools (e.g., Jenkins) execute the command to deploy the manifest files into the cluster, which, in turn, deploys the newly built container in the Amazon EKS cluster.

Conclusion - Traditional CI/CD is a push-based model where manifest files are pushed from a Git repo to the cluster by a CD tool. If a sneaky SRE changes a YAML file directly in the cluster (e.g., changes the number of replicas, or even the container image itself!), the resources running in the cluster will deviate from what's defined in the YAML in Git. Worst case, this change can break something, and the DevOps team needs to rerun part of the CI/CD process to push the intended YAMLs back to the cluster.

GitOps

Step A: Developers check in code, a Dockerfile, and manifest YAMLs to an application repository. CI tools (e.g., Jenkins) kick off, build the container image, and push the image to a container registry such as Amazon ECR.

Step B: CD tools (e.g., Jenkins) update the deployment manifest files with the tag of the new container image.

Step C: With GitOps, Git becomes the single source of truth. You install a GitOps tool like Argo CD inside the cluster and point it to a Git repo. Argo CD keeps checking whether there are new or changed files in Git, and whether the resources in the cluster have drifted from them. As soon as a YAML is updated with the new container image tag, there is a drift between what's running in the cluster and what's in Git. Argo CD pulls in the updated YAML file and deploys the new container.
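Pointing Argo CD at a Git repo is itself just another manifest — a sketch of an Argo CD Application resource (the repo URL, path, and namespaces are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-app-manifests.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true   # automatically revert changes made directly in the cluster
```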

Conclusion - GitOps does NOT replace DevOps. As you can see, GitOps only replaces part of the CD process. If we revisit the scenario where the sneaky SRE directly changes the YAML in the cluster, Argo CD will detect the mismatch between the changed resource and the one in Git. Since there is a difference, it will pull the file from Git into the cluster, and the Kubernetes controllers will act on it and bring the Kubernetes resources back to their intended state. And don't worry, Argo can also send a message to the sneaky SRE's manager ;).

Argo Workflows

Recently, Argo Workflows has been growing in popularity, and the difference between Argo CD and Argo Workflows can be confusing. Think of Argo Workflows as AWS Step Functions running inside the cluster. As in, Argo Workflows can create a sequence of steps that run in parallel or one after another (or as a DAG, for those of you familiar with it). Each step runs inside a container, and each step can be a Bash script, your code, or your own container. Let's take an example - you want to migrate workloads from Cluster Autoscaler to Karpenter, and this requires a series of steps. An Argo Workflow can be used to orchestrate this. If you are just deploying into the cluster, Argo CD is recommended. Not only is it simpler, but its UI is also built for deployments. If you need to execute a series of steps (e.g., batch processing or other complex multi-step processes), go with Argo Workflows. Check out my talk from Paris KubeCon on Argo Workflows with a demo: Video Link.
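For flavor, a minimal Workflow with two sequential steps might look like the sketch below. The node labels and commands are placeholders, not a real migration runbook:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: migrate-to-karpenter-
spec:
  entrypoint: migrate
  templates:
    - name: migrate
      steps:
        # Each "- -" group runs after the previous one finishes;
        # steps inside the same group would run in parallel.
        - - name: cordon-old-nodes
            template: run-kubectl
            arguments:
              parameters:
                - name: cmd
                  value: "kubectl cordon -l node-group=old"
        - - name: drain-old-nodes
            template: run-kubectl
            arguments:
              parameters:
                - name: cmd
                  value: "kubectl drain -l node-group=old --ignore-daemonsets"
    - name: run-kubectl
      inputs:
        parameters:
          - name: cmd
      container:
        image: bitnami/kubectl:latest
        command: [sh, -c]
        args: ["{{inputs.parameters.cmd}}"]
```

Each step here is just a container running a command, which is exactly what makes Workflows flexible enough for batch jobs and migrations.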

So let me ask you this - are you camp traditional DevOps or camp GitOps, and why?

If you found this newsletter helpful and want to support me 🙏:

Check out my bestselling courses on AWS, System Design, Kubernetes, DevOps, and more: Max discounted links

AWS SA Bootcamp with Live Classes, Mock Interviews, Hands-On, Resume Improvement and more: https://www.sabootcamp.com/

Keep learning and keep rocking 🚀,

Raj

Fast Track To Cloud

Free Cloud Interview Guide to crush your next interview. Plus, real-world answers for cloud interviews and system design from a top Solutions Architect at AWS.
