<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ gitops - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ gitops - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Fri, 08 May 2026 11:09:54 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/gitops/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Implement GitOps on Kubernetes Using Argo CD ]]>
                </title>
                <description>
                    <![CDATA[ If you’re still running kubectl apply from your local terminal, you aren’t managing a cluster, you’re babysitting one. I’ve spent more nights than I care to admit staring at a terminal, trying to figu ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-implement-gitops-on-kubernetes-using-argo-cd/</link>
                <guid isPermaLink="false">69b99877c22d3eeb8ae62100</guid>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Kubernetes ]]>
                    </category>
                
                    <category>
                        <![CDATA[ gitops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ ArgoCD ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Olumoko Moses ]]>
                </dc:creator>
                <pubDate>Tue, 17 Mar 2026 18:07:51 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/2fe40cbd-1b8a-4cc6-a721-45cc20a80c76.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>If you’re still running <code>kubectl apply</code> from your local terminal, you aren’t managing a cluster, you’re babysitting one.</p>
<p>I’ve spent more nights than I care to admit staring at a terminal, trying to figure out why a staging environment suddenly "broke" even though no one supposedly touched it.</p>
<p>We’ve all been there, right? A manual edit here, a quick hotfix there, and suddenly your Git repository is no longer a Source of Truth; it’s a historical record of what used to be running.</p>
<p>Without a reliable strategy, Kubernetes deployments quickly descend into a mess of drift, painful rollbacks, and non-existent audit trails. I learned the hard way that simply storing manifests in Git isn't enough. If your cluster isn't actively listening to your code, you're still working with a gap.</p>
<p>GitOps closes that gap. It turns your cluster into a mirror of your repository. If it isn't in Git, it doesn't exist.</p>
<p>In this tutorial, you aren't just going to read about the theory. You’re going to implement a "Zero-Touch" deployment loop from scratch. We’ll use Argo CD, GitHub Actions, and the Argo CD Image Updater to build a system that builds, tags, and deploys your code the second you hit <code>git push</code>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/93b61e74-66c3-47d7-aeff-f7a69d0d7390.jpg" alt="Architecture diagram of a complete GitOps CI/CD workflow. A developer pushes code to a GitHub repository, triggering a GitHub Actions pipeline that builds and pushes a new Docker image to DockerHub. The Argo CD Image Updater polls DockerHub for the new tag and commits the change back to the GitHub repository. Finally, the Argo CD Server detects the updated manifest in Git and syncs the changes to the live Kubernetes cluster." style="display:block;margin:0 auto" width="640" height="640" loading="lazy">

<h2 id="heading-table-of-contents">Table of Contents</h2>
<ol>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-what-gitops-really-means">What GitOps Really Means</a></p>
</li>
<li><p><a href="#heading-what-is-argo-cd-and-how-does-it-implement-gitops">What is Argo CD and How Does it Implement GitOps?</a></p>
</li>
<li><p><a href="#heading-preparing-the-application-source-code">Preparing the Application Source Code and Repo Structure</a></p>
</li>
<li><p><a href="#heading-automating-image-builds-with-github-actions">Automating Image Builds with GitHub Actions</a></p>
</li>
<li><p><a href="#heading-how-to-install-and-access-argo-cd">How to Install and Access Argo CD</a></p>
</li>
<li><p><a href="#heading-understanding-the-argo-cd-application">Understanding the Argo CD Application</a></p>
</li>
<li><p><a href="#heading-deploying-the-application-manifest">Deploying the Application Manifest</a></p>
</li>
<li><p><a href="#heading-automating-updates-with-argo-cd-image-updater">Automating Updates with Argo CD Image Updater</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you begin, make sure you have the following ready in your environment:</p>
<ul>
<li><p><strong>A GitHub repository:</strong> You'll need a repository (for example, <code>my-gitops-demo</code>) to serve as your Single Source of Truth. If you're following this tutorial from scratch, start with an empty repo.</p>
</li>
<li><p><strong>A DockerHub account:</strong> This will act as your Container Registry. You’ll need this to build, push, and store the Docker images that GitHub Actions creates.</p>
</li>
<li><p><strong>A running Kubernetes cluster:</strong> You can use a local solution like Minikube or Kind, or a cloud-managed service like Amazon EKS or GKE.</p>
</li>
<li><p><strong>Kubernetes tooling:</strong> Ensure <code>kubectl</code> is installed and configured to communicate with your cluster.</p>
</li>
<li><p><strong>Fundamental K8s knowledge:</strong> You should be comfortable with basic Kubernetes concepts like Pods, Deployments, and Services.</p>
</li>
</ul>
<h3 id="heading-note-for-readers-with-existing-projects">Note for Readers with Existing Projects</h3>
<p>If you already have a project and want to migrate it to this GitOps workflow, you don't need to start over. You can adapt your existing repository by following these three steps:</p>
<ol>
<li><p><strong>Standardize your manifests:</strong> Move all your existing Kubernetes YAML files into a dedicated <code>Kubernetes-manifest/</code> directory at the root of your project.</p>
</li>
<li><p><strong>Containerize your services:</strong> Ensure every service you intend to deploy has a <code>Dockerfile</code> in its respective subdirectory (for example, <code>/main-api/Dockerfile</code>).</p>
</li>
<li><p><strong>Prepare for automation:</strong> Be ready to replace any manual <code>kubectl apply</code> steps in your current CI pipeline with the automated tagging strategy we’ll implement in the next sections.</p>
</li>
</ol>
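<p>If you'd rather script the first two steps, here's a minimal sketch. It assumes your manifests currently live in a <code>k8s/</code> folder and are already tracked by Git; adjust the source path to match your repo:</p>
<pre><code class="language-bash"># Move tracked manifests into the standardized folder
mkdir -p Kubernetes-manifest
git mv k8s/*.yaml Kubernetes-manifest/
git commit -m "Standardize manifest location for GitOps"
</code></pre>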
<h2 id="heading-what-gitops-really-means">What GitOps Really Means</h2>
<p>At its core, GitOps is an operational framework that uses Git as the single source of truth for your infrastructure and applications. In a traditional setup, you might run <code>kubectl apply -f deployment.yaml</code> from your laptop. This makes it impossible to track who changed what, leading to "snowflake" clusters that no one can reproduce.</p>
<p>GitOps enforces four key principles:</p>
<ol>
<li><p><strong>Declarative:</strong> You describe the <em>desired</em> state (for example, "3 replicas of Nginx"), not the commands to get there.</p>
</li>
<li><p><strong>Versioned and immutable:</strong> Your entire state is in Git. If a deployment fails, you <code>git revert</code> to a previous known-good state.</p>
</li>
<li><p><strong>Pulled automatically:</strong> A software agent (Argo CD) pulls the state from Git.</p>
</li>
<li><p><strong>Continuously reconciled:</strong> The system constantly fixes "drift." If a developer manually changes a service in the cluster, Argo CD will overwrite it to match Git.</p>
</li>
</ol>
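<p>Principle 1 in practice: rather than running imperative commands, you commit a manifest describing the end state. This is a minimal sketch of the "3 replicas of Nginx" example:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3        # the desired state, not a command to scale
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
</code></pre>
<p>Rolling back is then just a <code>git revert</code> of the commit that changed this file; the agent reconciles the cluster to match.</p>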
<h2 id="heading-what-is-argo-cd-and-how-does-it-implement-gitops">What is Argo CD and How Does it Implement GitOps?</h2>
<p>Before we dive into the setup, let’s define the tool we'll be working with.</p>
<p>Argo CD is a declarative GitOps continuous delivery engine built specifically for Kubernetes. As a graduated project of the Cloud Native Computing Foundation (CNCF), it has become the industry standard for managing modern infrastructure.</p>
<p>Think of Argo CD as a persistent watchdog that lives inside your cluster. To understand why it's so powerful, we have to look at how it differs from traditional CI/CD tools like Jenkins or GitHub Actions.</p>
<h3 id="heading-the-push-vs-pull-model">The "Push" vs. "Pull" Model</h3>
<p>Traditional tools like the ones mentioned above use a <strong>"Push" model</strong>. In this setup, an external pipeline sends commands (like <code>kubectl apply</code>) into your cluster. This is risky because you must store sensitive cluster admin credentials inside your external CI tool. If your CI tool is compromised, your cluster is, too.</p>
<p>Argo CD flips this script using a <strong>"Pull" model</strong>:</p>
<ul>
<li><p><strong>The bridge:</strong> It sits between your Git repo (the "Desired State") and your cluster (the "Live State").</p>
</li>
<li><p><strong>Continuous monitoring:</strong> It watches your Git repo 24/7. The moment it detects a new commit, it "pulls" that change and applies it from <em>inside</em> the cluster.</p>
</li>
<li><p><strong>Self-healing:</strong> If someone manually changes a setting in the cluster (known as "drift"), Argo CD detects the discrepancy and automatically overwrites it to match what is written in Git.</p>
</li>
</ul>
<p>This approach is not only more secure, since no cluster credentials ever leave the environment, but it also ensures that your infrastructure is a perfect, predictable mirror of your code.</p>
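<p>Once the demo application from the later sections is running, you can watch self-healing in action. This sketch assumes the <code>main-deployment</code> and <code>argocd-demo-ns</code> names used later in this tutorial:</p>
<pre><code class="language-bash"># Introduce drift by scaling the deployment manually
kubectl scale deployment main-deployment --replicas=5 -n argocd-demo-ns

# Watch Argo CD revert it to the replica count declared in Git
kubectl get deployment main-deployment -n argocd-demo-ns -w
</code></pre>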
<h2 id="heading-preparing-the-application-source-code">Preparing the Application Source Code</h2>
<p>Before we automate the build, we need actual code in our repository. We'll create two simple microservices: a Main API and an Auxiliary Service.</p>
<h3 id="heading-repo-structure">Repo Structure</h3>
<p>Ensure your repository follows this structure exactly. Consistency in naming is vital for the automation to find your files.</p>
<pre><code class="language-plaintext">GITOPS-ARGOCD-DEMO/
├── .github/workflows/main.yml
├── auxiliary-service/
│   └── Dockerfile
├── main-api/
│   └── Dockerfile
├── Kubernetes-manifest/
│   ├── aux-api.yaml
│   ├── kustomization.yaml
│   └── main-api.yaml
├── application.yaml
└── image-updater.yaml
</code></pre>
<h3 id="heading-create-the-dockerfiles">Create the Dockerfiles</h3>
<p>In each service folder, create a simple <code>Dockerfile</code> so our pipeline has something to build.</p>
<p><strong>main-api/Dockerfile</strong></p>
<pre><code class="language-dockerfile">FROM nginx:alpine
RUN echo "&lt;h1&gt;Main API - Version 1.0&lt;/h1&gt;" &gt; /usr/share/nginx/html/index.html
EXPOSE 80
</code></pre>
<p><strong>auxiliary-service/Dockerfile</strong></p>
<pre><code class="language-dockerfile">FROM nginx:alpine
RUN echo "&lt;h1&gt;Auxiliary Service - Version 1.0&lt;/h1&gt;" &gt; /usr/share/nginx/html/index.html
EXPOSE 80
</code></pre>
<h2 id="heading-automating-image-builds-with-github-actions">Automating Image Builds with GitHub Actions</h2>
<p>In a professional GitOps workflow, your Kubernetes manifests and your application source code often live in the same repository (or linked ones). While Argo CD handles the deployment, you still need a way to turn your code into Docker images. This is where <strong>Continuous Integration (CI)</strong> comes in.</p>
<p>I have included a GitHub Actions workflow in this demo to automate this. Every time you push code to the <code>main</code> branch, this pipeline builds your images and pushes them to DockerHub.</p>
<h3 id="heading-the-ci-pipeline-workflow">The CI Pipeline Workflow</h3>
<p>Create a file at <code>.github/workflows/main.yml</code> and add the following:</p>
<pre><code class="language-yaml">name: Build and Push Image to DockerHub

on:
  push:
    branches:
      - main
    # Skip builds for image updater commits
    paths-ignore:
      - 'Kubernetes-manifest/**'

jobs:
  docker_build:
    name: Build &amp; Push ${{ matrix.service }}
    environment: argocd-demo
    runs-on: ubuntu-latest
    permissions:
      contents: write

    strategy:
      matrix:
        include:
          - service: aux-service
            dockerfile: auxiliary-service/Dockerfile
          - service: main-service
            dockerfile: main-api/Dockerfile

    env:
      DOCKER_USER: ${{ secrets.DOCKERHUB_USERNAME }}
      RUN_TAG: ${{ github.run_number }}

    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ env.DOCKER_USER }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and Push ${{ matrix.service }}
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ matrix.dockerfile }}
          push: true
          tags: ${{ env.DOCKER_USER }}/${{ matrix.service }}:${{ env.RUN_TAG }}
          cache-from: type=gha,scope=${{ matrix.service }}
          cache-to: type=gha,mode=max,scope=${{ matrix.service }}
</code></pre>
<p><strong>Pro tip:</strong> The <code>paths-ignore</code> section is critical. Later, the Argo CD Image Updater will write changes back to the <code>Kubernetes-manifest/</code> folder. Without this ignore rule, your pipeline would trigger itself forever in an infinite loop.</p>
<p><strong>Note:</strong> You must add <code>DOCKERHUB_USERNAME</code> and <code>DOCKERHUB_PASSWORD</code> to your GitHub Repo Settings &gt; Secrets.</p>
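<p>If you use the GitHub CLI, you can add these secrets from the terminal instead of the web UI (a sketch, assuming <code>gh</code> is installed and authenticated against your repo):</p>
<pre><code class="language-bash">gh secret set DOCKERHUB_USERNAME --body "&lt;YOUR_DOCKERHUB_USERNAME&gt;"
gh secret set DOCKERHUB_PASSWORD   # prompts for the value so it never hits shell history
</code></pre>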
<h2 id="heading-how-to-install-and-access-argo-cd">How to Install and Access Argo CD</h2>
<p>Now that your cluster is running, you can install Argo CD. You'll perform the installation using a standard Kubernetes manifest provided by the Argo project.</p>
<h3 id="heading-step-1-create-the-namespace-and-apply-the-manifests">Step 1: Create the Namespace and Apply the Manifests</h3>
<p>In Kubernetes, it is a best practice to keep your administrative tools separate from your applications. You will create a dedicated namespace named <code>argocd</code> and then apply the official installation script from the Argo project. This script includes all the necessary ServiceAccounts, Roles, and Deployments.</p>
<p>Run the following commands in your terminal:</p>
<pre><code class="language-bash">kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>You'll see a long list of resources being created. Wait a minute or two for the pods to initialize, then verify that all the core components of Argo CD are running:</p>
<pre><code class="language-bash">kubectl get all -n argocd
</code></pre>
<p>Ensure all pods show a status of <code>Running</code> before proceeding.</p>
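<p>If you're scripting the setup, <code>kubectl wait</code> can block until every pod is ready instead of polling by hand (the 180-second timeout is an arbitrary choice):</p>
<pre><code class="language-bash">kubectl wait --for=condition=Ready pod --all -n argocd --timeout=180s
</code></pre>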
<h3 id="heading-step-2-access-the-argo-cd-user-interface">Step 2: Access the Argo CD User Interface</h3>
<p>To access the dashboard, we use a technique called <strong>port forwarding</strong>. Since the Argo CD server is running inside the cluster's private network, your browser can't see it yet. Port forwarding creates a secure 'tunnel' between a port on your local machine (8080) and a port on the cluster service (443). This allows you to interact with internal services without exposing them to the public internet.</p>
<p>Run the following command:</p>
<pre><code class="language-bash">kubectl port-forward svc/argocd-server -n argocd 8080:443
</code></pre>
<p>You can now open your browser and navigate to <code>https://localhost:8080</code>. Your browser may warn you that the connection is not private because of a self-signed certificate. You can safely click "Advanced" and proceed to the site.</p>
<h3 id="heading-step-3-how-to-log-in">Step 3: How to Log In</h3>
<p>The default username for Argo CD is <code>admin</code>. The password is autogenerated during the installation process and is stored securely as a Kubernetes secret.</p>
<p>To retrieve this password, open a new terminal tab (so the port-forwarding keeps running) and run:</p>
<pre><code class="language-bash">kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
</code></pre>
<p>Copy the output and use it as the password to log into the dashboard.</p>
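<p>If you prefer working from the terminal, the <code>argocd</code> CLI can log in through the same port-forward (a sketch, assuming the CLI is installed; <code>--insecure</code> skips the self-signed certificate check):</p>
<pre><code class="language-bash">argocd login localhost:8080 --username admin --insecure
# Prompts for the password you retrieved above
</code></pre>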
<h2 id="heading-understanding-the-argo-cd-application">Understanding the Argo CD Application</h2>
<p>An Argo CD <strong>Application</strong> is a Custom Resource (CRD) that acts as a "contract" between your Git repo and your cluster. It defines:</p>
<ul>
<li><p><code>repoURL</code> &amp; <code>path</code>: This tells Argo CD exactly which Git repository to watch and which folder inside that repo contains your YAML manifests.</p>
</li>
<li><p><code>destination</code>: This defines where the app should live. We use <code>https://kubernetes.default.svc</code> to point to the local cluster where Argo CD is installed.</p>
</li>
<li><p><code>syncPolicy</code>: This is the heart of GitOps. By setting <code>automated</code> with <code>selfHeal: true</code>, we tell Argo CD to automatically fix the cluster if someone manually changes something (drift). The <code>prune: true</code> setting ensures that if you delete a file in Git, it also gets deleted in the cluster.</p>
</li>
</ul>
<h3 id="heading-the-application-manifest">The Application Manifest</h3>
<p>Create <code>application.yaml</code> in your project root:</p>
<pre><code class="language-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-argocd-demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/&lt;YOUR_GITHUB_USERNAME&gt;/&lt;YOUR_REPO_NAME&gt;.git
    targetRevision: HEAD
    path: Kubernetes-manifest
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd-demo-ns
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
</code></pre>
<h2 id="heading-deploying-the-application-manifest">Deploying the Application Manifest</h2>
<p>Now we'll define our Kubernetes resources in the <code>Kubernetes-manifest/</code> folder.</p>
<p><strong>main-api.yaml</strong></p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-deployment
  namespace: argocd-demo-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: main-api
  template:
    metadata:
      labels:
        app: main-api
    spec:
      containers:
      - name: main-service
        image: &lt;YOUR_DOCKERHUB_USERNAME&gt;/main-service:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: main-service-lb
  namespace: argocd-demo-ns
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: main-api
</code></pre>
<p><strong>aux-api.yaml</strong></p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: aux-deployment
  namespace: argocd-demo-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: aux-service
  template:
    metadata:
      labels:
        app: aux-service
    spec:
      containers:
      - name: aux-service
        image: &lt;YOUR_DOCKERHUB_USERNAME&gt;/aux-service:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aux-service
  namespace: argocd-demo-ns
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: aux-service
</code></pre>
<h2 id="heading-push-and-sync">Push and Sync</h2>
<h3 id="heading-step-1-apply-the-application-manifest">Step 1: Apply the Application Manifest</h3>
<p>Use <code>kubectl</code> to deploy this manifest into the <code>argocd</code> namespace:</p>
<pre><code class="language-bash">kubectl apply -f application.yaml -n argocd
</code></pre>
<h3 id="heading-step-2-push-to-your-repository">Step 2: Push to Your Repository</h3>
<p>To trigger the initial deployment and ensure Argo CD stays in sync with your source of truth, add, commit, and push your latest changes to the GitHub repository you configured in the manifest:</p>
<pre><code class="language-bash">git add .
git commit -m "initial argo application deployment"
git push origin main
</code></pre>
<h3 id="heading-step-3-verify-the-result-in-argo-cd">Step 3: Verify the Result in Argo CD</h3>
<p>Once you push your changes, head over to your Argo CD dashboard. You'll see the <code>gitops-argocd-demo</code> application appear. After the initial sync, the dashboard will display a healthy, green status indicating that your live cluster state perfectly matches your Git repository.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/7d37a8bd-c913-4393-b82c-ab0adc875574.jpg" alt="Argo CD dashboard showing the gitops-argocd-demo application in a Healthy and Synced state. The resource tree displays the hierarchy of services, deployments, replica sets, and pods running in the cluster." style="display:block;margin:0 auto" width="1440" height="547" loading="lazy">

<p><strong>Note:</strong> As you can see in the screenshot above, Argo CD provides a visual representation of how your Kubernetes objects – Services, Deployments, and Pods – are related and confirms they are "Synced" with your Git repo.</p>
<h2 id="heading-automating-updates-with-argo-cd-image-updater">Automating Updates with Argo CD Image Updater</h2>
<p>Now that we have automated the deployment, let’s solve the final manual hurdle: automatically updating image tags in our manifests whenever a new build is pushed to DockerHub.</p>
<h3 id="heading-step-1-install-argocd-image-updater">Step 1: Install ArgoCD Image Updater</h3>
<p>Install the Image Updater into the <code>argocd</code> namespace:</p>
<pre><code class="language-bash">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/stable/config/install.yaml
</code></pre>
<p>Verify the pod is running:</p>
<pre><code class="language-bash">kubectl get pods -n argocd | grep image-updater
</code></pre>
<p><strong>Note:</strong> Version 1.1+ uses a CRD-based approach (<code>ImageUpdater</code> custom resources) instead of the annotation-based approach used in older versions. This guide covers the CRD method.</p>
<h3 id="heading-step-2-create-a-github-personal-access-token">Step 2: Create a GitHub Personal Access Token</h3>
<p>The Image Updater needs Git credentials to push write-back commits to your repository.</p>
<ol>
<li><p>Go to GitHub → Settings → Developer Settings → Personal Access Tokens → Tokens (classic)</p>
</li>
<li><p>Click Generate new token</p>
</li>
<li><p>Select the <code>repo</code> scope (full control of private repositories)</p>
</li>
<li><p>Copy the generated token</p>
</li>
</ol>
<h3 id="heading-step-3-create-the-git-credentials-secret">Step 3: Create the Git Credentials Secret</h3>
<p>Store the GitHub credentials as a Kubernetes secret in the <code>argocd</code> namespace:</p>
<pre><code class="language-bash">kubectl -n argocd create secret generic git-creds \
  --from-literal=username=&lt;YOUR_GITHUB_USERNAME&gt; \
  --from-literal=password=&lt;YOUR_GITHUB_PAT&gt;
</code></pre>
<p>Replace <code>&lt;YOUR_GITHUB_USERNAME&gt;</code> and <code>&lt;YOUR_GITHUB_PAT&gt;</code> with your actual values.</p>
<h3 id="heading-step-4-add-a-kustomization-file-to-your-manifests">Step 4: Add a Kustomization File to Your Manifests</h3>
<p>The Image Updater uses Kustomize's <code>images</code> field to write updated tags. If your <code>Kubernetes-manifest/</code> directory contains plain YAML files, you'll need to wrap them with a <strong>kustomization.yaml</strong> file.</p>
<p>Create a <strong>kustomization.yaml</strong> file:</p>
<pre><code class="language-yaml">apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - main-api.yaml
  - aux-api.yaml
</code></pre>
<p><strong>How it works:</strong> When the Image Updater detects a new tag, it appends an <code>images</code> section to this file:</p>
<pre><code class="language-yaml">images:
  - name: &lt;YOUR_GITHUB_USERNAME&gt;/main-service
    newTag: "12"
  - name: &lt;YOUR_GITHUB_USERNAME&gt;/aux-service
    newTag: "12"
</code></pre>
<p>Kustomize then overrides the image tags at deploy time, without modifying your original deployment YAML files.</p>
<p>We use Kustomize here because it allows the Image Updater to manage image tags in a separate, clean way. Instead of the Updater 'messing' with your original <code>main-api.yaml</code> file, it simply updates the <code>kustomization.yaml</code> file. Argo CD then uses Kustomize to merge those changes during deployment.</p>
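<p>You can preview the merged output locally before Argo CD ever sees it. <code>kubectl kustomize</code> renders the manifests with any <code>images</code> overrides applied, without touching the cluster:</p>
<pre><code class="language-bash">kubectl kustomize Kubernetes-manifest/ | grep "image:"
# Shows the final image references, including any newTag overrides
</code></pre>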
<h3 id="heading-step-5-create-the-imageupdater-custom-resource">Step 5: Create the ImageUpdater Custom Resource</h3>
<p>Create <strong>image-updater.yaml</strong> in your project root:</p>
<pre><code class="language-yaml">apiVersion: argocd-image-updater.argoproj.io/v1alpha1
kind: ImageUpdater
metadata:
  name: gitops-argocd-demo-updater
  namespace: argocd
spec:
  commonUpdateSettings:
    updateStrategy: newest-build
    allowTags: "regexp:^[0-9]+$"
  applicationRefs:
    - namePattern: "gitops-argocd-demo"
      writeBackConfig:
        method: "git:secret:argocd/git-creds"
        gitConfig:
          branch: main
          writeBackTarget: "kustomization:."
      images:
        - alias: main-service
          imageName: &lt;YOUR_DOCKERHUB_USERNAME&gt;/main-service
        - alias: aux-service
          imageName: &lt;YOUR_DOCKERHUB_USERNAME&gt;/aux-service
</code></pre>
<p>This ImageUpdater resource is the <strong>"brain"</strong> of our automated tagging system. Here is what the specific fields are doing:</p>
<p><code>updateStrategy:</code></p>
<ul>
<li><code>newest-build:</code> It tells the updater to always look for the most recent image version in DockerHub based on creation time.</li>
</ul>
<p><code>writeBackConfig:</code> This is where the magic happens. It uses the <code>git-creds</code> secret we created to authorize the updater to write back to your repository.</p>
<p><code>writeBackTarget:</code></p>
<ul>
<li><code>kustomization:</code> We are telling the updater specifically to modify the <code>kustomization.yaml</code> file in the manifests folder rather than touching the deployment files directly.</li>
</ul>
<p><code>images:</code> We provide aliases (<code>main-service</code> and <code>aux-service</code>) so the updater knows exactly which images in DockerHub correspond to which containers in our Kubernetes manifests.</p>
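<p>As a quick sanity check, you can test the <code>allowTags</code> pattern locally with <code>grep -E</code>, since the CI pipeline tags images with bare run numbers:</p>
<pre><code class="language-bash">for tag in 12 7 latest v1.2.3; do
  if echo "$tag" | grep -Eq '^[0-9]+$'; then
    echo "$tag: allowed"
  else
    echo "$tag: filtered out"
  fi
done
</code></pre>
<p>Only the numeric tags pass the filter, so a tag like <code>latest</code> can never trigger an update.</p>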
<p><strong>Apply the ImageUpdater CR to the cluster:</strong></p>
<pre><code class="language-bash">kubectl apply -f image-updater.yaml -n argocd
</code></pre>
<p>Push the <code>kustomization.yaml</code> to your Git repository (the Image Updater clones the repo, so the file must exist remotely):</p>
<pre><code class="language-bash">git add Kubernetes-manifest/kustomization.yaml
git commit -m "Add kustomization.yaml for image updater write-back"
git push origin main
</code></pre>
<h3 id="heading-step-6-verify-the-image-updater">Step 6: Verify the Image Updater</h3>
<p>Check the Image Updater logs to confirm it's working:</p>
<pre><code class="language-bash">kubectl logs -n argocd deployment/argocd-image-updater-controller --tail=20
</code></pre>
<p><strong>Successful output looks like:</strong></p>
<pre><code class="language-plaintext">msg="Starting image update cycle, considering 1 application(s) for update"
msg="Setting new image to YOUR_DOCKERHUB_USERNAME/main-service:11"
msg="Successfully updated image 'YOUR_DOCKERHUB_USERNAME/main-service:7' to 'YOUR_DOCKERHUB_USERNAME/main-service:11'"
msg="Setting new image to YOUR_DOCKERHUB_USERNAME/aux-service:11"
msg="Committing 2 parameter update(s) for application gitops-argocd-demo"
msg="git push origin main"
msg="Successfully updated the live application spec"
msg="Processing results: applications=1 images_considered=2 images_skipped=0 images_updated=2 errors=0"
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You have successfully implemented a professional-grade GitOps loop from scratch. By integrating GitHub Actions, Argo CD, and the Argo CD Image Updater, you’ve bridged the gap between your source code and your live environment.</p>
<p>Think about the workflow you just built:</p>
<ol>
<li><p>You push code to GitHub.</p>
</li>
<li><p>GitHub Actions builds and tags a fresh Docker image.</p>
</li>
<li><p>Argo CD Image Updater detects that new tag and automatically commits it back to your Git manifests.</p>
</li>
<li><p>Argo CD pulls those changes and reconciles your cluster to the new desired state.</p>
</li>
</ol>
<p>No more manual <code>kubectl apply</code>, no more configuration drift, and no more 2:00 AM mysteries. Your Git repository is now truly the Single Source of Truth. If it isn't in Git, it doesn't exist in your cluster, and that is the ultimate DevOps superpower.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ From Commit to Production: Hands-On GitOps Promotion with GitHub Actions, Argo CD, Helm, and Kargo ]]>
                </title>
                <description>
                    <![CDATA[ Have you ever wanted to go beyond ‘hello world’ and build a real, production-style CI/CD pipeline – starting from scratch? Let’s pause for a moment: what are you trying to learn from your DevOps journey? Are you focusing on GitOps-style deployments, ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/from-commit-to-production-hands-on-gitops-promotion-with-github-actions-argo-cd-helm-and-kargo/</link>
                <guid isPermaLink="false">6841f1319c94d5fa67dae6e3</guid>
                
                    <category>
                        <![CDATA[ gitops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Thu, 05 Jun 2025 19:34:08 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749151777327/ece5b0b7-4a9a-4f95-8ebb-32e3768b678f.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Have you ever wanted to go beyond ‘hello world’ and build a real, production-style CI/CD pipeline – starting from scratch?</p>
<p>Let’s pause for a moment: what are you trying to learn from your DevOps journey? Are you focusing on GitOps-style deployments, or promotions? This guide will help you tackle all of it – one step at a time.</p>
<p>As a DevOps engineer interested in creating a complete CI/CD pipeline, I wanted more than a basic "hello world" microservice. I was looking for a project where I could start from scratch – beginning with raw source code, writing my own Docker Compose and Kubernetes files, deploying locally, and then adding automation, environment promotion, and GitOps practices step by step.</p>
<p>In my search, I found several GitHub repositories. Most were either too simple to be useful or too complicated and already set up, leaving no room for learning. They often included ready-made Docker Compose files and Kubernetes manifests, which didn't help with learning through hands-on experience.</p>
<p>That’s when I discovered <strong>Craftista</strong>, a project maintained by <a target="_blank" href="https://www.linkedin.com/in/gouravshah/">Gourav Shah</a>. This wasn’t just another training repo. As described in its documentation:</p>
<blockquote>
<p><em>“Craftista is not your typical hello world app or off-the-shelf WordPress app used in most DevOps trainings. It is the real deal.”</em></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748210412834/5ef3f2b6-029d-4967-b6a9-825888b44706.png" alt="Craftista" width="1202" height="1060" loading="lazy"></p>
<p>Craftista stood out to me for several reasons:</p>
<ul>
<li><p>It’s a <strong>polyglot microservices application</strong>, designed to resemble a real-world platform.</p>
</li>
<li><p>Each service uses its own technology stack – exactly like in modern enterprises.</p>
</li>
<li><p>It includes essential building blocks of a real e-commerce system:</p>
<ul>
<li><p>A modern UI built in Node.js</p>
</li>
<li><p>A Product Catalogue Service</p>
</li>
<li><p>A Recommendation Engine</p>
</li>
<li><p>A Voting/Review Service  </p>
</li>
</ul>
</li>
</ul>
<p>By the end of this guide, you won’t just have a “hello world” demo – you’ll have a fully functioning CI/CD/GitOps pipeline modeled on a real-world microservices stack. You’ll understand how the pieces fit together, why each tool exists, and how to adapt this workflow to your own projects.</p>
<p>Ready to go beyond hello world and build a production-style pipeline from scratch? Let’s dive in.</p>
<h2 id="heading-table-of-contentsheading-table-of-contents"><a class="post-section-overview" href="#heading-table-of-contents">Table of Contents</a></h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites-and-what-youll-learn">Prerequisites and What You’ll Learn</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-topics-outside-the-scope-of-this-guide">Topics Outside the Scope of This Guide</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-gitops">What is GitOps?</a></p>
<ul>
<li><a class="post-section-overview" href="#heading-core-principles-of-gitops">Core Principles of GitOps</a></li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-tools-we-are-using-in-this-guide">Tools We Are Using in This Guide</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-github-actions">GitHub Actions</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-minikube">Minikube</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-argo-cd">Argo CD</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-kargo">Kargo</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-structure-repositories-for-microservice-applications">How to Structure Repositories for Microservice Applications</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-why-a-polyrepo-fits-my-microservice-service-app">Why a Polyrepo Fits My Microservices App</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-git-branching-is-anti-pattern-to-gitops-principles">Git Branching Is an Anti-Pattern in GitOps</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-organize-kubernetes-manifests-for-gitops">How to Organize Kubernetes Manifests for GitOps</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-argo-cd-folders">Argo CD Folders</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-2-argo-cd-application-manifests-by-environment">2. Argo CD Application Manifests (by Environment)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-env-folders">Env Folders</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-kargo-folders">Kargo Folders</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-deploy-and-promote-your-craftista-microservices-application">How to Deploy and Promote Your Craftista Microservices Application</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-1-start-minikube">1. Start Minikube</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-2-install-argo-cd">2. Install Argo CD</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-3-access-the-argo-cd-ui">3. Access the Argo CD UI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-4-define-a-craftista-argo-cd-project">4. Define a “Craftista” Argo CD Project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-5-deploy-the-development-environment">5. Deploy the Development Environment</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-6-manual-promotion-staging-amp-prod">6. Manual Promotion (Staging &amp; Prod)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-7-automated-promotion-with-kargo">7. Automated Promotion with Kargo</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-further-reading-amp-resources">Further Reading &amp; Resources</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites-and-what-youll-learn">Prerequisites and What You’ll Learn</h2>
<p>Before you progress through this guide, ask yourself:</p>
<ul>
<li><p>Do I understand how semantic tagging improves traceability across environments?</p>
</li>
<li><p>Can I replicate a multi-environment GitOps setup using Helm and Kubernetes?</p>
</li>
<li><p>Am I confident in organizing Helm charts and manifests for scalable deployments?</p>
</li>
<li><p>Do I know how Kargo and Argo CD work together to automate promotions and approvals?</p>
</li>
</ul>
<p>This guide will help you confidently answer those questions by walking you through:</p>
<ul>
<li><p>✅ An optimized Git branching strategy: using feature branches and a single main branch</p>
</li>
<li><p>✅ Semantic Docker image tagging for clean version tracking</p>
</li>
<li><p>✅ Helm chart and Kubernetes manifest structuring for multi-environment GitOps</p>
</li>
<li><p>✅ CI pipelines using GitHub Actions for build → test → tag automation</p>
</li>
<li><p>✅ Full GitOps workflows with Kargo and Argo CD for seamless promotion and delivery</p>
</li>
</ul>
<h3 id="heading-topics-outside-the-scope-of-this-guide"><strong>Topics Outside the Scope of This Guide</strong></h3>
<ul>
<li><p>Deployment to managed services like EKS, AKS, or GKE is not included. We’ll use Minikube for local development.</p>
</li>
<li><p>I assume you are already familiar with writing basic Kubernetes manifests. I won’t explain Pods, Services, Deployments, and their YAML structures here.</p>
</li>
<li><p>I also won’t discuss topics like logging, metrics, tracing, and security hardening.</p>
</li>
<li><p>This guide does not cover managing Secrets and ConfigMaps, or implementing service discovery.</p>
</li>
<li><p>And finally, we won’t go over Argo CD and Kargo in depth – we’ll only run the basic installation steps needed for this walkthrough.</p>
</li>
</ul>
<h2 id="heading-what-is-gitops"><strong>What is GitOps?</strong></h2>
<p>GitOps is a modern way to manage applications and infrastructure using Git as the main source of truth. Developers have used Git for a long time to manage and work together on code. GitOps takes this further by including infrastructure setup, deployment processes, and automation.</p>
<p>By keeping everything in Git, from Kubernetes manifests and Helm charts to infrastructure code and application settings, teams get a central, version-controlled, auditable system. GitOps tools like Argo CD or Flux then automatically apply changes committed to Git and keep the target environments reconciled with that desired state.</p>
<h3 id="heading-core-principles-of-gitops"><strong>Core Principles of GitOps</strong></h3>
<ul>
<li><p>Git as the single source of truth</p>
</li>
<li><p>Declarative systems</p>
</li>
<li><p>Immutable deployments</p>
</li>
<li><p>Centralized change audit</p>
</li>
</ul>
<h2 id="heading-tools-we-are-using-in-this-guide"><strong>Tools We Are Using in This Guide</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748886977153/8d7eb087-8161-431b-b48f-c67d724909b9.png" alt="Tools" class="image--center mx-auto" width="1536" height="1024" loading="lazy"></p>
<h3 id="heading-github-actions"><strong>GitHub Actions</strong></h3>
<p>GitHub Actions is a platform for continuous integration and delivery (CI/CD) that helps automate your build, test, and deployment processes.</p>
<p>In our project, GitHub hosts the microservice application code, and GitHub Actions workflows build our Docker images and push them to Docker Hub, which serves as our container registry. We’ll also rely on GitHub Actions for continuous delivery.</p>
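<p>As a rough sketch, a workflow like the repo’s <code>docker-ci.yml</code> can build and push an image whenever a semantic version tag is pushed. This is an illustrative example rather than the repo’s exact file – the image name, secret names, and trigger are placeholders:</p>
<pre><code class="lang-yaml"># .github/workflows/docker-ci.yml (illustrative sketch)
name: docker-ci
on:
  push:
    tags:
      - "v*.*.*"          # semantic version tags trigger a build
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          # tag the image with the pushed Git tag, e.g. v1.0.11
          tags: nitheesh86/microservice-frontend:${{ github.ref_name }}
</code></pre>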
<h3 id="heading-minikube"><strong>Minikube</strong></h3>
<p>We’ll deploy our application and Argo CD locally on Minikube, using Kubernetes namespaces to simulate promotion between separate environments.</p>
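<p>For example, you can pre-create one namespace per environment. The names below mirror the ones used later in this guide, and note that Argo CD can also create them for you via <code>CreateNamespace=true</code>:</p>
<pre><code class="lang-bash"># one namespace per simulated environment
kubectl create namespace front-end-dev
kubectl create namespace front-end-staging
kubectl create namespace front-end-prod
</code></pre>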
<h3 id="heading-argo-cd"><strong>Argo CD</strong></h3>
<p>Argo CD is a declarative GitOps continuous deployment tool for Kubernetes that automates the deployment and synchronization of microservice applications with Git repositories. It follows GitOps principles and uses declarative configurations with a pull-based approach.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748345707461/f7484475-8867-48af-a36e-e97d68683a45.png" alt="ArgoCD Flow" width="1536" height="1024" loading="lazy"></p>
<p>Here’s a summary of the flow depicted in the above image:</p>
<ol>
<li><p>The developer modifies application code and changes are pushed to a Git repository.</p>
</li>
<li><p>The CI pipeline is triggered and builds a new container image and pushes it to a container registry.</p>
</li>
<li><p>Merge triggers a webhook to notify Argo CD of changes in the Git repository.</p>
</li>
<li><p>Argo CD clones the updated Git repository and compares the desired state (from Git) with the current state of the Kubernetes cluster.</p>
</li>
<li><p>Argo CD applies the necessary changes to bring the cluster to the desired state.</p>
</li>
<li><p>Kubernetes controllers reconcile resources until the cluster matches the desired configuration.</p>
</li>
<li><p>Argo CD continuously monitors the application and cluster state.</p>
</li>
<li><p>Argo CD can automatically or manually revert the changes to match the Git configuration, ensuring Git remains the single source of truth.</p>
</li>
</ol>
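<p>If you have the <code>argocd</code> CLI installed, you can watch this loop in action. Here, <code>frontend-dev</code> is the Application name defined later in this guide:</p>
<pre><code class="lang-bash">argocd app get frontend-dev      # show sync status, health, and any drift from Git
argocd app sync frontend-dev     # trigger reconciliation now instead of waiting for polling
argocd app history frontend-dev  # list past sync operations, useful for rollbacks
</code></pre>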
<h3 id="heading-kargo"><strong>Kargo</strong></h3>
<p>Kargo manages promotion by watching repositories (Git, Image, Helm) for changes and making the needed commits to your Git repository, while Argo CD takes care of reconciliation. Kargo is built to simplify multi-stage application promotion using GitOps principles, removing the need for custom automation or CI pipelines.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748373053736/6c015e27-b47b-486a-bb6a-e581b0f29a30.webp" alt="Kargo (Source: Akuity Blog)" width="1011" height="896" loading="lazy"></p>
<h4 id="heading-kargo-components"><strong>Kargo Components</strong></h4>
<ol>
<li><p><strong>Warehouse:</strong> Watches image registries and discovers new container images. Monitors DockerHub for new tags like <code>v1.2.0</code>, <code>v1.2.1</code>, etc., and stores metadata about discovered images.</p>
</li>
<li><p><strong>Stage:</strong> Defines a deployment environment (Dev, Stage, Prod). When a new image is found by the warehouse, it updates the manifest under <code>env/dev/</code> with the new image tag. This triggers Argo CD to sync the <code>dev</code> environment.</p>
</li>
<li><p><strong>PromotionPolicy:</strong> Defines how promotion should happen between stages (for example, auto or manual).</p>
</li>
<li><p><strong>Freight:</strong> An artifact version to be promoted (for example, a specific container image or Helm chart). When <code>v1.2.1</code> is discovered by the warehouse, a new <strong>Freight</strong> is created.</p>
</li>
</ol>
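<p>To make these components concrete, here’s a rough sketch of a Warehouse and a Stage for the frontend service. It follows Kargo’s <code>v1alpha1</code> API, but field names evolve between releases and the <code>craftista</code> namespace is an assumption, so treat it as illustrative and check the Kargo docs for your version:</p>
<pre><code class="lang-yaml">apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: frontend
  namespace: craftista
spec:
  subscriptions:
    - image:
        repoURL: docker.io/nitheesh86/microservice-frontend
        semverConstraint: ">=1.0.0"   # only consider semantic version tags
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: craftista
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: frontend
      sources:
        direct: true                  # dev takes new freight straight from the warehouse
</code></pre>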
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748373806449/00c7a2e5-48af-43b9-b9fc-9463b55c1abb.png" alt="Kargo Components" width="1024" height="1024" loading="lazy"></p>
<h4 id="heading-practical-examples"><strong>Practical Examples</strong></h4>
<ul>
<li><p>A new <code>v1.2.0</code> image is pushed to DockerHub.</p>
</li>
<li><p>Kargo detects it via a <strong>warehouse</strong> and updates the <code>dev</code> environment.</p>
</li>
<li><p>Once verified (either by tests or metrics), Kargo automatically updates Helm values in the Git repo for staging.</p>
</li>
<li><p>Argo CD sees the Git change and syncs the new version to staging.</p>
</li>
<li><p>Manual approval (via Slack or UI) is required to push to production.</p>
</li>
</ul>
<h4 id="heading-why-kargo-is-the-perfect-companion-to-argo-cd"><strong>Why Kargo is the Perfect Companion to Argo CD</strong></h4>
<p>Have you ever had to manually promote versions across environments and wished it were automated? How would integrating Kargo have saved time or prevented errors in your last deployment?</p>
<p>Argo CD excels at GitOps-driven continuous deployment – syncing your Kubernetes cluster with the desired state declared in Git. But it lacks native support for promotion workflows between environments (like dev → staging → production) based on image metadata, test results, or approval gates. This is where Kargo becomes the perfect companion.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748474195349/8e615222-067e-4958-aa8c-19a9f44e4d74.png" alt="Kargo and Argo CD Comparison " width="1536" height="1024" loading="lazy"></p>
<p>Kargo doesn’t replace Argo CD – it extends it. You continue to use Argo CD for syncing and deploying apps, but Kargo adds promotion intelligence and automation.</p>
<h2 id="heading-how-to-structure-repositories-for-microservice-applications"><strong>How to Structure Repositories for Microservice Applications</strong></h2>
<p>My example application consists of 4 microservices (<a target="_blank" href="https://github.com/nitheeshp-irl/microservice-frontend">frontend</a>, <a target="_blank" href="https://github.com/nitheeshp-irl/microservice-recommendation">recommendation</a>, <a target="_blank" href="https://github.com/nitheeshp-irl/microservice-catalogue">catalogues</a>, and <a target="_blank" href="https://github.com/nitheeshp-irl/microservice-voting">voting</a>). Deciding on a repository structure is an important first step in any project, and there is a long-running debate between the monorepo and polyrepo approaches.</p>
<p>A <strong>monorepo</strong> is a unified repository that houses all the code for a project or a set of related projects. It consolidates code from various services, libraries, and applications into a single centralized location.</p>
<p>On the other hand, a <strong>polyrepo</strong> architecture comprises multiple repositories, each containing the code for a distinct service, library, or application component.</p>
<h3 id="heading-why-a-polyrepo-fits-my-microservice-service-app"><strong>Why a Polyrepo Fits My Microservices App</strong></h3>
<p>Imagine you're onboarding a new team to your app. Would you prefer giving them access to an entire monorepo or just the relevant service’s repo? What trade-offs are you willing to accept?</p>
<p>With a polyrepo approach:</p>
<ul>
<li><p>Teams can work independently on the frontend, recommendations, catalogs, and voting without stepping on each other’s toes.</p>
</li>
<li><p>Sensitive services remain locked down without complex directory-level rules.</p>
</li>
<li><p>CI runners operate on a smaller codebase, speeding up checkouts and reducing bandwidth.</p>
</li>
<li><p>Each service has its own release cadence (for example, <code>catalogues</code> v2.1.0 and <code>voting</code> v1.7.3).</p>
</li>
<li><p>As your organization grows, new teams can onboard to only the repos they care about.</p>
</li>
<li><p>Shared libraries can be versioned and published to an internal package registry, then consumed by each service.</p>
</li>
</ul>
<h3 id="heading-git-branching-is-anti-pattern-to-gitops-principles"><strong>Git Branching Is an Anti-Pattern in GitOps</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748376156187/f1fb6cc6-c5d1-4001-a8a1-63353cc03cd7.png" alt="Git Branching AntiPattern" width="1536" height="1024" loading="lazy"></p>
<p>Many teams default to “<a target="_blank" href="https://medium.com/novai-devops-101/understanding-gitflow-a-simple-guide-to-git-branching-strategy-4f079c12edb9"><strong>GitFlow</strong></a>”-style branching – creating long-lived branches for <code>dev</code>, <code>staging</code>, <code>prod</code>, and more. But in a true GitOps workflow, <strong>Git is your control plane</strong>, and “environments” shouldn’t live as branches.</p>
<p>Instead, you can keep things simple with just:</p>
<ul>
<li><p>A long-lived <code>master</code> (or <code>main</code>) branch</p>
</li>
<li><p>Short-lived feature branches for code work</p>
</li>
</ul>
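<p>Day to day, the flow is as simple as this (illustrative commands, with a hypothetical branch name):</p>
<pre><code class="lang-bash">git checkout -b feature/add-voting-endpoint   # short-lived feature branch
# ...make changes, commit...
git push origin feature/add-voting-endpoint
# open a pull request; once it merges to main, CI builds and tags a new image,
# and the GitOps tooling handles environment promotion - no environment branches needed
</code></pre>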
<h2 id="heading-how-to-organize-kubernetes-manifests-for-gitops"><strong>How to Organize Kubernetes Manifests for GitOps</strong></h2>
<p><a target="_blank" href="https://github.com/nitheeshp-irl/microservice-helmcharts">This repo</a> shows how you can keep ArgoCD application manifests, environment-specific values, Kargo promotion tasks, Helm charts for each microservice, and CI/CD workflows all in one place. It is organized so that:</p>
<ol>
<li><p><strong>ArgoCD application manifests</strong> live under <code>argocd/</code>, split by environment (for example, <code>dev/</code>, <code>staging/</code>, <code>prod/</code>).</p>
</li>
<li><p><strong>Environment-specific overrides</strong> (Helm values or Kustomize patches) go under <code>env/</code>.</p>
</li>
<li><p><strong>Kargo promotion configurations</strong> are grouped under <code>kargo/</code>, defining how new images move between environments.</p>
</li>
<li><p><strong>Service Helm charts</strong> reside in <code>service-charts/</code>, one chart per microservice.</p>
</li>
</ol>
<pre><code class="lang-markdown">/microservice-helmcharts/
├── argocd/                # ArgoCD application manifests
│   ├── application/       # Application definitions
│   │   ├── dev/           # Development environment applications
│   │   │   ├── catalogue.yaml
│   │   │   ├── catalogue-db.yaml
│   │   │   ├── frontend.yaml
│   │   │   ├── recommendation.yaml
│   │   │   ├── voting.yaml
│   │   │   └── kustomization.yaml
│   │   ├── staging/       # Staging environment applications
│   │   │   └── [similar structure as dev]
│   │   ├── prod/          # Production environment applications
│   │   │   └── [similar structure as dev]
│   │   └── craftista-project.yaml
│   ├── blog-post.md
│   ├── deployment-guide-blog.md
│   └── repository-structure.md
├── env/                   # Environment-specific configurations
│   ├── dev/               # Development environment values
│   │   ├── catalogue/
│   │   │   └── catalogue-values.yaml
│   │   ├── catalogue-db/
│   │   │   └── catalogue-db-values.yaml
│   │   ├── frontend/
│   │   │   └── frontend-values.yaml
│   │   ├── recommendation/
│   │   │   └── recommendation-values.yaml
│   │   ├── voting/
│   │   │   └── voting-values.yaml
│   │   └── kustomization.yaml
│   ├── staging/           # Similar structure as dev but with image files
│   └── prod/              # Similar structure as staging
├── kargo/                 # Kargo promotion configuration
│   ├── catalogue-config/  # Catalogue service promotion
│   │   ├── catalogue-promotion-tasks.yaml
│   │   ├── catalogue-stages.yaml
│   │   └── catalogue-warehouse.yaml
│   ├── frontend-config/   # Frontend service promotion
│   │   ├── frontend-promotion-tasks.yaml
│   │   ├── frontend-stages.yaml
│   │   └── frontend-warehouse.yaml
│   ├── recommendation-config/ # Recommendation service promotion
│   │   ├── recommendation-promotion-tasks.yaml
│   │   ├── recommendation-stages.yaml
│   │   └── recommendation-warehouse.yaml
│   ├── voting-config/     # Voting service promotion
│   │   ├── voting-promotion-tasks.yaml
│   │   ├── voting-stages.yaml
│   │   └── voting-warehouse.yaml
│   ├── kargo.yaml         # ArgoCD application for Kargo
│   ├── kustomization.yaml # Combines all Kargo resources
│   ├── project.yaml       # Kargo project definition
│   └── projectconfig.yaml # Project-wide promotion policies
├── service-charts/        # Helm charts for each microservice
│   ├── catalogue/         # Catalogue service chart
│   │   ├── templates/
│   │   │   ├── deployment.yaml
│   │   │   └── service.yaml
│   │   ├── Chart.yaml
│   │   └── values.yaml
│   ├── catalogue-db/      # Similar structure as catalogue
│   ├── frontend/          # Similar structure as catalogue
│   ├── recommendation/    # Similar structure as catalogue
│   └── voting/            # Similar structure as catalogue
├── .github/workflows/     # CI/CD workflows
│   └── docker-ci.yml      # Docker image build and push
└── README.md              # Repository documentation
</code></pre>
<h3 id="heading-argo-cd-folders"><strong>Argo CD Folders</strong></h3>
<p>The <code>argocd/</code> directory contains all of the manifests that Argo CD needs in order to track, group, and deploy your microservices. In this guide, we break that directory into two main pieces:</p>
<ol>
<li><p><strong>Argo CD Project Definition</strong></p>
</li>
<li><p><strong>Argo CD Application Manifests (organized by environment)</strong></p>
</li>
</ol>
<h4 id="heading-argocd-projectshttpsargo-cdreadthedocsioenstableuser-guideprojects"><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/"><strong>ArgoCD Projects</strong></a></h4>
<p>Before you can give Argo CD a set of Applications to manage, it’s often best practice to define a “Project.” A Project in Argo CD serves as a logical boundary around a group of Applications. It can control which Git repos those Applications are allowed to reference, which Kubernetes clusters/namespaces they can target, and even which resource kinds they can manage.</p>
<p>In our example repo, the file <code>craftista-project.yaml</code> lives at the top of <code>argocd/</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># argocd/craftista-project.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">AppProject</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">craftista</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-comment"># 1) Which Git repos are we allowed to pull from?</span>
  <span class="hljs-attr">sourceRepos:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"https://github.com/nitheeshp-irl/microservice-helmcharts"</span>
    <span class="hljs-comment"># (Or you could use "*" to allow any repo, but this is less secure.)</span>

  <span class="hljs-comment"># 2) Which clusters/namespaces can these Apps be deployed to?</span>
  <span class="hljs-attr">destinations:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">namespace:</span> <span class="hljs-string">"*"</span>
      <span class="hljs-attr">server:</span> <span class="hljs-string">"*"</span>    <span class="hljs-comment"># Allow deployment to any cluster (for a local Minikube demo, this is fine).</span>

  <span class="hljs-comment"># 3) Which kinds of Kubernetes resources may be created/updated?</span>
  <span class="hljs-comment">#    (For example, we want Pods, Services, Deployments, Ingresses, etc.)</span>
  <span class="hljs-comment">#    Argo CD will reject any manifest containing a disallowed kind.</span>
  <span class="hljs-attr">clusterResourceWhitelist:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">""</span>            <span class="hljs-comment"># core API group (Pods, Services, ConfigMaps, etc.)</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">"apps"</span>        <span class="hljs-comment"># deployments, statefulsets, etc.</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">"networking.k8s.io"</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
    <span class="hljs-comment"># (You can list additional resource kinds as needed.)</span>

  <span class="hljs-comment"># 4) Optional: define role-based access control or sync policies at the project level.</span>
  <span class="hljs-comment">#    (Not shown here, but you could add roles, namespace resource quotas, etc.)</span>
</code></pre>
<h3 id="heading-2-argo-cd-application-manifests-by-environment">2. Argo CD Application Manifests (by Environment)</h3>
<p>Inside <code>argocd/</code>, there is a subdirectory called <code>application/</code>. We use this to keep all of our Argo CD Application YAMLs, broken out by environment. The high-level layout looks like this:</p>
<pre><code class="lang-markdown">argocd/
└── application/
<span class="hljs-code">    ├── dev/            # “Dev” environment Applications
    │   ├── catalogue.yaml
    │   ├── catalogue-db.yaml
    │   ├── frontend.yaml
    │   ├── recommendation.yaml
    │   ├── voting.yaml
    │   └── kustomization.yaml
    ├── staging/        # “Staging” environment Applications (same names/structure as dev/)
    │   └── […]
    └── prod/           # “Prod” environment Applications (same names/structure as dev/)
        └── […]</span>
</code></pre>
<p>Each of those YAML files is a standalone <strong>Argo CD Application</strong>. An Application tells Argo CD:</p>
<ol>
<li><p>Which project it belongs to (in our case, <code>craftista</code>),</p>
</li>
<li><p>Where to find its manifests (a Git repo and path),</p>
</li>
<li><p>Which Kubernetes cluster and namespace to deploy into, and</p>
</li>
<li><p>How to keep itself up to date (that is, sync policies).</p>
</li>
</ol>
<p>Below is an example of the <code>frontend.yaml</code> file for the <strong>dev</strong> environment:</p>
<pre><code class="lang-markdown"># argocd/application/dev/frontend.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend-dev
  namespace: argocd
spec:
  project: craftista

  # 1) Source: Where to find the Helm chart and which values file to use
  source:
<span class="hljs-code">    repoURL: https://github.com/nitheeshp-irl/microservice-helmcharts
    targetRevision: main
    path: service-charts/frontend       # Helm chart folder for the frontend service
    helm:
      valueFiles:
        - ../../env/dev/frontend/frontend-values.yaml
</span>
  # 2) Destination: Which cluster &amp; namespace to deploy into
  destination:
<span class="hljs-code">    server: https://kubernetes.default.svc    # (Assumes Argo CD is running in-cluster)
    namespace: front-end-dev                   # A dedicated namespace for “dev” frontend
</span>
  # 3) Sync Policy: Automate synchronization and enable self-healing
  syncPolicy:
<span class="hljs-code">    automated:
      prune: true          # Delete resources that are no longer in Git
      selfHeal: true       # If someone manually changes live resources, revert to Git state
    syncOptions:
      - CreateNamespace=true  # If the namespace doesn’t exist, Argo CD will create it</span>
</code></pre>
<p>You would repeat a similar pattern under <code>argocd/application/staging/</code> and <code>argocd/application/prod/</code> – each environment has its own <code>frontend.yaml</code>, <code>catalogue.yaml</code>, and so on, but each will point to a different values file under <code>env/staging/…</code> or <code>env/prod/…</code> and likely deploy into a different namespace (for example, <code>front-end-staging</code>, <code>front-end-prod</code>).</p>
<h3 id="heading-env-folders"><strong>Env Folders</strong></h3>
<p>The <code>/env</code> directory is a critical part of our GitOps implementation, containing all environment-specific configuration for our microservices. Each environment (dev, staging, prod) has its own subdirectory with service-specific configuration files. These hold general <strong>Helm chart</strong> values such as resource limits, replica counts, and the container image repository and tag.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span>
  <span class="hljs-attr">repository:</span> <span class="hljs-string">nitheesh86/microservice-frontend</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.11</span>

<span class="hljs-attr">replicaCount:</span> <span class="hljs-number">2</span>

<span class="hljs-attr">resources:</span>
  <span class="hljs-attr">limits:</span>
    <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
  <span class="hljs-attr">requests:</span>
    <span class="hljs-attr">cpu:</span> <span class="hljs-string">"100m"</span>
    <span class="hljs-attr">memory:</span> <span class="hljs-string">"128Mi"</span>
</code></pre>
<h3 id="heading-kargo-folders"><strong>Kargo Folders</strong></h3>
<p>Our Kargo setup is organized in the <code>/kargo</code> directory with several key components:</p>
<pre><code class="lang-markdown">/kargo/
├── catalogue-config/           # Catalogue service promotion configuration
│   ├── catalogue-promotion-tasks.yaml  # Defines how to update catalogue images
│   ├── catalogue-stages.yaml           # Dev, staging, prod stages for catalogue
│   └── catalogue-warehouse.yaml        # Monitors catalogue image repository
├── frontend-config/            # Frontend service promotion configuration
│   ├── frontend-promotion-tasks.yaml   # Defines how to update frontend images
│   ├── frontend-stages.yaml            # Dev, staging, prod stages for frontend
│   └── frontend-warehouse.yaml         # Monitors frontend image repository
├── recommendation-config/      # Recommendation service promotion configuration
│   ├── recommendation-promotion-tasks.yaml  # Image update workflow
│   ├── recommendation-stages.yaml           # Environment stages
│   └── recommendation-warehouse.yaml        # Image monitoring
├── voting-config/              # Voting service promotion configuration
│   ├── voting-promotion-tasks.yaml     # Image update workflow
│   ├── voting-stages.yaml              # Environment stages
│   └── voting-warehouse.yaml           # Image monitoring
├── kargo.yaml                  # ArgoCD application for Kargo installation
├── kustomization.yaml          # This file - combines all resources
├── project.yaml                # Defines the Kargo project
└── projectconfig.yaml          # Project-wide promotion policies
</code></pre>
<p><strong>Stage Configurations:</strong> Kargo uses the concept of "stages" to represent our deployment environments. Each stage defines:</p>
<ul>
<li><p>Which freight (container images) to deploy</p>
</li>
<li><p>The promotion workflow to execute</p>
</li>
<li><p>Environment-specific variables</p>
</li>
</ul>
<p><strong>Warehouse Configuration:</strong> The warehouse monitors our container registry for new images.</p>
<p><strong>Promotion Tasks:</strong> Promotion tasks define the actual workflow for promoting between environments.</p>
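<p>To make these concepts concrete, here’s a rough sketch of what a warehouse and stage pair might look like. The resource names, namespace, and image repository below are illustrative, and exact field names can vary between Kargo versions:</p>
<pre><code class="lang-yaml">apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: frontend            # illustrative name
  namespace: craftista
spec:
  subscriptions:
    - image:
        repoURL: docker.io/example/frontend    # hypothetical registry path
        semverConstraint: ">=1.0.0"            # only pick up semver-tagged images
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: craftista
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: frontend
      sources:
        direct: true        # dev receives freight straight from the warehouse
</code></pre>
<p>The staging and prod stages would instead list the upstream stage as their freight source, which is what creates the dev → staging → prod promotion chain.</p>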
<h2 id="heading-how-to-deploy-and-promote-your-craftista-microservices-application">How to Deploy and Promote Your Craftista Microservices Application</h2>
<p>Now I'll explain how to deploy your Craftista microservices application using Argo CD.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748558624685/9f0d5725-cc8a-4e37-851c-d4ab2870bafc.png" alt="ArgoCD Dashboard" class="image--center mx-auto" width="3296" height="1456" loading="lazy"></p>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<ul>
<li><p><strong>A local Kubernetes cluster</strong>: We’ll use Minikube for local development.</p>
</li>
<li><p><strong>kubectl and helm</strong>: Ensure both are installed and configured.</p>
</li>
<li><p><strong>Git Clone of the microservice-helmcharts Repo</strong>:</p>
<pre><code class="lang-bash">  git <span class="hljs-built_in">clone</span> https://github.com/nitheeshp-irl/microservice-helmcharts.git
  <span class="hljs-built_in">cd</span> microservice-helmcharts
</code></pre>
</li>
</ul>
<h3 id="heading-1-start-minikube"><strong>1. Start Minikube</strong></h3>
<p>Start Minikube with enough memory and CPU for the demo stack:</p>
<pre><code class="lang-bash">minikube start --memory=4096 --cpus=2
kubectl config use-context minikube
</code></pre>
<p>Adjust <code>--memory</code> and <code>--cpus</code> as needed for your machine.</p>
<h3 id="heading-2-install-argo-cd"><strong>2. Install Argo CD</strong></h3>
<p>Create a namespace:</p>
<pre><code class="lang-bash">kubectl create namespace argocd
</code></pre>
<p>Apply the official install manifest:</p>
<pre><code class="lang-bash">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<h3 id="heading-3-access-the-argo-cd-ui"><strong>3. Access the Argo CD UI</strong></h3>
<p>Port-forward the server:</p>
<pre><code class="lang-bash">kubectl port-forward svc/argocd-server -n argocd 8080:443
</code></pre>
<p><strong>Login</strong>:</p>
<ul>
<li><p><strong>Username</strong>: admin</p>
</li>
<li><p><strong>Password</strong>:</p>
<pre><code class="lang-bash">  kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=<span class="hljs-string">"{.data.password}"</span> | base64 -d
</code></pre>
</li>
</ul>
<p>Open your browser at <a target="_blank" href="http://localhost:8080/"><strong>http://localhost:8080</strong></a>.</p>
<h3 id="heading-4-define-a-craftista-argo-cd-project"><strong>4. Define a “Craftista” Argo CD Project</strong></h3>
<p>Scope Repos, Clusters, and Namespaces:</p>
<pre><code class="lang-bash">kubectl apply -f argocd/application/craftista-project.yaml
</code></pre>
<p>You should see:</p>
<pre><code class="lang-bash">project.argoproj.io/craftista created
</code></pre>
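<p>The applied manifest defines an Argo CD <code>AppProject</code>. Here’s a minimal sketch of what such a project definition typically contains (the repo URL and destination scoping are assumptions, not the exact contents of <code>craftista-project.yaml</code>):</p>
<pre><code class="lang-yaml">apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: craftista
  namespace: argocd
spec:
  description: Craftista microservices
  sourceRepos:
    - https://github.com/nitheeshp-irl/microservice-helmcharts.git
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "*"        # or restrict to the dev/staging/prod namespaces
</code></pre>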
<h3 id="heading-5-deploy-the-development-environment"><strong>5. Deploy the Development Environment</strong></h3>
<p>Create Argo CD applications:</p>
<pre><code class="lang-bash">kubectl apply -f argocd/application/dev/
</code></pre>
<p>Argo CD will:</p>
<ul>
<li><p>Clone the microservice-helmcharts repo.</p>
</li>
<li><p>Render each Helm chart with its <code>env/dev/*-values.yaml</code>.</p>
</li>
<li><p>Create Deployment, Service, and so on in your dev namespaces.</p>
</li>
<li><p>Continuously reconcile desired vs. actual state.</p>
</li>
</ul>
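<p>Each manifest in <code>argocd/application/dev/</code> is an Argo CD <code>Application</code> along these lines. The chart path and value-file path below are illustrative, since the actual repo layout may differ:</p>
<pre><code class="lang-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend-dev
  namespace: argocd
spec:
  project: craftista
  source:
    repoURL: https://github.com/nitheeshp-irl/microservice-helmcharts.git
    targetRevision: main
    path: charts/frontend                     # illustrative chart path
    helm:
      valueFiles:
        - ../../env/dev/frontend/frontend-values.yaml   # relative to the chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
</code></pre>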
<p>Monitor your progress:</p>
<pre><code class="lang-bash">argocd app list
argocd app get frontend-dev
</code></pre>
<h3 id="heading-6-manual-promotion-staging-amp-prod"><strong>6. Manual Promotion (Staging &amp; Prod)</strong></h3>
<p>Edit the image tag or other values:</p>
<ul>
<li><p><code>env/staging/&lt;service&gt;/&lt;service&gt;-values.yaml</code></p>
</li>
<li><p><code>env/prod/&lt;service&gt;/&lt;service&gt;-values.yaml</code></p>
</li>
</ul>
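<p>The edit itself is usually just an image tag bump in the values file, something like this (the exact keys depend on the chart):</p>
<pre><code class="lang-yaml">image:
  repository: docker.io/example/frontend   # hypothetical registry path
  tag: v1.2.0                              # the version being promoted
</code></pre>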
<p>Commit and push the changes:</p>
<pre><code class="lang-bash">git add env/staging env/prod
git commit -m <span class="hljs-string">"Promote v1.2.0 → staging &amp; prod"</span>
git push
</code></pre>
<p>Argo CD will detect the Git change and automatically sync your staging and prod applications (if automated sync is enabled).</p>
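<p>Automated sync is opt-in per application. In the <code>Application</code> spec, it’s enabled like this:</p>
<pre><code class="lang-yaml">syncPolicy:
  automated:
    prune: true      # delete resources that were removed from Git
    selfHeal: true   # revert manual changes made directly in the cluster
</code></pre>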
<h3 id="heading-7-automated-promotion-with-kargo"><strong>7. Automated Promotion with Kargo</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748689354529/3759ec0c-7db4-42a8-9f01-f0792dfec895.png" alt="Kargo DashBoard" class="image--center mx-auto" width="3604" height="1998" loading="lazy"></p>
<p>First, install Kargo:</p>
<pre><code class="lang-bash">kubectl apply -f kargo/kargo.yaml
</code></pre>
<p>Configure the promotion tasks, stages, and warehouses:</p>
<pre><code class="lang-bash">kubectl apply -k kargo/
</code></pre>
<p>The <code>kargo/kustomization.yaml</code> file ties all of these resources together:</p>
<pre><code class="lang-yaml">apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - project.yaml
  - projectconfig.yaml
  - catalogue-config/catalogue-warehouse.yaml
  - catalogue-config/catalogue-stages.yaml
  - catalogue-config/catalogue-promotion-tasks.yaml
  - frontend-config/frontend-warehouse.yaml
  - frontend-config/frontend-stages.yaml
  - frontend-config/frontend-promotion-tasks.yaml
  - recommendation-config/recommendation-warehouse.yaml
  - recommendation-config/recommendation-stages.yaml
  - recommendation-config/recommendation-promotion-tasks.yaml
  - voting-config/voting-warehouse.yaml
  - voting-config/voting-stages.yaml
  - voting-config/voting-promotion-tasks.yaml
</code></pre>
<h2 id="heading-how-the-gitops-pipeline-works">How the GitOps Pipeline Works</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748885521249/285bcaed-447d-4a31-87cb-98b531d9cb0d.png" alt="285bcaed-447d-4a31-87cb-98b531d9cb0d" class="image--center mx-auto" width="1024" height="1536" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748557162526/f4716cfc-a71f-4ddc-bc58-7a42118c3190.png" alt="f4716cfc-a71f-4ddc-bc58-7a42118c3190" class="image--center mx-auto" width="3840" height="393" loading="lazy"></p>
<ol>
<li><strong>Developer Opens a Pull Request</strong>: The journey begins when a developer opens a pull request on one of the microservice repos. This signals that new code (feature, bugfix, config change) is ready to be integrated.</li>
</ol>
<ol start="2">
<li><p><strong>CI (GitHub Actions)</strong></p>
<ul>
<li><p><strong>CI: Lint → Test → Build &amp; Tag</strong>: A single workflow job lints the code, runs unit/integration tests, builds the Docker image, and applies a semantic tag (for example, v1.2.0).</p>
</li>
<li><p><strong>CI OK? (Decision)</strong>:</p>
<ul>
<li><p>If <strong>No</strong>, the pipeline stops and the developer is notified to fix errors.</p>
</li>
<li><p>If <strong>Yes</strong>, the newly built image is pushed to the container registry (DockerHub, ECR, and so on).</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Kargo</strong></p>
<ul>
<li><p><strong>Warehouse discovers new image tag</strong>: Kargo’s Warehouse component continuously watches your registry. As soon as it sees the new tag, it records that image metadata.</p>
</li>
<li><p><strong>Update env/dev values → Git</strong>: Kargo automatically commits an update to <code>env/dev/&lt;service&gt;/…-values.yaml</code>, pointing the dev Helm values file to the new image tag. This Git commit will drive the next step.</p>
</li>
</ul>
</li>
<li><p><strong>GitOps (Argo CD)</strong></p>
<ul>
<li><p><strong>Argo CD sync dev</strong>: Argo CD sees the Git change in the dev values file and pulls it into the cluster, reconciling the actual dev namespace with the desired state.</p>
</li>
<li><p><strong>Dev deployment healthy? (Decision)</strong>:</p>
<ul>
<li><p>If <strong>No</strong>, Argo CD can optionally roll back and notify the team (via Slack, email, and so on) of the failed dev rollout.</p>
</li>
<li><p>If <strong>Yes</strong>, it’s time to promote to staging.</p>
</li>
</ul>
</li>
<li><p><strong>Update env/staging values → Git</strong>: Kargo (or you, if promoting manually) commits the same image tag into <code>env/staging/&lt;service&gt;/…-values.yaml</code>.</p>
</li>
<li><p><strong>Argo CD sync staging</strong>: Argo CD deploys that change to the staging namespace.</p>
</li>
<li><p><strong>Staging approval granted? (Decision)</strong>:</p>
<ul>
<li><p>If <strong>No</strong>, Kargo waits (and optionally notifies) until the manual gate is lifted.</p>
</li>
<li><p>If <strong>Yes</strong>, the final promotion commit is made, updating <code>env/prod/&lt;service&gt;/…-values.yaml</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Argo CD sync prod → End</strong>: Argo CD applies the production change, completing the pipeline from commit all the way to live production rollout.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-pipeline-summary"><strong>Pipeline Summary</strong></h3>
<ol>
<li><p>Developer opens PR → CI tests and builds → Docker image pushed</p>
</li>
<li><p>Kargo Warehouse detects new tag → Git commit to <code>env/dev</code></p>
</li>
<li><p>Argo CD syncs dev → Health check → (if successful) commit to <code>env/staging</code></p>
</li>
<li><p>Argo CD syncs staging → Approval → commit to <code>env/prod</code></p>
</li>
<li><p>Argo CD syncs prod → Live deployment complete</p>
</li>
</ol>
<p>Every stage must pass its health or approval check before the next begins, ensuring that only thoroughly tested and validated code makes it into production.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building a real-world CI/CD pipeline isn’t just about getting code from your laptop into a Kubernetes cluster – it’s about creating a repeatable, auditable, and reliable system that scales with your team and your application complexity.</p>
<p>In this guide, we walked through how I built a complete GitOps-based promotion pipeline using GitHub Actions, Argo CD, and Kargo, all driven by a hands-on microservices project: Craftista. From the first code commit to automated environment promotion, we leveraged industry best practices like semantic versioning, declarative infrastructure, and environment-based GitOps directories.</p>
<p>What makes this approach powerful is not just the tools but also the principles. By treating Git as the single source of truth, and using Kargo to automate what was traditionally a manual and fragile promotion process, we gain predictability and control over our deployments. Argo CD ensures that what’s in Git is always what’s running in our clusters, while Kargo eliminates human error in multi-stage rollouts.</p>
<p>If you’re tired of overly abstract “hello world” DevOps tutorials and want to get your hands dirty with something that feels <strong>real</strong>, Craftista offers the perfect sandbox. This pipeline reflects how teams operate in production – polyglot services, independent deployments, environment promotion gates, and GitOps as the operational backbone.</p>
<p>Whether you're a DevOps engineer sharpening your skills, or a platform team setting standards for internal development, I hope this tutorial provided the clarity and inspiration to build your own commit-to-production pipeline – step by step, with confidence.</p>
<h3 id="heading-further-reading-amp-resources">Further Reading &amp; Resources</h3>
<ul>
<li><p><a target="_blank" href="https://argo-cd.readthedocs.io/">Argo CD Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.kargo.io/">Kargo Docs</a></p>
</li>
<li><p><a target="_blank" href="https://docs.github.com/en/actions">GitHub Actions Docs</a></p>
</li>
<li><p><a target="_blank" href="https://codefresh.io/blog/how-to-model-your-gitops-environments-and-promote-releases-between-them/">How to Model Your GitOps Environments and Promote Releases between Them</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/craftista/craftista">Craftista Repo</a></p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Simplify AWS Multi-Account Management with Terraform and GitOps ]]>
                </title>
                <description>
                    <![CDATA[ In the past, in the world of cloud computing, a company's journey often began with a single AWS account. In this unified space, development and testing environments coexisted, while the production environment lived in a separate account. This arrange... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/simplify-aws-multi-account-management-with-terraform-and-gitops/</link>
                <guid isPermaLink="false">6745e19265a8ceed4a65c3eb</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Terraform ]]>
                    </category>
                
                    <category>
                        <![CDATA[ gitops ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Tue, 26 Nov 2024 14:56:18 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730239127065/317aa4dd-aba9-4a9e-8abb-7cacfbd0e672.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In the past, in the world of cloud computing, a company's journey often began with a single AWS account. In this unified space, development and testing environments coexisted, while the production environment lived in a separate account.</p>
<p>This arrangement might work well in early days, but as a company grows and their needs become more specialized, the simplicity of a single account might start to show its limitations. The demand for dedicated environments will start to increase, and soon, that company may need to create new AWS accounts for specific functions like security, DevOps, and billing.</p>
<p>With each new account, the complexity of managing security policies and logging across the entire infrastructure grows exponentially. The cloud architects for these companies will then realize that they need a more centralized and streamlined approach to manage this expanding digital presence.</p>
<h3 id="heading-enter-aws-organizations">Enter AWS Organizations</h3>
<p>AWS Organizations is a service designed to streamline AWS account management. This powerful tool allows you to group multiple AWS accounts under a single umbrella. With AWS Organizations, you can easily create organizational units, apply service control policies, and manage permissions across all accounts. This not only simplifies the process but also enhances security and compliance.</p>
<p>The billing processes of AWS Organizations have also been optimized through the centralization of payments and the generation of comprehensive expense reports for each account. This improved clarity in financial management makes it easier for companies to allocate resources in a more efficient manner and strategize for future expansion.</p>
<p>AWS Organizations can help your team consistently enforce security policies, enable logging across all accounts, and streamline administrative tasks. Cloud infrastructure is now a well-organized, secure, and efficient machine, ready to support a company's ambitions for years to come.</p>
<p>In this article, we’ll discuss what it means to have a multi-account setup and how it works. I’ll walk you through everything from the deployment architecture to creating an Organizational Unit and beyond.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-components-of-multi-account-setup">Components of Multi-Account Setup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-automate-a-multi-account-strategy">How to Automate a Multi-Account Strategy</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-aws-organization-structure">AWS Organization Structure</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deployment-architecture">Deployment Architecture</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-overview-of-cicd-components">Overview of CI/CD Components</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-cicd-deployment-process-explained">CI/CD Deployment Process Explained</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-automate-landing-zone-creation">How to Automate Landing Zone Creation</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-create-an-organizational-unit">How to Create an Organizational Unit</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-automate-attaching-control-tower-control-to-the-ou">How to Automate Attaching Control Tower Control to the OU</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-components-of-multi-account-setup"><strong>Components of Multi-Account Setup</strong></h2>
<p>First, let's take a detailed look at the various components that make up an AWS multi-account strategy:</p>
<ul>
<li><p><strong>AWS Control Tower</strong></p>
</li>
<li><p><strong>Landing zone</strong></p>
</li>
<li><p><strong>AWS OU</strong></p>
</li>
<li><p><strong>AWS SSO</strong></p>
</li>
<li><p><strong>Control Tower Controls</strong></p>
</li>
<li><p><strong>Service control policies (SCPs)</strong></p>
</li>
</ul>
<h3 id="heading-what-is-aws-control-tower"><strong>What is AWS Control Tower?</strong></h3>
<p>AWS Control Tower is a comprehensive service that enables you to set up and manage a multi-account AWS environment efficiently. It’s designed based on best practices from AWS experts and adheres to industry standards and requirements.</p>
<p>By using AWS Control Tower, you can ensure that your AWS environment is secure, compliant, and well-organized, facilitating easier management and scalability.</p>
<h4 id="heading-features-of-aws-control-tower">Features of AWS Control Tower:</h4>
<ul>
<li><p>Distributed teams can create new AWS accounts quickly, while cloud IT can be confident that all accounts comply with company-wide regulations.</p>
</li>
<li><p>You can enforce best practices, standards, and regulatory requirements with preconfigured controls.</p>
</li>
<li><p>You can automate your AWS environment setup with best-practice blueprints. These blueprints cover various aspects such as multi-account structure, identity and access management, as well as account provisioning workflow.</p>
</li>
<li><p>It lets you govern new or existing account configurations, gain visibility into compliance status, and enforce controls at scale.</p>
</li>
</ul>
<h3 id="heading-what-is-a-landing-zone-in-aws"><strong>What is a Landing Zone in AWS?</strong></h3>
<p>A landing zone helps you quickly set up a cloud environment using automation, including preconfigured settings that follow industry best practices for ensuring the security of your AWS accounts.</p>
<p>This landing zone serves as a foundation for your company to launch workloads and applications efficiently, on a secure and reliable infrastructure.</p>
<p>There are two choices for creating a landing zone. First, you can use the AWS Control Tower dashboard. Second, you can build a custom landing zone. If you are new to AWS, I recommend using AWS Control Tower to create a landing zone.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137800622/f72dbf02-fa34-4004-999d-71a2af33f90b.png" alt="AWS Landing Zone" class="image--center mx-auto" width="1400" height="728" loading="lazy"></p>
<p>If you opt for creating a landing zone via the Control Tower dashboard, the following will be implemented in your landing zone:</p>
<ul>
<li><p>A multi-account environment with AWS organizations.</p>
</li>
<li><p>Identity management through the default directory in AWS IAM Identity Center.</p>
</li>
<li><p>Federated access to accounts using IAM Identity Center.</p>
</li>
<li><p>Centralized logging from AWS CloudTrail and AWS Config stored in Amazon Simple Storage Service (Amazon S3).</p>
</li>
<li><p>Enabled cross-account <a target="_blank" href="https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html">security audits</a> using IAM Identity Center.</p>
</li>
</ul>
<h3 id="heading-what-is-an-aws-organizational-unit"><strong>What is an AWS Organizational Unit?</strong></h3>
<p>Using multiple accounts allows you to better support your security goals and company operations.</p>
<p>AWS Organizations enables policy-based management of multiple AWS accounts. When you create new accounts, you can arrange them in organizational units (OUs), which are groupings of accounts that provide the same application or service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137833615/6eebb3ab-94d0-4286-8dc4-9d3ae297e186.png" alt="AWS Organizational Units" class="image--center mx-auto" width="786" height="496" loading="lazy"></p>
<h4 id="heading-advantages-of-using-ous">Advantages of Using OUs:</h4>
<ul>
<li><p>Accounts are units of security protection. Potential hazards and security threats can be contained within one account without affecting others.</p>
</li>
<li><p>Teams have different assignments and resource needs. Setting up different accounts prevents teams from interfering with one another, as they might do if they used the same account.</p>
</li>
<li><p>Isolating data stores to an account reduces the number of people who have access to and can manage the data store.</p>
</li>
<li><p>The multi-account concept allows you to generate separate billable items for business divisions, functional teams, or individual users.</p>
</li>
<li><p>AWS quotas are set up per account. Separating workloads into different accounts gives each account an individual quota.</p>
</li>
</ul>
<h3 id="heading-what-is-aws-iam-identity-center"><strong>What is AWS IAM Identity Center?</strong></h3>
<p>The AWS IAM Identity Center provides a centralized solution for managing access to multiple AWS accounts and business applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137875918/349673f8-1a09-4bcc-b1db-6a898d3d06b5.png" alt="AWS identity center" class="image--center mx-auto" width="1100" height="631" loading="lazy"></p>
<p>This method offers a single sign-on feature that allows employees to access all assigned accounts and applications from a single credential.</p>
<p>The personalized web user portal provides a centralized view of the user's assigned roles in AWS accounts.</p>
<p>For a uniform authentication experience, users can sign in using the AWS Command Line Interface, AWS SDKs, or the AWS Console Mobile Application with their directory credentials.</p>
<p>You can also set up and oversee user IDs in IAM Identity Center's identity store, or you can connect to your existing identity provider, such as Microsoft Active Directory, Okta, and so on.</p>
<h3 id="heading-control-tower-controls-guardrails"><strong>Control Tower Controls (Guardrails)</strong></h3>
<p>Controls are predefined governance rules for security, operations, and compliance. You can select and apply them enterprise-wide or to specific groups of accounts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137911519/5dac3db6-15e6-476b-9b50-a1597a02fe84.png" alt="ControlTowerControls" class="image--center mx-auto" width="1322" height="843" loading="lazy"></p>
<p>Controls can be detective, preventive, or proactive and can be either mandatory or optional.</p>
<ul>
<li><p>First, we have detective controls (for example, detecting whether public read access to Amazon S3 buckets is allowed).</p>
</li>
<li><p>Next, preventive controls establish intent and prevent deployment of resources that don’t conform to your policies (for example, enabling AWS CloudTrail in all accounts).</p>
</li>
<li><p>Finally, proactive control capabilities use <a target="_blank" href="https://aws.amazon.com/blogs/mt/proactively-keep-resources-secure-and-compliant-with-aws-cloudformation-hooks/">AWS CloudFormation Hooks</a> to proactively identify and block the CloudFormation deployment of resources that are not compliant with the controls you have enabled. For example, developers cannot create S3 buckets that are capable of storing data in an unencrypted state at rest.</p>
</li>
</ul>
<h3 id="heading-service-control-policies-scp"><strong>Service Control Policies (SCP)</strong></h3>
<p>SCPs are a feature of the organization that allows you to set the maximum permissions for member accounts within the organization.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137972306/80d0782c-0801-4548-9c0c-d4a11d43ecbe.png" alt="Service Control Policies" class="image--center mx-auto" width="1036" height="658" loading="lazy"></p>
<p>There are many functions and features of an SCP:</p>
<ul>
<li><p>If an SCP denies an action on an account, no entity in the account can perform that action, even if its IAM permissions allow it.</p>
</li>
<li><p>Prevents stopping or deletion of CloudTrail logging.</p>
</li>
<li><p>Prevents deletion of VPC flow logs.</p>
</li>
<li><p>Prohibits AWS accounts from leaving the organization.</p>
</li>
<li><p>Prevents AWS GuardDuty changes.</p>
</li>
<li><p>Prevents resource sharing using AWS Resource Access Manager (RAM) either externally or across environments.</p>
</li>
<li><p>Prevents disabling the default Amazon EBS encryption.</p>
</li>
<li><p>Prevents Amazon S3 unencrypted object uploads.</p>
</li>
<li><p>And prevents IAM users and roles in the affected accounts from creating certain resource types if the request doesn't include the specified tags.</p>
</li>
</ul>
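<p>For instance, the CloudTrail protection above can be expressed as a short deny statement. This is a minimal sketch, not a production-ready policy:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProtectCloudTrail",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}
</code></pre>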
<h2 id="heading-how-to-automate-a-multi-account-strategy"><strong>How to Automate a Multi-Account Strategy</strong></h2>
<p>Now that you’re familiar with the key concepts of a Multi-Account Strategy in AWS, let’s dive deeper into the practical parts.</p>
<p>In the coming subsections, we’ll cover how you can set up an AWS Control Tower, create a landing zone, and automatically create organizational units (OUs). I’ll also walk you through how to configure Control Tower controls—often known as guardrails—to uphold security, compliance, and governance over your AWS environment.</p>
<p>Once we finish this deployment, we will have a solution that includes the following components:</p>
<ul>
<li><p>Creates an AWS Organizations OU named Core within the organizational root structure.</p>
</li>
<li><p>Creates and adds two shared accounts to the Security OU: the Log Archive account and the Audit account.</p>
</li>
<li><p>Creates a cloud-native directory in IAM Identity Center, with ready-made groups and single sign-on access.</p>
</li>
<li><p>Applies all required preventive controls to enforce policies.</p>
</li>
<li><p>Applies required detective controls to identify configuration violations.</p>
</li>
</ul>
<h2 id="heading-aws-organization-structure"><strong>AWS Organization Structure</strong></h2>
<p>We will create and implement the following organizational structure. You can add or modify OUs as per your requirements.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732138006995/423e54cd-bf74-4aef-a2a3-d52294482ca0.png" alt="AWS Organization Structure" class="image--center mx-auto" width="1295" height="694" loading="lazy"></p>
<h2 id="heading-deployment-architecture"><strong>Deployment Architecture</strong></h2>
<p>I will be using Terraform Cloud and GitHub Actions for automating the entire process. This architecture applies to all three components, including core accounts, landing zones, and organizational unit (OU) creation and controls.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732138041912/0cba5af0-69ea-4ae9-986e-c1608d3d5c21.avif" alt="Deployment Architecture" width="1888" height="518" loading="lazy"></p>
<h3 id="heading-overview-of-cicd-components">Overview of CI/CD Components</h3>
<h4 id="heading-1-github-actions"><strong>1. GitHub Actions</strong></h4>
<p>GitHub Actions is a CI/CD platform that lets you automate your build, test, and deployment pipeline. You can create workflows that automatically build and test every pull request to your repository, ensuring code changes are verified before merging.</p>
<p>GitHub Actions also lets you deploy merged pull requests to production, streamlining the release process and reducing errors.</p>
<p>Using GitHub Actions enhances your development workflow, improves code quality, and speeds up the delivery of new features and updates.</p>
<h4 id="heading-2-terraform-cloud"><strong>2. Terraform Cloud</strong></h4>
<p>Terraform Cloud is a platform by HashiCorp for managing and executing your Terraform code. It offers tools and features that enhance collaboration between developers and DevOps engineers, making teamwork more efficient.</p>
<p>With Terraform Cloud, you can simplify and streamline your workflow, making it easier to handle complex infrastructure tasks and deployments. The platform also provides strong security features to protect your code and infrastructure, keeping your product secure throughout its lifecycle.</p>
<h3 id="heading-cicd-deployment-process-explained"><strong>CI/CD Deployment Process Explained</strong></h3>
<p>DevOps engineers are responsible for writing the Terraform code and then creating a pull request. I have added several test cases for my Terraform code in the <code>terraform-plan.yml</code> file, which runs only on the feature branch.</p>
<ul>
<li><p><strong>Check environment variables:</strong> Ensures all required environment variables are set.</p>
</li>
<li><p><strong>Checkout Code:</strong> Uses the <code>actions/checkout</code> action to check out the repository.</p>
</li>
<li><p><strong>Verify Checkout:</strong> Verifies that the checkout was successful.</p>
</li>
<li><p><strong>Validation:</strong> Checks the Terraform code for syntax errors.</p>
</li>
</ul>
<p>Pull requests contain the proposed changes, allowing team members to review them before merging into the master branch. Once a pull request is merged, all test cases are rerun and the landing zone is created through Terraform Cloud.</p>
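<p>The exact contents of <code>terraform-plan.yml</code> aren’t reproduced here, but a workflow covering these steps might look roughly like this (the action versions and trigger are assumptions):</p>
<pre><code class="lang-yaml">name: terraform-plan
on:
  pull_request:        # runs against feature-branch pull requests

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Validate Terraform code
        run: |
          terraform init -backend=false
          terraform validate
</code></pre>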
<h2 id="heading-what-to-know-before-setting-up-control-tower"><strong>What to Know Before Setting up Control Tower</strong></h2>
<p>Before setting up AWS Control Tower, it’s important to understand the limitations that come with it and keep a few key points in mind.</p>
<ul>
<li><p>When setting up a landing zone, it is important to choose your home region. Once you have made a selection, you won’t be able to change your home region.</p>
</li>
<li><p>If you intend to set up Control Tower on an existing AWS account that is already part of an existing organizational unit (OU), you won’t be able to use it. To proceed, you’ll need to create a new AWS account that isn’t associated with any organizational unit (OU).</p>
</li>
<li><p>As part of the Control Tower creation process, you’ll need to create mandatory accounts such as the Log Archive Account and Audit Accounts. Account-specific emails are required.</p>
</li>
<li><p>In order to set up the Landing Zone in the Management Account, it is essential to ensure that you have subscribed to the following services in the management account:</p>
<ul>
<li>S3, EC2, SNS, VPC, CloudFormation, CloudTrail, CloudWatch, AWS Config, IAM, AWS Lambda</li>
</ul>
</li>
<li><p>The AWS Control Tower baseline covers only a few services with limited customization options: IAM Identity Center, CloudTrail, Config, some configuration rules, and some SCPs in AWS Organizations.</p>
</li>
<li><p>Implementing IAM Identity Center is limited to the management account of an organization.</p>
</li>
<li><p>AWS Control Tower implements concurrency limitations, allowing only one operation to be performed at a time.</p>
</li>
<li><p>Note that certain AWS Regions do not support some Control Tower controls, because those Regions lack the underlying functionality those controls require.</p>
</li>
</ul>
<h3 id="heading-how-to-create-a-control-tower"><strong>How to Create a Control Tower</strong></h3>
<p>Creating a Control Tower means setting up a landing zone. An AWS landing zone requires creating two new member accounts: the Audit account and the Log Archive account. You will need two unique email addresses for these accounts.</p>
<p>We will manage this process using Terraform modules. To keep things simple and clear, we will divide the project into several modules. One module will create the two core accounts. Another module will handle the setup of the landing zone. The final module will create Organizational Units (OUs) and apply Control Tower controls to ensure governance and compliance.</p>
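<p>Under these assumptions about naming (the exact layout in the linked repositories may differ), the project could be organized like this:</p>
<pre><code>aws-landing-zone/               # root configuration
├── main.tf                     # calls the core-accounts and landing-zone modules
├── variables.tf
└── variables.auto.tfvars
terraform_modules/
├── aws_core_accounts_module/   # creates the Audit and Log Archive accounts
├── aws_landingzone_module/     # sets up the landing zone
└── aws_org_module/             # creates OUs and attaches controls
</code></pre>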
<h2 id="heading-how-to-automate-landing-zone-creation"><strong>How to Automate Landing Zone Creation</strong></h2>
<p>When you run this code, the Core OU is created, along with two accounts under it. Each component has two repositories: one that deploys the AWS resources (the landing zone, OUs, and Control Tower controls) and another that holds the Terraform module.</p>
<p>A <em>Terraform module</em> is a set of standard configuration files in a specific directory. Terraform modules group resources for a specific task, which reduces the amount of code needed for similar infrastructure components.</p>
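<p>As a minimal illustration (the file path, names, and IDs here are hypothetical, not taken from the linked repositories), a module is just a directory of <code>.tf</code> files that callers reference with a <code>module</code> block:</p>
<pre><code class="lang-hcl"># modules/org_unit/main.tf — a minimal, hypothetical module
variable "unit_name" { type = string }
variable "parent_id" { type = string }

resource "aws_organizations_organizational_unit" "this" {
  name      = var.unit_name
  parent_id = var.parent_id
}

output "ou_id" {
  value = aws_organizations_organizational_unit.this.id
}

# main.tf in the root configuration — calling the module
module "apps_ou" {
  source    = "./modules/org_unit"
  unit_name = "apps"
  parent_id = "ou-xxxx-xxxxxxxx" # placeholder parent OU or root ID
}
</code></pre>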
<p>I have imported both the core account creation and landing zone creation modules into the same <a target="_blank" href="https://github.com/nitheeshp-irl/aws-landing-zone/blob/main/main.tf"><code>main.tf</code></a> file. This is necessary because the landing zone creation depends on the core account module. Including them together ensures all dependencies are managed properly and the deployment process is efficient.</p>
<p>This method also simplifies the project structure and helps avoid potential issues from managing these components separately.</p>
<p>The AWS Control Tower <a target="_blank" href="https://docs.aws.amazon.com/controltower/latest/APIReference/API_CreateLandingZone.html"><code>CreateLandingZone</code></a> API needs a landing zone version and a manifest file as input parameters. Below is an example <strong>LandingZoneManifest.json</strong> manifest.</p>
<pre><code class="lang-json">{
   <span class="hljs-attr">"governedRegions"</span>: [<span class="hljs-string">"us-west-2"</span>,<span class="hljs-string">"us-west-1"</span>],
   <span class="hljs-attr">"organizationStructure"</span>: {
       <span class="hljs-attr">"security"</span>: {
           <span class="hljs-attr">"name"</span>: <span class="hljs-string">"CORE"</span>
       },
       <span class="hljs-attr">"sandbox"</span>: {
           <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Sandbox"</span>
       }
   },
   <span class="hljs-attr">"centralizedLogging"</span>: {
        <span class="hljs-attr">"accountId"</span>: <span class="hljs-string">"222222222222"</span>,
        <span class="hljs-attr">"configurations"</span>: {
            <span class="hljs-attr">"loggingBucket"</span>: {
                <span class="hljs-attr">"retentionDays"</span>: <span class="hljs-number">60</span>
            },
            <span class="hljs-attr">"accessLoggingBucket"</span>: {
                <span class="hljs-attr">"retentionDays"</span>: <span class="hljs-number">60</span>
            },
            <span class="hljs-attr">"kmsKeyArn"</span>: <span class="hljs-string">"arn:aws:kms:us-west-1:123456789123:key/e84XXXXX-6bXX-49XX-9eXX-ecfXXXXXXXXX"</span>
        },
        <span class="hljs-attr">"enabled"</span>: <span class="hljs-literal">true</span>
   },
   <span class="hljs-attr">"securityRoles"</span>: {
        <span class="hljs-attr">"accountId"</span>: <span class="hljs-string">"333333333333"</span>
   },
   <span class="hljs-attr">"accessManagement"</span>: {
        <span class="hljs-attr">"enabled"</span>: <span class="hljs-literal">true</span>
   }
}
</code></pre>
<p>This module sets up the AWS landing zone using <code>landingzone_manifest_template</code>. The landing zone version and admin account ID are given through variables. This module also creates several IAM roles required for the landing zone setup.</p>
<p>I defined a local variable <code>landingzone_manifest_template</code>, which is a JSON template for setting up the landing zone. This JSON template has several important settings:</p>
<pre><code class="lang-hcl"><span class="hljs-string">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>
}

<span class="hljs-string">locals</span> {
  <span class="hljs-string">landingzone_manifest_template</span> <span class="hljs-string">=</span> <span class="hljs-string">&lt;&lt;EOF</span>
{
    <span class="hljs-attr">"governedRegions":</span> <span class="hljs-string">$</span>{<span class="hljs-string">jsonencode(var.governed_regions)</span>},
    <span class="hljs-attr">"organizationStructure":</span> {
        <span class="hljs-attr">"security":</span> {
            <span class="hljs-attr">"name":</span> <span class="hljs-string">"Core"</span>
        }
    },
    <span class="hljs-attr">"centralizedLogging":</span> {
         <span class="hljs-attr">"accountId":</span> <span class="hljs-string">"${module.aws_core_accounts.log_account_id}"</span>,
         <span class="hljs-attr">"configurations":</span> {
             <span class="hljs-attr">"loggingBucket":</span> {
                 <span class="hljs-attr">"retentionDays":</span> <span class="hljs-string">$</span>{<span class="hljs-string">var.retention_days</span>}
             },
             <span class="hljs-attr">"accessLoggingBucket":</span> {
                 <span class="hljs-attr">"retentionDays":</span> <span class="hljs-string">$</span>{<span class="hljs-string">var.retention_days</span>}
             }
         },
         <span class="hljs-attr">"enabled":</span> <span class="hljs-literal">true</span>
    },
    <span class="hljs-attr">"securityRoles":</span> {
         <span class="hljs-attr">"accountId":</span> <span class="hljs-string">"${module.aws_core_accounts.security_account_id}"</span>
    },
    <span class="hljs-attr">"accessManagement":</span> {
         <span class="hljs-attr">"enabled":</span> <span class="hljs-literal">true</span>
    }
}
<span class="hljs-string">EOF</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_core_accounts"</span> {
  <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"github.com/nitheeshp-irl/terraform_modules//aws_core_accounts_module"</span>

  <span class="hljs-string">logging_account_email</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.logging_account_email</span>
  <span class="hljs-string">logging_account_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.logging_account_name</span>
  <span class="hljs-string">security_account_email</span> <span class="hljs-string">=</span> <span class="hljs-string">var.security_account_email</span>
  <span class="hljs-string">security_account_name</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.security_account_name</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_landingzone"</span> {
  <span class="hljs-string">source</span>                  <span class="hljs-string">=</span> <span class="hljs-string">"github.com/nitheeshp-irl/blog_terraform_modules//aws_landingzone_module"</span>
  <span class="hljs-string">manifest_json</span>           <span class="hljs-string">=</span> <span class="hljs-string">local.landingzone_manifest_template</span>
  <span class="hljs-string">landingzone_version</span>     <span class="hljs-string">=</span> <span class="hljs-string">var.landingzone_version</span>
  <span class="hljs-string">administrator_account_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.administrator_account_id</span>
}
</code></pre>
<ul>
<li><p><strong>Governed Regions</strong>: Specifies the regions governed by the landing zone.</p>
</li>
<li><p><strong>Organization Structure</strong>: Defines the security structure with a dedicated security account.</p>
</li>
<li><p><strong>Centralized Logging</strong>: Configures logging, specifying the account ID and retention policies for logs.</p>
</li>
<li><p><strong>Security Roles</strong>: Specifies the account ID for security roles.</p>
</li>
<li><p><strong>Access Management</strong>: Enables access management.</p>
</li>
<li><p><strong>Core Accounts</strong>: The core accounts code, also defined in the same file, is what sets up essential AWS accounts for logging and security.</p>
</li>
</ul>
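<p>Inside the landing zone module, the rendered manifest is ultimately passed to the AWS provider's <code>aws_controltower_landing_zone</code> resource. A minimal sketch, assuming the module exposes <code>manifest_json</code> and <code>landingzone_version</code> variables as in the <code>main.tf</code> above:</p>
<pre><code class="lang-hcl"># Sketch of the landing zone module's core resource
resource "aws_controltower_landing_zone" "this" {
  manifest_json = var.manifest_json       # JSON manifest built in the root module
  version       = var.landingzone_version # landing zone version, e.g. "3.3"
}
</code></pre>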
<p>You can find the full code here: <a target="_blank" href="https://github.com/nitheeshp-irl/aws-landing-zone">https://github.com/nitheeshp-irl/aws-landing-zone</a>.</p>
<h2 id="heading-how-to-create-an-organizational-unit"><strong>How to Create an Organizational Unit</strong></h2>
<p>Once the landing zone setup is finished, you can create OUs to match your business requirements. When you run this code, the OUs are created according to the specifications in the <a target="_blank" href="https://github.com/nitheeshp-irl/aws-orgunits/blob/main/variables.auto.tfvars">variable</a> file: each OU name is read from the file and the corresponding OU is created.</p>
<pre><code class="lang-hcl">aws_region = <span class="hljs-string">"us-east-2"</span>

organizational_units = [
  {
    unit_name = <span class="hljs-attr">"apps"</span>
  },
  {
    unit_name = <span class="hljs-attr">"infra"</span>
  },
  {
    unit_name = <span class="hljs-attr">"stagingpolicy"</span>
  },
  {
    unit_name = <span class="hljs-attr">"sandbox"</span>
  },
  {
    unit_name = <span class="hljs-attr">"security"</span>
  }
]
</code></pre>
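<p>Inside the OU module, the list above can be expanded with <code>for_each</code>. A minimal sketch, assuming the OUs are created directly under the organization root:</p>
<pre><code class="lang-hcl"># Sketch of the OU module: one OU per entry in var.organizational_units
data "aws_organizations_organization" "org" {}

resource "aws_organizations_organizational_unit" "ou" {
  for_each  = { for ou in var.organizational_units : ou.unit_name => ou }
  name      = each.value.unit_name
  parent_id = data.aws_organizations_organization.org.roots[0].id
}
</code></pre>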
<p>You can see the code here:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/aws-orgunits">AWS Organizational Units (OUs) Terraform Repo</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/blog-terraform-modules/tree/main/aws_org_module">AWS Organizational Units Terraform Module Path</a></p>
</li>
</ul>
<h2 id="heading-how-to-automate-attaching-control-tower-control-to-the-ou"><strong>How to Automate Attaching Control Tower Control to the OU</strong></h2>
<p>Once you have created the OUs using the repository above, this repository applies Control Tower controls to them. Attaching controls is optional; apply them as your governance requirements dictate. Here is the <a target="_blank" href="https://github.com/nitheeshp-irl/controltower_controls/blob/main/main.tf"><code>main.tf</code></a> file:</p>
<pre><code class="lang-hcl"><span class="hljs-string">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_controls"</span> {
  <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"github.com/nitheeshp-irl/blog_terraform_modules//awscontroltower-controls_module"</span>

  <span class="hljs-string">aws_region</span> <span class="hljs-string">=</span> <span class="hljs-string">var.aws_region</span>
  <span class="hljs-string">controls</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.controls</span>
}
</code></pre>
<p>As before, a Terraform module creates the AWS resources.</p>
<p>Here are the control variables:</p>
<pre><code class="lang-hcl">aws_region = <span class="hljs-string">"us-east-2"</span>


controls = [
  {
    control_names = [
      <span class="hljs-attr">"AWS-GR_ENCRYPTED_VOLUMES"</span>,
      <span class="hljs-attr">"AWS-GR_EBS_OPTIMIZED_INSTANCE"</span>,
      <span class="hljs-attr">"AWS-GR_EC2_VOLUME_INUSE_CHECK"</span>,
      <span class="hljs-attr">"AWS-GR_RDS_INSTANCE_PUBLIC_ACCESS_CHECK"</span>,
      <span class="hljs-attr">"AWS-GR_RDS_SNAPSHOTS_PUBLIC_PROHIBITED"</span>,
      <span class="hljs-attr">"AWS-GR_RDS_STORAGE_ENCRYPTED"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICTED_COMMON_PORTS"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICTED_SSH"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICT_ROOT_USER"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICT_ROOT_USER_ACCESS_KEYS"</span>,
      <span class="hljs-attr">"AWS-GR_ROOT_ACCOUNT_MFA_ENABLED"</span>,
      <span class="hljs-attr">"AWS-GR_S3_BUCKET_PUBLIC_READ_PROHIBITED"</span>,
      <span class="hljs-attr">"AWS-GR_S3_BUCKET_PUBLIC_WRITE_PROHIBITED"</span>,
    ],
    organizational_unit_names = [<span class="hljs-attr">"infra"</span>, <span class="hljs-attr">"apps"</span>]
  }
]
</code></pre>
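<p>Inside the controls module, each control name has to be expanded into a control ARN and paired with the target OU's ARN before it can be attached with <code>aws_controltower_control</code>. A minimal sketch, assuming the OUs live directly under the organization root (the locals and lookup logic here are illustrative, not the module's actual code):</p>
<pre><code class="lang-hcl">data "aws_organizations_organization" "org" {}

data "aws_organizations_organizational_units" "ous" {
  parent_id = data.aws_organizations_organization.org.roots[0].id
}

locals {
  # one (control, OU) pair per attachment
  control_pairs = flatten([
    for c in var.controls : [
      for ou_name in c.organizational_unit_names : [
        for control in c.control_names : {
          control = control
          ou_name = ou_name
        }
      ]
    ]
  ])
}

resource "aws_controltower_control" "this" {
  for_each = { for p in local.control_pairs : "${p.ou_name}/${p.control}" => p }

  control_identifier = "arn:aws:controltower:${var.aws_region}::control/${each.value.control}"
  target_identifier = one([
    for ou in data.aws_organizations_organizational_units.ous.children :
    ou.arn if ou.name == each.value.ou_name
  ])
}
</code></pre>
<p>Because Control Tower allows only one operation at a time (as noted earlier), Terraform applies these attachments sequentially rather than in parallel.</p>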
<p>You can see the code here:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/controltower_controls">Terraform Repo for Creating control tower controls</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/blog-terraform-modules/tree/main/awscontroltower-controls_module">Terraform Module for creating Control Tower Controls</a></p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Navigating a multi-account strategy in AWS can be challenging, but with AWS Control Tower and a structured approach, it becomes manageable.</p>
<p>Using AWS Control Tower, your team can ensure that their AWS environments are secure, compliant, and well-organized. The automated setup, governance at scale, and centralized management through AWS Organizations provide a strong foundation for cloud infrastructure.</p>
<p>Implementing a landing zone through AWS Control Tower offers a secure and standardized starting point, allowing for quicker deployment and better governance. Using organizational units (OUs) segregates accounts based on business needs, improving security and operational efficiency. AWS IAM Identity Center simplifies access management, providing a unified authentication experience across multiple accounts and applications.</p>
<p>Service Control Policies (SCPs) help keep things secure and compliant by making sure all resources follow the organization's rules. Terraform Cloud and GitHub Actions make it easier to deploy resources, offering a smooth CI/CD pipeline for managing infrastructure changes.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
