<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Nitheesh Poojary - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Nitheesh Poojary - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Sat, 16 May 2026 14:11:30 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/author/nitheeshp/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ Machine Learning vs Deep Learning vs Generative AI - What are the Differences? ]]>
                </title>
                <description>
                    <![CDATA[ When I started using LLMs for work and personal use, I picked up on some technical terms, such as "machine learning" and "deep learning," which are the main technologies behind these LLMs. I've always been interested in learning about the differences... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/machine-learning-vs-deep-learning-vs-generative-ai/</link>
                <guid isPermaLink="false">68de98a534a379d15102109e</guid>
                
                    <category>
                        <![CDATA[ Machine Learning ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Deep Learning ]]>
                    </category>
                
                    <category>
                        <![CDATA[ generative ai ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Thu, 02 Oct 2025 15:22:13 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759006391065/3cd87534-e2e9-49df-a9c7-1b636e491032.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>When I started using LLMs for work and personal use, I picked up on some technical terms, such as "machine learning" and "deep learning," which are the main technologies behind these LLMs. I've always been interested in learning about the differences between these technologies. Most companies in the industry are now developing their own AI tools, which makes MLOps necessary for managing and utilizing them.</p>
<p>Before I began learning about MLOps, I tried to understand the technologies behind LLMs and how they work. In this article, I’ll share my understanding of machine learning, deep learning, and generative AI, along with their potential applications.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-artificial-intelligence-ai">Artificial Intelligence (AI)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-machine-learning-ml-the-foundation">Machine Learning (ML): The Foundation</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deep-learning-adding-complexity">Deep Learning: Adding Complexity</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-generative-ai-write-new">Generative AI: Write New</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-summary-of-differences-between-machine-learning-vs-deep-learning-vs-generative-ai">Summary of Differences Between Machine Learning vs Deep Learning vs Generative AI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759006565108/9698f88c-7d81-40b6-b902-c3d75b054728.jpeg" alt="how AI works" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-artificial-intelligence-ai">Artificial Intelligence (AI)</h2>
<p>Artificial Intelligence (AI) is a form of technology that lets machines solve problems in ways similar to how people do. It helps businesses make better decisions at scale by recognizing images, creating content, and making predictions from data. Artificial intelligence encompasses machine learning, deep learning, and generative AI.</p>
<h2 id="heading-machine-learning-ml-the-foundation">Machine Learning (ML): The Foundation</h2>
<p>When we give computers many examples, they learn how to make their own decisions or guesses. It's like teaching a kid to tell the difference between animals. You show them a lot of pictures of cats and dogs and say things like "This is a cat" and "This is a dog." In the end, they learn to tell the difference between cats and dogs on their own. Machine learning is similar in that you give a computer a lot of data with examples, and it learns how to make predictions about new data.</p>
<h3 id="heading-how-does-machine-learning-work">How Does Machine Learning Work?</h3>
<p>Machine Learning (ML) is the process of teaching computers to find patterns in data and make decisions or predictions without being instructed what to do. There are usually six main steps in this process:</p>
<p><strong>Data Collection:</strong> Get many examples, like thousands of emails, photos, or sales records. The more training data you have, the more accurate your predictions will be.</p>
<p><strong>Data Preparation</strong>: At this stage, you clean the data by getting rid of mistakes and adding missing labels.</p>
<p><strong>Selecting an Algorithm (Model):</strong> This is like choosing the right tool for the job. Models find patterns in data or make predictions. You can find machine learning algorithms suited to your data <a target="_blank" href="https://www.ibm.com/think/topics/machine-learning-algorithms">here</a>.</p>
<p><strong>Training Phase:</strong> After you pick the right model for your cleaned-up data, you teach it. This is like getting ready for a test.</p>
<p><strong>Evaluation</strong>: Use the test data to assess the model's performance and see if it can make accurate predictions on unseen data.</p>
<p><strong>Deployment</strong>: Put the trained model to work in the real world.</p>
<p>Here's how those steps play out in a simple house-price example:</p>
<p><strong>Training Phase</strong>: Teach the computer with 10,000 past house sales, each with details like size (2,000 sq ft), number of bedrooms (3), location (downtown), and the sale price ($300,000).</p>
<p><strong>Learning</strong>: The algorithm finds patterns, such as the fact that bigger houses cost more, places in the city center cost more, and more bedrooms make a house worth more.</p>
<p><strong>Prediction</strong>: Given a new house with 1,800 square feet, two bedrooms, and a suburban location, the model estimates a price based on what it has learned.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759006771594/12afae06-9d72-4d65-af81-c10fda1e2099.png" alt="how machine learning works" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-types-of-machine-learning">Types of Machine Learning</h3>
<ol>
<li><p><strong>Supervised Learning</strong>: Give algorithms labeled and defined training data to look for patterns. The sample data tells the algorithm what to do and what to expect as an output. For instance, millions of X-ray reports that say someone is healthy or sick would need to be tagged. Then, machine learning programs could use this training data to guess if a new X-ray shows signs of illness.</p>
</li>
<li><p><strong>Unsupervised Learning</strong>: Here, the algorithm learns from data that doesn't have labels. It must find patterns in the untagged data without outside help – for instance, finding groups of people on Facebook or Twitter who share similar interests (see the short clustering sketch after this list).</p>
</li>
<li><p><strong>Reinforcement Learning</strong>: This technique is a kind of machine learning in which an agent learns how to make choices by interacting with the world around it. The agent receives points for doing things right and loses points for doing things wrong. Its goal is to get as many points as possible. For instance, cars learn how to drive safely by making mistakes in simulations. They get rewards for staying in their lane, following traffic rules, and not hitting other cars.</p>
</li>
</ol>
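<p>Here is the clustering sketch mentioned above – a minimal example of unsupervised learning with k-means, where the interest data and the cluster count are arbitrary choices made for illustration:</p>
<pre><code class="lang-python">from sklearn.cluster import KMeans

# Made-up data: [hours/week on sports content, hours/week on cooking content]
users = [[9, 1], [8, 0], [10, 2], [1, 7], [0, 9], [2, 8], [5, 5], [6, 4]]

# No labels are provided; k-means has to discover the groups on its own
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print("Cluster assignments:", kmeans.labels_)
print("Cluster centers:", kmeans.cluster_centers_)
</code></pre>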
<h3 id="heading-machine-learningreal-world-examples">Machine Learning—Real-World Examples</h3>
<p><strong>Email Spam Detection</strong></p>
<p>You can show the computer thousands of emails labeled "spam" or "not spam." It learns patterns, like how emails containing "FREE MONEY" are usually spam. It can then automatically sort your inbox.</p>
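<p>A toy version of such a spam filter might look like this sketch, which uses word counts and a Naive Bayes classifier (the example emails are invented):</p>
<pre><code class="lang-python">from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled training examples (invented for illustration)
emails = [
    "FREE MONEY claim your prize now",
    "Win a FREE vacation, click here",
    "Meeting moved to 3pm tomorrow",
    "Here are the notes from today's class",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn the words into counts, then learn which words signal spam
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["Claim your FREE prize today"]))  # likely 'spam'
print(spam_filter.predict(["Lunch at noon tomorrow?"]))      # likely 'not spam'
</code></pre>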
<p><strong>Photo Recognition</strong></p>
<p>Give the computer millions of pictures with labels that say what's in them. It learns that apples are likely to be round and have stems. Your phone can now tell what things are in your pictures.</p>
<p><strong>Movie Recommendations</strong></p>
<p>Netflix keeps track of the movies you've seen and rated. It finds people who like the same things you do. It suggests movies that other people like.</p>
<h2 id="heading-deep-learning-adding-complexity">Deep Learning: Adding Complexity</h2>
<p>Deep learning is a type of artificial intelligence. It helps computers understand data like humans do. Deep learning can identify complex images, text, sound, and other data patterns to make accurate predictions. It uses artificial neural networks that work like the human brain. Neural networks are connected nodes that handle information.</p>
<h3 id="heading-how-does-deep-learning-work">How Does Deep Learning Work?</h3>
<p>Artificial neural networks are used in deep learning to learn from data. These networks consist of interconnected layers of nodes. Each node learns a different thing about the data.</p>
<p>For instance, when you show a computer a picture of a cat, the image passes through many layers. The first layer looks for edges and simple shapes. The next layers combine these shapes into ears, eyes, and whiskers. The final layers conclude, "This picture looks like a cat." Deep learning makes plenty of mistakes while learning, but it gets better with each round of feedback.</p>
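<p>As a rough sketch of what such a layered network looks like in code, here is a minimal Keras image classifier. The input size, layer counts, and layer widths are arbitrary choices, and real training data isn't shown:</p>
<pre><code class="lang-python">from tensorflow import keras
from tensorflow.keras import layers

# Early layers detect edges and simple shapes, later layers combine them
# into higher-level features, and the final layer outputs a probability
# that the image contains a cat.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # 64x64 RGB image
    layers.Conv2D(16, 3, activation="relu"),  # low-level edges and shapes
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # parts like ears, eyes, whiskers
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # "cat" vs "not cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(images, labels, epochs=10)  # training would go here
</code></pre>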
<h3 id="heading-deep-learningreal-world-examples">Deep Learning—Real-World Examples</h3>
<ul>
<li><p><strong>Tesla Autopilot</strong>: Processes eight cameras simultaneously to navigate roads, recognize traffic signs, and avoid obstacles.</p>
</li>
<li><p><strong>Google's DeepMind</strong>: Detects over fifty eye diseases from retinal scans with 94% accuracy.</p>
</li>
<li><p><strong>ChatGPT</strong>: Helps with writing, coding, and problem-solving.</p>
</li>
</ul>
<h2 id="heading-generative-ai-write-new">Generative AI: Write New</h2>
<p>Generative AI is a subset of deep learning that makes new things, like stories, pictures, music, or code, instead of just looking at or sorting through things that are already there. Generative AI systems learn patterns from a lot of training data and then use those patterns to make new content.</p>
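<p>For a hands-on taste, here is a minimal sketch that uses the Hugging Face transformers library to generate new text with a small pretrained model. GPT-2 is just an example choice here, and the output will vary on every run:</p>
<pre><code class="lang-python">from transformers import pipeline

# Download a small pretrained text-generation model and create new text
generator = pipeline("text-generation", model="gpt2")
result = generator("Machine learning is", max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
</code></pre>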
<h3 id="heading-real-world-examples">Real-World Examples</h3>
<ul>
<li><p>Chatbots help institutions give better customer service by making product suggestions and answering questions.</p>
</li>
<li><p>Automatically generate technical documents from the source code.</p>
</li>
<li><p>Auto-generate quizzes, practice problems, and explanations</p>
</li>
</ul>
<h2 id="heading-summary-of-differences-between-machine-learning-vs-deep-learning-vs-generative-ai">Summary of Differences Between Machine Learning vs Deep Learning vs Generative AI</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Machine Learning (ML)</strong></td><td><strong>Deep Learning (DL)</strong></td><td><strong>Generative AI (GenAI)</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Definition</strong></td><td>Subset of AI where machines learn from data to make predictions or decisions.</td><td>Subset of AI using artificial neural networks with multiple layers to model complex patterns</td><td>Subset of Deep learning that can create new content (text, images, code, etc.) similar to human-created content</td></tr>
<tr>
<td><strong>Data Requirements</strong></td><td>Small-to-medium datasets.</td><td>Large amounts of data (structured and unstructured)</td><td>Massive datasets for training, varying amounts for generation</td></tr>
<tr>
<td><strong>Computational Power</strong></td><td>Works on CPUs, moderate hardware.</td><td>Needs GPUs/TPUs for training.</td><td>Requires large-scale GPU/TPU clusters.</td></tr>
<tr>
<td><strong>Use Cases</strong></td><td>Predictions and classification.</td><td>Recognize complex data like speech, images, and language.</td><td>Generate new, original content.</td></tr>
<tr>
<td><strong>When NOT to Use</strong></td><td>Data is very complex/unstructured; accuracy is critical (medical, legal); you need to handle images/audio/video.</td><td>The dataset is small (&lt;1000 samples), and computational resources are limited.</td><td>Copyright/IP restrictions apply.</td></tr>
<tr>
<td><strong>Cost Comparison</strong></td><td>Low ($1K-$10K) on standard servers</td><td>Medium ($10K-$100K)</td><td>High ($100K-$1M+)</td></tr>
<tr>
<td><strong>Real-World Examples</strong></td><td>Netflix recommendations, fraud detection, spam filters.</td><td>Face recognition, self-driving cars, Siri/Alexa.</td><td>Original creative outputs (text, images, code, video).</td></tr>
</tbody>
</table>
</div><h2 id="heading-conclusion">Conclusion</h2>
<p>To sum it up, anyone who is keen to learn more about artificial intelligence needs to know the differences between machine learning, deep learning, and generative AI.</p>
<p>Machine learning is the basis for this because it lets computers learn from data and make predictions. Deep learning takes this a step further by using neural networks to process complicated data patterns in a way that is similar to how humans understand things.</p>
<p>Generative AI goes a step further by making new things, which shows how creative AI can be. As these technologies get better, they open up a lot of new opportunities in many fields, such as improving customer service, making medical diagnoses more accurate, and making new content. To maximize AI's benefits in your life, stay current on new developments.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ From Commit to Production: Hands-On GitOps Promotion with GitHub Actions, Argo CD, Helm, and Kargo ]]>
                </title>
                <description>
                    <![CDATA[ Have you ever wanted to go beyond ‘hello world’ and build a real, production-style CI/CD pipeline – starting from scratch? Let’s pause for a moment: what are you trying to learn from your DevOps journey? Are you focusing on GitOps-style deployments, ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/from-commit-to-production-hands-on-gitops-promotion-with-github-actions-argo-cd-helm-and-kargo/</link>
                <guid isPermaLink="false">6841f1319c94d5fa67dae6e3</guid>
                
                    <category>
                        <![CDATA[ gitops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Thu, 05 Jun 2025 19:34:08 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749151777327/ece5b0b7-4a9a-4f95-8ebb-32e3768b678f.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Have you ever wanted to go beyond ‘hello world’ and build a real, production-style CI/CD pipeline – starting from scratch?</p>
<p>Let’s pause for a moment: what are you trying to learn from your DevOps journey? Are you focusing on GitOps-style deployments, or promotions? This guide will help you tackle all of it – one step at a time.</p>
<p>As a DevOps engineer interested in creating a complete CI/CD pipeline, I wanted more than a basic "hello world" microservice. I was looking for a project where I could start from scratch – beginning with raw source code, writing my own Docker Compose and Kubernetes files, deploying locally, and then adding automation, environment promotion, and GitOps practices step by step.</p>
<p>In my search, I found several GitHub repositories. Most were either too simple to be useful or too complicated and already set up, leaving no room for learning. They often included ready-made Docker Compose files and Kubernetes manifests, which didn't help with learning through hands-on experience.</p>
<p>That’s when I discovered <strong>Craftista</strong>, a project maintained by <a target="_blank" href="https://www.linkedin.com/in/gouravshah/">Gourav Shah</a>. This wasn’t just another training repo. As described in its documentation:</p>
<blockquote>
<p><em>“Craftista is not your typical hello world app or off-the-shelf WordPress app used in most DevOps trainings. It is the real deal.”</em></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748210412834/5ef3f2b6-029d-4967-b6a9-825888b44706.png" alt="Craftista" width="600" height="400" loading="lazy"></p>
<p>Craftista stood out to me for several reasons:</p>
<ul>
<li><p>It’s a <strong>polyglot microservices application</strong>, designed to resemble a real-world platform.</p>
</li>
<li><p>Each service uses its own technology stack – exactly like in modern enterprises.</p>
</li>
<li><p>It includes essential building blocks of a real e-commerce system:</p>
<ul>
<li><p>A modern UI built in Node.js</p>
</li>
<li><p>A Product Catalogue Service</p>
</li>
<li><p>A Recommendation Engine</p>
</li>
<li><p>A Voting/Review Service  </p>
</li>
</ul>
</li>
</ul>
<p>By the end of this guide, you won’t just have a “hello world” demo – you’ll have a fully functioning CI/CD/GitOps pipeline modeled on a real-world microservices stack. You’ll understand how the pieces fit together, why each tool exists, and how to adapt this workflow to your own projects.</p>
<p>Ready to go beyond hello world and build a production-style pipeline from scratch? Let’s dive in.</p>
<h2 id="heading-table-of-contentsheading-table-of-contents"><a class="post-section-overview" href="#heading-table-of-contents">Table of Contents</a></h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites-and-what-youll-learn">Prerequisites and What You’ll Learn</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-topics-outside-the-scope-of-this-guide">Topics Outside the Scope of This Guide</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-gitops">What is GitOps?</a></p>
<ul>
<li><a class="post-section-overview" href="#heading-core-principles-of-gitops">Core Principles of GitOps</a></li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-tools-we-are-using-in-this-guide">Tools We Are Using in This Guide</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-github-actions">GitHub Actions</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-minikube">Minikube</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-argo-cd">Argo CD</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-kargo">Kargo</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-structure-repositories-for-microservice-applications">How to Structure Repositories for Microservice Applications</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-why-a-polyrepo-fits-my-microservice-service-app">Why a Polyrepo Fits My Microservice-Service App</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-git-branching-is-anti-pattern-to-gitops-principles">Git Branching is Anti Pattern to GitOps Principles</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-organize-kubernetes-manifests-for-gitops">How to Organize Kubernetes Manifests for GitOps</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-argo-cd-folders">Argo CD Folders</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-2-argo-cd-application-manifests-by-environment">2. Argo CD Application Manifests (by Environment)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-env-folders">Env Folders</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-kargo-folders">Kargo Folders</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-deploy-and-promote-your-craftista-microservices-application">How to Deploy and Promote Your Craftista Microservices Application</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-1-start-minikube">1. Start Minikube</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-2-install-argo-cd">2. Install Argo CD</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-3-access-the-argo-cd-ui">3. Access the Argo CD UI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-4-define-a-craftista-argo-cd-project">4. Define a “Craftista” Argo CD Project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-5-deploy-the-development-environment">5. Deploy the Development Environment</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-6-manual-promotion-staging-amp-prod">6. Manual Promotion (Staging &amp; Prod)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-7-automated-promotion-with-kargo">7. Automated Promotion with Kargo</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-further-reading-amp-resources">Further Reading &amp; Resources</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites-and-what-youll-learn">Prerequisites and What You’ll Learn:</h2>
<p>Before you progress through this guide, ask yourself:</p>
<ul>
<li><p>Do I understand how semantic tagging improves traceability across environments?</p>
</li>
<li><p>Can I replicate a multi-environment GitOps setup using Helm and Kubernetes?</p>
</li>
<li><p>Am I confident in organizing Helm charts and manifests for scalable deployments?</p>
</li>
<li><p>Do I know how Kargo and Argo CD work together to automate promotions and approvals?</p>
</li>
</ul>
<p>This guide will help you confidently answer those questions by walking you through:</p>
<ul>
<li><p>✅ An optimized Git branching strategy: using feature branches and a single main branch</p>
</li>
<li><p>✅ Semantic Docker image tagging for clean version tracking</p>
</li>
<li><p>✅ Helm chart and Kubernetes manifest structuring for multi-environment GitOps</p>
</li>
<li><p>✅ CI pipelines using GitHub Actions for build → test → tag automation</p>
</li>
<li><p>✅ Full GitOps workflows with Kargo and Argo CD for seamless promotion and delivery</p>
</li>
</ul>
<h3 id="heading-topics-outside-the-scope-of-this-guide"><strong>Topics Outside the Scope of This Guide</strong></h3>
<ul>
<li><p>Deployment to managed services like EKS, AKS, or GKE is not included. We’ll use Minikube for local development.</p>
</li>
<li><p>I assume you are already familiar with writing basic Kubernetes manifests. I won’t explain Pods, Services, Deployments, and their YAML structures here.</p>
</li>
<li><p>I also won’t discuss topics like logging, metrics, tracing, and security hardening.</p>
</li>
<li><p>This guide does not cover managing Secrets and ConfigMaps, or implementing service discovery.</p>
</li>
<li><p>And finally, we won’t go deep into Argo CD and Kargo installation options – only the basic install commands used later in this guide.</p>
</li>
</ul>
<h2 id="heading-what-is-gitops"><strong>What is GitOps?</strong></h2>
<p>GitOps is a modern way to manage applications and infrastructure using Git as the main source of truth. Developers have used Git for a long time to manage and work together on code. GitOps takes this further by including infrastructure setup, deployment processes, and automation.</p>
<p>By keeping everything – from Kubernetes files and Helm charts to infrastructure code and app settings – in Git, teams have a central, version-controlled system that can be tracked. Changes in Git are automatically updated and matched with the target environments by GitOps tools like Argo CD or Flux.</p>
<h3 id="heading-core-principles-of-gitops"><strong>Core Principles of GitOps</strong></h3>
<ul>
<li><p>Git as the single source of truth</p>
</li>
<li><p>Declarative systems</p>
</li>
<li><p>Immutable deployments</p>
</li>
<li><p>Centralized change audit</p>
</li>
</ul>
<h2 id="heading-tools-we-are-using-in-this-guide"><strong>Tools We Are Using in This Guide</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748886977153/8d7eb087-8161-431b-b48f-c67d724909b9.png" alt="Tools" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-github-actions"><strong>GitHub Actions</strong></h3>
<p>GitHub Actions is a platform for continuous integration and delivery (CI/CD) that helps automate your build, test, and deployment processes.</p>
<p>In our project, GitHub hosts our microservice application code, and GitHub Actions workflows build and push Docker images to Docker Hub, which serves as our Docker registry. GitHub Actions handles continuous integration, while Argo CD (covered below) takes care of delivery.</p>
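<p>As a rough sketch (not the exact workflow from the Craftista repos), a build-and-push job for one service might look like the following. The trigger, secret names, and image tag here are assumptions you would adapt to your own setup:</p>
<pre><code class="lang-yaml"># .github/workflows/docker-ci.yml (illustrative sketch)
name: docker-ci
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push the image with a semantic tag
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: nitheesh86/microservice-frontend:1.0.11
</code></pre>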
<h3 id="heading-minikube"><strong>Minikube</strong></h3>
<p>We are deploying our application and ArgoCD locally on Minikube. To simulate promotion between different environments, I am using namespaces.</p>
<h3 id="heading-argo-cd"><strong>Argo CD</strong></h3>
<p>Argo CD is a declarative GitOps continuous deployment tool for Kubernetes that automates the deployment and synchronization of microservice applications with Git repositories. It follows GitOps principles and uses declarative configurations with a pull-based approach.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748345707461/f7484475-8867-48af-a36e-e97d68683a45.png" alt="ArgoCD Flow" width="600" height="400" loading="lazy"></p>
<p>Here’s a summary of the flow depicted in the above image:</p>
<ol>
<li><p>The developer modifies application code and changes are pushed to a Git repository.</p>
</li>
<li><p>The CI pipeline is triggered and builds a new container image and pushes it to a container registry.</p>
</li>
<li><p>Merge triggers a webhook to notify Argo CD of changes in the Git repository.</p>
</li>
<li><p>Argo CD clones the updated Git repository and compares the desired state (from Git) with the current state in the Kubernetes cluster.</p>
</li>
<li><p>Argo CD applies the necessary changes to bring the cluster to the desired state.</p>
</li>
<li><p>Kubernetes controllers reconcile resources until the cluster matches the desired configuration.</p>
</li>
<li><p>Argo CD continuously monitors the application and cluster state.</p>
</li>
<li><p>Argo CD can automatically or manually revert the changes to match the Git configuration, ensuring Git remains the single source of truth.</p>
</li>
</ol>
<h3 id="heading-kargo"><strong>Kargo</strong></h3>
<p>Kargo manages promotion by watching repositories (Git, Image, Helm) for changes and making the needed commits to your Git repository, while Argo CD takes care of reconciliation. Kargo is built to simplify multi-stage application promotion using GitOps principles, removing the need for custom automation or CI pipelines.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748373053736/6c015e27-b47b-486a-bb6a-e581b0f29a30.webp" alt="Kargo (Source: Akuity Blog)" width="600" height="400" loading="lazy"></p>
<h4 id="heading-kargo-components"><strong>Kargo Components</strong></h4>
<ol>
<li><p><strong>Warehouse:</strong> Watches image registries and discovers new container images. Monitors DockerHub for new tags like <code>v1.2.0</code>, <code>v1.2.1</code>, etc., and stores metadata about discovered images.</p>
</li>
<li><p><strong>Stage:</strong> Defines a deployment environment (Dev, Stage, Prod). When a new image is found by the warehouse, it updates the manifest under <code>env/dev/</code> with the new image tag. This triggers Argo CD to sync the <code>dev</code> environment.</p>
</li>
<li><p><strong>PromotionPolicy:</strong> Defines how promotion should happen between stages (for example, auto or manual).</p>
</li>
<li><p><strong>Freight:</strong> An artifact version to be promoted (for example, a specific container image or Helm chart). When <code>v1.2.1</code> is discovered by the warehouse, a new <strong>Freight</strong> is created.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748373806449/00c7a2e5-48af-43b9-b9fc-9463b55c1abb.png" alt="Kargo Components" width="600" height="400" loading="lazy"></p>
<h4 id="heading-practical-examples"><strong>Practical Examples</strong></h4>
<ul>
<li><p>A new <code>v1.2.0</code> image is pushed to DockerHub.</p>
</li>
<li><p>Kargo detects it via a <strong>warehouse</strong> and updates the <code>dev</code> environment.</p>
</li>
<li><p>Once verified (either by tests or metrics), Kargo automatically updates Helm values in the Git repo for staging.</p>
</li>
<li><p>Argo CD sees the Git change and syncs the new version to staging.</p>
</li>
<li><p>Manual approval (via Slack or UI) is required to push to production.</p>
</li>
</ul>
<h4 id="heading-why-kargo-is-the-perfect-companion-to-argo-cd"><strong>Why Kargo is the Perfect Companion to Argo CD</strong></h4>
<p>Have you ever had to manually promote versions across environments and wished it were automated? How would integrating Kargo have saved time or prevented errors in your last deployment?</p>
<p>Argo CD excels at GitOps-driven continuous deployment – syncing your Kubernetes cluster with the desired state declared in Git. But it lacks native support for promotion workflows between environments (like dev → staging → production) based on image metadata, test results, or approval gates. This is where Kargo becomes the perfect companion.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748474195349/8e615222-067e-4958-aa8c-19a9f44e4d74.png" alt="Kargo and Argo CD Comparison " width="600" height="400" loading="lazy"></p>
<p>Kargo doesn’t replace Argo CD – it extends it. You continue to use Argo CD for syncing and deploying apps, but Kargo adds promotion intelligence and automation.</p>
<h2 id="heading-how-to-structure-repositories-for-microservice-applications"><strong>How to Structure Repositories for Microservice Applications</strong></h2>
<p>My example application consists of 4 microservices (<a target="_blank" href="https://github.com/nitheeshp-irl/microservice-frontend">frontend</a>, <a target="_blank" href="https://github.com/nitheeshp-irl/microservice-recommendation">recommendation</a>, <a target="_blank" href="https://github.com/nitheeshp-irl/microservice-catalogue">catalogues</a>, and <a target="_blank" href="https://github.com/nitheeshp-irl/microservice-voting">voting</a>). Designing your repository structure is an important early decision in any project, and there is a lot of debate between the monorepo and polyrepo approaches.</p>
<p>A <strong>monorepo</strong> is a unified repository that houses all the code for a project or a set of related projects. It consolidates code from various services, libraries, and applications into a single centralized location.</p>
<p>On the other hand, a <strong>polyrepo</strong> architecture comprises multiple repositories, each containing the code for a distinct service, library, or application component.</p>
<h3 id="heading-why-a-polyrepo-fits-my-microservice-service-app"><strong>Why a Polyrepo Fits My Microservice-Service App</strong></h3>
<p>Imagine you're onboarding a new team to your app. Would you prefer giving them access to an entire monorepo or just the relevant service’s repo? What trade-offs are you willing to accept?</p>
<p>Well, with a polyrepo approach:</p>
<ul>
<li><p>Teams can work independently on the frontend, recommendations, catalogs, and voting without stepping on each other’s toes.</p>
</li>
<li><p>Sensitive services remain locked down without complex directory-level rules.</p>
</li>
<li><p>CI runners operate on a smaller codebase, speeding up checkouts and reducing bandwidth.</p>
</li>
<li><p>Each service has its own release cadence (for example, <code>catalogues</code> v2.1.0 and <code>voting</code> v1.7.3).</p>
</li>
<li><p>As your organization grows, new teams can onboard to only the repos they care about.</p>
</li>
<li><p>Shared libraries can be versioned and published to an internal package registry, then consumed by each service.</p>
</li>
</ul>
<h3 id="heading-git-branching-is-anti-pattern-to-gitops-principles"><strong>Git Branching is Anti Pattern to GitOps Principles</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748376156187/f1fb6cc6-c5d1-4001-a8a1-63353cc03cd7.png" alt="Git Branching AntiPattern" width="600" height="400" loading="lazy"></p>
<p>Many teams default to “<a target="_blank" href="https://medium.com/novai-devops-101/understanding-gitflow-a-simple-guide-to-git-branching-strategy-4f079c12edb9"><strong>GitFlow</strong></a>”-style branching – creating long-lived branches for <code>dev</code>, <code>staging</code>, <code>prod</code>, and more. But in a true GitOps workflow, <strong>Git is your control plane</strong>, and “environments” shouldn’t live as branches.</p>
<p>Instead, you can keep things simple with just:</p>
<ul>
<li><p>A long-lived <code>master</code> (or <code>main</code>) branch</p>
</li>
<li><p>Short-lived feature branches for code work</p>
</li>
</ul>
<h2 id="heading-how-to-organize-kubernetes-manifests-for-gitops"><strong>How to Organize Kubernetes Manifests for GitOps</strong></h2>
<p><a target="_blank" href="https://github.com/nitheeshp-irl/microservice-helmcharts">This repo</a> shows how you can keep ArgoCD application manifests, environment-specific values, Kargo promotion tasks, Helm charts for each microservice, and CI/CD workflows all in one place. It is organized so that:</p>
<ol>
<li><p><strong>ArgoCD application manifests</strong> live under <code>argocd/</code>, split by environment (for example, <code>dev/</code>, <code>staging/</code>, <code>prod/</code>).</p>
</li>
<li><p><strong>Environment-specific overrides</strong> (Helm values or Kustomize patches) go under <code>env/</code>.</p>
</li>
<li><p><strong>Kargo promotion configurations</strong> are grouped under <code>kargo/</code>, defining how new images move between environments.</p>
</li>
<li><p><strong>Service Helm charts</strong> reside in <code>service-charts/</code>, one chart per microservice.</p>
</li>
</ol>
<pre><code class="lang-markdown">/microservice-helmcharts/
├── argocd/                # ArgoCD application manifests
│   ├── application/       # Application definitions
│   │   ├── dev/           # Development environment applications
│   │   │   ├── catalogue.yaml
│   │   │   ├── catalogue-db.yaml
│   │   │   ├── frontend.yaml
│   │   │   ├── recommendation.yaml
│   │   │   ├── voting.yaml
│   │   │   └── kustomization.yaml
│   │   ├── staging/       # Staging environment applications
│   │   │   └── [similar structure as dev]
│   │   ├── prod/          # Production environment applications
│   │   │   └── [similar structure as dev]
│   │   └── craftista-project.yaml
│   ├── blog-post.md
│   ├── deployment-guide-blog.md
│   └── repository-structure.md
├── env/                   # Environment-specific configurations
│   ├── dev/               # Development environment values
│   │   ├── catalogue/
│   │   │   └── catalogue-values.yaml
│   │   ├── catalogue-db/
│   │   │   └── catalogue-db-values.yaml
│   │   ├── frontend/
│   │   │   └── frontend-values.yaml
│   │   ├── recommendation/
│   │   │   └── recommendation-values.yaml
│   │   ├── voting/
│   │   │   └── voting-values.yaml
│   │   └── kustomization.yaml
│   ├── staging/           # Similar structure as dev but with image files
│   └── prod/              # Similar structure as staging
├── kargo/                 # Kargo promotion configuration
│   ├── catalogue-config/  # Catalogue service promotion
│   │   ├── catalogue-promotion-tasks.yaml
│   │   ├── catalogue-stages.yaml
│   │   └── catalogue-warehouse.yaml
│   ├── frontend-config/   # Frontend service promotion
│   │   ├── frontend-promotion-tasks.yaml
│   │   ├── frontend-stages.yaml
│   │   └── frontend-warehouse.yaml
│   ├── recommendation-config/ # Recommendation service promotion
│   │   ├── recommendation-promotion-tasks.yaml
│   │   ├── recommendation-stages.yaml
│   │   └── recommendation-warehouse.yaml
│   ├── voting-config/     # Voting service promotion
│   │   ├── voting-promotion-tasks.yaml
│   │   ├── voting-stages.yaml
│   │   └── voting-warehouse.yaml
│   ├── kargo.yaml         # ArgoCD application for Kargo
│   ├── kustomization.yaml # Combines all Kargo resources
│   ├── project.yaml       # Kargo project definition
│   └── projectconfig.yaml # Project-wide promotion policies
├── service-charts/        # Helm charts for each microservice
│   ├── catalogue/         # Catalogue service chart
│   │   ├── templates/
│   │   │   ├── deployment.yaml
│   │   │   └── service.yaml
│   │   ├── Chart.yaml
│   │   └── values.yaml
│   ├── catalogue-db/      # Similar structure as catalogue
│   ├── frontend/          # Similar structure as catalogue
│   ├── recommendation/    # Similar structure as catalogue
│   └── voting/            # Similar structure as catalogue
├── .github/workflows/     # CI/CD workflows
│   └── docker-ci.yml      # Docker image build and push
└── README.md              # Repository documentation
</code></pre>
<h3 id="heading-argo-cd-folders"><strong>Argo CD Folders</strong></h3>
<p>The <code>argocd/</code> directory contains all of the manifests that Argo CD needs in order to track, group, and deploy your microservices. In this guide, we break that directory into two main pieces:</p>
<ol>
<li><p><strong>Argo CD Project Definition</strong></p>
</li>
<li><p><strong>Argo CD Application Manifests (organized by environment)</strong></p>
</li>
</ol>
<h4 id="heading-argocd-projectshttpsargo-cdreadthedocsioenstableuser-guideprojects"><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/"><strong>ArgoCD Projects</strong></a></h4>
<p>Before you can give Argo CD a set of Applications to manage, it’s often best practice to define a “Project.” A Project in Argo CD serves as a logical boundary around a group of Applications. It can control which Git repos those Applications are allowed to reference, which Kubernetes clusters/namespaces they can target, and even which resource kinds they can manage.</p>
<p>In our example repo, the file <code>craftista-project.yaml</code> lives at the top of <code>argocd/</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># argocd/craftisia-project.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">AppProject</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">craftisia</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-comment"># 1) Which Git repos are we allowed to pull from?</span>
  <span class="hljs-attr">sourceRepos:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"https://github.com/nitheeshp-irl/microservice-helmcharts"</span>
    <span class="hljs-comment"># (Or you could use "*" to allow any repo, but this is less secure.)</span>

  <span class="hljs-comment"># 2) Which clusters/namespaces can these Apps be deployed to?</span>
  <span class="hljs-attr">destinations:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">namespace:</span> <span class="hljs-string">"*"</span>
      <span class="hljs-attr">server:</span> <span class="hljs-string">"*"</span>    <span class="hljs-comment"># Allow deployment to any cluster (for a local Minikube demo, this is fine).</span>

  <span class="hljs-comment"># 3) Which kinds of Kubernetes resources may be created/updated?</span>
  <span class="hljs-comment">#    (For example, we want Pods, Services, Deployments, Ingresses, etc.)</span>
  <span class="hljs-comment">#    Argo CD will reject any manifest containing a disallowed kind.</span>
  <span class="hljs-attr">clusterResourceWhitelist:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">""</span>            <span class="hljs-comment"># core API group (Pods, Services, ConfigMaps, etc.)</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">"apps"</span>        <span class="hljs-comment"># deployments, statefulsets, etc.</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">"networking.k8s.io"</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
    <span class="hljs-comment"># (You can list additional resource kinds as needed.)</span>

  <span class="hljs-comment"># 4) Optional: define role-based access control or sync policies at the project level.</span>
  <span class="hljs-comment">#    (Not shown here, but you could add roles, namespace resource quotas, etc.)</span>
</code></pre>
<h3 id="heading-2-argo-cd-application-manifests-by-environment">2. Argo CD Application Manifests (by Environment)</h3>
<p>Inside <code>argocd/</code>, there is a subdirectory called <code>application/</code>. We use this to keep all of our Argo CD Application YAMLs, broken out by environment. The high-level layout looks like this:</p>
<pre><code class="lang-markdown">rCopyEditargocd/
└── application/
<span class="hljs-code">    ├── dev/            # “Dev” environment Applications
    │   ├── catalogue.yaml
    │   ├── catalogue-db.yaml
    │   ├── frontend.yaml
    │   ├── recommendation.yaml
    │   ├── voting.yaml
    │   └── kustomization.yaml
    ├── staging/        # “Staging” environment Applications (same names/structure as dev/)
    │   └── […]
    └── prod/           # “Prod” environment Applications (same names/structure as dev/)
        └── […]</span>
</code></pre>
<p>Each of those YAML files is a standalone <strong>Argo CD Application</strong>. An Application tells Argo CD:</p>
<ol>
<li><p>Which project it belongs to (in our case, <code>craftista</code>),</p>
</li>
<li><p>Where to find its manifests (a Git repo and path),</p>
</li>
<li><p>Which Kubernetes cluster and namespace to deploy into, and</p>
</li>
<li><p>How to keep itself up to date (that is, sync policies).</p>
</li>
</ol>
<p>Below is an example of the <code>frontend.yaml</code> file for the <strong>dev</strong> environment:</p>
<pre><code class="lang-markdown">yamlCopyEdit# argocd/application/dev/frontend.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend-dev
  namespace: argocd
spec:
  project: craftista

  # 1) Source: Where to find the Helm chart and which values file to use
  source:
<span class="hljs-code">    repoURL: https://github.com/nitheeshp-irl/microservice-helmcharts
    targetRevision: main
    path: service-charts/frontend       # Helm chart folder for the frontend service
    helm:
      valueFiles:
        - ../../env/dev/frontend/frontend-values.yaml
</span>
  # 2) Destination: Which cluster &amp; namespace to deploy into
  destination:
<span class="hljs-code">    server: https://kubernetes.default.svc    # (Assumes Argo CD is running in-cluster)
    namespace: front-end-dev                   # A dedicated namespace for “dev” frontend
</span>
  # 3) Sync Policy: Automate synchronization and enable self-healing
  syncPolicy:
<span class="hljs-code">    automated:
      prune: true          # Delete resources that are no longer in Git
      selfHeal: true       # If someone manually changes live resources, revert to Git state
    syncOptions:
      - CreateNamespace=true  # If the namespace doesn’t exist, Argo CD will create it</span>
</code></pre>
<p>You would repeat a similar pattern under <code>argocd/application/staging/</code> and <code>argocd/application/prod/</code> – each environment has its own <code>frontend.yaml</code>, <code>catalogue.yaml</code>, and so on, but each will point to a different values file under <code>env/staging/…</code> or <code>env/prod/…</code> and likely deploy into a different namespace (for example, <code>front-end-staging</code>, <code>front-end-prod</code>).</p>
<h3 id="heading-env-folders"><strong>Env Folders</strong></h3>
<p>The <code>/env</code> directory is a critical part of our GitOps implementation, containing all environment-specific configurations for our microservices. Each environment (dev, staging, prod) has its own subdirectory with service-specific configurations. These files hold general <strong>Helm chart</strong> values such as resource limits, replica counts, and the container image repository and tag.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span>
  <span class="hljs-attr">repository:</span> <span class="hljs-string">nitheesh86/microservice-frontend</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.11</span>

<span class="hljs-attr">replicaCount:</span> <span class="hljs-number">2</span>

<span class="hljs-attr">resources:</span>
  <span class="hljs-attr">limits:</span>
    <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
  <span class="hljs-attr">requests:</span>
    <span class="hljs-attr">cpu:</span> <span class="hljs-string">"100m"</span>
    <span class="hljs-attr">memory:</span> <span class="hljs-string">"128Mi"</span>
</code></pre>
<h3 id="heading-kargo-folders"><strong>Kargo Folders</strong></h3>
<p>Our Kargo setup is organized in the <code>/kargo</code> directory with several key components:</p>
<pre><code class="lang-markdown">/kargo/
├── catalogue-config/           # Catalogue service promotion configuration
│   ├── catalogue-promotion-tasks.yaml  # Defines how to update catalogue images
│   ├── catalogue-stages.yaml           # Dev, staging, prod stages for catalogue
│   └── catalogue-warehouse.yaml        # Monitors catalogue image repository
├── frontend-config/            # Frontend service promotion configuration
│   ├── frontend-promotion-tasks.yaml   # Defines how to update frontend images
│   ├── frontend-stages.yaml            # Dev, staging, prod stages for frontend
│   └── frontend-warehouse.yaml         # Monitors frontend image repository
├── recommendation-config/      # Recommendation service promotion configuration
│   ├── recommendation-promotion-tasks.yaml  # Image update workflow
│   ├── recommendation-stages.yaml           # Environment stages
│   └── recommendation-warehouse.yaml        # Image monitoring
├── voting-config/              # Voting service promotion configuration
│   ├── voting-promotion-tasks.yaml     # Image update workflow
│   ├── voting-stages.yaml              # Environment stages
│   └── voting-warehouse.yaml           # Image monitoring
├── kargo.yaml                  # ArgoCD application for Kargo installation
├── kustomization.yaml          # This file - combines all resources
├── project.yaml                # Defines the Kargo project
└── projectconfig.yaml          # Project-wide promotion policies
</code></pre>
<p><strong>Stage Configurations:</strong> Kargo uses the concept of "stages" to represent our deployment environments. Each stage defines:</p>
<ul>
<li><p>Which freight (container images) to deploy</p>
</li>
<li><p>The promotion workflow to execute</p>
</li>
<li><p>Environment-specific variables</p>
</li>
</ul>
<p><strong>Warehouse Configuration:</strong> The warehouse monitors our container registry for new images.</p>
<p><strong>Promotion Tasks:</strong> Promotion tasks define the actual workflow for promoting between environments.</p>
<h2 id="heading-how-to-deploy-and-promote-your-craftista-microservices-application">How to Deploy and Promote Your Craftista Microservices Application</h2>
<p>Now I'll explain how to deploy your Craftista microservices application using Argo CD.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748558624685/9f0d5725-cc8a-4e37-851c-d4ab2870bafc.png" alt="ArgoCD Dashboard" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<ul>
<li><p><strong>A local Kubernetes cluster</strong>: We’ll use Minikube for local development.</p>
</li>
<li><p><strong>kubectl and helm</strong>: Ensure both are installed and configured.</p>
</li>
<li><p><strong>Git Clone of the microservice-helmcharts Repo</strong>:</p>
<pre><code class="lang-bash">  git <span class="hljs-built_in">clone</span> https://github.com/nitheeshp-irl/microservice-helmcharts.git
  <span class="hljs-built_in">cd</span> microservice-helmcharts
</code></pre>
</li>
</ul>
<h3 id="heading-1-start-minikube"><strong>1. Start Minikube</strong></h3>
<p>Start Minikube with the specified resources:</p>
<pre><code class="lang-bash">minikube start --memory=4096 --cpus=2
kubectl config use-context minikube
</code></pre>
<p>Adjust <code>--memory</code> and <code>--cpus</code> as needed for your machine.</p>
<h3 id="heading-2-install-argo-cd"><strong>2. Install Argo CD</strong></h3>
<p>Create a namespace:</p>
<pre><code class="lang-bash">kubectl create namespace argocd
</code></pre>
<p>Apply the official install manifest:</p>
<pre><code class="lang-bash">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<h3 id="heading-3-access-the-argo-cd-ui"><strong>3. Access the Argo CD UI</strong></h3>
<p>Port-forward the server:</p>
<pre><code class="lang-bash">kubectl port-forward svc/argocd-server -n argocd 8080:443
</code></pre>
<p><strong>Login</strong>:</p>
<ul>
<li><p><strong>Username</strong>: admin</p>
</li>
<li><p><strong>Password</strong>:</p>
<pre><code class="lang-bash">  kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=<span class="hljs-string">"{.data.password}"</span> | base64 -d
</code></pre>
</li>
</ul>
<p>Open your browser at <a target="_blank" href="http://localhost:8080/"><strong>http://localhost:8080</strong></a>.</p>
<h3 id="heading-4-define-a-craftista-argo-cd-project"><strong>4. Define a “Craftista” Argo CD Project</strong></h3>
<p>Scope Repos, Clusters, and Namespaces:</p>
<pre><code class="lang-bash">kubectl apply -f argocd/application/craftista-project.yaml
</code></pre>
<p>You should see:</p>
<pre><code class="lang-bash">project.argoproj.io/craftista created
</code></pre>
<h3 id="heading-5-deploy-the-development-environment"><strong>5. Deploy the Development Environment</strong></h3>
<p>Create Argo CD applications:</p>
<pre><code class="lang-bash">kubectl apply -f argocd/application/dev/
</code></pre>
<p>Argo CD will:</p>
<ul>
<li><p>Clone the microservice-helmcharts repo.</p>
</li>
<li><p>Render each Helm chart with its <code>env/dev/*-values.yaml</code>.</p>
</li>
<li><p>Create Deployment, Service, and so on in your dev namespaces.</p>
</li>
<li><p>Continuously reconcile desired vs. actual state.</p>
</li>
</ul>
<p>Monitor your progress:</p>
<pre><code class="lang-bash">argocd app list
argocd app get frontend-dev
</code></pre>
<h3 id="heading-6-manual-promotion-staging-amp-prod"><strong>6. Manual Promotion (Staging &amp; Prod)</strong></h3>
<p>Edit the image tag or other values:</p>
<ul>
<li><p><code>env/staging/&lt;service&gt;/&lt;service&gt;-values.yaml</code></p>
</li>
<li><p><code>env/prod/&lt;service&gt;/&lt;service&gt;-values.yaml</code></p>
</li>
</ul>
<p>Commit and push the changes:</p>
<pre><code class="lang-bash">git add env/staging env/prod
git commit -m <span class="hljs-string">"Promote v1.2.0 → staging &amp; prod"</span>
git push
</code></pre>
<p>Argo CD will detect the Git change and automatically sync your staging and prod applications (if automated sync is enabled).</p>
<h3 id="heading-7-automated-promotion-with-kargo"><strong>7. Automated Promotion with Kargo</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748689354529/3759ec0c-7db4-42a8-9f01-f0792dfec895.png" alt="Kargo DashBoard" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>First, install Kargo:</p>
<pre><code class="lang-bash">kubectl apply -f kargo/kargo.yaml
</code></pre>
<p>Configure promotion tasks, stages, and warehouse:</p>
<pre><code class="lang-bash">kubectl apply -k kargo/

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - project.yaml
  - projectconfig.yaml
  - catalogue-config/catalogue-warehouse.yaml
  - catalogue-config/catalogue-stages.yaml
  - catalogue-config/catalogue-promotion-tasks.yaml
  - frontend-config/frontend-warehouse.yaml
  - frontend-config/frontend-stages.yaml
  - frontend-config/frontend-promotion-tasks.yaml
  - recommendation-config/recommendation-warehouse.yaml
  - recommendation-config/recommendation-stages.yaml
  - recommendation-config/recommendation-promotion-tasks.yaml
  - voting-config/voting-warehouse.yaml
  - voting-config/voting-stages.yaml
  - voting-config/voting-promotion-tasks.yaml
</code></pre>
<h2 id="heading-how-the-gitops-pipeline-works">How the GitOps Pipeline Works</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748885521249/285bcaed-447d-4a31-87cb-98b531d9cb0d.png" alt="285bcaed-447d-4a31-87cb-98b531d9cb0d" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748557162526/f4716cfc-a71f-4ddc-bc58-7a42118c3190.png" alt="f4716cfc-a71f-4ddc-bc58-7a42118c3190" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<ol>
<li><strong>Developer Opens a Pull Request</strong>: The journey begins when a developer opens a pull request on one of the microservice repos. This signals that new code (feature, bugfix, config change) is ready to be integrated.</li>
</ol>
<ol start="2">
<li><p><strong>CI (GitHub Actions)</strong></p>
<ul>
<li><p><strong>CI: Lint → Test → Build &amp; Tag</strong>: A single workflow job lints the code, runs unit/integration tests, builds the Docker image, and applies a semantic tag (for example, v1.2.0).</p>
</li>
<li><p><strong>CI OK? (Decision)</strong>:</p>
<ul>
<li><p>If <strong>No</strong>, the pipeline stops and the developer is notified to fix errors.</p>
</li>
<li><p>If <strong>Yes</strong>, the newly built image is pushed to the container registry (DockerHub, ECR, and so on).</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Kargo</strong></p>
<ul>
<li><p><strong>Warehouse discovers new image tag</strong>: Kargo’s Warehouse component continuously watches your registry. As soon as it sees the new tag, it records that image metadata.</p>
</li>
<li><p><strong>Update env/dev values → Git</strong>: Kargo automatically commits an update to <code>env/dev/&lt;service&gt;/…-values.yaml</code>, pointing the dev Helm values file to the new image tag. This Git commit will drive the next step.</p>
</li>
</ul>
</li>
<li><p><strong>GitOps (Argo CD)</strong></p>
<ul>
<li><p><strong>Argo CD sync dev</strong>: Argo CD sees the Git change in the dev values file and pulls it into the cluster, reconciling the actual dev namespace with the desired state.</p>
<ul>
<li><p><strong>Dev deployment healthy? (Decision)</strong>:</p>
<ul>
<li><p>If <strong>No</strong>, Argo CD can optionally roll back and notifies the team (via Slack, email, etc.) of the failed dev rollout.</p>
</li>
<li><p>If <strong>Yes</strong>, it’s time to promote to staging.</p>
</li>
</ul>
</li>
<li><p><strong>Update env/staging values → Git</strong>: Kargo (or you, if manual) commits the same image tag into <code>env/staging/&lt;service&gt;/…-values.yaml</code>.</p>
</li>
<li><p><strong>Argo CD sync staging</strong>: Argo CD deploys that change to the staging namespace.</p>
</li>
<li><p><strong>Staging approval granted? (Decision)</strong>:</p>
<ul>
<li><p>If <strong>No</strong>, Kargo waits (and optionally notifies) until a manual gate is lifted.</p>
</li>
<li><p>If <strong>Yes</strong>, the final promotion commit is made: updating <code>env/prod/&lt;service&gt;/…-values.yaml</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Argo CD sync prod → End</strong>: Argo CD applies the production change, completing the pipeline from commit all the way to live production rollout.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
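<p>To make the CI stage described above more concrete, here is a rough sketch of the build-and-tag step as shell commands. The registry, image name, tag, and the <code>make</code> targets are placeholders; in the real workflow these run inside a GitHub Actions job.</p>
<pre><code class="lang-bash"># Run the service's linter and test suite (tooling differs per service)
make lint test

# Build the image and apply a semantic version tag
docker build -t myregistry/catalogue:v1.2.0 .

# Push the tagged image to the container registry
docker push myregistry/catalogue:v1.2.0
</code></pre>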
<h3 id="heading-pipeline-summary"><strong>Pipeline Summary</strong></h3>
<ol>
<li><p>Developer opens PR → CI tests and builds → Docker image pushed</p>
</li>
<li><p>Kargo Warehouse detects new tag → Git commit to <code>env/dev</code></p>
</li>
<li><p>Argo CD syncs dev → Health check → (if successful) commit to <code>env/staging</code></p>
</li>
<li><p>Argo CD syncs staging → Approval → commit to <code>env/prod</code></p>
</li>
<li><p>Argo CD syncs prod → Live deployment complete</p>
</li>
</ol>
<p>Every stage must pass its health or approval check before the next begins, ensuring that only thoroughly tested and validated code makes it into production.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building a real-world CI/CD pipeline isn’t just about getting code from your laptop into a Kubernetes cluster – it’s about creating a repeatable, auditable, and reliable system that scales with your team and your application complexity.</p>
<p>In this guide, we walked through how I built a complete GitOps-based promotion pipeline using GitHub Actions, Argo CD, and Kargo, all driven by a hands-on microservices project: Craftista. From the first code commit to automated environment promotion, we leveraged industry best practices like semantic versioning, declarative infrastructure, and environment-based GitOps directories.</p>
<p>What makes this approach powerful is not just the tools but also the principles. By treating Git as the single source of truth, and using Kargo to automate what was traditionally a manual and fragile promotion process, we gain predictability and control over our deployments. Argo CD ensures that what’s in Git is always what’s running in our clusters, while Kargo eliminates human error in multi-stage rollouts.</p>
<p>If you’re tired of overly abstract “hello world” DevOps tutorials and want to get your hands dirty with something that feels <strong>real</strong>, Craftista offers the perfect sandbox. This pipeline reflects how teams operate in production – polyglot services, independent deployments, environment promotion gates, and GitOps as the operational backbone.</p>
<p>Whether you're a DevOps engineer sharpening your skills, or a platform team setting standards for internal development, I hope this tutorial provided the clarity and inspiration to build your own commit-to-production pipeline – step by step, with confidence.</p>
<h3 id="heading-further-reading-amp-resources">Further Reading &amp; Resources</h3>
<ul>
<li><p><a target="_blank" href="https://argo-cd.readthedocs.io/">Argo CD Documentat</a><a target="_blank" href="https://argo-cd.readthedocs.io/">ion</a></p>
</li>
<li><p><a target="_blank" href="https://docs.kargo.io/">Kargo Docs</a></p>
</li>
<li><p><a target="_blank" href="https://docs.github.com/en/actions">GitHub Actions Docs</a></p>
</li>
<li><p><a target="_blank" href="https://codefresh.io/blog/how-to-model-your-gitops-environments-and-promote-releases-between-them/">How to Model Your GitOps Environments and Promote Releases between Them</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/craftista/craftista">Craftista Repo</a></p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Simplify AWS Multi-Account Management with Terraform and GitOps ]]>
                </title>
                <description>
                    <![CDATA[ In the past, in the world of cloud computing, a company's journey often began with a single AWS account. In this unified space, development and testing environments coexisted, while the production environment lived in a separate account. This arrange... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/simplify-aws-multi-account-management-with-terraform-and-gitops/</link>
                <guid isPermaLink="false">6745e19265a8ceed4a65c3eb</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Terraform ]]>
                    </category>
                
                    <category>
                        <![CDATA[ gitops ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Tue, 26 Nov 2024 14:56:18 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730239127065/317aa4dd-aba9-4a9e-8abb-7cacfbd0e672.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In the past, in the world of cloud computing, a company's journey often began with a single AWS account. In this unified space, development and testing environments coexisted, while the production environment lived in a separate account.</p>
<p>This arrangement might work well in the early days, but as a company grows and its needs become more specialized, the simplicity of a single account starts to show its limitations. The demand for dedicated environments increases, and soon that company may need to create new AWS accounts for specific functions like security, DevOps, and billing.</p>
<p>With each new account, the complexity of managing security policies and logging across the entire infrastructure grows exponentially. The cloud architects for these companies will then realize that they need a more centralized and streamlined approach to manage this expanding digital presence.</p>
<h3 id="heading-enter-aws-organizations">Enter AWS Organizations</h3>
<p>AWS Organizations is a service designed to streamline AWS account management. This powerful tool allows you to group multiple AWS accounts under a single umbrella. With AWS Organizations, you can easily create organizational units, apply service control policies, and manage permissions across all accounts. This not only simplifies the process but also enhances security and compliance.</p>
<p>The billing processes of AWS Organizations have also been optimized through the centralization of payments and the generation of comprehensive expense reports for each account. This improved clarity in financial management makes it easier for companies to allocate resources in a more efficient manner and strategize for future expansion.</p>
<p>AWS Organizations can help your team consistently enforce security policies, enable logging across all accounts, and streamline administrative tasks. Cloud infrastructure is now a well-organized, secure, and efficient machine, ready to support a company's ambitions for years to come.</p>
<p>In this article, we’ll discuss what it means to have a multi-account setup and how it works. I’ll walk you through everything from the deployment architecture to creating an Organizational Unit and beyond.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-components-of-multi-account-setup">Components of Multi-Account Setup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-automate-a-multi-account-strategy">How to Automate a Multi-Account Strategy</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-aws-organization-structure">AWS Organization Structure</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deployment-architecture">Deployment Architecture</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-overview-of-cicd-components">OverView of CI/CD Components</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-cicd-deployment-process-explained">CI/CD Deployment Process Explained</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-automate-landing-zone-creation">How to Automate Landing Zone Creation</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-create-an-organizational-unit">How to Create an Organizational Unit</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-automate-attaching-control-tower-control-to-the-ou">How to Automate Attaching Control Tower Control to the OU</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-components-of-multi-account-setup"><strong>Components of Multi-Account Setup</strong></h2>
<p>First, let's take a detailed look at the various components that make up an AWS multi-account strategy:</p>
<ul>
<li><p><strong>AWS Control Tower</strong></p>
</li>
<li><p><strong>Landing zone</strong></p>
</li>
<li><p><strong>AWS OU</strong></p>
</li>
<li><p><strong>AWS SSO</strong></p>
</li>
<li><p><strong>Control Tower Controls</strong></p>
</li>
<li><p><strong>Service control policies (SCPs)</strong></p>
</li>
</ul>
<h3 id="heading-what-is-aws-control-tower"><strong>What is AWS Control Tower?</strong></h3>
<p>AWS Control Tower is a comprehensive service that enables you to set up and manage a multi-account AWS environment efficiently. It’s designed based on best practices from AWS experts and adheres to industry standards and requirements.</p>
<p>By using AWS Control Tower, you can ensure that your AWS environment is secure, compliant, and well-organized, facilitating easier management and scalability.</p>
<h4 id="heading-features-of-aws-control-tower">Features of AWS Control Tower:</h4>
<ul>
<li><p>Distributed teams can create new AWS accounts quickly, while central cloud IT teams stay confident that every account remains in line with company-wide policies.</p>
</li>
<li><p>You can enforce best practices, standards, and regulatory requirements with preconfigured controls.</p>
</li>
<li><p>You can automate your AWS environment setup with best-practice blueprints. These blueprints cover various aspects such as multi-account structure, identity and access management, as well as account provisioning workflow.</p>
</li>
<li><p>It lets you govern new or existing account configurations, gain visibility into compliance status, and enforce controls at scale.</p>
</li>
</ul>
<h3 id="heading-what-is-a-landing-zone-in-aws"><strong>What is a Landing Zone in AWS?</strong></h3>
<p>A landing zone helps you quickly set up a cloud environment using automation, including preconfigured settings that follow industry best practices for ensuring the security of your AWS accounts.</p>
<p>This starting point gives your company a foundation for launching workloads and applications quickly, on top of a secure and reliable infrastructure environment.</p>
<p>There are two choices for creating a landing zone. First, you can use the AWS Control Tower dashboard. Second, you can build a custom landing zone. If you are new to AWS, I recommend using AWS Control Tower to create a landing zone.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137800622/f72dbf02-fa34-4004-999d-71a2af33f90b.png" alt="AWS Landing Zone" class="image--center mx-auto" width="1400" height="728" loading="lazy"></p>
<p>If you opt for creating a landing zone via the Control Tower dashboard, the following will be implemented in your landing zone:</p>
<ul>
<li><p>A multi-account environment with AWS organizations.</p>
</li>
<li><p>Identity management through the default directory in AWS IAM Identity Center.</p>
</li>
<li><p>Federated access to accounts using IAM Identity Center.</p>
</li>
<li><p>Centralized logging from AWS CloudTrail and AWS Config stored in Amazon Simple Storage Service (Amazon S3).</p>
</li>
<li><p>Enabled cross-account <a target="_blank" href="https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html">security audits</a> using IAM Identity Center.</p>
</li>
</ul>
<h3 id="heading-what-is-an-aws-organizational-unit"><strong>What is an AWS Organizational Unit?</strong></h3>
<p>Using multiple accounts allows you to better support your security goals and company operations.</p>
<p>AWS Organizations enables policy-based management of multiple AWS accounts. When you create new accounts, you can arrange them in organizational units (OUs), which are groupings of accounts that provide the same application or service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137833615/6eebb3ab-94d0-4286-8dc4-9d3ae297e186.png" alt="AWS Organizational Units" class="image--center mx-auto" width="786" height="496" loading="lazy"></p>
<h4 id="heading-advantages-of-using-ous">Advantages of Using OUs:</h4>
<ul>
<li><p>Accounts are units of security protection. Potential hazards and security threats can be contained within one account without affecting others.</p>
</li>
<li><p>Teams have different assignments and resource needs. Setting up different accounts prevents teams from interfering with one another, as they might do if they used the same account.</p>
</li>
<li><p>Isolating data stores to an account reduces the number of people who have access to and can manage the data store.</p>
</li>
<li><p>The multi-account concept allows you to generate separate billable items for business divisions, functional teams, or individual users.</p>
</li>
<li><p>AWS quotas are set up per account. Separating workloads into different accounts gives each account an individual quota.</p>
</li>
</ul>
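<p>For reference, creating an OU under the organization root boils down to a single API call. Here's a minimal AWS CLI sketch (the root ID and OU name are placeholders):</p>
<pre><code class="lang-bash"># Look up the organization root ID
aws organizations list-roots --query 'Roots[0].Id' --output text

# Create an OU named "apps" under that root
aws organizations create-organizational-unit \
  --parent-id r-examplerootid \
  --name apps
</code></pre>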
<h3 id="heading-what-is-aws-iam-identity-center"><strong>What is AWS IAM Identity Center?</strong></h3>
<p>The AWS IAM Identity Center provides a centralized solution for managing access to multiple AWS accounts and business applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137875918/349673f8-1a09-4bcc-b1db-6a898d3d06b5.png" alt="AWS identity center" class="image--center mx-auto" width="1100" height="631" loading="lazy"></p>
<p>This method offers a single sign-on feature that allows employees to access all assigned accounts and applications from a single credential.</p>
<p>The personalized web user portal provides a centralized view of the user's assigned roles in AWS accounts.</p>
<p>For a uniform authentication experience, users can sign in using the AWS Command Line Interface, AWS SDKs, or the AWS Console Mobile Application with their directory credentials.</p>
<p>You can also set up and oversee user IDs in IAM Identity Center's identity store, or you can connect to your existing identity provider, such as Microsoft Active Directory, Okta, and so on.</p>
<h3 id="heading-control-tower-controls-guardrails"><strong>Control Tower Controls (Guardrails)</strong></h3>
<p>Controls are predefined governance rules for security, operations, and compliance. You can select and apply them enterprise-wide or to specific groups of accounts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137911519/5dac3db6-15e6-476b-9b50-a1597a02fe84.png" alt="ControlTowerControls" class="image--center mx-auto" width="1322" height="843" loading="lazy"></p>
<p>Controls can be detective, preventive, or proactive and can be either mandatory or optional.</p>
<ul>
<li><p>First, we have detective controls (for example, detecting whether public read access to Amazon S3 buckets is allowed).</p>
</li>
<li><p>Next, preventive controls establish intent and prevent deployment of resources that don’t conform to your policies (for example, enabling AWS CloudTrail in all accounts).</p>
</li>
<li><p>Finally, proactive control capabilities use <a target="_blank" href="https://aws.amazon.com/blogs/mt/proactively-keep-resources-secure-and-compliant-with-aws-cloudformation-hooks/">AWS CloudFormation Hooks</a> to proactively identify and block the CloudFormation deployment of resources that are not compliant with the controls you have enabled. For example, developers cannot create S3 buckets that are capable of storing data in an unencrypted state at rest.</p>
</li>
</ul>
<h3 id="heading-service-control-policies-scp"><strong>Service Control Policies (SCP)</strong></h3>
<p>SCPs are a feature of the organization that allows you to set the maximum permissions for member accounts within the organization.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732137972306/80d0782c-0801-4548-9c0c-d4a11d43ecbe.png" alt="Service Control Policies" class="image--center mx-auto" width="1036" height="658" loading="lazy"></p>
<p>There are many functions and features of an SCP:</p>
<ul>
<li><p>If an SCP denies an action on an account, no entity in the account can perform that action, even if its IAM permissions allow it.</p>
</li>
<li><p>Prevents stopping or deletion of CloudTrail logging.</p>
</li>
<li><p>Prevents deletion of VPC flow logs.</p>
</li>
<li><p>Prohibits AWS accounts from leaving the organization.</p>
</li>
<li><p>Prevents AWS GuardDuty changes.</p>
</li>
<li><p>Prevents resource sharing using AWS Resource Access Manager (RAM) either externally or across environments.</p>
</li>
<li><p>Prevents disabling the default Amazon EBS encryption.</p>
</li>
<li><p>Prevents Amazon S3 unencrypted object uploads.</p>
</li>
<li><p>And prevents IAM users and roles in the affected accounts from creating certain resource types if the request doesn't include the specified tags.</p>
</li>
</ul>
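<p>As an illustration, the guardrail that prohibits accounts from leaving the organization is just a small deny-only policy. Here's a hedged sketch of creating and attaching it with the AWS CLI (the file name, policy ID, and OU ID are placeholders):</p>
<pre><code class="lang-bash"># deny-leave-org.json:
# {
#   "Version": "2012-10-17",
#   "Statement": [
#     { "Effect": "Deny", "Action": "organizations:LeaveOrganization", "Resource": "*" }
#   ]
# }

# Create the SCP from the JSON file
aws organizations create-policy \
  --name DenyLeaveOrganization \
  --type SERVICE_CONTROL_POLICY \
  --description "Prevent member accounts from leaving the organization" \
  --content file://deny-leave-org.json

# Attach it to an OU, using the policy ID returned by the previous command
aws organizations attach-policy --policy-id p-examplepolicyid --target-id ou-exampleouid
</code></pre>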
<h2 id="heading-how-to-automate-a-multi-account-strategy"><strong>How to Automate a Multi-Account Strategy</strong></h2>
<p>Now that you’re familiar with the key concepts of a Multi-Account Strategy in AWS, let’s dive deeper into the practical parts.</p>
<p>In the coming subsections, we’ll cover how you can set up an AWS Control Tower, create a landing zone, and automatically create organizational units (OUs). I’ll also walk you through how to configure Control Tower controls—often known as guardrails—to uphold security, compliance, and governance over your AWS environment.</p>
<p>Once we finish this deployment, we will have a solution that includes the following components:</p>
<ul>
<li><p>Creates an AWS Organizations OU named Core within the organizational root structure.</p>
</li>
<li><p>Creates and adds two shared accounts to the Security OU: the Log Archive account and the Audit account.</p>
</li>
<li><p>Creates a cloud-native directory in IAM Identity Center, with ready-made groups and single sign-on access.</p>
</li>
<li><p>Applies all required preventive controls to enforce policies.</p>
</li>
<li><p>Applies required detective controls to identify configuration violations.</p>
</li>
</ul>
<h2 id="heading-aws-organization-structure"><strong>AWS Organization Structure</strong></h2>
<p>We will create and implement the following organizational structure. You can add or modify OUs as per your requirements.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732138006995/423e54cd-bf74-4aef-a2a3-d52294482ca0.png" alt="AWS Organization Structure" class="image--center mx-auto" width="1295" height="694" loading="lazy"></p>
<h2 id="heading-deployment-architecture"><strong>Deployment Architecture</strong></h2>
<p>I will be using Terraform Cloud and GitHub Actions to automate the entire process. This architecture applies to all three components: the core accounts, the landing zone, and organizational unit (OU) creation and controls.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732138041912/0cba5af0-69ea-4ae9-986e-c1608d3d5c21.avif" alt="Deployment Architecture" width="1888" height="518" loading="lazy"></p>
<h3 id="heading-overview-of-cicd-components">Overview of CI/CD Components</h3>
<h4 id="heading-1-github-actions"><strong>1. GitHub Actions</strong></h4>
<p>GitHub Actions is a CI/CD platform that lets you automate your build, test, and deployment pipeline. You can create workflows that automatically build and test every pull request to your repository, ensuring code changes are verified before merging.</p>
<p>GitHub Actions also lets you deploy merged pull requests to production, streamlining the release process and reducing errors.</p>
<p>Using GitHub Actions enhances your development workflow, improves code quality, and speeds up the delivery of new features and updates.</p>
<h4 id="heading-2-terraform-cloud"><strong>2. Terraform Cloud</strong></h4>
<p>Terraform Cloud is a platform by HashiCorp for managing and executing your Terraform code. It offers tools and features that enhance collaboration between developers and DevOps engineers, making teamwork more efficient.</p>
<p>With Terraform Cloud, you can simplify and streamline your workflow, making it easier to handle complex infrastructure tasks and deployments. The platform also provides strong security features to protect your code and infrastructure, keeping your product secure throughout its lifecycle.</p>
<h3 id="heading-cicd-deployment-process-explained"><strong>CI/CD Deployment Process Explained</strong></h3>
<p>DevOps engineers are responsible for writing the Terraform code and then creating a pull request. I have added several test cases for my Terraform code in the <code>terraform-plan.yml</code> file, which runs only on the feature branch.</p>
<ul>
<li><p><strong>Check environment variables:</strong> Ensures all required environment variables are set.</p>
</li>
<li><p><strong>Checkout Code:</strong> Uses the <code>actions/checkout</code> action to check out the repository.</p>
</li>
<li><p><strong>Verify Checkout:</strong> Verifies that the checkout was successful.</p>
</li>
<li><p><strong>Validation:</strong> Verifies the Terraform code for any syntax errors (see the sketch after this list). Pull requests contain the proposed code changes, allowing team members to review and merge them into the master branch. Once a pull request is merged into the master branch, all test cases are rerun, and the landing zone is created through Terraform Cloud.</p>
</li>
</ul>
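<p>Under the hood, the validation step usually boils down to the standard Terraform checks. Here's a minimal sketch of what that job runs (your workflow may differ):</p>
<pre><code class="lang-bash"># Check formatting without changing any files
terraform fmt -check -recursive

# Initialize without configuring the remote backend (enough for validation)
terraform init -backend=false

# Validate the syntax and internal consistency of the configuration
terraform validate
</code></pre>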
<h2 id="heading-what-to-know-before-setting-up-control-tower"><strong>What to Know Before Setting up Control Tower</strong></h2>
<p>Before beginning the process of setting up AWS Control Tower, it is important to have a clear understanding of its limitations and to consider some key points.</p>
<ul>
<li><p>When setting up a landing zone, it is important to choose your home region. Once you have made a selection, you won’t be able to change your home region.</p>
</li>
<li><p>If the existing AWS account you intend to use for Control Tower is already part of an organizational unit (OU), you won’t be able to use it. To proceed, you’ll need to create a new AWS account that is not associated with any OU.</p>
</li>
<li><p>As part of the Control Tower creation process, you’ll need to create mandatory accounts such as the Log Archive account and the Audit account. Each of these accounts requires its own unique email address.</p>
</li>
<li><p>In order to set up the Landing Zone in the Management Account, it is essential to ensure that you have subscribed to the following services in the management account:</p>
<ul>
<li>S3, EC2, SNS, VPC, CloudFormation, CloudTrail, CloudWatch, AWS Config, IAM, AWS Lambda</li>
</ul>
</li>
<li><p>The AWS Control Tower baseline covers only a few services with limited customization options: IAM Identity Center, CloudTrail, Config, some configuration rules, and some SCPs in AWS Organizations.</p>
</li>
<li><p>Implementing IAM Identity Center is limited to the management account of an organization.</p>
</li>
<li><p>AWS Control Tower implements concurrency limitations, allowing only one operation to be performed at a time.</p>
</li>
<li><p>Note that certain AWS Regions do not support the operation of some controls in AWS Control Tower. This is because the specified Regions lack the necessary underlying functionality to support the required operations.</p>
</li>
</ul>
<h3 id="heading-how-to-create-a-control-tower"><strong>How to Create a Control Tower</strong></h3>
<p>Creating a Control Tower means setting up a landing zone. An AWS landing zone requires creating two new member accounts: the Audit account and the Log Archive account. You will need two unique email addresses for these accounts.</p>
<p>We will manage this process using Terraform modules. To keep things simple and clear, we will divide the project into several modules. One module will create the two core accounts. Another module will handle the setup of the landing zone. The final module will create Organizational Units (OUs) and apply Control Tower controls to ensure governance and compliance.</p>
<h2 id="heading-how-to-automate-landing-zone-creation"><strong>How to Automate Landing Zone Creation</strong></h2>
<p>When you run this code, the Core OU and two accounts are created under the Core OU. I have mentioned two repositories for each component: one for deploying the AWS resources like the landing zone, OU, and Control Tower Controls and another for the Terraform module.</p>
<p>A <em>Terraform module</em> is a set of standard configuration files in a specific directory. Terraform modules group resources for a specific task, which reduces the amount of code needed for similar infrastructure components.</p>
<p>I have imported both the core account creation and landing zone creation modules into the same <a target="_blank" href="https://github.com/nitheeshp-irl/aws-landing-zone/blob/main/main.tf"><code>main.tf</code></a> file. This is necessary because the landing zone creation depends on the core account module. Including them together ensures all dependencies are managed properly and the deployment process is efficient.</p>
<p>This method also simplifies the project structure and helps avoid potential issues from managing these components separately.</p>
<p>The AWS Control Tower <a target="_blank" href="https://docs.aws.amazon.com/controltower/latest/APIReference/API_CreateLandingZone.html"><code>CreateLandingZone</code></a> API needs a landing zone version and a manifest file as input parameters. Below is an example <strong>LandingZoneManifest.json</strong> manifest.</p>
<pre><code class="lang-json">{
   <span class="hljs-attr">"governedRegions"</span>: [<span class="hljs-string">"us-west-2"</span>,<span class="hljs-string">"us-west-1"</span>],
   <span class="hljs-attr">"organizationStructure"</span>: {
       <span class="hljs-attr">"security"</span>: {
           <span class="hljs-attr">"name"</span>: <span class="hljs-string">"CORE"</span>
       },
       <span class="hljs-attr">"sandbox"</span>: {
           <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Sandbox"</span>
       }
   },
   <span class="hljs-attr">"centralizedLogging"</span>: {
        <span class="hljs-attr">"accountId"</span>: <span class="hljs-string">"222222222222"</span>,
        <span class="hljs-attr">"configurations"</span>: {
            <span class="hljs-attr">"loggingBucket"</span>: {
                <span class="hljs-attr">"retentionDays"</span>: <span class="hljs-number">60</span>
            },
            <span class="hljs-attr">"accessLoggingBucket"</span>: {
                <span class="hljs-attr">"retentionDays"</span>: <span class="hljs-number">60</span>
            },
            <span class="hljs-attr">"kmsKeyArn"</span>: <span class="hljs-string">"arn:aws:kms:us-west-1:123456789123:key/e84XXXXX-6bXX-49XX-9eXX-ecfXXXXXXXXX"</span>
        },
        <span class="hljs-attr">"enabled"</span>: <span class="hljs-literal">true</span>
   },
   <span class="hljs-attr">"securityRoles"</span>: {
        <span class="hljs-attr">"accountId"</span>: <span class="hljs-string">"333333333333"</span>
   },
   <span class="hljs-attr">"accessManagement"</span>: {
        <span class="hljs-attr">"enabled"</span>: <span class="hljs-literal">true</span>
   }
}
</code></pre>
<p>This module sets up the AWS landing zone using <code>landingzone_manifest_template</code>. The landing zone version and admin account ID are given through variables. This module also creates several IAM roles required for the landing zone setup.</p>
<p>I defined a local variable <code>landingzone_manifest_template</code>, which is a JSON template for setting up the landing zone. This JSON template has several important settings:</p>
<pre><code class="lang-yaml"><span class="hljs-string">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>
}

<span class="hljs-string">locals</span> {
  <span class="hljs-string">landingzone_manifest_template</span> <span class="hljs-string">=</span> <span class="hljs-string">&lt;&lt;EOF</span>
{
    <span class="hljs-attr">"governedRegions":</span> <span class="hljs-string">$</span>{<span class="hljs-string">jsonencode(var.governed_regions)</span>},
    <span class="hljs-attr">"organizationStructure":</span> {
        <span class="hljs-attr">"security":</span> {
            <span class="hljs-attr">"name":</span> <span class="hljs-string">"Core"</span>
        }
    },
    <span class="hljs-attr">"centralizedLogging":</span> {
         <span class="hljs-attr">"accountId":</span> <span class="hljs-string">"${module.aws_core_accounts.log_account_id}"</span>,
         <span class="hljs-attr">"configurations":</span> {
             <span class="hljs-attr">"loggingBucket":</span> {
                 <span class="hljs-attr">"retentionDays":</span> <span class="hljs-string">$</span>{<span class="hljs-string">var.retention_days</span>}
             },
             <span class="hljs-attr">"accessLoggingBucket":</span> {
                 <span class="hljs-attr">"retentionDays":</span> <span class="hljs-string">$</span>{<span class="hljs-string">var.retention_days</span>}
             }
         },
         <span class="hljs-attr">"enabled":</span> <span class="hljs-literal">true</span>
    },
    <span class="hljs-attr">"securityRoles":</span> {
         <span class="hljs-attr">"accountId":</span> <span class="hljs-string">"${module.aws_core_accounts.security_account_id}"</span>
    },
    <span class="hljs-attr">"accessManagement":</span> {
         <span class="hljs-attr">"enabled":</span> <span class="hljs-literal">true</span>
    }
}
<span class="hljs-string">EOF</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_core_accounts"</span> {
  <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"https://github.com/nitheeshp-irl/terraform_modules/aws_core_accounts_module"</span>

  <span class="hljs-string">logging_account_email</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.logging_account_email</span>
  <span class="hljs-string">logging_account_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.logging_account_name</span>
  <span class="hljs-string">security_account_email</span> <span class="hljs-string">=</span> <span class="hljs-string">var.security_account_email</span>
  <span class="hljs-string">security_account_name</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.security_account_name</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_landingzone"</span> {
  <span class="hljs-string">source</span>                  <span class="hljs-string">=</span> <span class="hljs-string">"https://github.com/nitheeshp-irl/blog_terraform_modules/aws_landingzone_module"</span>
  <span class="hljs-string">manifest_json</span>           <span class="hljs-string">=</span> <span class="hljs-string">local.landingzone_manifest_template</span>
  <span class="hljs-string">landingzone_version</span>     <span class="hljs-string">=</span> <span class="hljs-string">var.landingzone_version</span>
  <span class="hljs-string">administrator_account_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.administrator_account_id</span>
}
</code></pre>
<ul>
<li><p><strong>Governed Regions</strong>: Specifies the regions governed by the landing zone.</p>
</li>
<li><p><strong>Organization Structure</strong>: Defines the security structure with a dedicated security account.</p>
</li>
<li><p><strong>Centralized Logging</strong>: Configures logging, specifying the account ID and retention policies for logs.</p>
</li>
<li><p><strong>Security Roles</strong>: Specifies the account ID for security roles.</p>
</li>
<li><p><strong>Access Management</strong>: Enables access management.</p>
</li>
<li><p><strong>Core Accounts</strong>: The core accounts code, also defined in the same file, is what sets up essential AWS accounts for logging and security.</p>
</li>
</ul>
<p>You can find the full code here: <a target="_blank" href="https://github.com/nitheeshp-irl/aws-landing-zone">https://github.com/nitheeshp-irl/aws-landing-zone</a>.</p>
<h2 id="heading-how-to-create-an-organizational-unit"><strong>How to Create an Organizational Unit</strong></h2>
<p>When you run this code, different organizational units (OUs) are created according to the specifications in the <a target="_blank" href="https://github.com/nitheeshp-irl/aws-orgunits/blob/main/variables.auto.tfvars">variable</a> file.</p>
<p>Once the landing zone setup is finished, we can create an OU as per our business requirements. This will take the OU name from the variable file and create the OU.</p>
<pre><code class="lang-json">aws_region = <span class="hljs-string">"us-east-2"</span>

organizational_units = [
  {
    unit_name = <span class="hljs-attr">"apps"</span>
  },
  {
    unit_name = <span class="hljs-attr">"infra"</span>
  },
  {
    unit_name = <span class="hljs-attr">"stagingpolicy"</span>
  },
  {
    unit_name = <span class="hljs-attr">"sandbox"</span>
  },
  {
    unit_name = <span class="hljs-attr">"security"</span>
  }
]
</code></pre>
<p>You can see the code here:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/aws-orgunits">AWS Organizational Units (OUs) Terraform Repo</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/blog-terraform-modules/tree/main/aws_org_module">AWS Organizational Units Terraform Module Path</a></p>
</li>
</ul>
<h2 id="heading-how-to-automate-attaching-control-tower-control-to-the-ou"><strong>How to Automate Attaching Control Tower Control to the OU</strong></h2>
<p>Once you have created the OU units using the above repository, this repository will apply Control Tower controls to the OUs.</p>
<p>After creating the required objects, you can attach controls to the OU if you need them. Here is the <a target="_blank" href="https://github.com/nitheeshp-irl/controltower_controls/blob/main/main.tf"><code>main.tf</code></a> file:</p>
<pre><code class="lang-yaml"><span class="hljs-string">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_controls"</span> {
  <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"https://github.com/nitheeshp-irl/blog_terraform_modules/awscontroltower-controls_module"</span>

  <span class="hljs-string">aws_region</span> <span class="hljs-string">=</span> <span class="hljs-string">var.aws_region</span>
  <span class="hljs-string">controls</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.controls</span>
}
</code></pre>
<p>We used Terraform modules to create AWS resources.</p>
<p>Here are the control variables:</p>
<pre><code class="lang-json">aws_region = <span class="hljs-string">"us-east-2"</span>


controls = [
  {
    control_names = [
      <span class="hljs-attr">"AWS-GR_ENCRYPTED_VOLUMES"</span>,
      <span class="hljs-attr">"AWS-GR_EBS_OPTIMIZED_INSTANCE"</span>,
      <span class="hljs-attr">"AWS-GR_EC2_VOLUME_INUSE_CHECK"</span>,
      <span class="hljs-attr">"AWS-GR_RDS_INSTANCE_PUBLIC_ACCESS_CHECK"</span>,
      <span class="hljs-attr">"AWS-GR_RDS_SNAPSHOTS_PUBLIC_PROHIBITED"</span>,
      <span class="hljs-attr">"AWS-GR_RDS_STORAGE_ENCRYPTED"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICTED_COMMON_PORTS"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICTED_SSH"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICT_ROOT_USER"</span>,
      <span class="hljs-attr">"AWS-GR_RESTRICT_ROOT_USER_ACCESS_KEYS"</span>,
      <span class="hljs-attr">"AWS-GR_ROOT_ACCOUNT_MFA_ENABLED"</span>,
      <span class="hljs-attr">"AWS-GR_S3_BUCKET_PUBLIC_READ_PROHIBITED"</span>,
      <span class="hljs-attr">"AWS-GR_S3_BUCKET_PUBLIC_WRITE_PROHIBITED"</span>,
    ],
    organizational_unit_names = [<span class="hljs-attr">"infra"</span>, <span class="hljs-attr">"apps"</span>]
  }
]
</code></pre>
<p>You can see the code here:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/controltower_controls">Terraform Repo for Creating control tower controls</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/nitheeshp-irl/blog-terraform-modules/tree/main/awscontroltower-controls_module">Terraform Module for creating Control Tower Controls</a></p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Navigating a multi-account strategy in AWS can be challenging, but with AWS Control Tower and a structured approach, it becomes manageable.</p>
<p>Using AWS Control Tower, your team can ensure that their AWS environments are secure, compliant, and well-organized. The automated setup, governance at scale, and centralized management through AWS Organizations provide a strong foundation for cloud infrastructure.</p>
<p>Implementing a landing zone through AWS Control Tower offers a secure and standardized starting point, allowing for quicker deployment and better governance. Using organizational units (OUs) segregates accounts based on business needs, improving security and operational efficiency. AWS IAM Identity Center simplifies access management, providing a unified authentication experience across multiple accounts and applications.</p>
<p>Service Control Policies (SCPs) help keep things secure and compliant by making sure all resources follow the organization's rules. Terraform Cloud and GitHub Actions make it easier to deploy resources, offering a smooth CI/CD pipeline for managing infrastructure changes.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ AWS Security Specialty Certification: How to Prepare for the Exam ]]>
                </title>
                <description>
                    <![CDATA[ Welcome to my latest tutorial! After a three-year hiatus from certifications, I'm thrilled to announce that I've successfully obtained the AWS Certified Security Specialty certification. As someone who strongly believes in the power of community lear... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/aws-security-specialty-certification-study-tips/</link>
                <guid isPermaLink="false">670e8de0b14141d162654266</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Certification ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Security ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Tue, 15 Oct 2024 15:44:32 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729003266992/5be45b39-6a46-42a0-89f8-82dde56f0207.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Welcome to my latest tutorial! After a three-year hiatus from certifications, I'm thrilled to announce that I've successfully obtained the AWS Certified Security Specialty certification. As someone who strongly believes in the power of community learning, I'm excited to share my journey and insights with you.</p>
<p>In this guide, I'll take you through my experience of preparing for and passing the AWS Certified Security Specialty exam. Rather than a comprehensive study guide, I'll present this as a collection of study notes and personal observations. My aim is to provide you with practical tips and strategies that helped me succeed.</p>
<p>For those seeking a more structured approach, I highly recommend the official AWS certification study guide and the excellent resources provided by TutorialsDojo. These were invaluable in my preparation and could be great resources for your own journey.</p>
<p>So, whether you're considering this certification or you’re just curious about AWS security, I hope you'll find value in the experiences and insights I'm about to share.</p>
<h2 id="heading-should-you-get-certified">Should You Get Certified?</h2>
<p>There are mixed opinions in the tech industry about the importance of certifications. Some people argue that the certificates you have don't matter – it's all about your real-world knowledge.</p>
<p>But not everyone has the chance to work with real-world projects, and certification questions are based on real-world scenarios. So if you haven't had much opportunity to work with AWS security in practice, you can learn from this exam and apply what you learn to actual projects.</p>
<p>On the other hand, if you're already working with AWS, taking the exam is an excellent chance to test your knowledge and learn more about its internal workings.</p>
<p>For example, you might have been working with AWS for quite some time, but you haven't touched AWS security, or haven't been following best practices. The certification covers every aspect of AWS security, so you will learn how you can reduce your costs and follow best practices.</p>
<h2 id="heading-exam-structure">Exam Structure</h2>
<ul>
<li><p><strong>Exam Duration</strong>: 170 Mins</p>
</li>
<li><p><strong>Exam Format</strong>: 65 Questions. Multiple Choice, Multiple response</p>
</li>
<li><p><strong>Passing Score</strong>: The exam uses a scaled scoring system from 100 to 1,000. To pass, you need to achieve a minimum score of 750.</p>
</li>
<li><p><strong>Cost</strong>: $300 USD. Additional tax may apply (around $30).</p>
</li>
<li><p><strong>Delivery Method</strong>: Pearson VUE testing center or online proctored exam</p>
</li>
</ul>
<p>It took me about 110 minutes to finish the questions, and I marked 25 for review. I then spent another 60 minutes reviewing those 25 questions.</p>
<p>In my case, the internet got disconnected, and my exam froze. Don't panic! Just launch the VUE software again—you are allowed to resume the exam. No snacking or restroom breaks are allowed, but you can have water.</p>
<h2 id="heading-my-study-approach">My Study Approach</h2>
<p>I used a structured method to prepare for the AWS Certified Security Specialty exam:</p>
<ul>
<li><p>Completed a comprehensive AWS security course on Udemy</p>
</li>
<li><p>Practiced with multiple sets of exam questions:</p>
<ul>
<li><p>Carefully analyzed incorrect answers</p>
</li>
<li><p>Consulted AWS documentation and AWS YouTube videos for a deeper understanding</p>
</li>
</ul>
</li>
<li><p>Used additional practice resources:</p>
<ul>
<li><p>TutorialsDojo practice exams</p>
</li>
<li><p>WhizLabs mock tests</p>
</li>
</ul>
</li>
</ul>
<p>This approach helped me gain both theoretical knowledge and practical problem-solving skills essential for the exam.</p>
<h2 id="heading-key-topics-and-concepts">Key Topics and Concepts</h2>
<h3 id="heading-aws-iam-credential-report">AWS IAM Credential Report</h3>
<p>Understanding how to review the AWS IAM credential report is crucial. Here are some key points:</p>
<ol>
<li><p>Multi-Factor Authentication (MFA) Enforcement: Identify users who haven't enabled MFA and enforce its usage.</p>
</li>
<li><p>Root Account Monitoring: Monitor usage of the root account to ensure it's not being used for day-to-day operations.</p>
</li>
<li><p>Track user creation and last activity dates to manage user lifecycles effectively.</p>
</li>
<li><p>Access Key Usage Monitoring: Identify unused access keys that could pose a security risk.</p>
</li>
<li><p>Find users with old passwords or access keys that might be compromised.</p>
</li>
<li><p>Understand Report Format: <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html#id_credentials_understanding_the_report_format">AWS Documentation</a></p>
</li>
</ol>
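<p>Generating and downloading the report is a two-step operation. A quick AWS CLI sketch:</p>
<pre><code class="lang-bash"># Ask IAM to generate a fresh credential report
aws iam generate-credential-report

# Download the report (the content is base64-encoded CSV)
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv
</code></pre>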
<h3 id="heading-aws-s3-object-lock">AWS S3 Object Lock</h3>
<p>AWS S3 Object Lock is a feature that helps prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. It's particularly useful for scenarios requiring data immutability, such as regulatory compliance or protection against accidental or malicious deletion.</p>
<ul>
<li><p>Compliance Mode: Prevents anyone, including the root user, from overwriting or deleting an object version.</p>
</li>
<li><p>Governance Mode: Allows users with special permissions to overwrite or delete protected object versions.</p>
</li>
</ul>
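<p>Object Lock has to be enabled when the bucket is created, after which you can apply a default retention rule. A hedged AWS CLI sketch (the bucket name is a placeholder):</p>
<pre><code class="lang-bash"># Object Lock can only be enabled at bucket creation time
# (outside us-east-1, also add: --create-bucket-configuration LocationConstraint=eu-west-1)
aws s3api create-bucket --bucket my-compliance-bucket --object-lock-enabled-for-bucket

# Apply a default retention rule: 30 days in compliance mode
aws s3api put-object-lock-configuration \
  --bucket my-compliance-bucket \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'
</code></pre>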
<h3 id="heading-integrating-on-premise-active-directory-with-iam">Integrating On-Premise Active Directory with IAM</h3>
<p>It's important to know the steps involved in integrating on-premise Active Directory with IAM for single sign-on. For more details, refer to this <a target="_blank" href="https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/">AWS Blog Post</a>.</p>
<h3 id="heading-aws-service-control-policies-scps">AWS Service Control Policies (SCPs)</h3>
<p>Study AWS SCP examples. Service Control Policies (SCPs) are organization-level policies that manage permissions across your AWS organization. They provide centralized control over the maximum available permissions for IAM users and roles within your organization's accounts.</p>
<p>For SCP examples, refer to the <a target="_blank" href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html">AWS Documentation</a>.</p>
<h3 id="heading-serving-private-content-through-cloudfront">Serving Private Content through CloudFront</h3>
<p>Learn how to serve private content through CloudFront. The <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html">AWS CloudFront Developer Guide</a> provides detailed information on this topic.</p>
<ul>
<li><p>Using signed URLs is beneficial when you want to restrict access to individual files, for example, an installation download for your application.</p>
</li>
<li><p>Using signed cookies is beneficial when you want to provide access to multiple restricted files, and don't want to change your current URLs.</p>
</li>
</ul>
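<p>As a rough example, the AWS CLI can produce a signed URL directly. In this sketch the distribution domain, key pair ID, private key file, and expiry date are all placeholders:</p>
<pre><code class="lang-bash"># Generate a signed URL that stops working after the given date
aws cloudfront sign \
  --url https://d111111abcdef8.cloudfront.net/downloads/installer.zip \
  --key-pair-id K2JCJMDEHXQW5F \
  --private-key file://cloudfront-private-key.pem \
  --date-less-than 2026-01-01
</code></pre>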
<h3 id="heading-understanding-ephemeral-ports">Understanding Ephemeral Ports</h3>
<p>Understand why it's important to set the range for ephemeral ports in outbound rules. Ephemeral ports are temporary ports used in network and internet communications, managed by the machine's operating system.</p>
<p>Check out this <a target="_blank" href="https://remy-nts.medium.com/aws-nacl-why-the-need-to-set-ephemeral-ports-range-for-outbound-rules-50ee93986555">Medium article</a> on NACL and ephemeral ports.</p>
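<p>In practice, this means adding an outbound NACL rule that allows return traffic on the ephemeral range (1024-65535). A sketch with the AWS CLI (the NACL ID and rule number are placeholders):</p>
<pre><code class="lang-bash"># Allow outbound return traffic to clients on the ephemeral port range
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --egress \
  --rule-number 120 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 \
  --rule-action allow
</code></pre>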
<h3 id="heading-securing-access-to-websites-through-cloudfront">Securing Access to Websites through CloudFront</h3>
<p>To ensure that users can only access your website through the CloudFront URL while completely restricting access via the Application Load Balancer (ALB) URL, you’ll need to know how to do the following:</p>
<ol>
<li><p>Configure ALB Security Group: Restrict access to your ALB by only allowing traffic from CloudFront IP ranges.</p>
</li>
<li><p>Implement Custom Headers: Set a custom header in CloudFront and configure your ALB to only accept requests with this header.</p>
</li>
</ol>
<h3 id="heading-aws-kms-and-envelope-encryption">AWS KMS and Envelope Encryption</h3>
<p>AWS KMS can directly encrypt data up to 4 KB in size. For files larger than 4 KB, you need to use Envelope Encryption. Here are the steps:</p>
<ul>
<li><p>Generate a data key using AWS KMS</p>
</li>
<li><p>Use the data key to encrypt your large data</p>
</li>
<li><p>Encrypt the data key with KMS</p>
</li>
<li><p>Store the encrypted data key with your encrypted data</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728487271329/bdd34ea3-5e47-474a-a083-6e031e508949.png" alt="AWS KMS and Envelope Encryption" width="600" height="400" loading="lazy"></p>
<h3 id="heading-kms-policy-conditions">KMS Policy Conditions</h3>
<p>Learn how conditions work in AWS KMS policy. Refer to the <a target="_blank" href="https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms-key-origin">AWS KMS Developer Guide</a> for detailed information.</p>
<h3 id="heading-iam-policy-conditions">IAM Policy Conditions</h3>
<ul>
<li><p>Get to know important IAM policy conditions. The <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_String">AWS IAM User Guide</a> offers detailed information.</p>
</li>
<li><p>I strongly suggest watching these two videos to understand IAM policy conditions with examples:</p>
<ul>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=qsF6Kauh2J4">Video 1</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=4PJienr4gZI">Video 2</a></p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-lambda-function-assuming-iam-role-in-another-aws-account">Lambda Function Assuming IAM Role in Another AWS Account</h3>
<p>Understand how to configure a Lambda function to assume an IAM role in another AWS account. Refer to this <a target="_blank" href="https://repost.aws/knowledge-center/lambda-function-assume-iam-role">AWS Knowledge Center article</a> for details.</p>
<h3 id="heading-kms-key-rotation">KMS Key Rotation</h3>
<p>Understand which types of keys can be rotated automatically and which require manual rotation.</p>
<ul>
<li><p>Symmetric encryption KMS keys can be rotated automatically.</p>
</li>
<li><p>Asymmetric KMS keys, HMAC KMS keys, and custom key stores require manual rotation.</p>
</li>
<li><p>KMS keys with imported key material also require manual rotation.</p>
</li>
</ul>
<p>For more information, see the <a target="_blank" href="https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html">AWS KMS Developer Guide</a>.</p>
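<p>For symmetric customer managed keys, automatic rotation is a single toggle. A quick sketch (the key ID is a placeholder):</p>
<pre><code class="lang-bash"># Turn on automatic rotation for a symmetric customer managed key
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab

# Confirm that rotation is enabled
aws kms get-key-rotation-status --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
</code></pre>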
<h3 id="heading-amazon-ecr-image-scanning">Amazon ECR Image Scanning</h3>
<p>You can scan Amazon Elastic Container Registry (ECR) images for vulnerabilities. There are two types of scanning available in ECR:</p>
<ul>
<li><p><strong>Enhanced scanning</strong>—Amazon ECR integrates with Amazon Inspector to provide automated, continuous scanning of your repositories. Enhanced scanning provides the following:</p>
<ul>
<li><p>OS and programming languages package vulnerabilities.</p>
</li>
<li><p>Two scanning frequencies: Scan on push and continuous scan.</p>
</li>
</ul>
</li>
<li><p><strong>Basic scanning</strong>—Amazon ECR provides two versions of basic scanning that use the Common Vulnerabilities and Exposures (CVEs) database:</p>
<ul>
<li><p>The current GA version, which uses the open-source Clair project</p>
</li>
<li><p>An improved version that uses AWS native technology</p>
</li>
</ul>
</li>
</ul>
<p>Basic scanning offers:</p>
<ul>
<li><p>OS scans</p>
</li>
<li><p>Two scanning frequencies: manual and scan-on-push</p>
</li>
</ul>
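<p>Here's a brief, hedged sketch of enabling both scanning modes with the AWS CLI (the repository name and image tag are placeholders):</p>
<pre><code class="lang-bash"># Basic scanning: scan images automatically when they are pushed to one repository
aws ecr put-image-scanning-configuration \
  --repository-name my-repo \
  --image-scanning-configuration scanOnPush=true

# Trigger a manual basic scan of a specific image
aws ecr start-image-scan --repository-name my-repo --image-id imageTag=latest

# Enhanced scanning: switch the registry to Amazon Inspector-based scanning
aws ecr put-registry-scanning-configuration --scan-type ENHANCED
</code></pre>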
<h3 id="heading-implementing-end-to-end-encrypted-traffic">Implementing End-to-End Encrypted Traffic</h3>
<p>Know when you have a use case that requires implementing end-to-end encrypted traffic. Steps are listed below:</p>
<ol>
<li><p>Configure your CloudFront distribution to require HTTPS for all viewer requests.</p>
<ul>
<li><p>Use a custom SSL/TLS certificate (from AWS Certificate Manager or imported) for CloudFront.</p>
</li>
<li><p>Use a third-party SSL/TLS certificate on your Application Load Balancer (ALB) or EC2 instances.</p>
</li>
<li><p>Ensure you use the same certificate on your EC2 instances as on your ALB for consistency.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-securely-storing-rds-credentials">Securely Storing RDS Credentials</h3>
<p>Learn how to securely store RDS credentials. AWS Secrets Manager is the recommended service for storing and managing sensitive information like database credentials. It is not wise to hard-code database credentials in your code or store them in Lambda as environment variables.</p>
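<p>A minimal sketch of storing and then reading a database credential with the AWS CLI (the secret name and values are placeholders; application code would use an SDK instead):</p>
<pre><code class="lang-bash"># Store the RDS credentials as a secret
aws secretsmanager create-secret \
  --name prod/myapp/rds \
  --secret-string '{"username": "appuser", "password": "replace-me"}'

# Retrieve the secret value at runtime
aws secretsmanager get-secret-value --secret-id prod/myapp/rds --query SecretString --output text
</code></pre>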
<h3 id="heading-cloudtrail-data-events-vs-management-events">CloudTrail: Data Events vs Management Events</h3>
<p>Understand the differences between data events and management events in CloudTrail.</p>
<ul>
<li><p><strong>Management Events</strong></p>
<ul>
<li><p>Provide information about management operations performed on resources in your AWS account.</p>
</li>
<li><p>Examples include:</p>
<ul>
<li><p>IAM AttachRolePolicy operations</p>
</li>
<li><p>Amazon EC2 CreateSubnet operations</p>
</li>
<li><p>AWS CloudTrail CreateTrail operations</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Data Events</strong></p>
<ul>
<li><p>Provide information about resource operations performed on or within a resource.</p>
</li>
<li><p>Examples include:</p>
<ul>
<li><p>Amazon S3 object-level API activity</p>
</li>
<li><p>AWS Lambda function execution activity</p>
</li>
<li><p>Amazon DynamoDB object-level API activity on tables</p>
</li>
</ul>
</li>
<li><p>Data events are not logged by default.</p>
</li>
</ul>
</li>
</ul>
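<p>Because data events are not logged by default, you enable them per trail with event selectors. A sketch (the trail and bucket names are placeholders):</p>
<pre><code class="lang-bash"># Log S3 object-level data events for one bucket on an existing trail
aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [
      { "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::my-bucket/"] }
    ]
  }]'
</code></pre>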
<h3 id="heading-guardduty-suppression-rules-trusted-ip-lists-and-threat-lists">GuardDuty: Suppression Rules, Trusted IP Lists, and Threat Lists</h3>
<p>Expect questions about suppression rules and how to add known IPs to trusted IP lists and threat lists during penetration testing.</p>
<ul>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/guardduty/latest/ug/findings_suppression-rule.html">GuardDuty Suppression Rule Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_upload-lists.html">GuardDuty Upload Lists Documentation</a></p>
</li>
</ul>
<p>Understand which logs are analyzed by AWS GuardDuty. These include AWS CloudTrail management event logs, VPC Flow Logs, DNS logs, EKS audit logs, S3 data events, and runtime activity from EKS, EC2, and ECS workloads.</p>
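<p>Trusted IP lists are attached to a detector. Here's a sketch with the AWS CLI (the detector ID and S3 location are placeholders):</p>
<pre><code class="lang-bash"># Find your GuardDuty detector ID
aws guardduty list-detectors

# Upload a trusted IP list so findings are not generated for those addresses
aws guardduty create-ip-set \
  --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
  --name TrustedPentestIPs \
  --format TXT \
  --location https://s3.amazonaws.com/my-security-bucket/trusted-ips.txt \
  --activate
</code></pre>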
<h3 id="heading-aws-abuse-email">AWS Abuse Email</h3>
<p>Know how to respond to an AWS abuse email.</p>
<ul>
<li><p>Review the abuse notice carefully to understand what content or activity was reported. The report typically includes logs or other evidence implicating the abusive activity.</p>
</li>
<li><p>Investigate the reported issue within your AWS resources.</p>
</li>
<li><p>Verify and understand the cause of the reported abuse.</p>
</li>
<li><p>Take immediate action to stop or prevent the abusive activity. This may involve:</p>
<ul>
<li><p>Removing or modifying offending content</p>
</li>
<li><p>Securing compromised resources</p>
</li>
<li><p>Updating security settings</p>
</li>
<li><p>Revoking unauthorized access</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-inspector">Inspector</h3>
<p>You need to know which AWS services are scanned by AWS Inspector.</p>
<ul>
<li><p>Amazon Inspector can evaluate:</p>
<ul>
<li><p>EC2 instances</p>
</li>
<li><p>Container images in Amazon ECR</p>
</li>
<li><p>Lambda functions for vulnerabilities and security issues</p>
</li>
</ul>
</li>
</ul>
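<p>As a quick sketch, scanning for those resource types can be switched on from the CLI (Amazon Inspector v2):</p>
<pre><code class="lang-bash"># Enable Inspector scanning for EC2 instances, ECR images, and Lambda functions
aws inspector2 enable --resource-types EC2 ECR LAMBDA
</code></pre>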
<p>The following AWS services integrate with Amazon Inspector:</p>
<ul>
<li><p>AWS Security Hub for a centralized view of security findings</p>
</li>
<li><p>Amazon EventBridge for automated responses to findings</p>
</li>
<li><p>AWS Systems Manager for patch management based on Inspector findings</p>
</li>
<li><p>Amazon Elastic Container Registry (ECR) for container image scanning</p>
</li>
</ul>
<h3 id="heading-aws-config">AWS Config</h3>
<p>Learn these important AWS Config rules:</p>
<ul>
<li><p><strong>encrypted-volumes</strong>: Checks if attached EBS volumes are encrypted.</p>
</li>
<li><p><strong>s3-bucket-public-read-prohibited</strong>: Ensures that your S3 buckets do not allow public read access.</p>
</li>
<li><p><strong>iam-user-no-policies-check</strong>: Verifies that IAM users don't have policies directly attached to them (best practice is to use group policies).</p>
</li>
<li><p><strong>root-account-mfa-enabled</strong>: Checks if the root account has Multi-Factor Authentication (MFA) enabled.</p>
</li>
<li><p><strong>ec2-instances-in-vpc</strong>: Ensures that all EC2 instances are launched within a VPC.</p>
</li>
<li><p><strong>cloudtrail-enabled</strong>: Verifies that CloudTrail is enabled in your account.</p>
</li>
<li><p><strong>rds-instance-public-access-check</strong>: Checks if RDS instances are not publicly accessible.</p>
</li>
<li><p><strong>iam-password-policy</strong>: Ensures that the account password policy meets specified complexity requirements.</p>
</li>
<li><p><strong>restricted-ssh</strong>: Checks if security groups allow unrestricted incoming SSH traffic.</p>
</li>
<li><p><strong>cloudwatch-alarm-action-check</strong>: Verifies if CloudWatch alarms have at least one alarm action, one INSUFFICIENT_DATA action, or one OK action enabled.</p>
</li>
</ul>
<p>For more details, refer to the <a target="_blank" href="https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html">AWS Config Managed Rules documentation</a>.</p>
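<p>For reference, here is roughly how one of these managed rules could be deployed with the AWS CLI (the rule name is yours to choose; the source identifier points to the AWS-managed rule):</p>
<pre><code class="lang-bash"># Turn on the AWS-managed encrypted-volumes rule in the current account and Region
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "encrypted-volumes",
  "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"}
}'
</code></pre>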
<h3 id="heading-trusted-advisor">Trusted Advisor</h3>
<p>Be aware of the checks performed by AWS Trusted Advisor:</p>
<ul>
<li><p><strong>Security Groups - Specific Ports Unrestricted</strong>: Identifies security groups that allow unrestricted access to specific ports, potentially exposing your resources to security risks.</p>
</li>
<li><p><strong>IAM Use</strong>: Ensures that you're following security best practices by using IAM users, groups, and roles to control access to your AWS resources, rather than using your root account credentials.</p>
</li>
<li><p><strong>MFA on Root Account</strong>: Verifies if multi-factor authentication (MFA) is enabled on your AWS account's root user, significantly enhancing security.</p>
</li>
<li><p><strong>EBS Public Snapshots</strong>: Identifies EBS snapshots that are publicly accessible, which could lead to unintended data exposure.</p>
</li>
<li><p><strong>RDS Public Snapshots</strong>: Similar to the EBS check, identifies RDS snapshots that are publicly accessible.</p>
</li>
<li><p><strong>S3 Bucket Permissions</strong>: Checks for S3 buckets with open access permissions or those that allow access to any authenticated AWS user.</p>
</li>
<li><p><strong>CloudTrail Logging</strong>: Verifies if CloudTrail is enabled for all regions, crucial for maintaining an audit trail of actions taken on your AWS account.</p>
</li>
<li><p><strong>IAM Password Policy</strong>: Checks if your IAM password policy aligns with security best practices, such as minimum length and complexity requirements.</p>
</li>
<li><p><strong>Exposed Access Keys</strong>: Identifies if any of your AWS access keys have been exposed publicly on code repositories or other public sites.</p>
</li>
<li><p><strong>Security Groups - Unrestricted Access</strong>: Checks for security groups that allow unrestricted access (0.0.0.0/0) to specific ports.</p>
</li>
</ul>
<h3 id="heading-s3-encryption">S3 Encryption</h3>
<p>Learn about the different use cases for S3 encryption options.</p>
<ul>
<li><p><strong>SSE-S3 (Server-Side Encryption with Amazon S3-Managed Keys)</strong></p>
<ul>
<li><p>This is the default encryption method for all new buckets and objects.</p>
</li>
<li><p>Amazon S3 handles key management and encryption/decryption automatically.</p>
</li>
<li><p>Uses AES-256 encryption algorithm.</p>
</li>
<li><p>Each object is encrypted with a unique key, and the key itself is encrypted with a regularly rotated master key.</p>
</li>
</ul>
</li>
<li><p><strong>SSE-KMS (Server-Side Encryption with AWS KMS-Managed Keys)</strong></p>
<ul>
<li><p>Uses AWS Key Management Service (KMS) for managing encryption keys.</p>
</li>
<li><p>Provides additional control and audit trail for your keys.</p>
</li>
<li><p>Allows you to use customer-managed keys (CMKs) or AWS managed keys.</p>
</li>
<li><p>Enables you to set key rotation policies and control key usage through IAM policies.</p>
</li>
</ul>
</li>
<li><p><strong>SSE-C (Server-Side Encryption with Customer-Provided Keys)</strong></p>
<ul>
<li><p>You manage your own encryption keys.</p>
</li>
<li><p>S3 performs the encryption/decryption, but you provide the key with each request.</p>
</li>
<li><p>Keys are not managed by AWS; you must provide the correct key to access the object.</p>
</li>
</ul>
</li>
<li><p><strong>Client-Side Encryption</strong></p>
<ul>
<li><p>Data is encrypted before sending it to S3.</p>
</li>
<li><p>You can use the Amazon S3 Encryption Client or implement your own client-side encryption.</p>
</li>
<li><p>Provides end-to-end encryption, as data is encrypted before leaving your application.</p>
</li>
</ul>
</li>
</ul>
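<p>As an illustration, a bucket's default encryption can be switched from SSE-S3 to SSE-KMS with the AWS CLI (the bucket name and key ARN below are placeholders):</p>
<pre><code class="lang-bash"># Set SSE-KMS with a customer-managed key as the bucket's default encryption
aws s3api put-bucket-encryption \
  --bucket my-secure-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      }
    }]
  }'
</code></pre>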
<p>For more information, watch this <a target="_blank" href="https://www.youtube.com/watch?v=2uaeFDlVPlY">video</a>.</p>
<h3 id="heading-cloudformation-and-secrets">CloudFormation and Secrets</h3>
<p>Using secrets in AWS CloudFormation is a great way to manage sensitive information securely. CloudFormation supports dynamic references to secrets stored in AWS Secrets Manager.</p>
<p>The example below defines a secret in Secrets Manager. Elsewhere in your templates, a dynamic reference such as <code>{{resolve:secretsmanager:MySecretName:SecretString:password}}</code> retrieves the 'password' field of that secret at deployment time, so the value never has to appear in your template or parameters.</p>
<pre><code class="lang-json">MySecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: MySecretName
    Description: <span class="hljs-string">"This is my secret"</span>
    SecretString: '{<span class="hljs-attr">"username"</span>:<span class="hljs-string">"myuser"</span>,<span class="hljs-attr">"password"</span>:<span class="hljs-string">"mypassword"</span>}'
</code></pre>
<h3 id="heading-vpc-flowlog">VPC FlowLog</h3>
<p>Understand the use cases for using VPC flow logs.</p>
<ul>
<li><p>Identify unusual traffic patterns or unexpected denied connections.</p>
</li>
<li><p>Detect potential security threats by identifying suspicious IP addresses or unusual port activity.</p>
</li>
<li><p>You can set up automated alerts using CloudWatch Alarms for specific traffic patterns.</p>
</li>
</ul>
<pre><code class="lang-json"><span class="hljs-number">2</span> <span class="hljs-number">123456789010</span> eni<span class="hljs-number">-1234567890</span>abcdef0 <span class="hljs-number">10.0</span><span class="hljs-number">.1</span><span class="hljs-number">.5</span> <span class="hljs-number">10.0</span><span class="hljs-number">.0</span><span class="hljs-number">.220</span> <span class="hljs-number">39812</span> <span class="hljs-number">80</span> <span class="hljs-number">6</span> <span class="hljs-number">20</span> <span class="hljs-number">4249</span> <span class="hljs-number">1418530010</span> <span class="hljs-number">1418530070</span> ACCEPT O
</code></pre>
<p>Let's break down this log entry:</p>
<ol>
<li><p>Version number (2)</p>
</li>
<li><p>AWS account ID (123456789010)</p>
</li>
<li><p>Network interface ID (eni-1234567890abcdef0)</p>
</li>
<li><p>Source IP address (10.0.1.5)</p>
</li>
<li><p>Destination IP address (10.0.0.220)</p>
</li>
<li><p>Source port (39812)</p>
</li>
<li><p>Destination port (80)</p>
</li>
<li><p>Protocol (6 = TCP)</p>
</li>
<li><p>Packets transferred (20)</p>
</li>
<li><p>Bytes transferred (4249)</p>
</li>
<li><p>Start time (1418530010)</p>
</li>
<li><p>End time (1418530070)</p>
</li>
<li><p>Action (ACCEPT)</p>
</li>
<li><p>Log status (OK)</p>
</li>
</ol>
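<p>To tie this together, flow logs for a VPC can be created with a single CLI call (the VPC ID, log group, and IAM role below are placeholders):</p>
<pre><code class="lang-bash"># Publish all accepted and rejected traffic for a VPC to CloudWatch Logs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name my-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role
</code></pre>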
<h3 id="heading-s3-glacier-vault-lock-policies-and-archival-retrieval-options">S3 Glacier Vault Lock Policies and Archival Retrieval Options</h3>
<p>S3 Glacier Vault Lock policies are a powerful feature for enforcing compliance controls on your Amazon S3 Glacier vaults. These policies allow you to create and lock down rules that control access to your archives, ensuring that your data retention and deletion policies are strictly enforced.</p>
<p>When initiating a job to retrieve an archive, you can specify one of the following retrieval options, based on your access time and cost requirements.</p>
<ul>
<li><p><strong>Expedited</strong>: Expedited retrievals allow you to quickly access your data that's stored in the S3 Glacier Flexible Retrieval storage class or the S3 Intelligent-Tiering Archive Access tier when occasional urgent requests for restoring archives are required. For all but the largest archives (more than 250 MB), data accessed by using Expedited retrievals is typically made available within 1–5 minutes. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it.</p>
</li>
<li><p><strong>Standard</strong>: Standard retrievals allow you to access any of your archives within several hours. Standard retrievals are typically completed within 3–5 hours. Standard is the default option for retrieval requests that do not specify the retrieval option.</p>
</li>
<li><p><strong>Bulk</strong>: Bulk retrievals are the lowest-cost S3 Glacier retrieval option, which you can use to retrieve large amounts, even petabytes, of data inexpensively in a day. Bulk retrievals are typically completed within 5–12 hours.</p>
</li>
</ul>
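<p>When an object is in an archived storage class, the retrieval tier is specified on the restore request. For example (the bucket and key are placeholders):</p>
<pre><code class="lang-bash"># Restore an archived object for 1 day using the Expedited tier
aws s3api restore-object \
  --bucket my-archive-bucket \
  --key reports/2023/annual.pdf \
  --restore-request '{"Days": 1, "GlacierJobParameters": {"Tier": "Expedited"}}'
</code></pre>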
<h3 id="heading-rds-copying-encrypted-snapshots">RDS Copying Encrypted Snapshots</h3>
<ul>
<li><p>You can’t share snapshots that are encrypted with the default AWS managed key. You can only share snapshots that are encrypted with a customer-managed key.</p>
</li>
<li><p>You can share only unencrypted snapshots publicly.</p>
</li>
<li><p>When you share an encrypted snapshot, you must also share the customer-managed key used to encrypt the snapshot.</p>
</li>
<li><p>Since you cannot share a snapshot that is encrypted with the default AWS managed key, you can copy the snapshot first. When you copy a snapshot, you can encrypt the copy or you can specify a KMS key that is different than the original, and the resulting copied snapshot uses the new KMS key.</p>
</li>
<li><p>Also, you cannot enable encryption on an existing unencrypted RDS instance directly. You need to take a snapshot, copy it with encryption enabled, and restore from the encrypted copy.</p>
</li>
</ul>
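<p>A typical workaround, sketched with the AWS CLI, is to copy the snapshot and re-encrypt it with a customer-managed key that can then be shared (the identifiers below are placeholders):</p>
<pre><code class="lang-bash"># Copy a snapshot and encrypt the copy with a shareable customer-managed KMS key
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier mydb-snapshot \
  --target-db-snapshot-identifier mydb-snapshot-cmk \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID
</code></pre>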
<h3 id="heading-waf-protections">WAF Protections</h3>
<p>Understand which layer AWS WAF operates on. AWS WAF (Web Application Firewall) mainly works at the application layer (Layer 7) of the <a target="_blank" href="https://www.freecodecamp.org/news/osi-model-networking-layers-explained-in-plain-english/">OSI (Open Systems Interconnection) model</a>.</p>
<h3 id="heading-aws-config-aggregator">AWS Config Aggregator</h3>
<p>You can expect questions about AWS Config Aggregator. This feature lets you gather configuration and compliance data from multiple accounts and regions into one account, giving you a complete view of your AWS resources.</p>
<p>For more information, refer to the <a target="_blank" href="https://docs.aws.amazon.com/config/latest/developerguide/aggregated-create.html">AWS Config Aggregator documentation</a>.</p>
<h3 id="heading-aws-macie">AWS Macie</h3>
<p>Learn how to categorize your data using Amazon Macie.</p>
<ul>
<li><p>Macie can automatically scan your S3 buckets to identify sensitive data such as personally identifiable information (PII), financial data, or intellectual property.</p>
</li>
<li><p>Macie helps assess and monitor the security posture of your S3 buckets, identifying misconfigurations or overly permissive access policies.</p>
</li>
<li><p>Automatically classify data based on its sensitivity, helping organizations manage and protect data according to its importance.</p>
</li>
</ul>
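<p>As a minimal sketch, Macie can be enabled and pointed at a bucket with a one-time classification job (the account ID and bucket name are placeholders):</p>
<pre><code class="lang-bash"># Enable Macie, then scan a bucket once for sensitive data
aws macie2 enable-macie
aws macie2 create-classification-job \
  --job-type ONE_TIME \
  --name scan-customer-bucket \
  --s3-job-definition '{
    "bucketDefinitions": [{"accountId": "111122223333", "buckets": ["my-customer-data"]}]
  }'
</code></pre>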
<h3 id="heading-aws-cloudfront-oai">AWS CloudFront OAI</h3>
<p>Learn how to restrict user access to content directly from S3.</p>
<p>To limit user access to content directly from S3 when using Amazon CloudFront, you can use Origin Access Identity (OAI). An OAI is a special CloudFront user that lets you restrict access to your S3 bucket content. When you create an OAI, CloudFront connects it to your distribution, and you can configure your S3 bucket to only allow access from that OAI.</p>
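<p>The bucket policy then only needs to trust the OAI. A sketch of such a policy applied with the CLI (the bucket name and OAI ID are placeholders) could look like this:</p>
<pre><code class="lang-bash"># Allow only the CloudFront OAI to read objects from the bucket
aws s3api put-bucket-policy --bucket my-static-site --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE123"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-static-site/*"
  }]
}'
</code></pre>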
<h3 id="heading-aws-cloudhsm">AWS CloudHSM</h3>
<p>Use AWS CloudHSM instead of KMS when you want complete control over key management hardware and keys.</p>
<p>AWS CloudHSM lets you manage and use your keys on FIPS-approved hardware. It uses customer-owned, single-tenant HSM instances that operate in your own Virtual Private Cloud (VPC). If you need full control over the Hardware Security Module (HSM) that stores and manages your cryptographic keys, CloudHSM is the better choice.</p>
<h3 id="heading-aws-kms-key-types">AWS KMS Key Types</h3>
<p>Learn about the available KMS key types:</p>
<ol>
<li><p><strong>Symmetric Keys</strong></p>
<ul>
<li><p>AWS Managed Keys</p>
</li>
<li><p>Customer Managed Keys</p>
</li>
</ul>
</li>
<li><p><strong>Asymmetric Keys</strong>: These consist of a public and private key pair.</p>
</li>
<li><p><strong>HMAC Keys</strong>: Used for generating and verifying Hash-based Message Authentication.</p>
</li>
<li><p><strong>Multi-Region Keys</strong>: A set of interoperable keys that can be replicated across multiple AWS Regions. These are useful for encrypting data across multiple Regions or for disaster recovery scenarios.</p>
</li>
<li><p><strong>Keys with Imported Key Material</strong>: Allows you to import your own key material into KMS.</p>
</li>
<li><p><strong>Keys in Custom Key Stores</strong>: Enables you to create and manage KMS keys in an AWS CloudHSM cluster.</p>
</li>
</ol>
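<p>To get a feel for how these key types map to the API, here are a few hedged CLI examples (descriptions and specs are illustrative):</p>
<pre><code class="lang-bash"># Symmetric customer-managed key (the default key spec)
aws kms create-key --description "app-data-key"

# Asymmetric key pair for signing and verification
aws kms create-key --key-spec RSA_2048 --key-usage SIGN_VERIFY

# HMAC key for generating and verifying MACs
aws kms create-key --key-spec HMAC_256 --key-usage GENERATE_VERIFY_MAC

# Multi-Region primary key that can be replicated to other Regions
aws kms create-key --multi-region
</code></pre>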
<h3 id="heading-a-few-notes-on-aws-cloudtrail">A Few Notes on AWS CloudTrail</h3>
<ol>
<li><p>Enable CloudTrail in all regions: To ensure thorough logging, activate CloudTrail in every AWS region. This gives you a complete record of activities across your entire AWS infrastructure.</p>
</li>
<li><p>Use a dedicated S3 bucket: Store CloudTrail logs in a specific S3 bucket with strict access controls. This helps prevent unauthorized access and ensures the integrity of your audit logs.</p>
</li>
<li><p>Enable log file integrity validation: This feature uses industry-standard algorithms to ensure that your log files haven't been tampered with after delivery to S3.</p>
</li>
<li><p>Encrypt log files: Use server-side encryption with AWS KMS managed keys (SSE-KMS) to secure your CloudTrail log files while they are stored. This provides an additional layer of security for your audit data.</p>
</li>
<li><p>For the S3 bucket that stores CloudTrail logs, enable Multi-Factor Authentication (MFA) Delete. This helps prevent unauthorized deletion of log files.</p>
</li>
<li><p>Use AWS Config rules to make sure CloudTrail is always turned on and set up correctly across all your accounts.</p>
</li>
<li><p>Regularly review and analyze your CloudTrail logs. Consider using AWS services like Amazon Athena or third-party SIEM tools for log analysis.</p>
</li>
<li><p>If you're using AWS Organizations, consider setting up organization-wide trails to centralize logging for all accounts in your organization.</p>
</li>
</ol>
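<p>Several of these recommendations come together in a single trail definition. A rough CLI sketch (the trail, bucket, and key names are placeholders):</p>
<pre><code class="lang-bash"># Multi-region trail with log file validation and SSE-KMS encryption
aws cloudtrail create-trail \
  --name org-audit-trail \
  --s3-bucket-name my-cloudtrail-logs \
  --is-multi-region-trail \
  --enable-log-file-validation \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID

# Start delivering events to the trail
aws cloudtrail start-logging --name org-audit-trail
</code></pre>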
<h3 id="heading-s3-replication">S3 Replication</h3>
<p>Learn how to replicate encrypted S3 objects across regions.</p>
<ul>
<li><p>Versioning must be enabled on both source and destination buckets.</p>
</li>
<li><p>Create a replication role: Create an IAM role that allows S3 to assume the role and perform replication tasks.</p>
</li>
<li><p>Attach a policy to the replication role: This policy should grant permissions to read from the source bucket and write to the destination bucket.</p>
</li>
<li><p>You can set up replication using the AWS Management Console or the AWS CLI.</p>
</li>
<li><p>KMS Key Permissions: If you're using AWS KMS for encryption, you need to grant the replication role permission to use the KMS key in both the source and destination regions.</p>
</li>
</ul>
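<p>Once the role and key permissions are in place, the replication configuration itself is attached to the source bucket. A trimmed-down sketch (the bucket names, role, and key ARNs are placeholders):</p>
<pre><code class="lang-bash"># Replicate new objects, re-encrypting them with a KMS key in the destination Region
aws s3api put-bucket-replication \
  --bucket my-source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "SourceSelectionCriteria": {"SseKmsEncryptedObjects": {"Status": "Enabled"}},
      "Destination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket",
        "EncryptionConfiguration": {"ReplicaKmsKeyID": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-KEY-ID"}
      }
    }]
  }'
</code></pre>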
<h3 id="heading-aws-service-catalog">AWS Service Catalog</h3>
<p>Learn about the use cases for AWS Service Catalog.</p>
<p>You can use AWS Service Catalog to standardize your applications and distribute them to your teams. For example, if you want to set restrictions on encryption and AMIs, you can create a complete application stack and share it with your team.</p>
<h3 id="heading-mfa-for-active-directory-users">MFA for Active Directory Users</h3>
<p>You can enable multi-factor authentication (MFA) for your AWS Managed Microsoft AD directory to increase security when your users specify their AD credentials to access supported Amazon Enterprise applications.</p>
<p>When you enable MFA, your users enter their username and password (first factor) as usual, and they must also enter an authentication code (the second factor) they obtain from your virtual or hardware MFA solution.</p>
<h3 id="heading-aws-iam-access-analyzer">AWS IAM Access Analyzer</h3>
<ul>
<li>Cross-Account Access Analysis: Helps identify resources that are shared with external AWS accounts, ensuring only intended cross-account access exists.</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Getting the AWS Certified Security Specialty certification was a great experience that helped me learn more about AWS security. By studying and applying what I learned, I gained useful knowledge about keeping AWS environments secure.</p>
<p>This certification proved my skills and gave me the tools to use best practices in real situations. Whether you're new to AWS security or want to improve your skills, going for this certification can be an important part of your career growth.</p>
<p>I hope my experiences encourage and help you on your certification path. Keep in mind that ongoing learning and being active in the community are important to stay updated in the fast-changing world of cloud security.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Troubleshoot Your Network on Linux – OSI Model Troubleshooting Guide ]]>
                </title>
                <description>
                    <![CDATA[ In the world of networking, you may find yourself troubleshooting problems such as difficulty connecting to other computers or to SSH, problems with IP tables, or being unable to access websites. However, have you ever attempted to troubleshoot your ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-troubleshoot-network-on-linux/</link>
                <guid isPermaLink="false">66b9f9cd01c4f505a2083107</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ network ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Mon, 25 Mar 2024 17:34:59 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/03/taylor-vick-M5tzZtFCOfs-unsplash.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In the world of networking, you may find yourself troubleshooting problems such as difficulty connecting to other computers or to SSH, problems with IP tables, or being unable to access websites.</p>
<p>However, have you ever attempted to troubleshoot your network by applying the OSI Model? Through the use of a bottom-to-top methodology that is based on the Open Systems Interconnection (OSI) architecture, we will uncover the complexities of network troubleshooting, providing you with the knowledge and tools that are essential for effectively addressing a wide variety of networking difficulties.</p>
<h2 id="heading-what-is-the-osi-model-open-systems-interconnection">What is the OSI Model (Open Systems Interconnection)?</h2>
<p>The Open Systems Interconnection (OSI) model is a conceptual framework that categorizes the functions of network communications into seven distinct levels. To put it simply, the OSI standardizes how various computer systems can communicate with one another.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/03/Screenshot-2024-03-24-at-15.28.35.png" alt="Image" width="600" height="400" loading="lazy">
<em>seven layers of the OSI model</em></p>
<h2 id="heading-how-to-troubleshoot-a-website-by-applying-the-osi-model-principles">How to Troubleshoot a Website by Applying the OSI Model Principles</h2>
<p>Consider the following example of troubleshooting a website hosted on your server that is not working. We'll use Linux as our operating system. I believe that divide and conquer is a better technique for debugging. </p>
<p>The OSI model is one method for efficiently breaking down an issue so that you can methodically simplify the environment in order to discover a solution and conquer it.</p>
<h3 id="heading-physical-layer">Physical Layer</h3>
<p>As I previously stated, when it comes to debugging, it is usually preferable to begin from the bottom. The physical layer is the bottom layer in the OSI Model. The key components in this layer consist of ethernet cables, hubs, and switches. At this level, you should check the power supply and the status of devices, as well as examine interface statistics.</p>
<ul>
<li>The "ifconfig" tool provides a detailed overview of all the ethernet cards present in your system.</li>
<li>In addition, you have a choice of using the "IP link show" commands. If the result shows "down," it suggests that layer1 is not functioning.</li>
<li>Sometimes, ethernet connections may be physically connected to the server but not activated by default. To enable, use the command below.</li>
</ul>
<pre><code class="lang-bash">IP link <span class="hljs-built_in">set</span> eth0 up
</code></pre>
<ul>
<li>If you're looking for more detailed information, the ethtool utility can be quite helpful. This utility provides the ability to query and modify settings. It allows you to adjust parameters such as speed, port, auto-negotiation, PCI locations, and checksum offload.</li>
</ul>
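<p>For example, you can query an interface's link status and NIC statistics like this (the interface name may differ on your system):</p>
<pre><code class="lang-bash"># Show negotiated speed, duplex, and link status for eth0
ethtool eth0

# Show NIC-level statistics, useful for spotting errors and drops
ethtool -S eth0
</code></pre>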
<h3 id="heading-data-link-layer">Data Link Layer</h3>
<p>The data link layer enables the transmission of data between two devices that are connected to the same network. There are two components in this layer. The first component is the medium access control (MAC) layer, which includes the operation of hardware addressing and access control. </p>
<p>The second component is the logical link control (LLC) layer, which enables the creation of a logical connection between different media. A common issue in this layer is the inability of two servers on the same network to establish connectivity. Tools such as ping, traceroute, arp, macof, and Wireshark are used for testing the data link layer.</p>
<p>These tools help verify the correct transmission and reception of data frames among devices within the same network segment.</p>
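<p>A quick way to check Layer 2 reachability is to ping a host on the same subnet and then confirm that its MAC address was learned (the address below is just an example):</p>
<pre><code class="lang-bash"># Ping a neighbour on the same subnet, then inspect the ARP/neighbour table
ping -c 3 10.0.1.10
ip neigh show
arp -a
</code></pre>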
<h3 id="heading-network-layer">Network Layer</h3>
<p>The network layer's job is to make it easy for data to move between two networks. Network devices that work at Layer 3 of the OSI model are routers. A router's main job is to make it easier for networks to talk to each other. Working with IP addresses is part of this layer. </p>
<p>In this stage, you should mostly look for problems with IP addresses. You can run <code>ip -br address show</code> to check whether your network card has been given an IP address. If you rely on DHCP for dynamic addresses, you might not be receiving one.</p>
<p>One common problem that often comes is the lack of an upstream gateway for a specific route or the absence of a default route. When an IP packet is transmitted to a different network, it needs to be directed to a gateway for additional processing. </p>
<p>The gateway needs to know how to route packets to their final destinations. The routing table contains the list of gateways for the various routes and can be managed using the <code>ip route</code> commands. We can also check connectivity by sending pings to the default gateway or beyond it.</p>
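<p>A few commands that cover the checks described above (the gateway address is just an example):</p>
<pre><code class="lang-bash"># Confirm the interface has an address, inspect the routing table, then ping the gateway
ip -br address show
ip route
ping -c 4 10.0.1.1
</code></pre>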
<h3 id="heading-transport-layer">Transport Layer</h3>
<p>Protocols like Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are used by the transport layer to control network traffic between systems and make sure that data flows efficiently. </p>
<p>The transport layer is in charge of sending data packets, looking for errors, controlling the flow of data, and putting packets in the right order. You may run into problems in this layer, like ports that aren't listening, or a service that won't start because its port is already in use. You can see which ports are listening by running <code>netstat -antlp | grep LISTEN</code>.</p>
<p>One problem that often occurs is related to remote connectivity. Consider a scenario where your local system is unable to establish a connection with a distant port, specifically HTTP on port 80.  The <code>telnet</code> command tries to create a TCP connection with the specified host and port. This capability is ideal for conducting remote TCP connectivity testing. </p>
<p>To check a remote UDP port, you can utilize the "netcat" utility.</p>
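<p>Putting those checks together (the host names and ports below are examples):</p>
<pre><code class="lang-bash"># List listening TCP ports, test a remote TCP port, then test a remote UDP port
netstat -antlp | grep LISTEN
telnet example.com 80
nc -vzu example.com 53
</code></pre>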
<h3 id="heading-session-layer">Session Layer</h3>
<p>This layer is responsible for facilitating the initiation and termination of communication between the two devices (for example: authentication). The period of time during which communication is initiated and terminated is referred to as the session. </p>
<p>In this layer, you should investigate credentials, server certificates, session IDs, and client cookies.</p>
<h3 id="heading-presentation-layer">Presentation Layer</h3>
<p>The presentation layer of the OSI model is responsible for formatting and transforming data in a way that allows it to be presented to the user. </p>
<p>SSL or TLS encryption methods are key parts of this layer. Here, you should be examining for encryption and decryption issues.</p>
<h3 id="heading-application-layer">Application Layer</h3>
<p>This layer takes input from the user and transmits output back to the user. The protocols below function at this level. </p>
<p>You should verify the configuration files on your server for any wrong settings. Additionally, it is essential to look at the log files on the servers to get more detailed information about the issues.</p>
<ul>
<li>File Transfer Protocol (FTP)</li>
<li>Simple Mail Transfer Protocol (SMTP)</li>
<li>Secure Shell (SSH)</li>
<li>Internet Message Access Protocol (IMAP)</li>
<li>Domain Name Service (DNS)</li>
<li>Hypertext Transfer Protocol (HTTP).</li>
</ul>
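<p>For a web server, for instance, you might confirm the HTTP response locally and then inspect the service's logs (the unit name and log paths depend on your distribution and web server):</p>
<pre><code class="lang-bash"># Check the HTTP response from the local web server, then look at its status and logs
curl -I http://localhost
systemctl status nginx
tail -n 50 /var/log/nginx/error.log
</code></pre>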
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Troubleshooting network issues in Linux can be a daunting task, but by applying the principles of the OSI model, you can systematically diagnose and resolve problems with greater efficiency. </p>
<p>Starting from the bottom layer and working your way up, we've explored various tools and techniques tailored to each level of the OSI model.</p>
<p>Beginning with the physical layer, we inspected hardware components and used tools like <code>ifconfig</code> and <code>ip link show</code> to verify connectivity. Moving up to the data link layer, we focused on MAC addresses and used utilities like <code>ping</code> and <code>Wireshark</code> for testing. At the network layer, we delved into IP addressing and routing, employing commands such as <code>ip route</code> and <code>ping</code> to diagnose issues.</p>
<p>Transitioning to the transport layer, we addressed TCP and UDP related problems, utilizing commands like <code>netstat</code> and <code>telnet</code> to check for open ports and establish connections. Further up the stack, we discussed the importance of session management and encryption at the session and presentation layers respectively.</p>
<p>Finally, at the application layer, we examined specific protocols like FTP, SMTP, SSH, and HTTP, emphasizing the significance of configuration files and log analysis in resolving issues.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Deploy AWS Infrastructure with Terraform and Github Actions – A Multi-Environment CI/CD Guide ]]>
                </title>
                <description>
                    <![CDATA[ Recently, I've been considering developing a complete end-to-end greenfield DevOps personal lab project.  The term "greenfield software project" refers to the development of a system for a new product that requires development from scratch with no le... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-deploy-aws-infrastructure-with-terraform-and-github-actions-a-practical-multi-environment-ci-cd-guide/</link>
                <guid isPermaLink="false">66b9f9ca2d0f884246b5243f</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Cloud ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Terraform ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Fri, 11 Feb 2022 19:09:46 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2022/02/pexels-pixabay-163235.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Recently, I've been considering developing a complete end-to-end greenfield DevOps personal lab project. </p>
<p>The term "greenfield software project" refers to the development of a system for a new product that requires development from scratch with no legacy code. </p>
<p>This is a method you use when you are beginning from scratch and have no constraints or dependencies. You have a fantastic opportunity to build a solution from the ground up. The project is open to new tools and architectures.</p>
<p>I've been looking around the internet for ideas on how to put up a CI/CD pipeline for terraforming deployment. But I couldn't find a comprehensive end-to-end terraform deployment instruction. </p>
<p>The majority of the guides and blog posts I discovered discuss the deployment pipeline for single (Prod) environments. So I've chosen to build my personal lab project and turn it into a blog post. </p>
<p>In this article, I will discuss the entire Terraform deployment workflow from Development to Production environments. I will also present topics and techniques that I will be using in my lab assignment.</p>
<p>There are two reasons why I choose Terraform as my infrastructure as code tool. The first is that I've been using cloud formation for a long time and have a lot of experience with it, so I wanted to get some experience with Terraform. </p>
<p>The second reason I chose Terraform is that this is a greenfield DevOps project, so I can pick a modern technology and play with it. </p>
<p>I will go over some of the features of Terraform in this article. Let's get started.</p>
<h2 id="heading-deployment-tools">Deployment Tools</h2>
<p>Before we get into deployment patterns, I'd like to go over the tools I'll be using.</p>
<h3 id="heading-terraform">Terraform</h3>
<p>Terraform is an open-source provisioning framework. It's a cross-platform application that can operate on Windows, Linux, and macOS. </p>
<p>You can use Terraform in three ways.</p>
<ul>
<li>Terraform OSS (Free)</li>
<li>Terraform Cloud (Paid - SaaS model)</li>
<li>Terraform Enterprise (Paid - Self Hosted)</li>
</ul>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/bj3ZlORR_.jpeg" alt="bj3ZlORR_" width="600" height="400" loading="lazy"></p>
<h3 id="heading-what-is-terraform-cloud">What is Terraform Cloud?</h3>
<p>For my lab project, I'm utilizing Terraform Cloud. Terraform OSS is fantastic for small teams, but as your team expands, so does the difficulty of administering Terraform. HashiCorp's Terraform Cloud is a commercial SaaS offering.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/LvbPPn6sH.jpeg" alt="LvbPPn6sH" width="600" height="400" loading="lazy"></p>
<p><strong>Terraform Cloud Offerings</strong></p>
<ul>
<li>Remote Terraform workflow for teams.</li>
<li>VCS connection (GitHub, GitLab, Bitbucket)</li>
<li>State management (storage, history, and locking)</li>
<li>Okta-integrated single sign-on (SSO) with a full user interface</li>
<li>Terraform Cloud serves as your Terraform state's remote backend.</li>
<li>Terraform Cloud incorporates the Sentinel policy-as-code framework, which lets you establish and enforce specific policies for how your business provisions infrastructure. You can limit the number of compute VMs, restrict important upgrades to predefined maintenance times, and perform a variety of other tasks.</li>
<li>Terraform Cloud can show an estimate of its entire cost as well as any cost change caused by the proposed modifications.</li>
</ul>
<h3 id="heading-github-actions-cicd">GitHub Actions (CI/CD)</h3>
<p>You can use Terraform CLI or Terraform console to deploy infrastructure from your laptop. </p>
<p>If you are a single team member, this may work for a while. But this strategy will not be scalable as your team grows in size. You must deploy from a centralized location where everyone has visibility, control, and rollback capabilities.</p>
<p>There are numerous technologies available for deployment from a centralized location (CI/CD). I intended to try Terraform pipeline deployment using the "GitOps" technique. </p>
<p>A Git repository serves as the single source of truth for infrastructure definitions in GitOps. For all infrastructure modifications, GitOps uses merge requests as the change mechanism. When new code is merged, the CI/CD pipeline updates the environment. GitOps automatically overwrites any configuration drift, such as manual modifications or errors.</p>
<p>For my deployment, I'll be using GitHub Actions.</p>
<p>GitHub Actions lets you automate tasks throughout the software development lifecycle. GitHub Actions are event-driven, which means you can run a series of commands in response to a specific event. </p>
<p>For example, you can run a command that executes a testing script, plan script, and apply script every time someone opens a pull request for a repository. This allows you to incorporate continuous integration (CI) and continuous deployment (CD) capabilities, as well as a variety of other features, directly in your repository.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/PC6GGaSwk.jpeg" alt="PC6GGaSwk" width="600" height="400" loading="lazy"></p>
<p><strong>Github Actions Features</strong></p>
<ul>
<li>GitHub Actions are fully integrated into GitHub and can be controlled alongside your other repository-related features like pull requests and issues.</li>
<li>They have Docker container support</li>
<li>Github Actions are available for free for all repositories and feature 2000 free build minutes per month for all private repositories.</li>
</ul>
<p>Check out this <a target="_blank" href="https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions">link</a> to get more information about the actions available on GitHub.</p>
<p>So far, I've introduced the tools and services that I'll use in my deployment pipeline. Now I'll look into the Terraform directory structure. </p>
<p>To summarize, I will be using Terraform Cloud and GitHub Actions. Another thing to note is that I will not go into great length about writing Terraform code in this article. I'll be using code from the Terraform Registry. Thank you very much, Anton Babenko.</p>
<h2 id="heading-setting-up-the-project">Setting Up the Project</h2>
<p>Assume you've just started a new job and your first assignment is to create VPCs. They want you to set up three VPCs for them (Dev—&gt;Stage—&gt;Prod VPC). You've decided to use Terraform to deploy VPCs.</p>
<h3 id="heading-terraform-directory-structure">Terraform Directory Structure</h3>
<p>Your first step should be to create Terraform's directory structure. If you've only used CloudFormation before, you may not have needed one, since you don't have to handle state files or modules there. But while using Terraform, it is critical to define the directory structure up front. </p>
<p>First, I'll provide several samples of commonly used directory structures, followed by information about the directory I'll be using in my project.</p>
<p><strong>Basic Directory Structure</strong></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/I-Yg30jok.jpeg" alt="I-Yg30jok" width="600" height="400" loading="lazy"></p>
<p>In this arrangement, you will have three files. Your major file is main.tf. This is the file in which all of the resources are defined.</p>
<pre><code>resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"this"</span> {
  cidr_block = <span class="hljs-keyword">var</span>.cidr
}
</code></pre><p>variables.tf is where you define your input variables:</p>
<pre><code>variable <span class="hljs-string">"cidr"</span> {
 description = <span class="hljs-string">"The CIDR block for the VPC"</span>
 type        = string
 <span class="hljs-keyword">default</span>     = <span class="hljs-string">"10.0.0.0/16"</span>
}
</code></pre><p>outputs.tf output values are defined in this file:</p>
<pre><code>output <span class="hljs-string">"vpc_id"</span> {
  description = <span class="hljs-string">"The ID of the VPC"</span>
  value       = concat(aws_vpc.this.*.id, [<span class="hljs-string">""</span>])[<span class="hljs-number">0</span>]
}
</code></pre><p>If you're working on a modest project with a small team, this structure will work well. But as you use modules and work on larger projects, this structure will not be able to scale as well.</p>
<h3 id="heading-complex-and-scalable-directory-structure">Complex and Scalable Directory Structure</h3>
<p>You will not be able to scale your project or team with the basic directory structure. </p>
<p>A larger project will require multiple environments and regions. You'll need a decent directory structure to promote infrastructure from development to production environments using a CI/CD solution. You can use Terraform modules in this structure.</p>
<blockquote>
<p>Modules are reusable Terraform configurations that can be called and configured by other configurations.</p>
</blockquote>
<pre><code>
├── environments
│   ├── dev
│   │   ├── compute.tf
│   │   ├── dev.tfvars
│   │   ├── outputs.tf
│   │   ├── rds.tf
│   │   ├── s3.tf
│   │   ├── variables.tf
│   │   └── vpc.tf
│   ├── prod
│   │   ├── compute.tf
│   │   ├── outputs.tf
│   │   ├── prod.tfvars
│   │   ├── rds.tf
│   │   ├── s3.tf
│   │   ├── variables.tf
│   │   └── vpc.tf
│   └── stage
│       ├── compute.tf
│       ├── outputs.tf
│       ├── rds.tf
│       ├── s3.tf
│       ├── stage.tfvars
│       ├── variables.tf
│       └── vpc.tf
└── modules
    ├── compute
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── rds
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── s3
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── security-group
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── vpc
        ├── main.tf
        ├── outputs.tf
        └── variables.tf
</code></pre><p>I've noticed a lot of projects use this structure. In this case, the contents of each environment will be nearly identical. </p>
<p>But in my opinion, content should be the same across all environments. For all environments, we should use the same main.tf file. Variables can be used to adjust the number of servers or number of subnets.</p>
<pre><code>variable <span class="hljs-string">"instance_count"</span> {
  description = <span class="hljs-string">"Numbers of servers count"</span>
}

variable <span class="hljs-string">"instance_type"</span> {
  description = <span class="hljs-string">"Instance size (t2.micro, t2.large)"</span>
}
</code></pre><p><strong>Proposed Directory Structure</strong></p>
<p>Having a separate folder and separate configuration file for each environment, as I described in the prior section, makes little sense to me. You can get in touch if you believe there is an advantage to having a different folder for each environment. </p>
<p>As a result, below is my proposed directory layout for VPC deployment.</p>
<p>VPC: github.com/nitheesh86/network-vpc</p>
<p>Security Group: github.com/nitheesh86/network-sg</p>
<p>Compute-ASG: github.com/nitheesh86/compute-asg</p>
<p>You may be wondering how you will reference resources from multiple repositories. This is where Terraform cloud workspace will come in handy. I'll go through this in more detail later in the article.</p>
<p>If you look at the above directory, you might assume it looks like a "Basic Directory Structure." You may also be asking where module directories are. Yes, directories seem the same, but the magic happens within the configuration files.</p>
<pre><code>terraform {
  required_version = <span class="hljs-string">"~&gt; 0.12"</span>
  backend <span class="hljs-string">"remote"</span> {
    hostname     = <span class="hljs-string">"app.terraform.io"</span>
    organization = <span class="hljs-string">"xxxxxxxx"</span>
    workspaces { prefix = <span class="hljs-string">"vpc-"</span> }
  }
}

provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"ap-south-1"</span>
}


<span class="hljs-built_in">module</span> <span class="hljs-string">"vpc"</span> {
  source = <span class="hljs-string">"github.com/nitheesh86/terraform-modules/modules/vpc"</span>

  name = <span class="hljs-keyword">var</span>.name
  cidr = <span class="hljs-string">"10.0.0.0/16"</span>

  azs             = [<span class="hljs-string">"ap-south-1a"</span>, <span class="hljs-string">"ap-south-1b"</span>, <span class="hljs-string">"ap-south-1c"</span>]
  public_subnets  = [<span class="hljs-string">"10.0.101.0/24"</span>, <span class="hljs-string">"10.0.102.0/24"</span>, <span class="hljs-string">"10.0.103.0/24"</span>]
  private_subnets = [<span class="hljs-string">"10.0.1.0/24"</span>, <span class="hljs-string">"10.0.2.0/24"</span>, <span class="hljs-string">"10.0.3.0/24"</span>]

  enable_nat_gateway = <span class="hljs-literal">true</span>
  enable_vpn_gateway = <span class="hljs-literal">true</span>

  tags = {
    Terraform   = <span class="hljs-string">"true"</span>
    Environment = <span class="hljs-keyword">var</span>.env
  }
}
</code></pre><p>I maintained modules separate from setups. My modules are all placed in a separate repository. I'll refer to that module by its Git repo URL.</p>
<blockquote>
<p>The source argument in a module block tells Terraform where to find the source code for the desired child module.</p>
</blockquote>
<p>Terraform supports the following module sources:</p>
<ul>
<li>Local paths</li>
<li>Terraform Registry</li>
<li>GitHub</li>
<li>Bitbucket</li>
<li>Generic Git, Mercurial repositories</li>
<li>HTTP URLs</li>
<li>S3 buckets</li>
<li>GCS buckets</li>
</ul>
<p>We can use the Terraform registry as a module source because we are using Terraform Cloud. However, each module needs its own git repository. If you're publishing vpc modules (terraform-aws-vpc), for example, you can only provide code for those vpc resources that are relevant to the module. Another repo is needed for the security group module (terraform-aws-sg).</p>
<blockquote>
<p>One module per repository. The registry cannot use combined repositories with multiple modules.</p>
</blockquote>
<p>However, it is worth considering this structure if your organization has a separate network, security, and compute team. Each team can handle their modules independently.</p>
<h2 id="heading-terraform-cloud-components">Terraform Cloud Components</h2>
<h3 id="heading-organizations">Organizations</h3>
<p>Organizations are a shared place in Terraform Cloud for teams to collaborate on workspaces. Remote state setups can be shared between organizations. </p>
<p>You can, for example, create organizations based on a project or a product. If you were building an Apple product, you could name the organization "apple." I named mine "nitheeshp."</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/KR3DCPRWz.jpeg" alt="KR3DCPRWz" width="600" height="400" loading="lazy"></p>
<h3 id="heading-workspaces">WorkSpaces</h3>
<p>Instead of directories, Terraform Cloud maintains infrastructure collections using workspaces. A workspace is related to contexts such as dev, staging, and prod. </p>
<p>Terraform configurations, variable values, and state files connected with an environment are all stored in the workspace. Each workspace keeps backups of earlier state files.</p>
<p>In my project, I set up a workspace for each Amazon Web Services service. Each workspace can be linked to a Git branch or Git repo.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/ncNrBPUb7.jpeg" alt="ncNrBPUb7" width="600" height="400" loading="lazy"></p>
<p>When you create a workspace, you have three options for designing your Terraform workflow:</p>
<ul>
<li>Version Control Workflow</li>
<li>CLI-Driven Workflow</li>
<li>API-Driven Workflow</li>
</ul>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/wsGudB1Pa.jpeg" alt="wsGudB1Pa" width="600" height="400" loading="lazy"></p>
<p>If you look at my Terraform directory structure below, you'll notice that I haven't set any default values for my variables. The variables are instead defined in the Terraform Cloud workspace settings.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/GbQMtKo2F.jpeg" alt="GbQMtKo2F" width="600" height="400" loading="lazy"></p>
<p>If you look at main.tf, you'll notice that all environments use the same Terraform cloud-config. You may be curious as to how I go about making modifications to a certain workspace. The workspace prefix is what I'm using.</p>
<pre><code>terraform {
  required_version = <span class="hljs-string">"~&gt; 0.12"</span>
  backend <span class="hljs-string">"remote"</span> {
    hostname     = <span class="hljs-string">"app.terraform.io"</span>
    organization = <span class="hljs-string">"nitheeshp"</span>
    workspaces { prefix = <span class="hljs-string">"vpc-"</span> }
  }
}
</code></pre><p>When you add a workspace to your configuration, it will prompt you to choose a workspace. As an example:</p>
<pre><code>$ terraform init

Initializing the backend...

Successfully configured the backend <span class="hljs-string">"remote"</span>! Terraform will automatically
use <span class="hljs-built_in">this</span> backend unless the backend configuration changes.

The currently selected workspace (<span class="hljs-keyword">default</span>) does not exist.
  This is expected behaviour when the selected workspace did not have an
  existing non-empty state. Please enter a number to select a workspace:

  <span class="hljs-number">1.</span> dev
  <span class="hljs-number">2.</span> stage
  <span class="hljs-number">3.</span> prod

  Enter a value:
</code></pre><p>Set the TF_WORKSPACE environment variable to the workspace name you want selected when running Terraform in CI/CD.</p>
<pre><code><span class="hljs-keyword">export</span> TF_WORKSPACE=<span class="hljs-string">"dev"</span><span class="hljs-string">`</span>
</code></pre><h3 id="heading-how-to-deploy-security-groups">How to Deploy Security Groups</h3>
<p>As previously stated, I deploy security groups using a separate repo and workspace. A VPC id is required for the deployment of security groups. This is where Terraform data sources come in.</p>
<blockquote>
<p>Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. The use of data sources allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration.</p>
</blockquote>
<pre><code>data <span class="hljs-string">"terraform_remote_state"</span> <span class="hljs-string">"vpc"</span> {
  backend = <span class="hljs-string">"remote"</span>

  config = {
    organization = <span class="hljs-string">"nitheeshp"</span>
    workspaces = {
     name = <span class="hljs-string">"vpc-${var.env}"</span>
    }
  }
}

provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"ap-south-1"</span>
}


<span class="hljs-built_in">module</span> <span class="hljs-string">"elb_sg"</span> {
  source = <span class="hljs-string">"terraform-aws-modules/security-group/aws"</span>

  name        = <span class="hljs-string">"${var.env}-elb-sg"</span>
  description = <span class="hljs-string">"elb security group."</span>
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id

  egress_with_cidr_blocks = [
    {
      from_port   = <span class="hljs-number">0</span>
      to_port     = <span class="hljs-number">65535</span>
      protocol    = <span class="hljs-string">"all"</span>
      description = <span class="hljs-string">"Open internet"</span>
      cidr_blocks = <span class="hljs-string">"0.0.0.0/0"</span>
    }
  ]
}
</code></pre><p>As you can see, I'm getting the VPC ID from the vpc-dev workspace.</p>
<pre><code>vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id
</code></pre><h3 id="heading-github-actions">GitHub Actions</h3>
<p>As we discussed above, we'll also use GitHub Actions in our deployment pipeline.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/3Xxi80ooR.png" alt="3Xxi80ooR" width="600" height="400" loading="lazy"></p>
<p>GitHub Actions makes it simple to automate all of your CI/CD workflows. You can build, test, and deploy code directly from your GitHub repository. You can also handle code reviews, branch management, and issue triaging the way you want them to work. GitHub Actions offers free plans.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/eD4NRbeS-g.jpeg" alt="eD4NRbeS-g" width="600" height="400" loading="lazy"></p>
<h2 id="heading-gitops-and-terraform-workflow">GitOps and Terraform WorkFlow</h2>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/3Wg3lPnMu.jpeg" alt="3Wg3lPnMu" width="600" height="400" loading="lazy"></p>
<ul>
<li>Each service has its own repository (Network-VPC, Network-Security Groups, Compute-ASG, Compute-EC2)</li>
<li>I've made three branches (Develop, Stage, and Prod). Each branch reflects one of our actual infrastructure environments and its Terraform Cloud workspace.</li>
<li>The workflow begins when a DevOps engineer needs to make modifications to the infrastructure.</li>
<li>The engineer creates a feature branch from the master (production) branch.</li>
<li>They make their changes and submit a pull request against the develop branch.</li>
</ul>
<p>I made a separate workflow for each branch (terraform-develop.yml, terraform-stage.yml, terraform-prod.yml). A workflow is a procedure that you add to your repository. </p>
<p>Workflows consist of one or more jobs that can be scheduled or triggered by an event. You can use the workflow to create, test, package, release, or deploy a GitHub project.</p>
<p>GitWorkFlow will:</p>
<ul>
<li>Check out feature branch code.</li>
<li>Check for syntax.</li>
<li>Initialise Terraform.</li>
<li>Generate a plan for every pull request.</li>
<li>When a pull request is merged with the develop branch, it deploys the resources to the development environment.</li>
<li>Deploy the changes from the develop branch.</li>
<li>Then create pull requests to the stage branch, and repeat the same process for the prod branch.</li>
</ul>
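<p>Under the hood, each workflow essentially runs the standard Terraform commands against the matching workspace. Here is a sketch of the shell steps a file like terraform-develop.yml might execute (the workspace name is illustrative):</p>
<pre><code class="lang-bash"># Select the Terraform Cloud workspace that matches this branch/environment
export TF_WORKSPACE="dev"

terraform fmt -check            # syntax/style check
terraform init                  # configure the remote backend and download modules
terraform validate              # validate the configuration
terraform plan                  # generated for every pull request
terraform apply -auto-approve   # runs after the pull request is merged
</code></pre>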
<p>Here are the GitHub repos for this project if you want to take a look:</p>
<ul>
<li>github.com/nitheesh86/network-vpc</li>
<li>github.com/nitheesh86/network-sg</li>
<li>github.com/nitheesh86/terraform-modules</li>
</ul>
<p>I would love to learn more about your Terraform deployment methods. Please get in touch with me if you'd like to share them and discuss further.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Certified Kubernetes Administrator Study Guide – Prepare for the CKA Exam ]]>
                </title>
                <description>
                    <![CDATA[ Kubernetes is a container orchestration platform that helps you manage containers at scale.  I recently passed my certified Kubernetes administrator exam, and I would like to share my learning experience and resources with you.  Should You Get Kubern... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/certified-kubernetes-administrator-study-guide-cka/</link>
                <guid isPermaLink="false">66b9f9c7052fa53219e0a302</guid>
                
                    <category>
                        <![CDATA[ Certification ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Kubernetes ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nitheesh Poojary ]]>
                </dc:creator>
                <pubDate>Fri, 14 Jan 2022 01:14:02 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2022/01/cka-article-image.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Kubernetes is a container orchestration platform that helps you manage containers at scale. </p>
<p>I recently passed my certified Kubernetes administrator exam, and I would like to share my learning experience and resources with you. </p>
<h2 id="heading-should-you-get-kubernetes-certified">Should You Get Kubernetes Certified?</h2>
<p>There are mixed opinions in the tech industry about the importance of certifications. Some people argue that the certificates you have don't matter – it's all about your real-world knowledge. </p>
<p>But not everyone has the chance to work with real-world projects. And certification questions are based on real-world scenarios. So if you haven't had an opportunity to work with Kubernetes much in practice, you can learn from this exam and apply your learnings on actual projects. </p>
<p>On the other hand, if you're already working with Kubernetes, taking the exam is an excellent chance to test your knowledge and learn more about its internal workings. </p>
<p>For example, you might have been working with AWS for quite some time, but you haven't touched AWS cost management, or haven't been following best practices. The certification covers every aspect of Kubernetes, so you will learn how you can reduce your costs and follow best practices.</p>
<p>In this study guide, I will not explain Kubernetes architecture or Kubernetes objects (Pods, Deployments, Services, Config, Secrets, and so on) in detail. If you want to dig into those topics, here are some helpful Kubernetes learning resources:</p>
<ul>
<li><a target="_blank" href="https://www.freecodecamp.org/news/learn-kubernetes-in-under-3-hours-a-detailed-guide-to-orchestrating-containers-114ff420e882/">Learn Kubernetes in 3 hours</a></li>
<li><a target="_blank" href="https://www.freecodecamp.org/news/the-kubernetes-handbook/">The Kubernetes Handbook</a></li>
</ul>
<h2 id="heading-what-is-the-certified-kubernates-administor">What is the Certified Kubernates Administor?</h2>
<p>The CKA is a product of the Cloud Native Computing Foundation (CNCF). It launched this certification in collaboration with the Linux Foundation. Other certifications offered by CNCF are:</p>
<ul>
<li>Kubernetes and Cloud Native Associate (KCNA)</li>
<li>Certified Kubernetes Application Developer (CKAD)</li>
<li>Certified Kubernetes Security Specialist (CKS)</li>
</ul>
<p>As per the CNCF,</p>
<blockquote>
<p>"The purpose of the Certified Kubernetes Administrator (CKA) program is to provide assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators."</p>
</blockquote>
<p>Here's what we'll cover in this study guide:</p>
<ul>
<li>Exam Details</li>
<li>Exam Tips</li>
<li>Exam Modules</li>
<li>Cluster Architecture, Installation, and Configuration</li>
<li>Workloads and Scheduling</li>
<li>Services and Networking</li>
<li>Storage</li>
<li>Troubleshooting</li>
</ul>
<p>Alright, let's dive in.</p>
<h2 id="heading-certified-kubernetes-administrator-exam-details">Certified Kubernetes Administrator Exam Details</h2>
<p>Here's some helpful information to get you started preparing and planning for the exam.</p>
<p>First, keep in mind that this is an online proctored exam, which means you can take it from your home or office. There are no exam centers.</p>
<p>Exam fees are $375, but the Linux foundation offers discount vouchers from time to time. Keep <a target="_blank" href="https://training.linuxfoundation.org/promo-inactive/">watching this space</a> to find them.</p>
<p>The CKA exam is a problem-based exam, and you'll solve those problems right on the command line or by writing manifest files.</p>
<p>It is a 2-hour exam in which you need to solve 17 questions. The passing score is 66%. Each question carries a different weight, such as 4%, 5%, 7%, or 13%.</p>
<p>Some questions have two parts. If you only get the first part correct, the points for that part are still added to your score. </p>
<p>The CKA exam is an open-book exam. You have access to the below resources:</p>
<ul>
<li>https://kubernetes.io/docs/</li>
<li>https://github.com/kubernetes</li>
<li>https://kubernetes.io/blog/</li>
</ul>
<p>If you don't pass on your first try, you get one retake.</p>
<p>The CKA certification is valid for three years.</p>
<h2 id="heading-certified-kubernetes-administrator-exam-tips">Certified Kubernetes Administrator Exam Tips</h2>
<p>To get you started, <a target="_blank" href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/">here's a handy kubectl cheat sheet</a> you can use during the exam. </p>
<p>You can create an alias for it so that you don't need to type the full command. For example, if you create an alias like "alias k=kubectl," you can type "k" instead of "kubectl." </p>
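<p>A quick sketch of what that looks like (assuming a bash shell with kubectl completion already sourced):</p>
<pre><code>alias k=kubectl
complete -o default -F __start_kubectl k   # keep shell completion working for the alias
k get pods                                 # same as: kubectl get pods
</code></pre>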
<p>During the exam, avoid creating Kubernetes resources using YAML files, as this takes too much time. Instead, use an imperative command to create resources. </p>
<p>For example, to create a pod, use this command:</p>
<pre><code>kubectl run nginx --image=nginx
</code></pre><p>If you still want to create a resource from a YAML file, use the <code>--dry-run=client -o yaml</code> flags to generate a starting manifest instead of writing it by hand.</p>
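<p>For example, something like this generates a pod manifest you can tweak and apply (nginx is just an illustrative image):</p>
<pre><code>kubectl run nginx --image=nginx --dry-run=client -o yaml &gt; pod.yaml
kubectl apply -f pod.yaml
</code></pre>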
<p>Make sure you study the basics of curl and systemctl, as they'll show up on the exam.</p>
<p>Finally, there are around six to eight different clusters set up for the exam. Make sure you switch contexts before solving each problem. The context switch command is provided at the start of each question.</p>
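<p>The switch itself is a one-liner, something like this (the context name here is made up for illustration):</p>
<pre><code>kubectl config use-context k8s-cluster-1
kubectl config current-context   # confirm you are pointed at the right cluster
</code></pre>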
<h2 id="heading-certified-kubernetes-administrator-exam-modules">Certified Kubernetes Administrator Exam Modules</h2>
<p>There are five modules in the exam:</p>
<ul>
<li>Cluster Architecture, Installation, and Configuration</li>
<li>Workloads and Scheduling</li>
<li>Services and Networking</li>
<li>Storage</li>
<li>Troubleshooting</li>
</ul>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/Screenshot-2022-01-11-at-00.16.14.png" alt="Screenshot-2022-01-11-at-00.16.14" width="600" height="400" loading="lazy"></p>
<p>We'll look at each of them in a bit more detail, and cover some other important and related info along the way.</p>
<h3 id="heading-impertive-vs-declarative-statements">Impertive vs Declarative statements</h3>
<p>You need to know the difference between imperative and declarative statements so you can decide when to use each of them.</p>
<p>Deploying the Kubernetes resource imperatively means running kubectl commands, for example, <code>kubectl run nginx --image=nginx</code>. Deploying declaratively means writing manifests using YAML, for example <code>kubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml</code>.</p>
<p>Deploying resources using the imperative method helps during the exam and saves you time.</p>
<h3 id="heading-cluster-architecture-installation-amp-configuration-module">Cluster Architecture, Installation &amp; Configuration Module</h3>
<p>You can expect 25% of questions on the exam to come from this section. If you want to get a good score on this section, make sure you review these topics thoroughly. </p>
<p>This module mostly focuses on authentication, upgrading your cluster version, backing up Kubernetes data, and setting up a cluster using kubeadm.</p>
<h3 id="heading-role-based-access-control-rbac">Role-based access control (RBAC)</h3>
<p>Understanding role-based access control (RBAC) is essential. RBAC restricts access to computers or networks based on the individual's role. Roles include policies or rules defining who can do what within the Kubernetes cluster. </p>
<p>Here's a relevant section on <a target="_blank" href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">ClusterRole and ClusterRoleBinding</a> you can check out.</p>
<p>Example questions will be like this: </p>
<blockquote>
<p>Create a new service account named "sa" in the development namespace. Create a cluster role called "pod-reader" with permission to get and list pods. The "sa" account should be able to get and list pods.</p>
</blockquote>
<p>So how would you tackle a question like this?</p>
<p>First, you need to create the "development" namespace and then create a service account called "sa" in that namespace:</p>
<pre><code>kubectl create namespace development
kubectl create serviceaccount sa -n development
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding pod-reader --clusterrole=pod-reader --serviceaccount=development:sa
</code></pre><p>You can test if the sa is allowed to read pods using the below command:</p>
<pre><code>kubectl auth can-i list pods --development target --<span class="hljs-keyword">as</span> system:serviceaccount:development:sa
</code></pre><h4 id="heading-how-to-install-and-configure-a-kubernetes-cluster-using-kubeadm">How to install and configure a Kubernetes cluster using kubeadm</h4>
<p>Kubeadm automates the installation and configuration of Kubernetes components like the Controller Manager, API server, and CoreDNS. </p>
<p>If you have time, I highly recommend building a Kubernetes cluster using the <a target="_blank" href="https://github.com/kelseyhightower/kubernetes-the-hard-way">Kubernetes the Hard Way guide</a> designed by Kelsey Hightower. </p>
<p>If you don't have time to go through the complete guide, then from the exam point of view just study the certificate locations and the Kubernetes config paths.</p>
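<p>On a kubeadm-based cluster, these are the locations worth memorizing (a sketch; exact paths can vary with the setup):</p>
<pre><code>ls /etc/kubernetes/pki/          # cluster certificates (API server, etcd, CA)
ls /etc/kubernetes/manifests/    # static pod manifests for the control plane components
cat /etc/kubernetes/admin.conf   # admin kubeconfig generated by kubeadm
cat ~/.kube/config               # kubeconfig used by kubectl
</code></pre>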
<h4 id="heading-how-to-upgrade-your-kubernetes-cluster-version">How to Upgrade Your Kubernetes Cluster Version</h4>
<p>It's very likely you will get this question, as it is specifically mentioned in the exam syllabus. </p>
<p>Below are the steps to upgrade a cluster to version 1.22.0 (adjust the version numbers to whatever the question asks for). You might also be asked to upgrade the kubelet and kube-proxy versions.</p>
<ul>
<li>Check the current version of cluster, kubeadm, and kubelet:</li>
</ul>
<pre><code>kubectl get nodes -o wide
kubeadm version
kubectl version
</code></pre><ul>
<li>Upgrade the control plane nodes first:</li>
</ul>
<pre><code>apt-get update &amp;&amp; apt-get install -y kubeadm=1.22.0-00
</code></pre><ul>
<li>Verify the upgrade plan. Use the below command to see if the cluster can be upgraded:</li>
</ul>
<pre><code>kubeadm upgrade plan
</code></pre><ul>
<li>Apply the upgraded version:</li>
</ul>
<pre><code>sudo kubeadm upgrade apply v1<span class="hljs-number">.22</span><span class="hljs-number">.0</span>
</code></pre><p>Once the command finishes, you should see "[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.22.0. Enjoy!"</p>
<ul>
<li>Prepare the node for maintenance by marking it unschedulable and evicting the workloads:</li>
</ul>
<pre><code>kubectl drain node01 --ignore-daemonsets
</code></pre><ul>
<li>Next, upgrade the kubelet and kubectl:</li>
</ul>
<pre><code>apt-get update &amp;&amp; apt-get install -y kubelet=<span class="hljs-number">1.22</span><span class="hljs-number">.0</span><span class="hljs-number">-00</span> kubectl=<span class="hljs-number">1.22</span><span class="hljs-number">.0</span><span class="hljs-number">-00</span>
</code></pre><ul>
<li>Finally, restart the kubelet and check that the desired version was applied:</li>
</ul>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes -o wide
kubeadm version
kubectl version
</code></pre><ul>
<li>Bring the node back online by marking it schedulable:<pre><code>kubectl uncordon node01
</code></pre></li>
</ul>
<h4 id="heading-how-to-backup-and-restore-an-etcd-cluster">How to Backup and Restore an ETCD Cluster</h4>
<p>ETCD is a consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. </p>
<p>Kubernetes uses etcd to keep all its config and data. You can think of it as the database of Kubernetes. When you run "kubectl get pods", the results are fetched from etcd. In the exam, the certificate names and paths are provided.</p>
<ul>
<li>Log in to the master node and run the below command to back up etcd:</li>
</ul>
<pre><code>ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db  --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key
</code></pre><ul>
<li>Test your backup file:</li>
</ul>
<pre><code>ETCDCTL_API=3 etcdctl --write-out=table snapshot status /tmp/etcd-backup.db
</code></pre><ul>
<li>Restore etcd from a backup file:</li>
</ul>
<pre><code>ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db  --data-dir /var/lib/etcd-backup --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key
</code></pre><h3 id="heading-workloads-amp-scheduling-module">Workloads &amp; Scheduling Module</h3>
<p>In this section, you will get questions about deploying a Kubernetes application, creating daemonsets, scaling the application, configuring health checks, using multi-container pods, and using ConfigMaps and secrets in a pod.</p>
<p><strong>How to deploy an application and expose the app using a service</strong></p>
<p>Example questions for deploying an app and creating a service might look like this:</p>
<blockquote>
<p>Create a deployment as follows:
Name: nginx
Exposed via a service named nginx using ClusterIP
Ensure that the service &amp; pod are accessible within the cluster</p>
</blockquote>
<ul>
<li>Manifest file for creating the deployment:<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
</code></pre></li>
</ul>
<p>Run <code>kubectl get deployments</code> to check if the Deployment was created. If the deployment is successful, "Ready" should show 3/3. The Ready column displays how many replicas of the application are available to your users.</p>
<p>If you need to expose your application outside or inside the cluster, you need to create a service. There are different service types available depending on your needs:</p>
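<p>If you prefer the imperative route during the exam, roughly the same deployment can be created in one line (a sketch):</p>
<pre><code>kubectl create deployment nginx-deployment --image=nginx:1.14.2 --replicas=3
kubectl get deployments nginx-deployment
</code></pre>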
<ul>
<li>ClusterIP: Exposes the application within the cluster. For example, exposing a database to a backend application.</li>
<li>NodePort: Exposes the application outside the cluster using the node's IP and a static port. For example, exposing your frontend application to the outside world.</li>
<li>LoadBalancer: Exposes the application outside the cluster using a load balancer.</li>
</ul>
<p>Here's an example of exposing your application using ClusterIP (within the cluster). You can create a service using the below manifest file:</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Service
<span class="hljs-attr">metadata</span>:
  name: nginx-service
<span class="hljs-attr">spec</span>:
  selector:
    app: nginx
  <span class="hljs-attr">type</span>: ClusterIP
  <span class="hljs-attr">ports</span>:
  - protocol: TCP
    <span class="hljs-attr">port</span>: <span class="hljs-number">80</span>
    <span class="hljs-attr">targetPort</span>: <span class="hljs-number">8080</span>
</code></pre><p>You can use "kubectl get service" to see the IP address of the service.</p>
<p>Here's an example of exposing your application using NodePort (outside the cluster). You can create a service using the below manifest file:</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Service
<span class="hljs-attr">metadata</span>:
  name: nginx-service
<span class="hljs-attr">spec</span>:
  selector:
    app: nginx
  <span class="hljs-attr">type</span>: NodePort
  <span class="hljs-attr">ports</span>:
  - protocol: TCP
    <span class="hljs-attr">port</span>: <span class="hljs-number">80</span>
    <span class="hljs-attr">targetPort</span>: <span class="hljs-number">8080</span>
</code></pre><p>You can use "kubectl get service" to see the IP address of the node.</p>
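<p>Reaching the app from outside the cluster then looks something like this (a sketch; NODE_IP and NODE_PORT are placeholders you read from the two commands):</p>
<pre><code>kubectl get service nginx-service   # the node port appears after the colon, e.g. 80:3xxxx/TCP
kubectl get nodes -o wide           # use a node's INTERNAL-IP or EXTERNAL-IP
curl http://NODE_IP:NODE_PORT
</code></pre>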
<p>Another sample question will be like this:</p>
<blockquote>
<p>Schedule the pod on a node labeled with disktype=ssd</p>
</blockquote>
<p>Here you can use a nodeSelector like this:</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: nginx
  <span class="hljs-attr">labels</span>:
    env: test
<span class="hljs-attr">spec</span>:
  containers:
  - name: nginx
    <span class="hljs-attr">image</span>: nginx
  <span class="hljs-attr">nodeSelector</span>:
    disktype: ssd
</code></pre><h4 id="heading-how-to-scale-and-update-the-deployments">How to scale and update the deployments</h4>
<p>If you need to scale the deployment after creating it, you can use the below command.</p>
<pre><code>kubectl scale deployment/nginx-deployment --replicas=<span class="hljs-number">6</span>
</code></pre><p>You can update the image of the existing deployment using the below command: </p>
<pre><code>kubectl set image deployment/nginx-deployment nginx=nginx:<span class="hljs-number">1.8</span>
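# After an image update, you can watch the rollout and roll back if needed
# (standard kubectl rollout commands):
kubectl rollout status deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment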
</code></pre><h4 id="heading-how-to-configure-healthcecks-for-your-application">How to configure healthcecks for your application</h4>
<p>Once your application is deployed, you need to make sure that the app is running successfully. If an application crashes, you need to know how you can kill the container and bring in the new one. </p>
<p>Health checks help to achieve this use case. There the three different types of health checks you can perform:</p>
<ul>
<li>Readiness Probe: Kubernetes uses readiness probes to know when a container is ready to start accepting traffic (see the sketch after this list).</li>
<li>Liveness Probe: Kubernetes uses liveness probes to know when to restart a container. If the application crashes after it was deployed successfully, a liveness probe will detect it and restart the container.</li>
<li>Startup Probe: Kubernetes uses startup probes to know when a container application has started.</li>
</ul>
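<p>The liveness example below uses an exec command. For comparison, here is a minimal sketch of an httpGet readiness probe (the pod name, path, and port are illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
</code></pre>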
<p>Example of configuring a liveness probe:</p>
<pre><code>kubectl apply -f https:<span class="hljs-comment">//k8s.io/examples/pods/probe/exec-liveness.yaml</span>
</code></pre><p>And here is the full manifest of that liveness probe example:</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  labels:
    test: liveness
  <span class="hljs-attr">name</span>: liveness-exec
<span class="hljs-attr">spec</span>:
  containers:
  - name: liveness
    <span class="hljs-attr">image</span>: k8s.gcr.io/busybox
    <span class="hljs-attr">args</span>:
    - <span class="hljs-regexp">/bin/</span>sh
    - -c
    - touch /tmp/healthy; sleep <span class="hljs-number">30</span>; rm -rf /tmp/healthy; sleep <span class="hljs-number">600</span>
    <span class="hljs-attr">livenessProbe</span>:
      exec:
        command:
        - cat
        - <span class="hljs-regexp">/tmp/</span>healthy
      <span class="hljs-attr">initialDelaySeconds</span>: <span class="hljs-number">5</span>
      <span class="hljs-attr">periodSeconds</span>: <span class="hljs-number">5</span>
</code></pre><h4 id="heading-multicontainer-podsidecar-containers">MultiContainer pod/sidecar containers</h4>
<p>The primary purpose of a multi-container pod is to support a co-located helper container for the main program. </p>
<p>The standard logging method for containerized applications is writing to standard output and standard error streams. </p>
<p>There might be use cases where you also need to access these logs after a container crashes. For example, NGINX is designed to serve web pages, and it isn't suited to shipping its logs to a centralized logging solution. </p>
<p>You can set up a sidecar container that specialises in log shipping. The sidecar container is designed as a logging agent, which is configured to pick up logs from an application container.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/logging-with-streaming-sidecar.png" alt="logging-with-node-agent-reference from https://kubernetes.io/" width="600" height="400" loading="lazy"></p>
<p>Example questions about this topic will be like this:</p>
<blockquote>
<p>Create a Pod with the main container NGINX, which outputs logs to shared volume, and configure the sidecar container to stream those logs. Verify both containers are running.</p>
</blockquote>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: nginx-server
<span class="hljs-attr">spec</span>:
  volumes:
    - name: shared-logs
      <span class="hljs-attr">emptyDir</span>: {}

  <span class="hljs-attr">containers</span>:
    - name: nginx
      <span class="hljs-attr">image</span>: nginx
      <span class="hljs-attr">volumeMounts</span>:
        - name: shared-logs
          <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/var/</span>log/nginx

    - name: sidecar-container
      <span class="hljs-attr">image</span>: busybox
      <span class="hljs-attr">command</span>: [<span class="hljs-string">"sh"</span>,<span class="hljs-string">"-c"</span>,<span class="hljs-string">"while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"</span>]
      <span class="hljs-attr">volumeMounts</span>:
        - name: shared-logs
          <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/var/</span>log/nginx
</code></pre><h4 id="heading-how-to-configure-a-pod-to-use-a-configmap">How to configure a pod to use a ConfigMap</h4>
<p>ConfigMaps store data in key-value format. A possible use case for ConfigMaps is keeping application code and configuration separate. </p>
<p>ConfigMaps are designed to store non-confidential data such as environment variables or properties of a game or application. If you want to store sensitive data, use secrets. </p>
<p>ConfigMaps help create separate config files for each environment (development, staging, prod).</p>
<p>You can create ConfigMaps from files, directories, and literal values. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.</p>
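<p>As an illustration, here's a rough sketch of a pod consuming a ConfigMap both as an environment variable and as files in a volume (the ConfigMap and volume names match the example question below):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cfg-demo
spec:
  volumes:
  - name: config-volume
    configMap:
      name: cfg-data
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: VAR1
      valueFrom:
        configMapKeyRef:
          name: cfg-data
          key: var1
    volumeMounts:
    - name: config-volume
      mountPath: /etc/cfg
</code></pre>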
<p>Example questions will be like this:</p>
<blockquote>
<p>Create a ConfigMap called cfg-data with the values var1=val1 and var2=val2. Then create a busybox pod with a volume named config-volume that reads data from the ConfigMap cfg-data and mounts it at the path /etc/cfg.</p>
</blockquote>
<pre><code>kubectl create configmap cfg-data --from-literal=var1=val1 --from-literal=var2=val2
kubectl create -f https://github.com/nitheesh86/cka/blob/main/deployments-services/configmap.yml
</code></pre><h4 id="heading-how-to-configure-a-pod-to-use-secrets">How to configure a pod to use secrets</h4>
<p>Secrets in Kubernetes can be used to store sensitive data such as passwords and tokens. Secrets are similar to ConfigMaps but are specifically designed to hold sensitive data. Pods can use secrets as an environment variable or as files in a volume.</p>
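<p>For example, a pod spec can pull every key of a secret in as environment variables with a snippet roughly like this (the names mirror the example question below):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql-db
  namespace: database
spec:
  containers:
  - name: mysql
    image: mysql
    envFrom:
    - secretRef:
        name: db-secret
</code></pre>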
<p>Example questions about secrets will be like this:</p>
<blockquote>
<p>Create a secret named "db-secret" in the database namespace. The secret should contain db_user=root and db_pass=1234. Mount it into the pod named "mysql-db" as environment variables.</p>
</blockquote>
<pre><code>kubectl create namespace database
kubectl create secret generic db-secret --from-literal=db_user=root --from-literal=db_pass=1234 -n database
</code></pre><pre><code>https:<span class="hljs-comment">//github.com/nitheesh86/cka/blob/main/deployments-services/mysql-secret.yml</span>
</code></pre><h3 id="heading-services-and-networking-module">Services and Networking Module</h3>
<p>In this section, you will get questions about creating network policies, creating ingress resources, and exposing apps through services (already covered above).</p>
<h4 id="heading-how-to-creating-networking-policies">How to create network policies</h4>
<p>In Kubernetes, by default, communication between all pods is allowed. If you need to isolate pods, you need to apply a network policy.</p>
<p>Example questions about network policies will be like this:</p>
<p>Allow traffic from the production namespace only:</p>
<pre><code>kind: NetworkPolicy
<span class="hljs-attr">apiVersion</span>: networking.k8s.io/v1
<span class="hljs-attr">metadata</span>:
  name: allow-traffic-<span class="hljs-keyword">from</span>-namespace
<span class="hljs-attr">spec</span>:
  podSelector:
    matchLabels:
  ingress:
  - <span class="hljs-keyword">from</span>:
    - namespaceSelector:
        matchLabels:
          purpose: production
</code></pre><p>This policy will allow traffic to all pods from the production namespace.</p>
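<p>Keep in mind that the namespaceSelector matches namespace labels, not the namespace name, so the production namespace needs that label (a quick sketch):</p>
<pre><code>kubectl label namespace production purpose=production
kubectl get namespace production --show-labels
</code></pre>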
<h4 id="heading-how-to-create-an-ingress-resource">How to create an ingress resource</h4>
<p>An ingress controller is a type of load balancer. It accepts traffic from outside the cluster and load balances it to pods. You can also configure rules like redirections.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/Screenshot-2022-01-12-at-20.53.03.png" alt="Screenshot-2022-01-12-at-20.53.03 - Reference from kubernetes.io Website" width="600" height="400" loading="lazy"></p>
<ul>
<li>How to create an ingress using NGINX ingress controller:</li>
</ul>
<pre><code>apiVersion: networking.k8s.io/v1
<span class="hljs-attr">kind</span>: Ingress
<span class="hljs-attr">metadata</span>:
  name: nginx-ingress
  <span class="hljs-attr">annotations</span>:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        <span class="hljs-attr">pathType</span>: Prefix
        <span class="hljs-attr">backend</span>:
          service:
            name: nginx-service
            <span class="hljs-attr">port</span>:
              number: <span class="hljs-number">80</span>
</code></pre><h3 id="heading-storage-module">Storage Module</h3>
<p>This section is all about creating PersistentVolumes and PersistentVolumeClaims and mounting them into a pod. It's worth studying PersistentVolumes and PersistentVolumeClaims in depth.</p>
<p>A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by a storage administrator or dynamically provisioned using Storage Classes like AWSElasticBlockStore, AzureDisk, and so on.</p>
<p>A PersistentVolumeClaim (PVC) is a request for storage by a user or Pod.</p>
<p>Example questions will be like this:</p>
<blockquote>
<p>Create an NGINX Pod and serve index.html from a PersistentVolume.</p>
</blockquote>
<ul>
<li>SSH into the node, create a /mnt/data directory, and then create an index.html file:</li>
</ul>
<pre><code>sudo mkdir /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' &gt; /mnt/data/index.html"
</code></pre><ul>
<li>Create a PersistentVolume and a PersistentVolumeClaim:</li>
</ul>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: PersistentVolume
<span class="hljs-attr">metadata</span>:
  name: task-pv-volume
  <span class="hljs-attr">labels</span>:
    type: local
<span class="hljs-attr">spec</span>:
  storageClassName: manual
  <span class="hljs-attr">capacity</span>:
    storage: <span class="hljs-number">10</span>Gi
  <span class="hljs-attr">accessModes</span>:
    - ReadWriteOnce
  <span class="hljs-attr">hostPath</span>:
    path: <span class="hljs-string">"/mnt/data"</span>
</code></pre><pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: PersistentVolumeClaim
<span class="hljs-attr">metadata</span>:
  name: task-pv-claim
<span class="hljs-attr">spec</span>:
  storageClassName: manual
  <span class="hljs-attr">accessModes</span>:
    - ReadWriteOnce
  <span class="hljs-attr">resources</span>:
    requests:
      storage: <span class="hljs-number">3</span>Gi
</code></pre><ul>
<li>Now, configure the pod to use the PersistentVolumeClaim:</li>
</ul>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: task-pv-pod
<span class="hljs-attr">spec</span>:
  volumes:
    - name: task-pv-storage
      <span class="hljs-attr">persistentVolumeClaim</span>:
        claimName: task-pv-claim
  <span class="hljs-attr">containers</span>:
    - name: task-pv-container
      <span class="hljs-attr">image</span>: nginx
      <span class="hljs-attr">ports</span>:
        - containerPort: <span class="hljs-number">80</span>
          <span class="hljs-attr">name</span>: <span class="hljs-string">"http-server"</span>
      <span class="hljs-attr">volumeMounts</span>:
        - mountPath: <span class="hljs-string">"/usr/share/nginx/html"</span>
          <span class="hljs-attr">name</span>: task-pv-storage
</code></pre><p>Also, in the exam, you might be asked to expand a PersistentVolume. Some storage classes support resizing the volume, for example AWS-EBS, GCE-PD, Azure Disk, Azure File, and Glusterfs.</p>
<p>If volume expansion is not enabled for the storage class, you need to set "allowVolumeExpansion: true" on it. </p>
<p>Get the name of the storage class you want to expand with <code>kubectl get storageclass</code>. Then edit its YAML file:</p>
<pre><code>apiVersion: storage.k8s.io/v1
<span class="hljs-attr">kind</span>: StorageClass
<span class="hljs-attr">metadata</span>:
  name: standard
<span class="hljs-attr">parameters</span>:
  type: pd-standard
<span class="hljs-attr">provisioner</span>: kubernetes.io/gce-pd
<span class="hljs-attr">allowVolumeExpansion</span>: <span class="hljs-literal">true</span>
<span class="hljs-attr">reclaimPolicy</span>: Delete
</code></pre><ul>
<li><p>Then edit the PVC to request more space (update the storage request under <code>spec.resources.requests</code>):</p>
<pre><code>kubectl edit pvc myclaim
</code></pre><p><img src="https://www.freecodecamp.org/news/content/images/2022/01/pvc-storageclass.png" alt="pvc-storageclass" width="600" height="400" loading="lazy"></p>
</li>
<li><p>Once the PVC is updated, you need to recreate the pod for the change to take effect. You can check the new size with <code>kubectl get pvc myclaim</code> (see the snippet after this list).</p>
</li>
</ul>
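<p>The verification step from the last bullet might look roughly like this (the pod manifest filename is assumed for illustration):</p>
<pre><code>kubectl get pvc myclaim                 # check the current status and capacity
kubectl delete pod task-pv-pod          # recreate the pod so it picks up the resized volume
kubectl apply -f task-pv-pod.yaml
kubectl get pvc myclaim -o jsonpath='{.status.capacity.storage}'
</code></pre>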
<h3 id="heading-troubleshooting-module">Troubleshooting Module</h3>
<p>This part covers 30% of the exam. You can expect questions about how to troubleshoot nodes.</p>
<p>Example questions will be like this:</p>
<blockquote>
<p>One of the worker nodes in the cluster is not in a ready state. Troubleshoot the node and bring it back online.</p>
</blockquote>
<p>To troubleshoot a worker node, you need to know which components run on it. Worker nodes run the kubelet and kube-proxy, and typically one of these services will have issues. </p>
<p>First, check if the service is running and try restarting it. If restarting the service doesn't help, check the logs:</p>
<ul>
<li>/var/log/kubelet.log - Kubelet, responsible for running containers on the node</li>
<li>/var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing</li>
</ul>
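<p>On the broken node, a typical sequence looks something like this (a sketch, assuming a kubeadm-based node where the kubelet runs under systemd):</p>
<pre><code>ssh node01
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 50
sudo systemctl restart kubelet
exit
kubectl get nodes    # back on the control plane node, confirm the node is Ready
</code></pre>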
<p>You can refer to the below link for a detailed troubleshooting guide:</p>
<p>https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/</p>
<h2 id="heading-how-to-verify-your-answers-on-the-exam">How to Verify Your Answers on the Exam</h2>
<p>It is very important to verify your answers to the exam questions. Here are some ways to do that:</p>
<h3 id="heading-check-your-pods">Check your pods</h3>
<p>After you create a pod, make sure it's in the ready state with the command <code>kubectl get pod nginx</code>.</p>
<p>If your pods are not in a ready state, run <code>kubectl describe pod nginx</code> to see the events. You can also check the pod logs by running <code>kubectl logs nginx</code>.</p>
<h3 id="heading-check-deployment-status">Check deployment status</h3>
<p>Once you create a deployment you can get the deployment status by running <code>kubectl get deployments nginx-deployment</code>. </p>
<p>If your deployment is in a pending state, you can view the events by running <code>kubectl describe deployment nginx-deployment</code>.</p>
<h3 id="heading-verify-services">Verify services</h3>
<p>You can verify that your service endpoints are working by launching a helper pod with the "busybox" image. </p>
<p>You can open a shell in the helper pod with the command <code>kubectl exec --stdin --tty busybox -- /bin/sh</code> (busybox ships sh, not bash) and then query the endpoint with <code>curl http://ipaddress/</code>. </p>
<p>You can get the service endpoint by running <code>kubectl get svc my-service</code>. <a target="_blank" href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/">Here's a reference</a> where you can study more about logging into the container.</p>
<h2 id="heading-and-thats-it">And that's it!</h2>
<p>Good luck studying for the exam! I hope this guide helps you prepare and pass.</p>
<p>These are the main sources and references I used to study and to write this article:</p>
<ul>
<li>https://kubernetes.io/</li>
<li>https://www.cncf.io/</li>
<li>https://github.com/ahmetb/kubernetes-network-policy-recipes</li>
<li>https://www.katacoda.com/courses/kubernetes</li>
<li>https://jenciso.github.io/personal/manage-tls-certificates-for-kubernetes-users</li>
</ul>
<p>Thank you for reading, and happy learning.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
