<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ continuous delivery - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ continuous delivery - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Sat, 09 May 2026 08:29:52 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/continuous-delivery/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ The CI/CD Handbook: Learn Continuous Integration and Delivery with GitHub Actions, Docker, and Google Cloud Run ]]>
                </title>
                <description>
                    <![CDATA[ Hey everyone! 🌟 If you’re in the tech space, chances are you’ve come across terms like Continuous Integration (CI), Continuous Delivery (CD), and Continuous Deployment. You’ve probably also heard about automation pipelines, staging environments, pro... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/learn-continuous-integration-delivery-and-deployment/</link>
                <guid isPermaLink="false">6751d2f856661d3d5a501466</guid>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                    <category>
                        <![CDATA[ CI/CD ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Prince Onukwili ]]>
                </dc:creator>
                <pubDate>Thu, 05 Dec 2024 16:21:12 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734119999570/cfbf3375-1e95-41df-b5b0-8fbb8b827f59.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Hey everyone! 🌟 If you’re in the tech space, chances are you’ve come across terms like <strong>Continuous Integration (CI)</strong>, <strong>Continuous Delivery (CD)</strong>, and <strong>Continuous Deployment</strong>. You’ve probably also heard about automation pipelines, staging environments, production environments, and concepts like testing workflows.</p>
<p>These terms might seem complex or interchangeable at first glance, leaving you wondering: What do they actually mean? How do they differ from one another? 🤔</p>
<p>In this handbook, I’ll break down these concepts in a clear and approachable way, drawing on relatable analogies to make each term easier to understand. 🧠💡 Beyond just theory, we’ll dive into a hands-on tutorial where you’ll learn how to set up a CI/CD workflow step by step.</p>
<p>Together, we’ll:</p>
<ul>
<li><p>Set up a Node.js project. ✨</p>
</li>
<li><p>Implement automated tests using Jest and Supertest. 🛠️</p>
</li>
<li><p>Set up a CI/CD workflow using GitHub Actions, triggered on pushes, pull requests, or new releases. ⚙️</p>
</li>
<li><p>Build and publish a Docker image of your application to Docker Hub. 📦</p>
</li>
<li><p>Deploy your application to a staging environment for testing. 🚀</p>
</li>
<li><p>Finally, roll it out to a production environment, making it live! 🌐</p>
</li>
</ul>
<p>By the end of this guide, not only will you understand the difference between CI/CD concepts, but you’ll also have practical experience in building your own automated pipeline. 😃</p>
<h3 id="heading-table-of-contents">Table of Contents</h3>
<ol>
<li><p><a class="post-section-overview" href="#heading-what-is-continuous-integration-deployment-and-delivery"><strong>What is Continuous Integration, Deployment, and Delivery?</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-differences-between-continuous-integration-continuous-delivery-and-continuous-deployment"><strong>Differences Between Continuous Integration, Continuous Delivery, and Continuous Deployment</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-a-nodejs-project-with-a-web-server-and-automated-tests"><strong>How to Set Up a Node.js Project with a Web Server and Automated Tests</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-create-a-github-repository-to-host-your-codebase"><strong>How to Create a GitHub Repository to Host Your Codebase</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-the-ci-and-cd-workflows-within-your-project"><strong>How to Set Up the CI and CD Workflows Within Your Project</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-set-up-a-docker-hub-repository-for-the-projects-image-and-generate-an-access-token-for-publishing-the-image"><strong>Set Up a Docker Hub Repository for the Project's Image and Generate an Access Token for Publishing the Image</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-a-google-cloud-account-project-and-billing-account"><strong>Create a Google Cloud Account, Project, and Billing Account</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-a-google-cloud-service-account-to-enable-deployment-of-the-nodejs-application-to-google-cloud-run-via-the-cd-pipeline"><strong>Create a Google Cloud Service Account to Enable Deployment of the Node.js Application to Google Cloud Run via the CD Pipeline</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-the-staging-branch-and-merge-the-feature-branch-into-it-continuous-integration-and-continuous-delivery"><strong>Create the Staging Branch and Merge the Feature Branch into It (Continuous Integration and Continuous Delivery)</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-merge-the-staging-branch-into-the-main-branch-continuous-integration-and-continuous-deployment"><strong>Merge the Staging Branch into the Main Branch (Continuous Integration and Continuous Deployment)</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion"><strong>Conclusion</strong></a></p>
</li>
</ol>
<h2 id="heading-what-is-continuous-integration-deployment-and-delivery"><strong>What is Continuous Integration, Deployment, and Delivery?</strong> 🤔</h2>
<h3 id="heading-continuous-integration-ci"><strong>Continuous Integration (CI)</strong></h3>
<p>Imagine you’re part of a team of six developers, all working on the same project. Without a proper system, chaos would ensue.</p>
<p>Let’s say Mr. A is building a new login feature, Mrs. B is fixing a bug in the search bar, and Mr. C is tweaking the dashboard UI—all at the same time. If everyone is editing the same "folder" or codebase directly, things could go horribly wrong: <em>"Hey! Who just broke the app?!"</em> 😱</p>
<p>To keep everything in order, teams use <strong>Version Control Systems (VCS)</strong> like Git, along with hosting platforms such as GitHub, GitLab, or Bitbucket. Think of it as a digital workspace where everyone can safely collaborate without stepping on each other’s toes. 🗂️✨</p>
<p>Here’s how Continuous Integration fits into this process step-by-step:</p>
<h4 id="heading-1-the-main-branch-the-general-folder">1. <strong>The Main Branch: The General Folder</strong> ✨</h4>
<p>At the heart of every project is the <strong>main branch</strong>—the ultimate source of truth. It contains the stable codebase that powers your live app. It’s where every team member contributes their work, but with one important rule: only tested and approved code gets merged here. 🚀</p>
<h4 id="heading-2-feature-branches-personal-workspaces">2. <strong>Feature Branches: Personal Workspaces</strong> 🔨</h4>
<p>When someone like Mr. A wants to work on a new feature, they create a <strong>feature branch</strong>. This branch is essentially a personal copy of the main branch where they can tinker, write code, and test without affecting others. Mrs. B and Mr. C are also working on their own branches. Everyone’s experiments stay neatly organized. 🧪💡</p>
<h4 id="heading-3-merging-changes-the-ci-workflow">3. <strong>Merging Changes: The CI Workflow</strong> 🎉</h4>
<p>When Mr. A is satisfied with his feature, he doesn’t just shove it into the main branch—CI ensures it’s done safely:</p>
<ul>
<li><p><strong>Automated Tests</strong>: Before merging, CI tools automatically run tests on Mr. A’s code to check for bugs or errors. Think of it as a bouncer guarding the main branch, ensuring no bad code gets in. 🕵️‍♂️</p>
</li>
<li><p><strong>Build Verification</strong>: The feature branch code is also "built" (converted into a deployable version of the app) to confirm it works as intended.</p>
</li>
</ul>
<p>Once these checks pass, Mr. A’s feature branch is merged into the main branch. This frequent merging of changes is what we call <strong>Continuous Integration</strong>.</p>
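<p>The branch-and-merge cycle above can be reproduced with plain Git. Here’s a minimal, self-contained sketch that runs in a throwaway directory (the branch name, file, and commit messages are made up for illustration):</p>
<pre><code class="lang-bash"># Create a throwaway repository with a main branch
mkdir -p /tmp/ci-demo; cd /tmp/ci-demo
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial commit"

# Mr. A works on his own feature branch, leaving main untouched
git checkout -q -b feature/login
echo "// login feature" > login.js
git add login.js
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add login feature"

# After the CI checks pass, the feature branch is merged into main
git checkout -q main
git merge -q --no-edit feature/login
git log --oneline
</code></pre>
<p>On a real team, that last merge is done through a pull request, so the CI checks run on the feature branch before the merge is allowed.</p>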
<h3 id="heading-continuous-delivery-cd">Continuous Delivery (CD)</h3>
<p>Continuous Delivery (CD) often gets mixed up with Continuous Deployment, and while they share similarities, they serve distinct purposes in the development lifecycle. Let’s break it down! 🧐</p>
<h4 id="heading-the-need-for-a-staging-area">The Need for a <code>Staging</code> Area 🌉</h4>
<p>In the Continuous Integration (CI) process we discussed above, we primarily dealt with <strong>feature branches</strong> and the <strong>main branch</strong>. But directly merging changes from feature branches into the main branch (which powers the live product) can be risky. Why? 🛑</p>
<p>While automated tests and builds catch many errors, they’re not foolproof. Some edge cases or bugs might slip through unnoticed. This is where the <strong>staging branch</strong> and <strong>staging environment</strong> come into play! 🎭</p>
<p>Think of the staging branch as a “trial run.” Before unleashing changes to real customers, the codebase from feature branches is merged into the staging branch and deployed to a <strong>staging environment</strong>. This environment is an exact replica of the production environment, but it’s used exclusively by the <strong>Quality Assurance (QA) team</strong> for testing.</p>
<p>The QA team takes the role of a “test driver,” running the platform through its paces just as a real user would. They check for usability issues, edge cases, or bugs that automated tests might miss, and provide feedback to developers for fixes. 🚦 If everything passes, the codebase is cleared for deployment to production.</p>
<h4 id="heading-continuous-delivery-in-action">Continuous Delivery in Action 📦</h4>
<p>The process of merging changes into the staging branch and deploying them to the <strong>staging environment</strong> is what we call <strong>Continuous Delivery</strong>. 🛠️ It ensures that the application is always in a deployable state, ready for the next step in the pipeline.</p>
<p>Unlike Continuous Deployment (which we’ll discuss later), Continuous Delivery doesn’t automatically push changes to production (live platform). Instead, it pauses to let humans—namely the QA team or stakeholders—decide when to proceed. This adds an extra layer of quality assurance, reducing the chances of errors making it to the live product. 🕵️‍♂️</p>
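<p>In GitHub Actions terms, this human checkpoint is often modeled with a protected <strong>environment</strong>: a deployment job targeting it waits until a designated reviewer approves. The sketch below is not part of this tutorial’s pipeline; the <code>staging</code> environment and its required reviewers are assumptions you would configure under the repository’s <strong>Settings → Environments</strong>:</p>
<pre><code class="lang-yaml">name: Deliver to staging
on:
  push:
    branches:
      - staging
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    # If the "staging" environment has required reviewers configured,
    # this job pauses here until a human approves the deployment.
    environment: staging
    steps:
      - uses: actions/checkout@v3
      # ...build and deploy steps would go here...
</code></pre>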
<h3 id="heading-continuous-deployment-cd">Continuous Deployment (CD)</h3>
<p>Continuous Deployment (CD) takes automation to its peak. While it shares similarities with Continuous Delivery, the key difference lies in the <strong>final step</strong>: no manual approval is required. In Continuous Delivery, a human (the QA testers or the team lead, for example) signs off on the final step of merging the codebase and deploying it live for end users; Continuous Deployment removes that gate entirely.</p>
<p>Let’s explore what makes Continuous Deployment so powerful (and a little scary)! 😅</p>
<h4 id="heading-the-last-mile-of-the-cicd-pipeline">The Last Mile of the CI/CD Pipeline 🛣️</h4>
<p>Imagine you’ve gone through the rigorous process of Continuous Integration: teammates have merged their feature branches, automated tests were run, and the codebase was successfully deployed to the staging environment during Continuous Delivery.</p>
<p>Now, you’re confident that the application is free of bugs and ready to shine in the production environment—the live version of your platform used by real customers.</p>
<p>In <strong>Continuous Deployment</strong>, this final step of deploying changes to the live environment happens <strong>automatically</strong>. The pipeline triggers whenever specific events occur, such as:</p>
<ul>
<li><p>A <strong>Pull Request (PR)</strong> is merged into the <strong>main branch</strong>.</p>
</li>
<li><p>A new <strong>release version</strong> is created.</p>
</li>
<li><p>A <strong>commit</strong> is pushed directly to the production branch (though this is rare for most teams).</p>
</li>
</ul>
<p>Once triggered, the pipeline springs into action, building, testing, and finally deploying the updated codebase to the production environment. 📡</p>
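<p>As a sketch, those triggers map directly onto the <code>on</code> block of a GitHub Actions workflow (the branch name is illustrative; note that merging a PR into <code>main</code> shows up as a push event on <code>main</code>):</p>
<pre><code class="lang-yaml">on:
  push:
    branches:
      - main            # fires when a PR is merged into main, or a commit is pushed to it
  release:
    types: [published]  # fires when a new release version is created
</code></pre>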
<h2 id="heading-differences-between-continuous-integration-continuous-delivery-and-continuous-deployment"><strong>Differences Between Continuous Integration, Continuous Delivery, and Continuous Deployment</strong> 🔍</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Aspect</th><th>Continuous Integration (CI)</th><th>Continuous Delivery (CD)</th><th>Continuous Deployment (CD)</th></tr>
</thead>
<tbody>
<tr>
<td><strong>Primary Focus</strong></td><td>Merging feature branches into the main/general codebase or into the staging codebase.</td><td>Deploying the tested code to a staging environment for QA testing and approval.</td><td>Automatically deploying the code to the live production environment.</td></tr>
<tr>
<td><strong>Automation Level</strong></td><td>Automates testing and building processes for feature branches.</td><td>Automates deployment to staging/test environments after successful testing.</td><td>Fully automates the deployment to production with no manual approval.</td></tr>
<tr>
<td><strong>Testing Scope</strong></td><td>Automated tests run on feature branches to ensure code quality before merging into the main or staging branch.</td><td>Includes automated tests before deployment to staging and allows QA testers to perform manual testing in a controlled environment.</td><td>May include automated tests as a final check, ensuring the production environment is stable before deployment.</td></tr>
<tr>
<td><strong>Branch Involved</strong></td><td>Feature branches merging into the main/general or staging branch.</td><td>Staging branch used as an intermediate step before merging into the main branch.</td><td>Main/general branch deployed directly to production.</td></tr>
<tr>
<td><strong>Environment Target</strong></td><td>Ensures integration and testing within a local environment or build pipeline.</td><td>Deploys to staging/test environments where QA testers validate features.</td><td>Deploys to production/live environment accessed by end users.</td></tr>
<tr>
<td><strong>Key Goal</strong></td><td>Prevent integration conflicts and ensure new changes don’t break the existing codebase.</td><td>Provide a stable, near-production environment for thorough QA testing before final deployment.</td><td>Ensure that new features and updates reach users as soon as possible with minimal delays.</td></tr>
<tr>
<td><strong>Approval Process</strong></td><td>No approval needed. Feature branches are tested and merged upon passing criteria.</td><td>QA team or lead provides feedback/approval before changes are merged into the main branch for production.</td><td>No manual approval. Deployment is entirely automated.</td></tr>
<tr>
<td><strong>Example Trigger</strong></td><td>A developer merges a feature branch into the main branch.</td><td>The staging branch passes automated tests (during PR) and is ready for deployment to the testing environment.</td><td>A new release is created or a pull request is merged into the main branch, triggering an automatic production deployment.</td></tr>
</tbody>
</table>
</div><p>Now that we’ve untangled the mysteries of Continuous Integration, Continuous Delivery, and Continuous Deployment, it’s time to roll up our sleeves and put theory into practice 😁.</p>
<h2 id="heading-how-to-set-up-a-nodejs-project-with-a-web-server-and-automated-tests"><strong>How to Set Up a Node.js Project with a Web Server and Automated Tests</strong> ✨</h2>
<p>In this hands-on section, we’ll build a Node.js web server with automated tests using Jest. From there, we’ll create a CI/CD pipeline with GitHub Actions that automates testing for every <strong>pull request to the staging and main branches</strong>. Finally, we’ll publish an image of our application to Docker Hub and deploy the image to <strong>Google Cloud Run</strong>, first to a staging environment for testing and later to the production environment for live use.</p>
<p>Ready to bring your project to life? Let’s get started! 🚀✨</p>
<h3 id="heading-step-1-install-nodejs">Step 1: Install Node.js 📥</h3>
<p>To get started, you’ll need to have <strong>Node.js</strong> installed on your machine. Node.js provides the JavaScript runtime we’ll use to create our web server.</p>
<ol>
<li><p>Visit <a target="_blank" href="https://nodejs.org/en/download/package-manager">https://nodejs.org/en/download/package-manager</a></p>
</li>
<li><p>Choose your operating system (Windows, macOS, or Linux) and download the installer.</p>
</li>
<li><p>Follow the installation instructions to complete the setup.</p>
</li>
</ol>
<p>To verify that Node.js was installed successfully, open your terminal and run <code>node -v</code>. This should display the installed version of Node.js.</p>
<h3 id="heading-step-2-clone-the-starter-repository">Step 2: Clone the Starter Repository 📂</h3>
<p>The next step is to grab the starter code from GitHub. If you don’t have Git installed, you can download it at <a target="_blank" href="https://git-scm.com/downloads">https://git-scm.com/downloads</a>. Choose your OS and follow the instructions to install Git. Once you’re set, it’s time to clone the repository.</p>
<p>Run the following command in your terminal to clone the boilerplate code:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> --single-branch --branch initial https://github.com/onukwilip/ci-cd-tutorial
</code></pre>
<p>This will download the project files from the <code>initial</code> branch, which contains the starter template for our Node.js web server.</p>
<p>Navigate into the project directory:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ci-cd-tutorial
</code></pre>
<h3 id="heading-step-3-install-dependencies">Step 3: Install Dependencies 📦</h3>
<p>Once you’re in the project directory, install the required dependencies for the Node.js project. These are the packages that power the application:</p>
<pre><code class="lang-bash">npm install --force
</code></pre>
<p>This will download and set up all the libraries specified in the project. Alright, dependencies installed? You’re one step closer!</p>
<h3 id="heading-step-4-run-automated-tests">Step 4: Run Automated Tests ✅</h3>
<p>Before diving into the code, let’s confirm that the automated tests are functioning correctly. Run:</p>
<pre><code class="lang-bash">npm <span class="hljs-built_in">test</span>
</code></pre>
<p>You should see two successful test results in your terminal. This indicates that the starter project is correctly configured with working automated tests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733074280408/93b4ea86-1dfa-42eb-a163-b97c19c2a053.png" alt="Successful test run" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
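<p>If you’re curious what these tests amount to, they boil down to starting the server and asserting on its responses. The real project uses Jest and Supertest; the sketch below shows the same idea using only Node.js built-ins, with a made-up route and response body:</p>
<pre><code class="lang-javascript">const http = require("http");
const assert = require("assert");

// A tiny stand-in for the tutorial's web server (the real app and its
// response body will differ; this is only an illustration)
const app = http.createServer((req, res) =&gt; {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "Hello from the server" }));
});

// "Test": start the server on a random free port, request the root route,
// then assert on the status code and body before shutting down
app.listen(0, () =&gt; {
  const { port } = app.address();
  http.get({ port, path: "/" }, (res) =&gt; {
    let body = "";
    res.on("data", (chunk) =&gt; (body += chunk));
    res.on("end", () =&gt; {
      assert.strictEqual(res.statusCode, 200);
      assert.ok(JSON.parse(body).message.length > 0);
      console.log("test passed");
      app.close();
    });
  });
});
</code></pre>
<p>Jest and Supertest wrap this pattern in a much nicer API, but the core loop is the same: send a request, assert on the response, and fail the pipeline if an assertion throws.</p>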
<h3 id="heading-step-5-start-the-web-server">Step 5: Start the Web Server 🌐</h3>
<p>Finally, let’s start the web server and see it in action. Run the following command:</p>
<pre><code class="lang-bash">npm start
</code></pre>
<p>Wait for the application to start running. Open your browser and visit <a target="_blank" href="http://localhost:5000/">http://localhost:5000</a>. 🎉 You should see the starter web server up and running, ready for your CI/CD magic:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733074667521/7b80bb21-1f43-430e-8a56-2bff8b81ddad.png" alt="Successful project run" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-how-to-create-a-github-repository-to-host-your-codebase"><strong>How to Create a GitHub Repository to Host Your Codebase 📂</strong></h2>
<h3 id="heading-step-1-sign-in-to-github">Step 1: Sign In to GitHub</h3>
<ol>
<li><p><strong>Go to GitHub</strong>: Open your browser and visit GitHub - <a target="_blank" href="https://github.com/">https://github.com</a>.</p>
</li>
<li><p><strong>Sign In</strong>: Click on the <strong>Sign In</strong> button in the top-right corner and enter your username and password to log in, OR create an account if you don’t have one by clicking the <strong>Sign up</strong> button.</p>
</li>
</ol>
<h3 id="heading-step-2-create-a-new-repository">Step 2: Create a New Repository</h3>
<p>Once you're signed in, on the main GitHub page, you’ll see a "+" sign in the top-right corner next to your profile picture. Click on it, and select <strong>“New repository”</strong> from the dropdown.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733130465203/dac28dee-74da-4fd4-8a96-bc90aef01207.png" alt="New GitHub repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now it’s time to set the repository details. You’ll include:</p>
<ul>
<li><p><strong>Repository Name</strong>: Choose a name for your repository. For example, you can call it <code>ci-cd-tutorial</code>.</p>
</li>
<li><p><strong>Description</strong> (Optional): You can add a short description, like “A tutorial project for CI/CD with Docker and GitHub Actions.”</p>
</li>
<li><p><strong>Visibility</strong>: Choose whether you want your repository to be <strong>public</strong> (accessible by anyone) or <strong>private</strong> (only accessible by you and those you invite). For the sake of this tutorial, make it <strong>public</strong>.</p>
</li>
<li><p><strong>Do Not Check the Add a README File Box</strong>: <strong>Important</strong>: Make sure you <strong>do not check</strong> the <strong>Add a README file</strong> option. Checking it would create a <code>README.md</code> file in the new repository, which could cause conflicts later when you push your local files. We'll add a README manually later if needed.</p>
</li>
</ul>
<p>After filling out the details, click on <strong>“Create repository”</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733130890582/04e09ac8-0ee6-4d26-a9f2-007c0e6ca08f.png" alt="Create GitHub repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-3-change-the-remote-destination-and-push-to-your-new-repository">Step 3: Change the Remote Destination and Push to Your New Repository</h3>
<h4 id="heading-update-the-remote-repository-url"><strong>Update the Remote Repository URL</strong>:</h4>
<p>Since you've already cloned the codebase from my repository, you need to update the remote destination to point to your newly created GitHub repository.</p>
<p>Copy your repository URL (the URL of the page you were redirected to after creating the repository). It should look similar to this: <code>https://github.com/&lt;username&gt;/&lt;repo-name&gt;</code>.</p>
<p>Open your terminal in the project directory and run the following commands:</p>
<pre><code class="lang-bash">git remote set-url origin &lt;your-repo-url&gt;
</code></pre>
<p>Replace <code>&lt;your-repo-url&gt;</code> with your GitHub repository URL which you copied earlier.</p>
<h4 id="heading-rename-the-current-branch-to-main"><strong>Rename the Current Branch to</strong> <code>main</code>:</h4>
<p>If your branch is named something other than <code>main</code>, you can rename it to <code>main</code> using:</p>
<pre><code class="lang-bash">git branch -M main
</code></pre>
<h4 id="heading-push-to-your-new-repository"><strong>Push to Your New Repository</strong>:</h4>
<p>Finally, commit any changes you’ve made and push your local repository to the new remote GitHub repository by running:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">'Created boilerplate'</span>
git push -u origin main
</code></pre>
<p>Now your local codebase is linked to your new GitHub repository, and the files are successfully pushed there. You can verify by visiting your repository on GitHub.</p>
<h2 id="heading-how-to-set-up-the-ci-and-cd-workflows-within-your-project">How to Set Up the CI and CD Workflows Within Your Project ⚙️</h2>
<p>Now it’s time to create the <strong>CI and CD workflows</strong> for our project! These workflows won’t run on your local PC but will be automatically triggered and executed in the cloud once you push your changes to the remote repository. GitHub Actions will detect these workflows and run them based on the triggers you define.</p>
<h3 id="heading-step-1-prepare-the-workflow-directory">Step 1: Prepare the Workflow Directory 📂</h3>
<p>Before adding the CI/CD pipelines, it's a good practice to first create a feature branch. This step mirrors the workflow commonly used in teams, where new features or changes are made in separate branches before they are merged into the main codebase.</p>
<p>To create and switch to a new branch, run the following command:</p>
<pre><code class="lang-bash">git checkout -b feature/ci-cd-pipeline
</code></pre>
<p>This will create a new branch called <code>feature/ci-cd-pipeline</code> and switch to it. Now, you can safely add and test the CI/CD workflows without affecting the main branch.</p>
<p>Once you finish, you’ll be able to merge this feature branch back into <code>main</code> or <code>staging</code> as part of the pull request process.</p>
<p>In the project’s root directory, create a folder named <code>.github</code>. Inside <code>.github</code>, create another folder called <code>workflows</code>.</p>
<p>Any YAML file placed in the <code>.github/workflows</code> directory is automatically recognized as a GitHub Actions workflow. These workflows will execute based on specific triggers, such as pull requests, pushes, or releases.</p>
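<p>From the project’s root directory, both folders can be created with a single command:</p>
<pre><code class="lang-bash"># -p creates .github and the nested workflows folder in one go
mkdir -p .github/workflows
</code></pre>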
<h3 id="heading-step-2-create-the-continuous-integration-workflow">Step 2: Create the Continuous Integration Workflow 🚀</h3>
<p>We’ll now create a CI workflow that automatically tests the application whenever a pull request is made to the <code>main</code> or <code>staging</code> branches.</p>
<p>First, inside the <code>workflows</code> directory, create a file named <code>ci-pipeline.yml</code>.</p>
<p>Paste the following code into the file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CI</span> <span class="hljs-string">Pipeline</span> <span class="hljs-string">to</span> <span class="hljs-string">staging/production</span> <span class="hljs-string">environment</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">test:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup,</span> <span class="hljs-string">test,</span> <span class="hljs-string">and</span> <span class="hljs-string">build</span> <span class="hljs-string">project</span>
    <span class="hljs-attr">env:</span>
      <span class="hljs-attr">PORT:</span> <span class="hljs-number">5001</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span> <span class="hljs-string">application</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">application</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "Run command to build the application if present"
          npm run build --if-present</span>
</code></pre>
<h4 id="heading-explanation-of-the-ci-workflow">Explanation of the CI Workflow</h4>
<p>Here’s a breakdown of each section in the workflow:</p>
<ol>
<li><p><code>name: CI Pipeline to staging/production environment</code>: This is the title of your workflow. It helps you identify this pipeline in GitHub Actions.</p>
</li>
<li><p><code>on</code>: The <code>on</code> parameter determines the events that trigger your workflow. When the workflow YAML file is pushed to the remote GitHub repository, GitHub Actions automatically registers the workflow using the triggers configured in the <code>on</code> field. These triggers act as event listeners that tell GitHub when to execute the workflow.</p>
<p> <strong>For example:</strong></p>
<p> If we set <code>pull_request</code> as the value for the <code>on</code> parameter and specify the branches we want to monitor using the <code>branches</code> key, GitHub sets up event listeners for pull requests to those branches.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">on:</span>
   <span class="hljs-attr">pull_request:</span>
     <span class="hljs-attr">branches:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
</code></pre>
<p> This configuration means that GitHub will trigger the workflow whenever a pull request is made to the <code>main</code> or <code>staging</code> branches.</p>
<p> <strong>Multiple Triggers</strong>:<br> You can define multiple event listeners in the <code>on</code> parameter. For instance, in addition to pull requests, you can add a listener for push events.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">on:</span>
   <span class="hljs-attr">pull_request:</span>
     <span class="hljs-attr">branches:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
   <span class="hljs-attr">push:</span>
     <span class="hljs-attr">branches:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
</code></pre>
<p> This configuration ensures that the workflow is triggered when:</p>
<ul>
<li><p>A pull request is made to either the <code>main</code> or <code>staging</code> branch.</p>
</li>
<li><p>A push is made directly to the <code>main</code> branch.</p>
</li>
</ul>
</li>
</ol>
<p>    📘 <strong>Learn more about triggers:</strong> Check out the <a target="_blank" href="https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows">official GitHub documentation here</a>.</p>
<ol start="3">
<li><p><code>jobs</code>: The <code>jobs</code> section outlines the specific tasks (or jobs) that the workflow will execute. Each job is an independent unit of work that runs on a separate virtual machine (VM). This isolation ensures a clean, unique environment for every job, avoiding potential conflicts between tasks.</p>
<p> <strong>Key Points About Jobs:</strong></p>
<ol>
<li><p><strong>Clean VM for Each Job</strong>: When GitHub Actions runs a workflow, it assigns a dedicated VM instance to each job. This means the environment is reset for every job, ensuring there’s no overlap or interference between tasks.</p>
</li>
<li><p><strong>Multiple Jobs</strong>: Workflows can have multiple jobs, each responsible for a specific task. For example:</p>
<ul>
<li><p>A <strong>Test</strong> job to install dependencies and run automated tests.</p>
</li>
<li><p>A <strong>Build</strong> job to compile the application.</p>
</li>
</ul>
</li>
<li><p><strong>Job Organization</strong>: Jobs can be organized to run:</p>
<ul>
<li><p><strong>Sequentially</strong>: One job must complete before the next starts; for example, the Test job must finish before the Build job begins. This sequential flow mimics the "pipeline" structure.</p>
</li>
<li><p><strong>Simultaneously</strong>: Multiple jobs can run in parallel to save time, especially if the jobs are independent of one another.</p>
</li>
</ul>
</li>
<li><p><strong>Single Job in This Workflow</strong>: In our current workflow, there is only one job, <code>test</code>, which:</p>
<ul>
<li><p>Installs dependencies.</p>
</li>
<li><p>Runs automated tests.</p>
</li>
<li><p>Builds the application.</p>
</li>
</ul>
</li>
</ol>
</li>
</ol>
<p>    📘 <strong>Learn more about jobs:</strong> Dive into the <a target="_blank" href="https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/using-jobs-in-a-workflow">GitHub Actions jobs documentation here</a>.</p>
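<p>To make the sequential vs. parallel distinction concrete, here’s a minimal sketch (with hypothetical job names) showing how the <code>needs</code> keyword controls ordering: jobs without a <code>needs</code> entry start in parallel.</p>
<pre><code class="lang-yaml">jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Runs first"
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Runs in parallel with test (no needs entry)"
  build:
    needs: test   # waits until the test job succeeds
    runs-on: ubuntu-latest
    steps:
      - run: echo "Runs only after test completes"
</code></pre>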
<ol start="4">
<li><p><code>runs-on: ubuntu-latest</code>: Specifies the operating system the job will run on. GitHub provides pre-configured virtual environments, and we’re using the latest Ubuntu image.</p>
</li>
<li><p><code>env</code>: Sets environment variables for the job. Here, we define the <strong>PORT</strong> variable used by our application.</p>
</li>
<li><p><strong>Steps</strong>: Steps define the individual actions to execute within a job:</p>
<ul>
<li><p><code>Checkout</code>: Uses the <code>actions/checkout</code> action to clone the repository’s code (from the branch that triggered the workflow) into the virtual machine. This step ensures the pipeline has access to the project files.</p>
</li>
<li><p><code>Install dependencies</code>: Runs <code>npm ci</code> to install the required Node.js packages.</p>
</li>
<li><p><code>Test application</code>: Runs the automated tests using the <code>npm test</code> command. This validates the codebase for errors or failing test cases.</p>
</li>
<li><p><code>Build application</code>: Builds the application if a build script is defined in the <code>package.json</code>. The <code>--if-present</code> flag ensures this step doesn’t fail if no build script is present.</p>
</li>
</ul>
</li>
</ol>
<p>Now that we’ve completed the CI pipeline, which runs on pull requests to the <code>main</code> or <code>staging</code> branches, let’s move on to setting up the <strong>Continuous Delivery (CD)</strong> and <strong>Continuous Deployment</strong> pipelines. 🚀</p>
<h3 id="heading-step-3-the-continuous-delivery-and-deployment-workflow">Step 3: The Continuous Delivery and Deployment Workflow</h3>
<p><strong>First, create the Pipeline File</strong>:<br>In the <code>.github/workflows</code> folder, create a new file called <code>cd-pipeline.yml</code>. This file will define the workflows for automating delivery and deployment.</p>
<p><strong>Next, paste the configuration</strong>:<br>Copy and paste the following configuration into the <code>cd-pipeline.yml</code> file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CD</span> <span class="hljs-string">Pipeline</span> <span class="hljs-string">to</span> <span class="hljs-string">Google</span> <span class="hljs-string">Cloud</span> <span class="hljs-string">Run</span> <span class="hljs-string">(staging</span> <span class="hljs-string">and</span> <span class="hljs-string">production)</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
  <span class="hljs-attr">workflow_dispatch:</span> {}
  <span class="hljs-attr">release:</span>
    <span class="hljs-attr">types:</span> <span class="hljs-string">[published]</span>

<span class="hljs-attr">env:</span>
  <span class="hljs-attr">PORT:</span> <span class="hljs-number">5001</span>
  <span class="hljs-attr">IMAGE:</span> <span class="hljs-string">${{vars.IMAGE}}:${{github.sha}}</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">test:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup,</span> <span class="hljs-string">test,</span> <span class="hljs-string">and</span> <span class="hljs-string">build</span> <span class="hljs-string">project</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span> <span class="hljs-string">application</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">needs:</span> <span class="hljs-string">test</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup</span> <span class="hljs-string">project,</span> <span class="hljs-string">Authorize</span> <span class="hljs-string">GitHub</span> <span class="hljs-string">Actions</span> <span class="hljs-string">to</span> <span class="hljs-string">GCP</span> <span class="hljs-string">and</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub,</span> <span class="hljs-string">and</span> <span class="hljs-string">deploy</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Authenticate</span> <span class="hljs-string">for</span> <span class="hljs-string">GCP</span>
        <span class="hljs-attr">id:</span> <span class="hljs-string">gcp-auth</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">google-github-actions/auth@v0</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">credentials_json:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.GCP_SERVICE_ACCOUNT</span> <span class="hljs-string">}}</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Cloud</span> <span class="hljs-string">SDK</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">google-github-actions/setup-gcloud@v0</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Authenticate</span> <span class="hljs-string">for</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub</span>
        <span class="hljs-attr">id:</span> <span class="hljs-string">docker-auth</span>
        <span class="hljs-attr">env:</span>
          <span class="hljs-attr">D_USER:</span> <span class="hljs-string">${{secrets.DOCKER_USER}}</span>
          <span class="hljs-attr">D_PASS:</span> <span class="hljs-string">${{secrets.DOCKER_PASSWORD}}</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "$D_PASS" | docker login -u "$D_USER" --password-stdin
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">tag</span> <span class="hljs-string">Image</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          docker build -t ${{env.IMAGE}} .
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">the</span> <span class="hljs-string">image</span> <span class="hljs-string">to</span> <span class="hljs-string">Docker</span> <span class="hljs-string">hub</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          docker push ${{env.IMAGE}}
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Enable</span> <span class="hljs-string">the</span> <span class="hljs-string">Billing</span> <span class="hljs-string">API</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          gcloud services enable cloudbilling.googleapis.com --project=${{secrets.GCP_PROJECT_ID}}
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">GCP</span> <span class="hljs-string">Run</span> <span class="hljs-bullet">-</span> <span class="hljs-string">Production</span> <span class="hljs-string">environment</span> <span class="hljs-string">(If</span> <span class="hljs-string">a</span> <span class="hljs-string">new</span> <span class="hljs-string">release</span> <span class="hljs-string">was</span> <span class="hljs-string">published</span> <span class="hljs-string">from</span> <span class="hljs-string">the</span> <span class="hljs-string">main</span> <span class="hljs-string">branch)</span>
        <span class="hljs-attr">if:</span> <span class="hljs-string">github.event_name</span> <span class="hljs-string">==</span> <span class="hljs-string">'release'</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">github.event.action</span> <span class="hljs-string">==</span> <span class="hljs-string">'published'</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">github.event.release.target_commitish</span> <span class="hljs-string">==</span> <span class="hljs-string">'main'</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          gcloud run deploy ${{vars.GCR_PROJECT_NAME}} \
          --region ${{vars.GCR_REGION}} \
          --image ${{env.IMAGE}} \
          --platform "managed" \
          --allow-unauthenticated \
          --tag production
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">GCP</span> <span class="hljs-string">Run</span> <span class="hljs-bullet">-</span> <span class="hljs-string">Staging</span> <span class="hljs-string">environment</span>
        <span class="hljs-attr">if:</span> <span class="hljs-string">github.event_name</span> <span class="hljs-type">!=</span> <span class="hljs-string">'release'</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "Deploying to staging environment"
          # Deploy the service to the staging environment
          gcloud run deploy ${{vars.GCR_STAGING_PROJECT_NAME}} \
          --region ${{vars.GCR_REGION}} \
          --image ${{env.IMAGE}} \
          --platform "managed" \
          --allow-unauthenticated \
          --tag staging</span>
</code></pre>
<p>The <strong>CD pipeline</strong> configuration combines Continuous Delivery and Continuous Deployment workflows into a single file for simplicity. It builds on the concepts of CI/CD we discussed earlier, automating testing, building, and deploying the application to Google Cloud Run.</p>
<h4 id="heading-explanation-of-the-cd-pipeline">Explanation of the CD pipeline:</h4>
<ol>
<li><h4 id="heading-workflow-triggers-on">Workflow Triggers (<code>on</code>)</h4>
</li>
</ol>
<ul>
<li><p><code>push</code>: Workflow triggers on pushes to the <code>staging</code> branch.</p>
</li>
<li><p><code>workflow_dispatch</code>: Enables manual execution of the workflow via the GitHub Actions interface.</p>
</li>
<li><p><code>release</code>: Triggers when a new release is published.<br>  Example: When a release is published from the <code>main</code> branch, the app deploys to the production environment.</p>
</li>
</ul>
<ol start="2">
<li><p><strong>Job 1 – Testing the Codebase:</strong> The first job in the pipeline, Test, ensures the codebase is functional and error-free before proceeding with delivery or deployment.</p>
</li>
<li><p><strong>Job 2 – Building and Deploying the Application:</strong> Aha! Moment ✨: These jobs run sequentially. 😃 The <strong>Build</strong> job begins only after the <strong>Test</strong> job is completed successfully. It prepares the application for deployment and manages the actual deployment process.</p>
<p> Here's what happens:</p>
<ul>
<li><p><strong>Authorization for GCP and Docker Hub</strong>: The workflow authenticates with both Google Cloud Platform (GCP) and Docker Hub. For GCP, it uses the <code>google-github-actions/auth@v0</code> action to handle service account credentials stored as secrets. Similarly, it logs into Docker Hub with stored credentials to enable image uploads.</p>
</li>
<li><p><strong>Build and Push Docker Image</strong>: The application is built into a Docker image and tagged with a unique identifier (<code>${{env.IMAGE}}</code>). This image is then pushed to Docker Hub, making it accessible for deployment.</p>
</li>
<li><p><strong>Deploy to Google Cloud Run</strong>: Based on the event that triggered the workflow, the application is <strong>deployed to either the staging or production environment</strong> in Google Cloud Run. A <strong>push</strong> to the <code>staging</code> branch deploys to the staging environment (Continuous Delivery), while a <strong>release</strong> from the <code>main</code> branch deploys to production (Continuous Deployment).</p>
</li>
</ul>
</li>
</ol>
<p>To ensure the security and flexibility of our pipeline, we rely on external variables and secrets rather than hardcoding sensitive information directly into the workflow file.</p>
<p>Why? Workflow configuration files are part of your repository and accessible to anyone with access to the codebase. If sensitive data, like API keys or passwords, is exposed here, it can be easily compromised. 😨</p>
<p>Instead, we use GitHub’s <strong>Secrets</strong> to securely store and access this information. Secrets allow us to define variables that are encrypted and only accessible by our workflows. For example:</p>
<ul>
<li><p><strong>DockerHub Credentials</strong>: We’ll add a Docker username and access token to the repository’s secrets. These are essential for authenticating with DockerHub to upload the built Docker images.</p>
</li>
<li><p><strong>Google Cloud Service Account Key</strong>: This key will grant the pipeline the necessary permissions to deploy the application on <strong>Google Cloud Run</strong> securely.</p>
</li>
</ul>
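<p>For reference, here’s how the two kinds of values are read inside a workflow file. This is a short config sketch using the names from our pipeline: <code>vars.*</code> holds non-sensitive configuration, while <code>secrets.*</code> values are encrypted and masked in logs.</p>
<pre><code class="lang-yaml">env:
  # Plain configuration, visible in the repository's Actions settings
  IMAGE: ${{ vars.IMAGE }}:${{ github.sha }}
  # Encrypted values, masked if they ever appear in logs
  D_USER: ${{ secrets.DOCKER_USER }}
  D_PASS: ${{ secrets.DOCKER_PASSWORD }}
</code></pre>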
<p>We'll set up these variables and secrets incrementally as we proceed, ensuring each step is fully secure and functional. 🎯</p>
<h2 id="heading-set-up-a-docker-hub-repository-for-the-projects-image-and-generate-an-access-token-for-publishing-the-image"><strong>Set Up a Docker Hub Repository for the Project's Image and Generate an Access Token for Publishing the Image</strong> 📦</h2>
<p>Before we dive into the steps, let’s quickly go over what we’re about to do. In this section, you’ll learn how to create a Docker Hub repository, which acts like an online storage space for your application’s container image.</p>
<p>Think of a container image as a snapshot of your application, ready to be deployed anywhere. To ensure smooth and secure access, we’ll also generate a special access token, kind of like a revocable password that our CI/CD pipeline can use to upload your app’s image to Docker Hub. Let’s get started! 🚀</p>
<h3 id="heading-step-1-sign-up-for-docker-hub">Step 1: Sign Up for Docker Hub</h3>
<p>Here are the steps to follow to sign up for Docker Hub:</p>
<ol>
<li><p><strong>Go to the Docker Hub website</strong>: Open your web browser and visit Docker Hub - <a target="_blank" href="https://hub.docker.com/">https://hub.docker.com/</a>.</p>
</li>
<li><p><strong>Create an account</strong>: On the Docker Hub homepage, you’ll see a button labelled <strong>"Sign Up"</strong> in the top-right corner. Click on it.</p>
</li>
<li><p><strong>Fill in your details</strong>: You'll be asked to provide a few details like your username, email address, and password. Choose a strong password that you can remember.</p>
</li>
<li><p><strong>Agree to the terms</strong>: You’ll need to check a box to agree to Docker’s terms of service. After that, click <strong>“Sign Up”</strong> to create your account.</p>
</li>
<li><p><strong>Verify your email</strong>: Docker Hub will send you an email to verify your account. Open that email and click on the verification link to complete your account creation.</p>
</li>
</ol>
<h3 id="heading-step-2-sign-in-to-docker-hub">Step 2: Sign In to Docker Hub</h3>
<p>After verifying your email, go back to Docker Hub, and click on <strong>"Sign In"</strong> at the top right. Then you can use the credentials you just created to log in.</p>
<h3 id="heading-step-3-generate-an-access-token-for-the-cicd-pipeline">Step 3: Generate an Access Token (for the CI/CD pipeline)</h3>
<p>Now that you have an account, you can create an access token. This token will allow your GitHub Actions workflow to securely sign into Docker Hub and upload Docker images.</p>
<p>Once you’re logged into Docker Hub, click on your profile picture (or avatar) in the top right corner. This will open a menu. From the menu, click “Account Settings”.</p>
<p>Then in the left-hand menu of your account settings, scroll to the <strong>"Security"</strong> tab. This section is where you manage your tokens and passwords.</p>
<p>Now you’ll need to create a new access token. In the Security tab, you’ll see a link labelled <strong>“Personal access tokens”</strong> – click on it. Click the button labelled <strong>“Generate new token”</strong>.</p>
<p>You’ll be asked to give your token a description. You can name it something like "GitHub Actions CI/CD" so that you know what it's for.</p>
<p>After giving it a description, click on the “<strong>Access permissions</strong>” dropdown and select <strong>“Read &amp; Write”</strong> or <strong>“Read, Write, Delete”</strong>. Then click “<strong>Generate</strong>”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733129374816/c725f041-c0ef-49a0-b8ef-ca62acafc1ee.png" alt="Create Docker access token" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now, you need to copy the credentials. After clicking the generate button, Docker Hub will create an access token. <strong>Immediately copy this token along with your username</strong> and save it somewhere safe, like in a file (don’t worry, we’ll add it to our GitHub secrets). You won’t be able to see this token again, so make sure you save it!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733133363382/33dbf334-a7ec-4151-8639-5368c3ccaedb.png" alt="Copy Docker username + access token" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-4-add-the-token-to-github-as-a-secret">Step 4: Add the Token to GitHub as a Secret</h3>
<p>To do this, open your GitHub repository where the codebase is hosted. In the GitHub repo, click on the <strong>Settings</strong> tab (located near the top of your repo page).</p>
<p>Then on the left sidebar, scroll down and click on <strong>“Secrets and Variables”</strong>, then choose <strong>“Actions”</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733133003023/75c3bd35-1a5b-46fa-845a-0f4fd8305d53.png" alt="Open GitHub Actions Secrets" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Here are the steps to create and manage your new secret:</p>
<ol>
<li><p><strong>Add a new secret</strong>: Click on the <strong>“New repository secret”</strong> button.</p>
</li>
<li><p><strong>Set up the secret</strong>:</p>
<ul>
<li><p>In the <strong>Name</strong> field, type <code>DOCKER_PASSWORD</code>.</p>
</li>
<li><p>In the <strong>Value</strong> field, paste the access token you copied earlier.</p>
</li>
</ul>
</li>
<li><p><strong>Save the secret</strong>: Finally, click <strong>Add secret</strong> to save your Docker access token securely in GitHub.</p>
</li>
</ol>
<p>Then you’ll repeat the process for your Docker username. Create a new secret called <code>DOCKER_USER</code> and add your Docker username that you copied earlier.</p>
<p>And that’s it! Now your CI/CD pipeline can use this token to securely log in to Docker Hub and upload images automatically when triggered. 🎉</p>
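<p>If you prefer the terminal, the GitHub CLI can create the same secrets. This is an optional alternative to the web UI, and it assumes <code>gh</code> is installed and authenticated against your repository:</p>
<pre><code class="lang-bash"># Run from inside your local clone of the repository
gh secret set DOCKER_USER --body "your-docker-username"

# Paste the access token when prompted (keeps it out of shell history)
gh secret set DOCKER_PASSWORD
</code></pre>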
<h3 id="heading-step-5-creating-the-dockerfile-for-the-project"><strong>Step 5: Creating the Dockerfile for the Project</strong></h3>
<p>Before you can build and publish the Docker image to Docker Hub, you need to create a <code>Dockerfile</code> that contains the necessary instructions to build your application.</p>
<p>Follow the steps below to create the <code>Dockerfile</code> in the root folder of your project:</p>
<ol>
<li><p>Navigate to your project’s root folder.</p>
</li>
<li><p>Create a new file named <code>Dockerfile</code>.</p>
</li>
<li><p>Open the <strong>Dockerfile</strong> in a text editor and paste the following content into it:</p>
</li>
</ol>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> node:<span class="hljs-number">18</span>-slim

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-keyword">COPY</span><span class="bash"> package.json .</span>

<span class="hljs-keyword">RUN</span><span class="bash"> npm install -f</span>

<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">5001</span>

<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"npm"</span>, <span class="hljs-string">"start"</span>]</span>
</code></pre>
<h4 id="heading-explanation-of-the-dockerfile">Explanation of the Dockerfile:</h4>
<ul>
<li><p><code>FROM node:18-slim</code>: This sets the base image for the Docker container, which is a slim version of the official Node.js image based on version 18.</p>
</li>
<li><p><code>WORKDIR /app</code>: Sets the working directory for the application inside the container to <code>/app</code>.</p>
</li>
<li><p><code>COPY package.json .</code>: Copies the <code>package.json</code> file into the working directory.</p>
</li>
<li><p><code>RUN npm install -f</code>: Installs the project dependencies using <code>npm</code>. The <code>-f</code> (<code>--force</code>) flag tells npm to proceed even in situations where it would normally refuse, such as peer-dependency conflicts.</p>
</li>
<li><p><code>COPY . .</code>: Copies the rest of the project files into the container.</p>
</li>
<li><p><code>EXPOSE 5001</code>: This tells Docker to expose port <code>5001</code>, which is the port our app will run on inside the container.</p>
</li>
<li><p><code>CMD ["npm", "start"]</code>: This sets the default command to start the application when the container is run, using <code>npm start</code>.</p>
</li>
</ul>
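<p>Before relying on the pipeline, you can sanity-check the Dockerfile locally. This assumes Docker is installed on your machine; the image tag <code>my-app:local</code> is just an illustrative name:</p>
<pre><code class="lang-bash"># Build the image from the Dockerfile in the current directory
docker build -t my-app:local .

# Run it, mapping the container's port 5001 to the host
docker run --rm -p 5001:5001 my-app:local

# In another terminal, confirm the app responds
curl http://localhost:5001
</code></pre>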
<h2 id="heading-create-a-google-cloud-account-project-and-billing-account"><strong>Create a Google Cloud Account, Project, and Billing Account</strong> ☁️</h2>
<p>In this section, we’re laying the foundation for deploying our application to Google Cloud. First, we’ll set up a Google Cloud account (don’t worry, it’s free to get started!). Then, we’ll create a new project where all the resources for your app will live.</p>
<p>Finally, we’ll enable billing so you can unlock the cloud services needed for deployment. Think of this as setting up your workspace in the cloud—organized, ready, and secure! Let’s dive in! ☁️</p>
<h3 id="heading-step-1-create-or-sign-in-to-a-google-cloud-account">Step 1: Create or Sign in to a Google Cloud Account 🌐</h3>
<p>First, go to <a target="_blank" href="https://console.cloud.google.com">Google Cloud Console</a>. If you don’t have a Google Cloud account, you’ll need to create one.</p>
<p>To do this, click on <strong>Get Started for Free</strong> and follow the steps to set up your account (you’ll need to provide payment information, but Google offers $300 in free credits to get started). If you already have a Google account, simply sign in using your credentials.</p>
<p>Once you’ve signed in, you’ll be taken to your Google Cloud dashboard. This is where you can manage all your cloud projects and resources.</p>
<h3 id="heading-step-2-create-a-new-google-cloud-project">Step 2: Create a New Google Cloud Project 🏗️</h3>
<p>At the top left of the Google Cloud Console, you’ll see a drop-down menu beside the Google Cloud logo. Click on this drop-down to display your current projects.</p>
<p>Now it’s time to create a new project. In the top-left corner of the pop-up modal, click on the <strong>New Project</strong> button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733134260252/6769909a-cf9c-4c91-9d79-7676500f3981.webp" alt="Create Google Cloud Project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You’ll be redirected to a page where you’ll need to provide some basic details for your new project. So now enter the following information:</p>
<ul>
<li><p><strong>Project Name:</strong> Enter a name of your choice for the project (for example, <code>gcr-ci-cd-project</code>).</p>
</li>
<li><p><strong>Location:</strong> Select a location for your project. You can leave it as the default "No organization" if you're just getting started.</p>
</li>
</ul>
<p>Once you've entered the project name, click the <strong>Create</strong> button. Google Cloud will now start creating your new project. It may take a few seconds.</p>
<h3 id="heading-step-3-access-your-new-project">Step 3: Access Your New Project 🛠️</h3>
<p>After a few seconds, you’ll be redirected to your <strong>Google Cloud dashboard</strong>.</p>
<p>Click on the drop-down menu beside the Google Cloud logo again, and you should now see your newly created project listed in the modal where you can select it.</p>
<p>Then click on the project name (for example, <code>gcr-ci-cd-project</code>) to enter your project’s dashboard.</p>
<h3 id="heading-step-4-link-a-billing-account-to-your-project">Step 4: Link A Billing Account To Your Project 💳</h3>
<p>To access the billing page, in the Google Cloud Console, find the <strong>Navigation Menu</strong> (the three horizontal lines) at the top left of the screen. Click on it to open a list of options. Scroll down and click on <strong>Billing</strong>. This will take you to the billing section of your Google Cloud account.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733134747962/745c8a0e-13c5-4dde-849b-303c1200f495.png" alt="Navigate to Google Cloud Billing dashboard/section " class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If you haven't set up a billing account yet, you'll be prompted to do so. Click on the <strong>"Link a billing account"</strong> button to start the process.</p>
<p>Now you can create a new billing account (if you don’t have one). You’ll be redirected to a page where you can either select an existing billing account or create a new one. If you don't already have a billing account, click on <strong>"Create a billing account"</strong>.</p>
<p>Provide the necessary details, including:</p>
<ul>
<li><p><strong>Account name</strong> (for example, "Personal Billing Account" or your business name).</p>
</li>
<li><p><strong>Country</strong>: Choose the country where your business or account is based.</p>
</li>
<li><p><strong>Currency</strong>: Choose the currency in which you want to be billed.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733135153425/1287ab53-e9c5-45b5-a09d-3d3a13840ca4.png" alt="Create Google Cloud billing account" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ul>
<p>Next, enter your payment information (credit card or bank account details). Google Cloud will verify your payment method, so make sure the information is correct.</p>
<p>Read and agree to the Google Cloud Terms of Service and Billing Account Terms. Once you’ve done this, click <strong>"Start billing"</strong> to finish setting up your billing account.</p>
<p>After setting up your billing account, you’ll be taken to a page that asks you to <strong>link</strong> it to your project. Select the billing account you just created or an existing billing account you want to use. Click <strong>Set Account</strong> to link the billing account to your project.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733337276189/b80702dd-2ff6-42db-a325-c2082e8059e5.png" alt="Link Google Cloud billing account to project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>After you’ve linked your billing account to your project, you should see a confirmation message indicating that billing has been successfully enabled for your project.</p>
<p>You can always verify this by returning to the Billing section in the Google Cloud Console, where you’ll see your billing account listed.</p>
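<p>As an optional check from the command line (assuming the <code>gcloud</code> CLI is installed and authenticated), you can list your billing accounts and confirm the project link:</p>
<pre><code class="lang-bash"># List billing accounts you have access to
gcloud billing accounts list

# Show whether billing is enabled for the project
gcloud billing projects describe gcr-ci-cd-project
</code></pre>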
<h2 id="heading-create-a-google-cloud-service-account-to-enable-deployment-of-the-nodejs-application-to-google-cloud-run-via-the-cd-pipeline"><strong>Create a Google Cloud Service Account to Enable Deployment of the Node.js Application to Google Cloud Run via the CD Pipeline</strong> 🚀</h2>
<h3 id="heading-why-do-we-need-a-service-account-and-key">Why Do We Need a Service Account and Key? 🤔</h3>
<p>A <strong>service account</strong> allows our CI/CD pipeline to authenticate and interact with Google Cloud services programmatically. By assigning specific roles (permissions), we ensure the service account can only perform tasks related to deployment, such as managing Google Cloud Run.</p>
<p>The <strong>service account key</strong> is a JSON file containing the credentials used for authentication. We securely store this key as a GitHub secret to protect sensitive information.</p>
<h3 id="heading-step-1-open-the-service-accounts-page">Step 1: Open the Service Accounts Page</h3>
<p>Here are the steps you can follow to set up your service account and get your key:</p>
<p>First, visit the Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>. Ensure you’ve selected the correct project (e.g. <code>gcr-ci-cd-project</code>). To change projects, click the drop-down menu next to the Google Cloud logo at the top-left corner and select your project.</p>
<p>Then navigate to the Navigation Menu (three horizontal lines in the top-left corner) and click on <strong>IAM &amp; Admin &gt; Service Accounts</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733147553088/e3647442-ca8e-4197-ab5f-91cee5a6d6b0.png" alt="Navigate to Google Cloud IAM - Service Account" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-2-create-a-new-service-account">Step 2: Create a New Service Account</h3>
<p>Click on the "Create Service Account" button. This will open a form where you’ll define your service account details.</p>
<p>Next, enter the Service Account details:</p>
<ul>
<li><p><strong>Name</strong>: Enter a descriptive name (for example, <code>ci-cd-sa</code>).</p>
</li>
<li><p><strong>ID</strong>: This will auto-fill based on the name.</p>
</li>
<li><p><strong>Description</strong>: Add a description to help identify its purpose, such as “Used for deploying Node.js app to Cloud Run.”</p>
</li>
<li><p>Click <strong>Create and Continue</strong> to proceed.</p>
</li>
</ul>
<h3 id="heading-step-3-assign-necessary-roles-permissions">Step 3: Assign Necessary Roles (Permissions)</h3>
<p>On the next screen, you’ll assign roles to the service account. Add the following roles one by one:</p>
<ul>
<li><p><strong>Cloud Run Admin</strong>: Allows management of Cloud Run services.</p>
</li>
<li><p><strong>Service Account User</strong>: Grants the ability to use service accounts.</p>
</li>
<li><p><strong>Service Usage Admin</strong>: Enables control over enabling APIs.</p>
</li>
<li><p><strong>Viewer</strong>: Provides read-only access to view resources.</p>
</li>
</ul>
<p>To add a role:</p>
<ul>
<li><p>Click on <strong>"Select a Role"</strong>.</p>
</li>
<li><p>Use the search bar to type the role name (for example, "Cloud Run Admin") and select it.</p>
</li>
<li><p>Repeat for all four roles.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733147870701/393833c9-c320-49e3-8743-dbc0d739b99b.png" alt="Create Google Cloud Service Account - Add role to a service account during creation" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Your screen should look similar to this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733147949148/c509c810-767d-4900-aa44-a737cc1c8dc1.png" alt="Create a Google Cloud service account (SA) - Done assigning all roles to SA" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>After assigning the roles, click <strong>Continue</strong>.</p>
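<p>If you prefer the command line, the same service account and role bindings can be created with the <code>gcloud</code> CLI. This is a sketch, assuming the <code>gcloud</code> SDK is installed and authenticated, and that your project ID matches the example:</p>
<pre><code class="lang-bash"># Create the service account (same name as in the console steps above)
gcloud iam service-accounts create ci-cd-sa \
    --display-name="ci-cd-sa" \
    --description="Used for deploying Node.js app to Cloud Run"

# Bind the four roles from Step 3 to the new service account
PROJECT_ID="gcr-ci-cd-project"
for role in roles/run.admin roles/iam.serviceAccountUser \
            roles/serviceusage.serviceUsageAdmin roles/viewer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
      --member="serviceAccount:ci-cd-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
      --role="$role"
done
</code></pre>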
<h3 id="heading-step-4-skip-granting-users-access-to-the-service-account">Step 4: Skip Granting Users Access to the Service Account</h3>
<p>On the next screen, you’ll see an option to grant additional users access to this service account. Click <strong>Done</strong> to complete the creation process.</p>
<h3 id="heading-step-5-generate-a-service-account-key">Step 5: Generate a Service Account Key 🔑</h3>
<p>You should now see your newly created service account in the list. Find the row for your service account (for example, <code>ci-cd-sa</code>) and click the three vertical dots under the “Actions” column. Select <strong>"Manage Keys"</strong> from the drop-down menu.</p>
<p>To add a new key:</p>
<ul>
<li><p>Click on <strong>"Add Key" &gt; "Create New Key"</strong>.</p>
</li>
<li><p>In the pop-up dialog, select <strong>JSON</strong> as the key type.</p>
</li>
<li><p>Click <strong>Create</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733148120618/c7014982-ae7d-40ed-bbfb-0c8f5c4b8090.png" alt="Create Google Cloud service account key" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ul>
<p>Now, download the key file. A JSON file will automatically be downloaded to your computer. This file contains the credentials needed to authenticate with Google Cloud.</p>
<p>Make sure you keep the key secure and store it in a safe location. Don’t share it – treat it as sensitive information.</p>
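<p>For reference, the key can also be generated from the command line. A sketch with the <code>gcloud</code> CLI, assuming the same <code>ci-cd-sa</code> account and project as above:</p>
<pre><code class="lang-bash"># Create a JSON key for the service account and save it locally
PROJECT_ID="gcr-ci-cd-project"
gcloud iam service-accounts keys create ci-cd-sa-key.json \
    --iam-account="ci-cd-sa@${PROJECT_ID}.iam.gserviceaccount.com"
</code></pre>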
<h3 id="heading-step-6-add-the-service-account-key-to-github-secrets">Step 6: Add the Service Account Key to GitHub Secrets 🔒</h3>
<p>Start by opening the downloaded JSON file using a text editor (like Notepad or VS Code). Then select and copy the entire contents of the file.</p>
<p>Then navigate to the repository you created for this project on GitHub. Click on the <strong>Settings</strong> tab at the top of the repository. Scroll down and find the <strong>Secrets and variables &gt; Actions</strong> section.</p>
<p>Now you need to add a new secret. Click the <strong>"New repository secret"</strong> button. In the <strong>Name</strong> field, enter <code>GCP_SERVICE_ACCOUNT</code>. In the <strong>Value</strong> field, paste the JSON content you copied earlier. Click <strong>Add secret</strong> to save it.</p>
<p>Do the same for the <code>GCP_PROJECT_ID</code> secret, but now add your Google Project ID as the value. To get your project ID, follow these steps:</p>
<ol>
<li><p><strong>Navigate to the Google Cloud Console</strong>: Open Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>.</p>
</li>
<li><p><strong>Locate the Project Dropdown</strong>: At the top-left of the screen, next to the <strong>Google Cloud logo</strong>, you will see a drop-down that shows the name of your current project.</p>
</li>
<li><p><strong>View the Project ID</strong>: Click the drop-down, and you'll see a list of all your projects. Your <strong>Project ID</strong> will be displayed next to the project name. It is a unique identifier used by Google Cloud.</p>
</li>
<li><p><strong>Copy the Project ID</strong>: Copy the <strong>Project ID</strong> that is displayed, and add it as the value of the <code>GCP_PROJECT_ID</code> secret.</p>
</li>
</ol>
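<p>If you use the GitHub CLI (<code>gh</code>), both secrets can be added without leaving the terminal. A sketch, assuming <code>gh</code> is authenticated against your repository and the downloaded key file is named <code>ci-cd-sa-key.json</code>:</p>
<pre><code class="lang-bash"># Store the whole JSON key file as the GCP_SERVICE_ACCOUNT secret
gh secret set GCP_SERVICE_ACCOUNT &lt; ci-cd-sa-key.json

# Store the project ID as the GCP_PROJECT_ID secret
gh secret set GCP_PROJECT_ID --body "gcr-ci-cd-project"
</code></pre>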
<h3 id="heading-step-7-adding-external-variables-to-the-github-repository">Step 7: Adding External Variables to the GitHub Repository 🔧</h3>
<p>Before proceeding with deployment, we need to define some external variables that were referenced in the CD workflow. These variables ensure that the pipeline knows critical details about your Google Cloud Run services and Docker container registry.</p>
<p>Here are the steps you’ll need to follow to do this:</p>
<ol>
<li><p>First, go to your repository on GitHub.</p>
</li>
<li><p>Click the <strong>Settings</strong> tab at the top of the repository. Scroll down to <strong>Secrets and variables &gt; Actions</strong>.</p>
</li>
<li><p>Click on the <strong>Variables</strong> tab next to <strong>Secrets</strong>. Click <strong>"New repository variable"</strong> for each variable. Then you’ll need to define these variables:</p>
<ul>
<li><p><code>GCR_PROJECT_NAME</code>: Set this to the name of your Cloud Run service for the production/live environment. For example, <code>gcr-ci-cd-app</code>.</p>
</li>
<li><p><code>GCR_STAGING_PROJECT_NAME</code>: Set this to the name of your Cloud Run service for the staging/test environment. For example, <code>gcr-ci-cd-staging</code>.</p>
</li>
<li><p><code>GCR_REGION</code>: Enter the region where you’d like to deploy the services. For this tutorial, set it to <code>us-central1</code>.</p>
</li>
<li><p><code>IMAGE</code>: Specify the name of the Docker image/container registry where the published image will be uploaded. For example, <code>&lt;dockerhub-username&gt;/ci-cd-tutorial-app</code>.</p>
</li>
</ul>
</li>
<li><p>After entering each variable name and value, click <strong>Add variable</strong>.</p>
</li>
</ol>
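<p>The same four variables can be defined with the GitHub CLI. A sketch, assuming <code>gh</code> is authenticated and using the example values from the list above:</p>
<pre><code class="lang-bash">gh variable set GCR_PROJECT_NAME --body "gcr-ci-cd-app"
gh variable set GCR_STAGING_PROJECT_NAME --body "gcr-ci-cd-staging"
gh variable set GCR_REGION --body "us-central1"
gh variable set IMAGE --body "&lt;dockerhub-username&gt;/ci-cd-tutorial-app"
</code></pre>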
<h3 id="heading-enabling-the-service-usage-api-on-the-google-cloud-project">Enabling the Service Usage API on the Google Cloud Project 🌐</h3>
<p>To deploy your application, the <strong>Service Usage API</strong> must be enabled in your Google Cloud project. This API allows you to manage Google Cloud services programmatically, including enabling/disabling APIs and monitoring their usage.</p>
<p>Follow these steps to enable it:</p>
<ol>
<li><p>First, visit the Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>.</p>
</li>
<li><p>Then make sure you’re in the correct project. Click the project drop-down menu near the <strong>Google Cloud logo</strong> at the top-left corner. Select <code>gcr-ci-cd-project</code> (or the name you gave your project) from the list of projects.</p>
</li>
<li><p>Next you’ll need to access the API library. Open the <strong>Navigation Menu</strong> (three horizontal lines in the top-left corner). Select <strong>APIs &amp; Services &gt; Library</strong> from the menu.</p>
</li>
<li><p>In the API Library, use the search bar to search for <strong>"Service Usage API"</strong>.</p>
</li>
<li><p>Click on the <strong>Service Usage API</strong> from the search results. On the API’s details page, click <strong>Enable</strong>.</p>
</li>
<li><p>To verify, go to <strong>APIs &amp; Services &gt; Enabled APIs &amp; Services</strong> in the Google Cloud Console. Confirm that the <strong>Service Usage API</strong> appears in the list of enabled APIs.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733150269757/00a4e20b-72ac-4bd4-b05f-af6e61600e09.png" alt="Enable the Google Cloud &quot;Service Usage API&quot; in the project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ol>
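<p>Alternatively, the API can be enabled from the command line. A sketch with the <code>gcloud</code> CLI (note that <code>gcloud services</code> itself relies on the Service Usage API, so the console route above is the most reliable the first time):</p>
<pre><code class="lang-bash"># Enable the Service Usage API for the project
gcloud services enable serviceusage.googleapis.com --project="gcr-ci-cd-project"

# Verify that it now appears among the enabled services
gcloud services list --enabled --project="gcr-ci-cd-project" | grep serviceusage
</code></pre>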
<h2 id="heading-create-the-staging-branch-and-merge-the-feature-branch-into-it-continuous-integration-and-continuous-delivery"><strong>Create the Staging Branch and Merge the Feature Branch into It (Continuous Integration and Continuous Delivery) 🌟</strong></h2>
<p>When changes from the <code>feature/ci-cd-pipeline</code> branch are merged into the <code>staging</code> branch, we complete the <strong>Continuous Integration (CI)</strong> process, and the workflow <code>ci-pipeline.yml</code> will run. This ensures that the changes made in the feature branch are tested and integrated into a shared branch.</p>
<p>Once the pull request (PR) is merged into <code>staging</code>, the <strong>Continuous Delivery (CD)</strong> pipeline automatically triggers, deploying the application to the staging environment. This simulates how updates are tested in a safe environment before being pushed to production.</p>
<h3 id="heading-create-the-staging-branch-on-the-remote-repository">Create the <code>staging</code> Branch on the Remote Repository</h3>
<p>To enable the CI/CD pipeline, we’ll first create a <code>staging</code> branch on the remote GitHub repository. This branch will serve as the test environment where changes are deployed before they reach the production environment.</p>
<p>To create the <code>staging</code> branch directly on GitHub, follow these steps:</p>
<ol>
<li><p>First, navigate to your repository on GitHub. Open your web browser and go to the GitHub repository where you want to create the new <code>staging</code> branch.</p>
</li>
<li><p>Then, switch to the <code>main</code> branch. On the top of the repository page, locate the <strong>Branch</strong> dropdown (usually labelled as <code>main</code> or the current branch name). Click on the dropdown and make sure you are on the <code>main</code> branch.</p>
</li>
<li><p>Next, create the <code>staging</code> branch. In the same dropdown where you see the <code>main</code> branch, type <code>staging</code> into the text box. Once you start typing, GitHub will offer you the option to create a new branch called <code>staging</code>. Select the <strong>Create branch: staging</strong> option from the dropdown.</p>
</li>
<li><p>Finally, verify the branch. After creating the <code>staging</code> branch, GitHub will automatically switch to it. You should now see <code>staging</code> in the branch dropdown, confirming the new branch was created.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733152232155/e6215137-5e3b-474b-88f8-af03269eccc2.png" alt="Create a new Staging branch in the GitHub repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ol>
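<p>You can also create the <code>staging</code> branch from the command line instead of the GitHub UI. The sketch below demonstrates the branch creation in a throwaway local repository (only <code>git</code> is assumed); in a real project you would run just the last two commands from the repository root, followed by <code>git push -u origin staging</code>:</p>
<pre><code class="lang-bash"># Set up a disposable demo repository (skip this in a real project)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -q -b main
git -c user.email="demo@example.com" -c user.name="demo" \
    commit -q --allow-empty -m "initial commit"

# Create the staging branch from main
git checkout -q main
git checkout -q -b staging
git branch --list staging
</code></pre>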
<h3 id="heading-merge-your-feature-branch-into-the-staging-branch-via-a-pull-request-pr"><strong>Merge Your Feature Branch into the Staging Branch via a Pull Request (PR)</strong></h3>
<p>This process combines both Continuous Integration (CI) and Continuous Delivery (CD). You will commit changes from your feature branch, push them to the remote feature branch, and then open a PR to merge those changes into the <code>staging</code> branch. Here's how to do it:</p>
<h4 id="heading-step-1-commit-local-changes-on-your-feature-branch"><strong>Step 1: Commit Local Changes on Your Feature Branch</strong></h4>
<p>First, you’ll want to make sure that you are on the correct branch (the feature branch) by running:</p>
<pre><code class="lang-bash">git status
</code></pre>
<p>If you are not on the <code>feature/ci-cd-pipeline</code> branch, switch to it by running:</p>
<pre><code class="lang-bash">git checkout feature/ci-cd-pipeline
</code></pre>
<p>Now, it’s time to stage the changes you made for the commit:</p>
<pre><code class="lang-bash">git add .
</code></pre>
<p>This stages all changes, including new files, modified files, and deleted files.</p>
<p>Next, commit your changes with a clear and descriptive message:</p>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"Set up CI/CD pipelines for the project"</span>
</code></pre>
<p>Then you can verify your commit by running:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">log</span>
</code></pre>
<p>This will display your most recent commits, and you should see the commit message you just added.</p>
<h4 id="heading-step-2-push-your-feature-branch-changes-to-the-remote-repository"><strong>Step 2: Push Your Feature Branch Changes to the Remote Repository</strong></h4>
<p>After committing your changes, push them to the remote repository:</p>
<pre><code class="lang-bash">git push origin feature/ci-cd-pipeline
</code></pre>
<p>This pushes your local changes on the <code>feature/ci-cd-pipeline</code> branch to the remote GitHub repository.</p>
<p>Once the push is successful, visit your GitHub repository in a web browser, and confirm that the <code>feature/ci-cd-pipeline</code> branch is updated with your new commit.</p>
<h4 id="heading-step-3-create-a-pull-request-to-merge-the-feature-branch-into-staging"><strong>Step 3: Create a Pull Request to Merge the Feature Branch into Staging</strong></h4>
<p>Go to your repository on GitHub and ensure that you are on the main page of the repository.</p>
<p>You should see an alert at the top of the page suggesting you create a pull request for the recently pushed branch (<code>feature/ci-cd-pipeline</code>). Click the <strong>Compare &amp; Pull Request</strong> button next to the alert.</p>
<p>Now, it’s time to choose the base and compare branches. On the PR creation page, make sure the <strong>base</strong> branch is set to <code>staging</code> (this is the branch you want to merge your changes into). The <strong>compare</strong> branch should already be set to <code>feature/ci-cd-pipeline</code> (the branch you just pushed). If they’re not selected correctly, use the dropdowns to change them.</p>
<p>You’ll want to come up with a good PR description for this. Write a clear title and description for the pull request, explaining what changes you're merging and why. For example:</p>
<ul>
<li><p><strong>Title</strong>: "Merge CI/CD setup changes from feature branch"</p>
</li>
<li><p><strong>Description</strong>: "This pull request adds the CI/CD pipelines for GitHub Actions and Docker Hub integration to the project. It includes the configurations for both CI and CD workflows."</p>
</li>
</ul>
<p>Now GitHub will show a list of all the changes that will be merged. Take a moment to review them and ensure everything looks correct.</p>
<p>If all looks good after reviewing, click on the <strong>Create pull request</strong> button. This will create the PR and notify team members (if any) that changes are ready to be reviewed and merged.</p>
<p>Wait a few seconds, and you should see a message indicating that all the checks have passed. Click on the link with the description "<strong>CI Pipeline to staging/production environment...</strong>". This should direct you to the Continuous Integration workflow, where you can view the steps that ran.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733153444873/6ecdb277-0a45-44ec-981c-c7ee671cd2f0.png" alt="Create a new pull request (PR) from the feature to the staging branch" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733153637817/e12fefde-9259-41a3-9bd1-63b5da1d88ea.png" alt="CI workflow run from PR (feature to staging branch)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h4 id="heading-the-continuous-integration-ci-process">The Continuous Integration (CI) Process</h4>
<p>The CI process begins when a Pull Request is made to the <code>staging</code> branch. It triggers the GitHub Actions workflow defined in the <code>.github/workflows/ci-pipeline.yml</code> file. The workflow runs the necessary steps to set up the environment, install dependencies, and build the Node.js application.</p>
<p>It then runs automated tests (using <code>npm test</code>) to ensure that the changes do not break any functionality in the codebase. If all these steps are completed successfully, the CI pipeline confirms that the feature branch is stable and ready to be merged into the <code>staging</code> branch for further testing and deployment.</p>
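<p>The CI workflow described above can be sketched roughly as follows. This is an illustrative fragment, not the article’s exact <code>ci-pipeline.yml</code> – the workflow name, branch list, and Node version are assumptions:</p>
<pre><code class="lang-yaml"># .github/workflows/ci-pipeline.yml (sketch)
name: CI Pipeline to staging/production environment
on:
  pull_request:
    branches: [staging, main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # fetch the code
      - uses: actions/setup-node@v4      # set up Node.js
        with:
          node-version: 20
      - run: npm ci                      # install dependencies
      - run: npm test                    # run the automated tests
</code></pre>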
<h4 id="heading-step-4-merge-the-pull-request"><strong>Step 4: Merge the Pull Request</strong></h4>
<p>If your team or collaborators are part of the project, they may review your PR. This step may involve discussing any changes or improvements. If everything looks good, a reviewer will merge the PR.</p>
<p>Once the PR has been reviewed and approved, you can merge the PR. To do this, just click on the <strong>Merge pull request</strong> button. Choose <strong>Confirm merge</strong> when prompted.</p>
<p>After merging, you can go to the <code>staging</code> branch to verify that the changes were successfully merged.</p>
<h3 id="heading-navigating-to-the-actions-page-after-merging-the-pr"><strong>Navigating to the Actions Page After Merging the PR</strong></h3>
<p>Once you have successfully merged your pull request from the <code>feature/ci-cd-pipeline</code> branch into the <code>staging</code> branch, the Continuous Delivery (CD) pipeline will be triggered. To view the progress of the CD pipeline, navigate to the <strong>Actions</strong> tab in your GitHub repository. Here's how to do it:</p>
<ol>
<li><p>Go to your GitHub repository.</p>
</li>
<li><p>At the top of the page, you will see the <strong>Actions</strong> tab next to the <strong>Code</strong> tab. Click on it.</p>
</li>
<li><p>On the Actions page, you will see a list of workflows that have been triggered. Look for the one labelled <strong>CD Pipeline to Google Cloud Run (staging and production)</strong>. It should appear as a new run after the PR merge.</p>
</li>
<li><p>Click on the workflow run to view its progress and see the detailed logs for each step.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733154575368/96e236a2-ae66-494b-b544-f96955a18ac9.png" alt="Continuous Delivery workflow from merge to staging (feature to staging)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733159329441/cb7e26a9-7a20-4b1b-9869-e00facc695c1.png" alt="Continuous Delivery workflow Jobs from merge to staging (feature to staging)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733160506355/4682afe3-bb04-405d-af4e-fd9bd3494659.png" alt="Continuous Delivery workflow steps from merge to staging (feature to staging)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This will allow you to monitor the status of the CD pipeline and check if there are any issues during deployment.</p>
<p>If you look at the CD steps and workflow, you'll see that the step to deploy the application to the <strong>production</strong> environment was skipped, while the step to deploy to the <strong>staging</strong> environment was executed.</p>
<h4 id="heading-continuous-delivery-cd-pipeline-whats-going-on"><strong>Continuous Delivery (CD) pipeline – what’s going on:</strong></h4>
<p>The <strong>Continuous Delivery (CD) Pipeline</strong> automates the process of deploying the application to Google Cloud Run. This workflow is triggered by a push to the <code>staging</code> branch, which happens after the changes from the feature branch are merged into <code>staging</code>. It can also be triggered manually via <code>workflow_dispatch</code>, or when a new release is published.</p>
<p>The pipeline consists of multiple stages:</p>
<ol>
<li><p><strong>Test Job:</strong> The pipeline begins by setting up the environment and running tests using the <code>npm test</code> command. If the tests pass, the process moves forward.</p>
</li>
<li><p><strong>Build Job:</strong> The next step builds the Docker image of the Node.js application, tags it, and then pushes it to Docker Hub.</p>
</li>
<li><p><strong>Deployment to GCP:</strong> After the image is pushed, the workflow authenticates to Google Cloud and deploys the application. If the event is a release published from the <code>main</code> branch, the application is deployed to the production environment. If the event is a push to <code>staging</code>, the app is deployed to the staging environment.</p>
</li>
</ol>
<p>The CD process ensures that any changes made to the <code>staging</code> branch are automatically tested, built, and deployed to the staging environment, ready for further validation. When a release is published, it will trigger deployment to production, ensuring your app is always up to date.</p>
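<p>The triggers and the staging/production split described above can be sketched like this. Again, this is an illustrative fragment rather than the exact <code>cd-pipeline.yml</code> – the test and build steps are elided, and the <code>vars.*</code> names refer to the repository variables defined earlier:</p>
<pre><code class="lang-yaml"># .github/workflows/cd-pipeline.yml (trigger and deploy logic, sketch)
on:
  push:
    branches: [staging]
  release:
    types: [published]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # ... test, build, and push the Docker image first ...
      - name: Deploy to staging
        if: github.ref == 'refs/heads/staging'
        run: |
          gcloud run deploy "${{ vars.GCR_STAGING_PROJECT_NAME }}" \
            --image "${{ vars.IMAGE }}" --region "${{ vars.GCR_REGION }}"
      - name: Deploy to production
        if: github.event_name == 'release'
        run: |
          gcloud run deploy "${{ vars.GCR_PROJECT_NAME }}" \
            --image "${{ vars.IMAGE }}" --region "${{ vars.GCR_REGION }}"
</code></pre>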
<h3 id="heading-accessing-the-deployed-application-in-the-staging-environment-on-google-cloud-run">Accessing the Deployed Application in the Staging Environment on Google Cloud Run 🌐</h3>
<p>Once the deployment to Google Cloud Run is successfully completed, you'll want to access your application running in the <strong>staging</strong> environment. Follow these steps to find and visit your deployed application:</p>
<h4 id="heading-1-navigate-to-the-google-cloud-console">1. <strong>Navigate to the Google Cloud Console</strong></h4>
<p>Open the Google Cloud Console in your browser by visiting <a target="_blank" href="https://console.cloud.google.com">https://console.cloud.google.com</a>. If you're not already signed in, make sure you log in with your Google account.</p>
<h4 id="heading-2-go-to-the-cloud-run-dashboard">2. <strong>Go to the Cloud Run Dashboard</strong></h4>
<p>In the Google Cloud Console, use the Search bar at the top or navigate through the left-hand menu: Go to <strong>Cloud Run</strong> (you can type this into the search bar, or find it under <strong>Products &amp; services</strong> &gt; <strong>Compute</strong> &gt; <strong>Cloud Run</strong>). Click on <strong>Cloud Run</strong> to open the Cloud Run dashboard.</p>
<h4 id="heading-3-select-your-staging-service">3. <strong>Select Your Staging Service</strong></h4>
<p>In the <strong>Cloud Run dashboard</strong>, you should see a list of all your services deployed across various environments. Find the service associated with the staging environment. The name should be similar to what you defined in your workflow (for example, <code>gcr-ci-cd-staging</code>).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733159635861/4ac895d2-5071-4d3f-9ed1-5af2bcca8835.png" alt="Google Cloud Run service for the staging environment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h4 id="heading-4-access-the-service-url">4. <strong>Access the Service URL</strong></h4>
<p>Once you've selected your staging service, you’ll be taken to the <strong>Service details page</strong>. This page provides all the important information about your deployed service.<br>On this page, look for the <strong>URL</strong> section under the <strong>Service URL</strong> heading. The URL will look something like: <code>https://gcr-ci-cd-staging-&lt;unique-id&gt;.run.app</code>.</p>
<h4 id="heading-5-visit-the-application">5. <strong>Visit the Application</strong></h4>
<p>Click on the <strong>Service URL</strong>, and it will open your staging environment in a new tab in your browser. You can now interact with your application as if it were live, but in the <strong>staging environment</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733160050763/b097e647-bf6d-442e-87df-fc7d82d3585c.png" alt="Google Cloud Run service URL for the staging environment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
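<p>As a quick smoke test, you can also hit the staging URL from the terminal. A sketch with <code>curl</code> – substitute the actual Service URL from the console, as the <code>&lt;unique-id&gt;</code> part is a placeholder:</p>
<pre><code class="lang-bash"># Print the HTTP status code returned by the staging service
curl -s -o /dev/null -w "%{http_code}\n" \
    "https://gcr-ci-cd-staging-&lt;unique-id&gt;.run.app"
</code></pre>
<p>A <code>200</code> response indicates the service is up and serving requests.</p>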
<h2 id="heading-merge-the-staging-branch-into-the-main-branch-continuous-integration-and-continuous-deployment"><strong>Merge the Staging Branch into the Main Branch (Continuous Integration and Continuous Deployment) 🌐</strong></h2>
<p>In this section, we'll take the updates in the staging branch, merge them into the main branch, and trigger the CI/CD pipeline. This process not only ensures your changes are production-ready but also deploys them to the production/live environment. 🚀</p>
<h3 id="heading-step-1-push-local-changes-and-open-a-pull-request">Step 1: Push Local Changes and Open a Pull Request</h3>
<p><strong>Why?</strong> The first step involves merging the staging branch into the main branch. Just like in the previous Continuous Delivery process, this ensures the integration of thoroughly tested updates.</p>
<p>Here’s how to do it:</p>
<p>First, visit the GitHub repository where your project is hosted.</p>
<p>Then go to the <strong>Pull Requests</strong> tab. Click <strong>New Pull Request</strong>. Choose <strong>main</strong> as the base (target) branch and <strong>staging</strong> as the compare (source) branch. Add a clear title and description for the Pull Request, explaining why these updates are ready for production deployment.</p>
<h3 id="heading-step-2-continuous-integration-ci-pipeline-execution">Step 2: Continuous Integration (CI) Pipeline Execution</h3>
<p>Once the pull request is opened and merged, the <strong>Continuous Integration (CI)</strong> pipeline automatically runs to validate that the changes remain stable when integrated into the <strong>main branch</strong>.</p>
<h4 id="heading-pipeline-steps">Pipeline Steps:</h4>
<ul>
<li><p><strong>Code Checkout</strong>: The workflow fetches the latest code from the <strong>main branch</strong>.</p>
</li>
<li><p><strong>Dependency Installation</strong>: The pipeline installs all required dependencies.</p>
</li>
<li><p><strong>Testing</strong>: Automated tests are run to validate the application's stability.</p>
</li>
</ul>
<h3 id="heading-step-3-create-a-new-release">Step 3: Create a New Release</h3>
<p>The Continuous Deployment (CD) workflow to deploy to the production environment is triggered by the creation of a new release from the main branch.</p>
<p>Let’s walk through the steps to create a release.</p>
<p>On your GitHub repository page, click on the <strong>Releases</strong> section (located under the <strong>Code</strong> tab).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733338781623/c21e7f03-5381-47f9-8807-b5a3360245ad.png" alt="Navigate to the Release page in the GitHub repo" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Next, click <strong>Draft a new release</strong>. Set the <strong>Target</strong> branch to <strong>main</strong>. Enter a <strong>Tag version</strong> (for example, <code>v1.0.0</code>) following semantic versioning. Add a <strong>Release title</strong> and an optional description of the changes.</p>
<p>Then, click <strong>Publish Release</strong> to finalize.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733161473858/6e14214c-31fb-49b3-9dff-a719b9ec1d40.png" alt="Create a new release in the GitHub repo" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
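<p>The release can also be created with the GitHub CLI instead of the web UI. A sketch, assuming <code>gh</code> is authenticated against your repository:</p>
<pre><code class="lang-bash"># Create and publish release v1.0.0 from the main branch
gh release create v1.0.0 --target main \
    --title "v1.0.0" --notes "First production release"
</code></pre>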
<h4 id="heading-why-run-the-continuous-deployment-pipeline-on-release-instead-of-on-push">Why run the Continuous Deployment pipeline on release instead of on push? 🤔</h4>
<p>In our setup, we decided not to trigger the Continuous Deployment (CD) pipeline every time changes are pushed to the main branch. Instead, we trigger it only when a new release is created. This gives the team more control over when updates are deployed to the production environment.</p>
<p>Imagine a scenario where developers are working on new features—they may push changes to the main branch as part of their regular workflow, but these features might not be complete or ready for users yet. Automatically deploying every push could accidentally expose unfinished features to your users, which can be confusing or disruptive.</p>
<p>By requiring a release to trigger the deployment, the team gets a chance to finalize and polish all changes before they go live.</p>
<p>For example, developers can test new features in the staging environment, fix any issues, and merge those changes into the main branch without worrying about them immediately appearing in production. This workflow ensures that only well-tested and complete features make their way to your end users.</p>
<p>Ultimately, this approach helps maintain a smooth user experience. Instead of seeing half-built features or unexpected changes, users only see updates that are ready and functional. It also gives the team the flexibility to push changes to the main branch frequently—preventing merge conflicts and making collaboration easier—while keeping control over what gets deployed live. 🚀</p>
<h3 id="heading-step-4-navigate-to-the-actions-page">Step 4: Navigate to the Actions Page</h3>
<p>After the release is published, the CD pipeline for the production environment is triggered. To monitor it, repeat the process you used for the Continuous Delivery workflow:</p>
<ol>
<li><p><strong>Go to the GitHub Actions tab</strong>: In your GitHub repository, click on the <strong>Actions</strong> tab.</p>
</li>
<li><p><strong>Locate the deployment workflow</strong>: Look for the <strong>CD Pipeline to Google Cloud Run (staging and production)</strong> workflow. You’ll notice that it has been triggered on the <strong>main branch</strong> by the release event.</p>
</li>
<li><p><strong>Open the workflow details</strong>: Click on the workflow to view detailed steps, logs, and statuses for each part of the deployment process.</p>
</li>
</ol>
<p>This time, the Continuous Deployment workflow deploys the application to the <strong>production</strong>/<strong>live</strong> environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733164741827/303cd415-5bb9-4149-aa5d-7088d0eab582.png" alt="Continuous Deployment workflow from merge to main (staging to main)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-5-access-the-live-application">Step 5: Access the Live Application</h3>
<p>Once the deployment is complete, go to Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com">https://console.cloud.google.com</a>.</p>
<p>Navigate to <strong>Cloud Run</strong> from the menu. Select the service corresponding to the <strong>production environment</strong> (for example, <code>gcr-ci-cd-app</code>).</p>
<p>Locate the <strong>Service URL</strong> in the service details page. Open the URL in your browser to access the live application.</p>
<p>And now, congratulations – you’re done!</p>
<h2 id="heading-conclusion">Conclusion 🌟</h2>
<p>In this article, we explored how to build and automate a CI/CD pipeline for a Node.js application, using GitHub Actions, Docker Hub, and Google Cloud Run.</p>
<p>We set up workflows to handle Continuous Integration by testing and integrating code changes and Continuous Delivery to deploy those changes to a staging environment. We also containerized our app using Docker and deployed it seamlessly to Google Cloud Run.</p>
<p>Finally, we implemented Continuous Deployment, ensuring updates to the production environment happen only when a release is created from the main branch.</p>
<p>This approach gives teams the flexibility to push and test incomplete features without impacting end users. By following these steps, you've built a robust pipeline that makes deploying your application smoother, faster, and more reliable.</p>
<h3 id="heading-study-further">Study Further 📚</h3>
<p>If you would like to learn more about Continuous Integration, Delivery, and Deployment, you can check out the courses below:</p>
<ul>
<li><p><a target="_blank" href="https://www.coursera.org/learn/continuous-integration-and-continuous-delivery-ci-cd"><strong>Continuous Integration and Continuous Delivery (CI/CD)</strong></a> (from IBM on Coursera)</p>
</li>
<li><p><a target="_blank" href="https://www.udemy.com/course/github-actions-the-complete-guide/?couponCode=CMCPSALE24"><strong>GitHub Actions - The Complete Guide</strong></a> (from Udemy)</p>
</li>
<li><p><a target="_blank" href="https://www.freecodecamp.org/news/what-is-ci-cd/"><strong>Learn CI/CD by building a project (freeCodeCamp tutorial)</strong></a></p>
</li>
</ul>
<h3 id="heading-about-the-author">About the Author 👨‍💻</h3>
<p>Hi, I’m Prince! I’m a software engineer passionate about building scalable applications and sharing knowledge with the tech community.</p>
<p>If you enjoyed this article, you can learn more about me by exploring more of my blogs and projects on my <a target="_blank" href="https://www.linkedin.com/in/prince-onukwili-a82143233/">LinkedIn profile</a>. You can find my <a target="_blank" href="https://www.linkedin.com/in/prince-onukwili-a82143233/details/publications/">LinkedIn articles here</a>. And you can <a target="_blank" href="https://prince-onuk.vercel.app/achievements#articles">visit my website</a> to read more of my articles as well. Let’s connect and grow together! 😊</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up a CI/CD Pipeline with Husky and GitHub Actions ]]>
                </title>
                <description>
                    <![CDATA[ CI/CD is a core practice in the modern software development ecosystem. It helps agile teams deliver high-quality software in short release cycles. In this tutorial, you'll learn what CI/CD is, and I'll help you set up a CI/CD pipeline using Husky and... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-set-up-a-ci-cd-pipeline-with-husky-and-github-actions/</link>
                <guid isPermaLink="false">66bccaed4a4c0beb784641ce</guid>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Viviana Yanez ]]>
                </dc:creator>
                <pubDate>Mon, 15 Jul 2024 17:46:34 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/07/how-to-set-a-cicd-pipeline-1.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>CI/CD is a core practice in the modern software development ecosystem. It helps agile teams deliver high-quality software in short release cycles.</p>
<p>In this tutorial, you'll learn what CI/CD is, and I'll help you set up a CI/CD pipeline using Husky and GitHub Actions in a Next.js application. </p>
<p>This tutorial assumes that you already have knowledge of React and Next.js or another modern JavaScript framework. You will also need a GitHub account, and basic knowledge of Git will be strongly beneficial.</p>
<p>If you already have a working web app that is not built with Next.js, you might still find this article useful. All the concepts and most of the configurations will work with little adaptation in apps created with other frameworks.</p>
<h2 id="heading-heres-what-well-cover">Here's What We'll Cover:</h2>
<ol>
<li><a class="post-section-overview" href="#heading-what-is-cicd">What is CI/CD?</a><br>– <a class="post-section-overview" href="#heading-what-is-ci">What is CI?</a><br>– <a class="post-section-overview" href="#heading-what-is-cd">What is CD?</a><br>– <a class="post-section-overview" href="#heading-what-is-a-cicd-pipeline-and-what-are-its-benefits">What is a CI/CD pipeline and what are its benefits?</a></li>
<li><a class="post-section-overview" href="#heading-how-to-set-up-a-cicd-pipeline">How to Set Up a CI/CD Pipeline</a><br>– <a class="post-section-overview" href="#heading-step-1-set-up-a-nextjs-app">Step 1: Set Up a Next.js App with Vitest</a><br>– <a class="post-section-overview" href="#heading-step-2-set-a-git-hook">Step 2: Set a Git Hook</a><br>– <a class="post-section-overview" href="#heading-step-3-create-a-github-actions-workflow">Step 3: Create a GitHub Actions Workflow</a><br>– <a class="post-section-overview" href="#heading-step-4-deploy-the-project">Step 4: Deploy the Project</a></li>
<li><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></li>
</ol>
<h2 id="heading-what-is-cicd">What is CI/CD?</h2>
<p>Continuous Integration/Continuous Delivery or Continuous Deployment (CI/CD) is a practice that involves automating the process of building, testing, and deploying software.</p>
<p>Its main benefit is speeding up the entire development process. It also increases productivity by ensuring smooth code integration and the adoption of standards and security best practices, and it shortens the feedback cycle through early issue detection, among other advantages explained below.</p>
<p>CI/CD is an essential tool in today’s software development practices, enabling teams to deliver high-quality software quickly, efficiently, and reliably.</p>
<p>Let’s learn more about it in detail.</p>
<h3 id="heading-what-is-ci">What is CI?</h3>
<p><strong>Continuous Integration</strong> is a software practice in which developers on a team merge code changes into a central repository multiple times a day.</p>
<p>Instead of having independent dev environments and merging at a specific time, developers frequently integrate their changes to an application into a shared branch or “trunk”.</p>
<h3 id="heading-what-is-cd">What is CD?</h3>
<p>The CD in CI/CD usually refers to <strong>Continuous Delivery</strong>. It's a practice that, on top of CI, automates the software integration, testing, and release process. The automation stops just before deploying to production, where a human-controlled step is needed.</p>
<p>But CD can also refer to <strong>Continuous Deployment</strong>, which adds automation to the step of releasing software to a production environment.</p>
<p>Even though CD usually refers to Continuous Delivery, both terms are sometimes used interchangeably. The difference between them is the amount of automation implemented in a project.</p>
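<p>As a sketch of the difference, consider a hypothetical GitHub Actions deploy job that targets a <code>production</code> environment. If that environment is configured with required reviewers in the repository settings, the run pauses for manual approval (Continuous Delivery); with no protection rules, it deploys automatically (Continuous Deployment). The job and deploy script names here are purely illustrative:</p>
<pre><code class="lang-yaml">jobs:
  deploy-production:
    runs-on: ubuntu-latest
    # With required reviewers configured on this environment, the run
    # pauses here until someone approves (Continuous Delivery). Without
    # protection rules, it proceeds automatically (Continuous Deployment).
    environment: production
    steps:
      - uses: actions/checkout@v4
      # Hypothetical deploy command; replace with your own deploy step.
      - run: npm run deploy
</code></pre>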
<h3 id="heading-what-is-a-cicd-pipeline-and-what-are-its-benefits">What is a CI/CD pipeline and what are its benefits?</h3>
<p>When put together, these two practices create a CI/CD pipeline. Adding CI/CD to your project brings the following benefits:</p>
<ul>
<li>Faster development: reduces the time required to deliver new features by automating the build, test, and deploy stages.</li>
<li>Enhanced collaboration: encourages frequent code integrations and reduces integration conflicts.</li>
<li>Improved code quality: enforces the adoption of coding standards and best practices throughout the codebase.</li>
<li>Early detection of issues: shortens the feedback cycle, as issues are caught earlier.</li>
<li>Increased productivity: frees developers from repetitive tasks.</li>
</ul>
<p>These are some of the reasons why CI/CD is a core practice in modern software development and why it is such an important topic to learn about. The following steps will guide you through the process of setting up a CI/CD pipeline for your project.</p>
<h2 id="heading-how-to-set-up-a-cicd-pipeline">How to Set Up a CI/CD Pipeline</h2>
<h3 id="heading-step-1-set-up-a-nextjs-app">Step 1: Set Up a Next.js App</h3>
<p>If you already have a working web app, you can skip this step and go directly to Step 2.</p>
<p>Otherwise, let's set up a basic Next.js app with the default ESLint configuration and Vitest, and push it to a GitHub repo.</p>
<h4 id="heading-create-a-nextjs-app">Create a Next.js app</h4>
<p>Navigate into the directory where you want to create the new project folder, then run the following command in your terminal:</p>
<pre><code class="lang-bash">npx create-next-app@latest
</code></pre>
<p>When prompted with the installation options, make sure you choose to use ESLint in your project. This will ensure that ESLint is properly installed and a <code>lint</code> script is created in the package.json. </p>
<p>Wait for <code>create-next-app</code> to create the folder and install the project dependencies. Once it's done, navigate into the new folder and start the dev server:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> &lt;your-project-name&gt;
npm run dev
</code></pre>
<h4 id="heading-set-up-vitest">Set up Vitest</h4>
<p>Let's add Vitest to the project and add some automated tests to run in the CI/CD pipeline.</p>
<p>First, install <code>vitest</code> and the dev dependencies needed:</p>
<pre><code class="lang-bash">npm install -D vitest @vitejs/plugin-react jsdom @testing-library/react
</code></pre>
<p>Create a <code>vitest.config.js</code> file (or <code>vitest.config.ts</code> if using TypeScript) with the following content:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> { defineConfig } <span class="hljs-keyword">from</span> <span class="hljs-string">'vitest/config'</span>
<span class="hljs-keyword">import</span> react <span class="hljs-keyword">from</span> <span class="hljs-string">'@vitejs/plugin-react'</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineConfig({
  <span class="hljs-attr">plugins</span>: [react()],
  <span class="hljs-attr">test</span>: {
    <span class="hljs-attr">environment</span>: <span class="hljs-string">'jsdom'</span>,
  },
})
</code></pre>
<p>And finally, add the <code>test</code> script to the package.json:</p>
<pre><code> <span class="hljs-string">"test"</span>: <span class="hljs-string">"vitest --no-watch"</span>
</code></pre><p>Note that I added the <code>--no-watch</code> option to the test script. This prevents Vitest from starting in its default watch mode in the dev environment.</p>
<p>Now, you can add tests for your project. If you don't know how to start, you can check out <a target="_blank" href="https://nextjs.org/docs/app/building-your-application/testing/vitest#creating-your-first-vitest-unit-test">this guide</a> for some examples.</p>
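<p>As a quick sanity check that the setup works, a minimal Vitest unit test might look like the following. The <code>sum</code> helper is hypothetical, defined inline only for illustration; in a real project you would import the code under test from your application:</p>
<pre><code class="lang-js">// sum.test.js – a minimal example unit test for a hypothetical helper
import { describe, expect, it } from 'vitest'

// The function under test; in a real project this would be imported
// from your application code instead of defined here.
function sum(a, b) {
  return a + b
}

describe('sum', () => {
  it('adds two numbers', () => {
    expect(sum(2, 3)).toBe(5)
  })
})
</code></pre>
<p>Running <code>npm test</code> should now report this test as passing.</p>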
<h4 id="heading-push-the-project-to-github">Push the project to GitHub</h4>
<p>Log in to your GitHub account and create a new repository. Once you are done, you can connect the local repo with the one you just created by adding it as the remote. Then push the changes:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">"first commit"</span>
git remote add origin git@github.com:&lt;your-user-name&gt;/&lt;your-repo-name&gt;.git
git push origin main
</code></pre>
<p>You should now be ready to continue to the interesting part of this tutorial. :)</p>
<h3 id="heading-step-2-set-a-git-hook">Step 2: Set a Git Hook</h3>
<p>A Git hook is a script that runs automatically at certain points in the Git lifecycle, such as before a commit or a push. In this case, we will be using Husky.</p>
<p><a target="_blank" href="https://typicode.github.io/husky/">Husky</a> is a tool for managing Git hooks that helps you maintain code quality by executing tasks upon committing or pushing. You can run various checks before making a commit with new changes, such as linting the code and running automated tests.</p>
<p>By implementing these checks, you can avoid wasting time and resources by catching issues in advance before triggering the GitHub Actions workflow.  </p>
<p>Let’s start by adding Husky to the project with the following command:</p>
<pre><code class="lang-bash">npm install --save-dev husky
</code></pre>
<p>Next, let’s set up the project using the Husky init command:</p>
<pre><code class="lang-bash">npx husky init
</code></pre>
<p>After running this command, you will notice that a pre-commit file was created under <code>.husky</code>. Also, a <code>prepare</code> script was added to the package.json.</p>
<p>If you open the pre-commit file inside <code>.husky</code>, you will find the following content:</p>
<pre><code class="lang-bash">npm <span class="hljs-built_in">test</span>
</code></pre>
<p>As its name suggests, this file contains the code that executes before completing a commit. With everything set up as described, tests will run each time you attempt to create a new commit, and the commit will be created only if all tests pass.</p>
<h4 id="heading-adding-more-git-hooks">Adding more git hooks</h4>
<p>Now, let’s change the content in the pre-commit file so the code linter also executes before creating a new commit. </p>
<p>You can open your preferred code editor and add <code>npm run lint</code> (or the corresponding ESLint script if you’re not using Next.js) in a new line in the pre-commit file. Alternatively, you can simply run the following command from the root folder of your project:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"npm run lint"</span> &gt;&gt; ./.husky/pre-commit
</code></pre>
<p>Now, each time you attempt to make a new commit, the tests and the linter will run, and the commit will be created – only if all tests are passing and no errors are found in the code.</p>
<h4 id="heading-setting-up-lint-staged">Setting up lint-staged</h4>
<p>You can go one step further and include a tool called <a target="_blank" href="https://github.com/lint-staged/lint-staged">lint-staged</a>. This tool will be especially useful if your project is large, because it allows you to run the Git hooks only for staged files. In this case, it will lint only the files that will be committed, avoiding wasting time by linting the entire project.</p>
<p>To start using lint-staged, let's add it as a dev dependency to the project:</p>
<pre><code class="lang-bash">npm install --save-dev lint-staged
</code></pre>
<p>There are <a target="_blank" href="https://github.com/lint-staged/lint-staged?tab=readme-ov-file#configuration">different ways to configure lint-staged</a> and you can choose the one that best suits your needs. I will add a <code>lint-staged</code> script and object to the package.json of my project with the following content:</p>
<pre><code class="lang-js">  <span class="hljs-string">"scripts"</span>: {
    <span class="hljs-string">"dev"</span>: <span class="hljs-string">"next dev"</span>,
    <span class="hljs-string">"build"</span>: <span class="hljs-string">"next build"</span>,
    <span class="hljs-string">"start"</span>: <span class="hljs-string">"next start"</span>,
    <span class="hljs-string">"lint"</span>: <span class="hljs-string">"next lint"</span>,
    <span class="hljs-string">"lint-staged"</span>: <span class="hljs-string">"lint-staged"</span>,
    <span class="hljs-string">"test"</span>: <span class="hljs-string">"vitest --no-watch"</span>,
    <span class="hljs-string">"prepare"</span>: <span class="hljs-string">"husky"</span>
  },
  <span class="hljs-string">"lint-staged"</span>: {
    <span class="hljs-string">"*.{js,jsx,ts,tsx}"</span>: [
      <span class="hljs-string">"eslint --fix"</span>
    ]
  },
</code></pre>
<p>Now, I can replace <code>npm run lint</code> with <code>npm run lint-staged</code> in the pre-commit file.</p>
<p>Each time I make a new commit, any <code>js</code>, <code>jsx</code>, <code>ts</code>, or <code>tsx</code> staged files will be linted and, if there are fixable issues, they will be automatically fixed.</p>
<p>Let's test that the pre-commit hook is working as expected by:</p>
<ol>
<li>Running <code>git add .</code></li>
<li>Running <code>git commit</code></li>
<li>Waiting for the linter to run and entering a commit message when prompted</li>
<li>Running <code>git log</code> to confirm that the commit was properly created</li>
</ol>
<p>If you want, you can add more checks to your pre-commit file to fit your project's needs. For example, you could run a tool like Prettier to automatically format your code, or <a target="_blank" href="https://commitlint.js.org/">commitlint</a> to lint your commit messages.</p>
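<p>For example, your hook files might end up looking like the sketch below. The <code>format</code> script is hypothetical (it could map to something like <code>prettier --write .</code>), and note that commitlint belongs in a separate <code>commit-msg</code> hook, since it checks the commit message rather than the staged files:</p>
<pre><code class="lang-bash"># .husky/pre-commit – lint staged files, then format the code
# ("format" is a hypothetical script, e.g. "prettier --write .")
npm run lint-staged
npm run format

# .husky/commit-msg – commitlint validates the commit message
npx --no -- commitlint --edit $1
</code></pre>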
<p>Now, let’s move on to setting up a GitHub Actions workflow for the project. </p>
<h3 id="heading-step-3-create-a-github-actions-workflow">Step 3: Create a GitHub Actions Workflow</h3>
<p>With the first part complete, we can move on to the next step. Here, you will add a GitHub Actions workflow to ensure the smooth integration of changes into the entire project.</p>
<h4 id="heading-github-actions-basics">GitHub Actions Basics</h4>
<p>GitHub Actions is a CI/CD platform that allows you to automate the building, testing, and deployment of your project. It also lets you perform actions when certain activities happen in your repository, such as opening a pull request or creating an issue.</p>
<p>GitHub Actions are configured through workflows defined in YAML files. These workflows typically run when triggered by an event in the repository, but they can also be scheduled or run manually.</p>
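<p>For reference, scheduled and manual triggers just use different keys under <code>on</code>. The cron expression below (every day at 03:00 UTC) is only an example:</p>
<pre><code class="lang-yaml">on:
  schedule:
    # Runs every day at 03:00 UTC (cron fields: minute hour day month weekday)
    - cron: '0 3 * * *'
  # Allows the workflow to be started manually from the Actions tab
  workflow_dispatch:
</code></pre>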
<p>Workflows are located in the <code>.github/workflows</code> folder and run different jobs. Each job includes a set of steps that run in order on the same runner or server. A step can be either a shell script or an action (a reusable piece of code that helps reduce repetitive code in your workflows). </p>
<p>Let's put all this together by creating the first workflow.</p>
<h4 id="heading-creating-a-workflow-to-execute-when-you-push-to-main-branch">Creating a workflow to execute when you push to main branch</h4>
<p>First create a <code>.github/workflows/</code> under your project root. Then create a <code>run-test.yml</code> file. You will be adding content to this file to create a CI workflow.</p>
<p>The first line is optional and gives the workflow a name. It will appear in the "Actions" tab of the GitHub repo:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">linter</span> <span class="hljs-string">and</span> <span class="hljs-string">tests</span> <span class="hljs-string">on</span> <span class="hljs-string">push</span>
</code></pre>
<p>Then, you will use the <code>on</code> key to define the event or events that will trigger the workflow run. This can be an event in your repo or a time schedule. In this case, let's set it to run each time a push to the repo happens:</p>
<pre><code class="lang-yml"><span class="hljs-attr">on:</span>
  <span class="hljs-string">push</span>
</code></pre>
<p>You can also set options below the <code>on</code> keyword to limit the execution of a workflow to certain branches or files – for example, to run only on pushes to the main branch:</p>
<pre><code class="lang-yml"><span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
</code></pre>
<p>Below this, you will add the <code>jobs</code> key, which groups all the jobs in the workflow. It is followed by the name of the first job, in this case <code>run-linter-and-tests</code>.</p>
<p>The lines below that define the job's properties, configuring it to run on the latest version of an Ubuntu Linux runner and grouping all the steps that run in this job.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">run-linter-and-tests:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Lint</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">lint</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>As mentioned before, each step can be either a shell script or an action. You can see the difference between the first and the second step in the previous code. </p>
<p>The first one uses the <code>uses</code> keyword to specify that it runs the <code>actions/checkout</code> action. This action checks out the repository onto the runner so the workflow can use the repository code. The second step, <code>Install dependencies</code>, uses the <code>run</code> keyword to tell the job to execute the <code>npm i</code> command on the runner.</p>
<p>This is the complete resulting file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CI</span> <span class="hljs-string">workflow</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-string">push</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">run-linter-and-tests:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Lint</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">lint</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>Let's commit the changes and push them to the GitHub repository.</p>
<p>Now each time you push to your repository, the workflow will trigger. If you click on the "Actions" tab in your GitHub repository navigation bar, you will find a list of all the runs from all your workflows, along with their complete logs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/07/Screenshot-2024-07-03-at-12.13.05-1.png" alt="Image" width="600" height="400" loading="lazy">
<em>"Actions" tab in a GitHub repository navigation bar</em></p>
<p>Also, you will see that in the GitHub repository's "Code" tab, a green checkmark appears next to the last commit message. This means that workflows ran and finished successfully. </p>
<p>When jobs are still running, you'll see a yellow dot, and a red cross when a workflow finishes with an error.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/07/Screenshot_.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h4 id="heading-adding-a-second-workflow-to-run-when-a-pr-is-created">Adding a second workflow to run when a PR is created</h4>
<p>Each repository can have one or more workflows, so let's add a second workflow to run each time a PR is created. Let's run the code coverage report each time a PR is opened against the main branch of the repo.</p>
<p>First, create and checkout a new <code>add-wf</code> branch:</p>
<pre><code class="lang-bash">git checkout -b add-wf
</code></pre>
<p>Then, create a new YAML file under the <code>.github/workflows</code> directory and start adding content to it.</p>
<p>First, let's add the name and when to run the workflow with the <code>on</code> keyword:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Coverage</span> <span class="hljs-string">on</span> <span class="hljs-string">PR</span>
<span class="hljs-attr">on:</span> <span class="hljs-string">pull_request</span>
</code></pre>
<p>After that, you will use the <code>jobs</code> keyword to describe the jobs to run. Let's define the first one as <code>build-and-run-coverage</code> to run in <code>ubuntu-latest</code> runner:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-run-coverage:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
</code></pre>
<p>Now, let's add <code>steps</code> for this job:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span> <span class="hljs-string">and</span> <span class="hljs-string">coverage</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">coverage</span>
</code></pre>
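<p>Note that the last step assumes a <code>coverage</code> script exists in your package.json, which we haven't defined yet. One way to set this up (assuming Vitest's default v8 coverage provider) is:</p>
<pre><code class="lang-bash"># Install the coverage provider used by Vitest
npm install -D @vitest/coverage-v8

# Then add a "coverage" script to package.json, for example:
#   "coverage": "vitest run --coverage"
</code></pre>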
<p>Following is the complete resulting code:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Coverage</span> <span class="hljs-string">on</span> <span class="hljs-string">PR</span>
<span class="hljs-attr">on:</span> <span class="hljs-string">pull_request</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-run-coverage:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span> <span class="hljs-string">and</span> <span class="hljs-string">coverage</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">coverage</span>
</code></pre>
<p>Now, you can push the change to your GitHub repo:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">'add a wf to run on opened PR'</span>
git push origin add-wf
</code></pre>
<p>Now you can open a PR against your <code>main</code> branch and wait for the workflow to complete.</p>
<h5 id="heading-comment-coverage-report-in-the-pr">Comment coverage report in the PR</h5>
<p>As mentioned earlier in this article, actions are reusable pieces of code that avoid repetitive code in the workflow. One cool thing about them is that there are many already written by the community that you can use in your workflows, saving lots of time.</p>
<p>To complete the workflow we created, let's add a new step that uses an action to report coverage results as a comment on the pull request.</p>
<p>First, let's add the <code>permissions</code> keyword to the job to ensure the workflow has the right access to read contents and create comments:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">permissions:</span>
      <span class="hljs-attr">contents:</span> <span class="hljs-string">read</span>
      <span class="hljs-attr">pull-requests:</span> <span class="hljs-string">write</span>
</code></pre>
<p>Then, let's use the <a target="_blank" href="https://github.com/marketplace/actions/vitest-coverage-report">Vitest Coverage Report</a> action by adding a <code>step</code> into the <code>build-and-run-coverage</code> job:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Report</span> <span class="hljs-string">Coverage</span>
        <span class="hljs-attr">uses:</span>  <span class="hljs-string">davelosert/vitest-coverage-report-action@v2</span>
</code></pre>
<p>The final <code>yaml</code> file will look like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Coverage</span> <span class="hljs-string">on</span> <span class="hljs-string">PR</span>
<span class="hljs-attr">on:</span> <span class="hljs-string">pull_request</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-run-coverage:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">permissions:</span>
      <span class="hljs-attr">contents:</span> <span class="hljs-string">read</span>
      <span class="hljs-attr">pull-requests:</span> <span class="hljs-string">write</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">test</span> <span class="hljs-string">and</span> <span class="hljs-string">coverage</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">coverage</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Report</span> <span class="hljs-string">Coverage</span>
        <span class="hljs-attr">uses:</span>  <span class="hljs-string">davelosert/vitest-coverage-report-action@v2</span>
</code></pre>
<p>There is one more step to ensure everything works as expected. You must add the <code>json-summary</code> reporter to the Vitest configuration:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> { defineConfig } <span class="hljs-keyword">from</span> <span class="hljs-string">"vitest/config"</span>;
<span class="hljs-keyword">import</span> react <span class="hljs-keyword">from</span> <span class="hljs-string">"@vitejs/plugin-react"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineConfig({
  plugins: [react()],
  test: {
    environment: <span class="hljs-string">"jsdom"</span>,
    coverage: {
      provider: <span class="hljs-string">"v8"</span>,
      extension: [<span class="hljs-string">".tsx"</span>],
      reporter: [<span class="hljs-string">'text'</span>, <span class="hljs-string">'json-summary'</span>, <span class="hljs-string">'json'</span>],
    },
  },
});
</code></pre>
<p>Now, make some changes in your project and add corresponding tests to check if the workflow is working as expected. </p>
<p>Once you push your changes to the GitHub repo, open a PR against the main branch of your project. After the workflows finish running, you should see a comment showing the coverage result:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/07/Screen-Shot-2024-07-12-at-19.18.05.png" alt="Image" width="600" height="400" loading="lazy">
<em>Coverage Report in a pull request comment</em></p>
<h3 id="heading-step-4-deploy-the-project">Step 4: Deploy the Project</h3>
<p>As a last step in this tutorial, let's deploy the project on <a target="_blank" href="https://vercel.com/">Vercel</a>. You will set up an automatic deployment through Git that will trigger a redeploy each time new changes are pushed or merged into the main branch.</p>
<p>First, log in to your Vercel account, or create one if you don't already have one. Then, in your dashboard, click on "Add New Project" and click on the "Import" button next to your repository name in the "Import Git Repository" section. </p>
<p>If you don't see your repository listed, it may be due to your GitHub app permissions configuration. You can manage them in your settings section in your GitHub account.</p>
<p>Finally, choose a name for the project in the "Configure Project" section and click on the "Deploy" button. You can now see the deploy details by clicking on the "Deployment" link.</p>
<p>Vercel automatic deployments ensure that the deployed project is always updated with the latest changes. They also have the benefit of <a target="_blank" href="https://vercel.com/docs/deployments/preview-deployments">Preview Deployments</a>, a preview URL that lets you test new features in advance of merging changes into production.</p>
<p>If you have followed along with the tutorial, with this step completed you'll have finished the CD part of the CI/CD pipeline for your project. Now you can be sure that any code pushed to the main branch is linted and tested, and once all checks pass, it is automatically deployed to production.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this guide, you learned about the importance of CI/CD in today’s software development ecosystem and its main benefits. You also took your first steps in this area by creating your own CI/CD pipeline for your project, learning how to use Husky and GitHub Actions.</p>
<p>Now, you can keep learning more about these tools and improve your CI/CD pipeline by customizing it to better fit your project's needs.</p>
<p>I hope you were able to gain some new knowledge and enjoyed following along. Thanks for reading!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Setup a CI/CD Pipeline with GitHub Actions and AWS ]]>
                </title>
                <description>
                    <![CDATA[ By Nyior Clement In this article, we'll learn how to set up a CI/CD pipeline with GitHub Actions and AWS. I've divided the guide into three parts to help you work through it: First, we'll cover some important terminology so you're not lost in a bunch... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-setup-a-ci-cd-pipeline-with-github-actions-and-aws/</link>
                <guid isPermaLink="false">66d4608bc7632f8bfbf1e46b</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 18 Jan 2022 21:26:24 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2021/12/2220.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Nyior Clement</p>
<p>In this article, we'll learn how to set up a CI/CD pipeline with GitHub Actions and AWS. I've divided the guide into three parts to help you work through it:</p>
<p>First, we'll cover some important terminology so you're not lost in a bunch of big buzzwords.</p>
<p>Second, we'll set up continuous integration so we can automatically run builds and tests.</p>
<p>And finally, we'll set up continuous delivery so we can automatically deploy our code to AWS.</p>
<p>Alright, that was a lot. Let's start by diving into each of these terms so you understand exactly what we're doing here.</p>
<h2 id="heading-part-one-demystifying-the-hefty-buzzwords">Part One: Demystifying the Hefty Buzzwords</h2>
<p>The key to making sense of the title of this piece lies in understanding the terms CI/CD Pipeline, GitHub Actions, and AWS.</p>
<p>If you already have a strong grasp of these terms, you can skip down to Part 2.</p>
<h3 id="heading-what-is-a-cicd-pipeline">What is a CI/CD Pipeline?</h3>
<p>A CI/CD Pipeline is simply a development practice. It tries to answer this one question: <em>How can we ship quality features to our production environment faster?</em> In other words, how can we hasten the feature release process without compromising on quality?</p>
<p>How does the CI/CD Pipeline help us hasten the feature release process, you might ask? </p>
<p>The diagram below depicts a typical feature delivery cycle with or without the CI/CD pipeline.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/12/Activity-diagram.jpeg" alt="Image" width="600" height="400" loading="lazy">
<em>the feature release process. Source: Author</em></p>
<p>Without the CI/CD Pipeline, each step in the diagram above will be performed manually by the developer. In essence, to build the source code, someone on your team has to manually run the command to initiate the build process. Same thing with running tests and deployment.</p>
<p>The CI/CD approach is a radical shift from the manual way of doing things. It is entirely based on the premise that we can speed up the feature release process considerably if we automate steps 3-6 in the diagram above. </p>
<p>With the CI/CD Pipeline, we set up a mechanism that automatically starts the build process, runs the tests, deploys to the User Acceptance Testing (UAT) environment, and finally to the production environment each time a member of the team pushes their change to the shared repo, for example.</p>
<p>Continuous Integration happens each time the build process is initiated, and tests run on a new change.  </p>
<p>Continuous Delivery happens when a newly integrated change is automatically deployed to the UAT environment and then manually deployed to the production environment from there. </p>
<p>Continuous Deployment happens when an update in the UAT environment is automatically deployed to the production environment as an official release.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/12/Activity-diagram--1-.jpeg" alt="Image" width="600" height="400" loading="lazy">
<em>Continuous Integration vs Continuous Deployment vs Continuous Delivery. Source: Author</em></p>
<p><strong>Note:</strong> If the deployment from the UAT environment to the production environment is initiated manually, then it is a Continuous Integration/Continuous Delivery setup. Otherwise, it is a Continuous Integration/Continuous Deployment setup.</p>
<p>However, we can't help but ask, what is that entity that automates the different phases of the CI/CD Pipeline? </p>
<p>There is a variety of tools we can use to automate the build, test, and deployment steps in the CI/CD Pipeline – for example, CircleCI, Travis CI, Jenkins, GitHub Actions, and so on. In this article we will be focusing on GitHub Actions.</p>
<h3 id="heading-what-are-github-actions">What are GitHub Actions?</h3>
<p>For want of a better way of making the GitHub Actions term super comprehensible, I'm going to oversimplify this. </p>
<p>In the CI/CD Pipeline, GitHub Actions is the entity that automates the boring stuff. Think of it as some plugin that comes bundled with every GitHub repository you create. </p>
<p>The plugin exists on your repo to execute whatever task you tell it to. Usually, you'd specify what tasks the plugin should execute through a YAML configuration file. Whatever command you add to the configuration file will translate to something like this in plain English: </p>
<p>"hey GitHub Actions, each time a PR is opened on X branch, automatically build and test the new change. And each time a new change is merged into or pushed to X branch, deploy that change to Y server." </p>
<p>At the core of GitHub Actions lie five concepts: jobs, workflows, events, actions, and runners. </p>
<p><strong>Jobs</strong> are the tasks you command GitHub Actions to execute through the YAML config file. A job could be something like telling GitHub Actions to build your source code, run tests, or deploy the code that has been built to some remote server.</p>
<p><strong>Workflows</strong> are essentially automated processes that contain one or more logically related jobs. For example, you could put the build and run tests jobs into the same workflow, and the deployment job into a different workflow. </p>
<p>Remember, we mentioned that you tell GitHub Actions what job(s) to execute through a configuration file, right? GitHub Actions considers each configuration file that you put in a designated folder in your repo a workflow. We will talk more about this folder in the next part. </p>
<p>So, to create a separate workflow for the deployment job and a different workflow that combines the build and test jobs, you'd have to add two config files to your repo. But if you merge all three jobs into a single workflow, then you'd need just one config file.</p>
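<p>For example, assuming hypothetical file names, a two-workflow setup would be laid out in your repo like this:</p>

```
.github/
└── workflows/
    ├── build_and_test.yml   # workflow 1: build and test jobs
    └── deploy.yml           # workflow 2: deployment job
```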
<p><strong>Events</strong> are literally the events that trigger the execution of a job by GitHub Actions. Recall we mentioned passing jobs to be executed through a config file? In that config file you'd also have to specify when a job should be executed. </p>
<p>For example, is it on-PR to main? Is it on-push to main? Is it on-merge to main? A job can only be executed by GitHub Actions when some event happens.</p>
<p>Okay, let me quickly correct myself. It's not always the case that some event has to happen before a job can be executed. You could schedule jobs too. </p>
<p>For example, in your config file, instead of specifying an event that should trigger the execution of, let's say, the build-and-test job, you could schedule it to run at 2am every day. In fact, you could both schedule a job and specify an event for that same job.</p>
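<p>As a sketch (the branch name and cron schedule here are just examples), the <code>on</code> section of a config file can combine an event trigger with a schedule:</p>

```yaml
# Trigger the workflow on push to main AND on a daily schedule.
# Cron fields are: minute hour day-of-month month day-of-week (UTC).
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 2 * * *'   # every day at 2am UTC
```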
<p><strong>Actions</strong> are reusable commands that you can use in your config file. You can write your own custom actions or use existing ones.</p>
<p>A <strong>runner</strong> is the remote computer that GitHub Actions uses to execute the jobs you tell it to. </p>
<p>For example, when the build-and-test job is triggered based on some event, GitHub Actions will pull your code to that computer and execute the job. </p>
<p>The same thing happens in the case of the deployment job. The runner triggers the deployment of the built code to some remote server you specify. In our case, we will be using a service called AWS.</p>
<h3 id="heading-what-is-aws">What is AWS?</h3>
<p>AWS stands for Amazon Web Services. It is a platform owned by Amazon, and this platform allows you access to a broad range of cloud computing services.</p>
<p><strong>Cloud computing</strong> – I thought you said no big words? Most of the time, businesses and even individual developers build applications just so other people can use them. For that reason, these applications have to be available on the interwebs. </p>
<p>Making an application accessible to some target users, ideally, entails uploading that application to a special computer that runs 24/7 and is super fast. </p>
<p>Imagine if it were the case that, before you could make your applications available to other internet users, you'd have to own and set up such a computer. It is quite doable, but it comes with a lot of hurdles. </p>
<p>For example, what if you just want to test out the application? Would you go through all the stress of setting up a hardware infrastructure just for testing? Furthermore, what if you want to upload 1000 different applications, or your application starts handling 1 billion concurrent requests? Things start to get complicated.</p>
<p>Cloud computing platforms like AWS exist to save you all that stress. These platforms already have numerous such special computers set up and housed in buildings called data centers. </p>
<p>Instead of having to set up your own hardware infrastructure from scratch, they allow you to upload your application to one of their pre-configured computers over the internet. In return, you pay them a certain amount. </p>
<p>In fact, some of these platforms have free plans that allow you to test out small applications. In addition to hosting your application's code, some of these platforms also let you host your database and store your media files, amongst other features.</p>
<p>In its simplest form, cloud computing is primarily about storing or executing (sometimes both) certain things on someone else's computer, usually over a network.</p>
<p>So when I said AWS gives access to a wide range of cloud services, I was just saying it provides businesses and individuals with some special computer where they could upload their code, databases, and media files as a service.</p>
<p>Okay, now that we fully understand the different parts of our title, we will now restate it in the form of objectives. These objectives will then dictate what the remaining two parts in this article will contain.</p>
<p><strong>Objective 1:</strong> How to automatically build and run unit tests on push or on PR to the main branch with GitHub Actions.</p>
<p><strong>Objective 2:</strong> How to automatically deploy to AWS on push or on PR to the main branch with GitHub Actions.</p>
<h2 id="heading-part-2-continuous-integration-how-to-automatically-run-builds-and-tests">Part 2: Continuous Integration – How to Automatically Run Builds and Tests</h2>
<p>In this section, we will be seeing how we can configure GitHub Actions to automatically run builds and tests on push or pull request to the main branch of a repo.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li>A Django project set up locally with at least one view defined that returns some response. </li>
<li>A testcase written for the view(s) you've defined.</li>
</ul>
<p>I have created a demo Django project which you can grab from this <a target="_blank" href="https://github.com/Nyior/django-github-actions-aws">repository</a>:</p>
<p><code>git clone https://github.com/Nyior/django-github-actions-aws</code></p>
<p>After you download the code, create a virtualenv and install the requirements via pip:</p>
<p><code>pip install -r requirements.txt</code></p>
<p><strong>Note:</strong> The project already has all the files we will be adding incrementally as we proceed, so you can download it and refer to those files as you follow along. The project itself isn't particularly interesting – it was created just for demo purposes.</p>
<p>Now that you have a Django project setup locally, let's configure GitHub Actions.</p>
<h3 id="heading-how-to-configure-github-actions">How to Configure GitHub Actions</h3>
<p>Okay, so we have our project set up. We also have a testcase written for the view that we have defined, and most importantly, we've pushed our shiny change to GitHub. </p>
<p>The goal is to have GitHub trigger a build and run our tests each time we push or open a pull request on main/master. We just pushed our change to main, but GitHub Actions didn't trigger the build or run our tests. </p>
<p><strong>Why not?</strong> Because we haven't defined a workflow yet. Remember, a workflow is where we specify the jobs we want GitHub Actions to execute.</p>
<p>In fact, Nyior, how did you even know that no build was triggered, and by extension, that no workflow was defined? Every GitHub repo has an <code>Actions</code> tab. If you navigate to that tab, you'll know whether a repo has a workflow defined on it or not.</p>
<p><strong>How?</strong> See the images below.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/with-workflow.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>A GitHub Repo With a Workflow Defined on it</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/no-workflow.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>A GitHub Repo With No Workflow Defined on it</em></p>
<p>The first repo in the first image has a workflow defined on it named 'Lint and Test'. The second repo in the second image has no workflow – it's why you don't see a list with the heading 'All Workflows' as is the case with the first repo.</p>
<p>Oh okay, now I get it. So how do I define a workflow on my repo?</p>
<ul>
<li>Create a folder named <code>.github</code> in the root of your project directory.</li>
<li>Create a folder named <code>workflows</code> in the <code>.github</code> directory: This is where you'll create all your YAML files. </li>
<li>Let's create our first workflow that will contain our build and test jobs. We do that by creating a file with a <code>.yml</code> extension. Let's name this file <code>build_and_test.yml</code></li>
<li>Add the content below in the <code>yaml</code> file you just created:</li>
</ul>
<pre><code class="lang-yaml">name: Build and Test

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout code
      uses: actions/checkout@v2
    - name: Set up Python Environment
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt

    - name: Run Tests
      run: |
        python manage.py test
</code></pre>
<p>Let's make sense of each line in the file above.</p>
<ul>
<li><code>name: Build and Test</code> This is the name of our workflow. When you navigate to the actions tab, each workflow you define will be identified by the name you give it here on that list.</li>
<li><code>on:</code> This is where you specify the events that should trigger the execution of our workflow. In our config file, we specified two events: <code>push</code> and <code>pull_request</code>. We specified the main branch as the target branch for both.</li>
<li><code>jobs:</code> Remember, a workflow is just a collection of jobs.</li>
<li><code>test:</code> This is the name of the job we've defined in this workflow. You could name it anything really. Notice it's the only job and the build job isn't there? Well, it's Python code so no build step is required. This is why we didn't define the build job.</li>
<li><code>runs-on</code> GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run your workflows. This is where you specify the type of runner you want to use. In our case, we are using the Ubuntu Linux runner.</li>
<li>A job is made up of a series of <code>steps</code> that are usually executed sequentially on the same runner. In our file above, each step is marked by a hyphen, and <code>name</code> represents the name of the step. Each step executes either a shell script or an <code>action</code>. You define a step with <code>uses</code> if it's executing an <code>action</code>, or with <code>run</code> if it's executing a shell script.</li>
</ul>
<p>Now that you've defined a <code>workflow</code> by adding the config file in the designated folder, you can commit and push your change to your remote repo. </p>
<p>If you navigate to the <code>Actions</code> tab of your remote repo, you should see a workflow with the name Build and Test (the name which we've given it) listed there.</p>
<h2 id="heading-part-3-continuous-delivery-how-to-automatically-deploy-our-code-to-aws">Part 3: Continuous Delivery – How to Automatically Deploy Our Code to AWS</h2>
<p>In this section, we will see how we can have GitHub Actions automatically deploy our code to AWS on push or pull request to the main branch. AWS offers a broad range of services. For this tutorial, we will be using a compute service called Elastic Beanstalk.</p>
<h3 id="heading-compute-service-elastic-beanstalk-come-onnn">Compute Service? Elastic Beanstalk? Come onnn :(</h3>
<p>No worries, relax, you'll get it. Remember we mentioned that cloud computing is all about storing and executing certain things on someone else's computer via the internet, right – <strong>certain things?</strong> </p>
<p>Yes. For example, we can store and execute source code, or we can just store media files. Amazon knows this, and as a result, their cloud infrastructure encompasses a plethora of service categories. Each service category allows us to do <em>a certain thing out of the certain things that we can do.</em> </p>
<p>For example, there is a service category that allows us to upload and execute the source code that powers our applications (<strong>Compute Service).</strong> There is the service category that allows us to persist our media files (<strong>Storage Service).</strong> Then there is the service category that allows us to manage our databases (<strong>Database Service)</strong>.</p>
<p>Each service category is made up of one or more services. Each service in a category just presents us with a different way of solving the problem that the category it belongs to addresses. </p>
<p>For example, each service in the compute category provides us with a different approach to deploying and executing our application code on the cloud – one problem, different approaches. <strong>Elastic Beanstalk</strong> is one of the services in the compute category. Others include, but are not limited to, EC2 and Lambda.</p>
<p><strong>Amazon S3</strong> is one of the services in the storage category. And <strong>Amazon RDS</strong> is one of the services in the Database category.</p>
<p>Hopefully, you now understand what I mean by "we will be using a compute service called Elastic Beanstalk."  Of all the compute services, why Elastic Beanstalk? Well, because it's one of the easiest to work with.</p>
<h3 id="heading-that-being-said-lets-configure-stuff-lt3">That Being Said, Let's Configure Stuff &lt;3</h3>
<p>For brevity's sake, we are going with the Continuous Delivery setup. In addition, we are going to have just one deployment environment, which will serve as our UAT environment.</p>
<p>To help you get the big picture right, in summary, this is how our deployment setup is going to work: on push or pull request to main, GitHub Actions will test and upload our source code to Amazon S3. The code is then pulled from Amazon S3 to our Elastic Beanstalk environment. Picture the flow this way:</p>
<p><code>GitHub -&gt; Amazon S3 -&gt; Elastic Beanstalk</code></p>
<p>Why aren't we pushing directly to Elastic Beanstalk, you might ask?</p>
<p>The only other way we could upload code directly to an Elastic Beanstalk instance with our current setup, is if we were using the <a target="_blank" href="https://pypi.org/project/awsebcli/">AWS Elastic Beanstalk CLI</a> (EB CLI). </p>
<p>Using the EB CLI requires running some shell command that would then require that we respond with some input. </p>
<p>Now, if we are deploying from our local machine to Elastic Beanstalk, when we run the EB CLI commands, we'd be there to type in the required responses. But with our current setup, those commands would be executed on GitHub Runners. We wouldn't be there to provide the required responses. The EB CLI isn't the easiest deployment tool for our use case.</p>
<p>With the approach we've picked, we'd run a shell command that uploads our code to S3 and then another command that pulls the uploaded code into our Elastic Beanstalk instance. These commands, when run, do not require that we submit any responses. Adding the Amazon S3 step is the easiest way to go about this.</p>
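<p>To make the flow concrete, here is a rough sketch of what such workflow steps could look like. The bucket, application, and environment names below are placeholders, and the job would still need AWS credentials configured for the AWS CLI to work:</p>

```yaml
# Hypothetical steps implementing the GitHub -> S3 -> Elastic Beanstalk flow.
- name: Zip and upload source bundle to S3
  run: |
    zip -r deploy.zip . -x '*.git*'
    aws s3 cp deploy.zip s3://my-deploy-bucket/deploy-${{ github.sha }}.zip

- name: Create a new application version and deploy it
  run: |
    aws elasticbeanstalk create-application-version \
      --application-name my-app \
      --version-label "${{ github.sha }}" \
      --source-bundle S3Bucket=my-deploy-bucket,S3Key=deploy-${{ github.sha }}.zip
    aws elasticbeanstalk update-environment \
      --environment-name my-env \
      --version-label "${{ github.sha }}"
```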
<p>To implement our approach and have our code deployed to Elastic Beanstalk, follow the steps below:</p>
<h4 id="heading-step-1-setup-an-aws-account">Step 1: Setup an AWS Account</h4>
<p>Create an IAM user. To keep things simple, when adding permissions, just attach "Administrator Access" to the user (this has some security pitfalls, though). To accomplish this, follow the steps in modules 1 and 2 of <a target="_blank" href="https://aws.amazon.com/getting-started/guides/setup-environment/">this guide.</a></p>
<p>In the end, make sure to grab and keep your AWS secret and access keys. We will be needing them in the subsequent sections.</p>
<p>Now that we have an AWS account properly set up, it's time to set up our Elastic Beanstalk environment.</p>
<h4 id="heading-step-2-setup-your-elastic-beanstalk-environment">Step 2: Setup your Elastic Beanstalk Environment</h4>
<p>Once logged into your AWS account, take the following steps to set up your Elastic Beanstalk environment.</p>
<p>First, search for "elastic beanstalk" in the search field as shown in the image below. Then click on the Elastic Beanstalk service.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/search-for-elastic-bean.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>searching for elastic beanstalk.</em></p>
<p>Once you click on the Elastic Beanstalk service in the previous step, you'll be taken to the page shown in the image below. On that page, click on the "Create a New Environment" prompt. Make sure to select "Web server environment" in the next step.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/new-env.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>creating an environment</em></p>
<p>After selecting the "Web server environment" option on the previous page, you'll be taken to the page shown in the images below. </p>
<p>On that page, submit an application name, an environment name, and also select a platform. For this tutorial, we are going with the Python platform.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/name-env.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>submitting an application name and an environment name</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/platform.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>selecting a platform</em></p>
<p>Once you submit the form filled out in the previous step, your application and its associated environment will be created after a short while. You should see the names you submitted displayed in the left sidebar. </p>
<p>Grab the application name and the environment name. We will be needing them in the subsequent steps.</p>
<p>Now that we have our Elastic Beanstalk environment fully setup, it's time to configure GitHub Actions to trigger automatic deployment to AWS on push or pull request to main.</p>
<h4 id="heading-step-3-configure-your-project-for-elastic-beanstalk">Step 3: Configure your Project for Elastic Beanstalk</h4>
<p>By default, Elastic Beanstalk looks for a file named <code>application.py</code> in our project. It uses that file to run our application, but we don't have that file in our project. Do we? We need to tell Elastic Beanstalk to use the <code>wsgi.py</code> file in our project to run our application instead. To do that, take the following step:</p>
<p>Create a folder named <code>.ebextensions</code> in your project directory. In that folder create a config file. You can name it anything you want. I named mine <code>eb.config</code>. Add the content below to your config file:</p>
<pre><code>option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: django_github_actions_aws.wsgi:application
</code></pre><p>After completing the step above, your project directory should now look similar to the one in the image below:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/struct-1.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>demo project structure</em></p>
<p>One last thing you need to do in this section is to go to your <code>settings.py</code> file and update the <code>ALLOWED_HOSTS</code> setting to allow all hosts:</p>
<p><code>ALLOWED_HOSTS = ['*']</code></p>
<p>Note that using the wildcard has major security drawbacks. We are only using it here for demo purposes. </p>
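<p>In a real deployment, you would instead list only the hosts your application should serve. Here's a minimal sketch of what that could look like (the domain below is a hypothetical placeholder, not one created in this tutorial):</p>
<pre><code class="lang-python"># settings.py: restrict ALLOWED_HOSTS instead of using the wildcard
ALLOWED_HOSTS = [
    "my-env.us-east-2.elasticbeanstalk.com",  # your environment's CNAME (placeholder)
    "localhost",  # local development
]
</code></pre>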
<p>Now that we are done configuring our project for Elastic Beanstalk, it's time to update our workflow file.</p>
<h4 id="heading-step-4-update-your-workflow-file">Step 4: Update your Workflow File</h4>
<p>There are five important pieces of information we need to complete this step: the application name, environment name, access key ID, secret access key, and the server region (after logging in, you can grab the region from the right-most section of the navbar).</p>
<p>Because the access key ID and the secret access key are sensitive data, we'll store them as secrets in our repository and access them from our workflow file.</p>
<p>To do that, head over to the settings tab of your repo, and then click on secrets as shown in the image below. There, you can create your secrets as key-value pairs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/secrets_new.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>embedding secret data in your repo</em></p>
<p>Next, add the deployment job to the end of your existing workflow file:</p>
<pre><code>deploy:
    needs: [test]
    runs-on: ubuntu-latest

    <span class="hljs-attr">steps</span>:
    - name: Checkout source code
      <span class="hljs-attr">uses</span>: actions/checkout@v2

    - name: Generate deployment package
      <span class="hljs-attr">run</span>: zip -r deploy.zip . -x <span class="hljs-string">'*.git*'</span>

    - name: Deploy to EB
      <span class="hljs-attr">uses</span>: einaregilsson/beanstalk-deploy@v20
      <span class="hljs-attr">with</span>:

          <span class="hljs-comment">// Remember the secrets we embedded? this is how we access them</span>
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        <span class="hljs-attr">aws_secret_key</span>: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

        <span class="hljs-comment">// Replace the values here with your names you submitted in one of </span>
        <span class="hljs-comment">// The previous sections</span>
        <span class="hljs-attr">application_name</span>: django-github-actions-aws
        <span class="hljs-attr">environment_name</span>: django-github-actions-aws

        <span class="hljs-comment">// The version number could be anything. You can find a dynamic way </span>
        <span class="hljs-comment">// Of doing this.</span>
        <span class="hljs-attr">version_label</span>: <span class="hljs-number">12348</span>
        <span class="hljs-attr">region</span>: <span class="hljs-string">"us-east-2"</span>
        <span class="hljs-attr">deployment_package</span>: deploy.zip
</code></pre><p><code>needs</code> simply tells GitHub Actions to only start executing the <code>deploy</code> job after the <code>test</code> job has completed with a passing status.</p>
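<p>As a side note on <code>version_label</code>: one way to make it dynamic (not something we set up in this tutorial) is to reuse a value GitHub Actions already exposes, such as the commit SHA:</p>
<pre><code class="lang-yaml"># e.g. use the triggering commit's SHA as the version label
version_label: ${{ github.sha }}
</code></pre>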
<p>The step <code>Deploy to EB</code> uses an existing action, <code>einaregilsson/beanstalk-deploy@v20</code>. Remember how we said <code>actions</code> are reusable applications that take care of frequently repeated tasks for us? <code>einaregilsson/beanstalk-deploy@v20</code> is one of those actions. </p>
<p>To reinforce the above, remember that our deployment was supposed to go through the following steps: <code>GitHub -&gt; Amazon S3 -&gt; Elastic Beanstalk</code>. </p>
<p>However, throughout this tutorial, we didn't do any Amazon S3 setup. Furthermore, in our workflow file we didn't upload to an S3 bucket, nor did we pull from an S3 bucket into our Elastic Beanstalk environment. </p>
<p>Normally, we are supposed to do all that, but we didn't here – because under the hood, the <code>einaregilsson/beanstalk-deploy@v20</code> action does all the heavy lifting for us. You can also create your own <code>action</code> that takes care of some repetitive tasks and make it available to other developers through the <a target="_blank" href="https://github.com/marketplace?type=actions">GitHub Marketplace.</a></p>
<p>Now that you've updated your workflow file locally, you can then commit and push this change to your remote. Your jobs will run and your code will be deployed to the Elastic Beanstalk instance you created. And that's it. <strong>We're done &gt;&gt;&gt;</strong></p>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>Wow! This was a really long one, wasn't it? In summary, I explained what the terms GitHub Actions, CI/CD pipeline, and AWS mean. In addition, we saw how to configure GitHub Actions to automatically deploy our code to an Elastic Beanstalk instance on AWS.</p>
<p>If you enjoyed this article and would like to stay up to date on future ones, let's connect on <a target="_blank" href="https://twitter.com/nyior_clement">Twitter</a>, <a target="_blank" href="https://www.linkedin.com/in/nyior/">LinkedIn</a>, or <a target="_blank" href="https://github.com/Nyior">GitHub.</a> I use those channels to share what I'm working on as soon as I publish it.</p>
<h3 id="heading-credits">Credits:</h3>
<p>Cover image: <a target="_blank" href="https://www.freepik.com/">www.freepik.com</a></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.plutora.com/blog/understanding-ci-cd-pipeline">https://www.plutora.com/blog/understanding-ci-cd-pipeline</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://docs.github.com/en/actions">https://docs.github.com/en/actions</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html">https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/einaregilsson/beanstalk-deploy">https://github.com/einaregilsson/beanstalk-deploy</a></div>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ The Best Tools for Continuous Testing – How to Keep Your Code Updates from Breaking Things ]]>
                </title>
                <description>
                    <![CDATA[ By Linda Ikechukwu These days, applications have to evolve as the needs of their target users grow and change. This is why engineering teams often adopt Agile software development principles (or any iterative variation). Agile principles involve cont... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/tools-for-continuous-testing/</link>
                <guid isPermaLink="false">66d4601a246e57ac83a2c78f</guid>
                
                    <category>
                        <![CDATA[ agile ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Software Testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 19 Jan 2021 21:57:55 +0000</pubDate>
                <media:content url="https://cdn-media-2.freecodecamp.org/w1280/6006e5440a2838549dcb4be6.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Linda Ikechukwu</p>
<p>These days, applications have to evolve as the needs of their target users grow and change. This is why engineering teams often adopt Agile software development principles (or any iterative variation).</p>
<p>Agile principles involve continuous integration and continuous delivery (CI/CD). This means that developers will frequently make code updates for new features to the existing codebase of the application. </p>
<p>So then how can you verify that a recent code addition doesn't break a part of the application? The answer is continuous testing.</p>
<h2 id="heading-what-is-continuous-testing">What is Continuous Testing?</h2>
<p>Continuous testing is a critical part of the CI/CD pipeline. It helps development teams discover if a particular code commit will break the application build and if it should be integrated or not.</p>
<p>In other words, continuous testing is the practice of integrating <a target="_blank" href="https://www.perfecto.io/blog/what-is-test-automation">automated tests</a> into a software delivery pipeline to determine the risks associated with each code release or addition. These automated tests are usually triggered during or after builds and are carried out using automation testing frameworks or tools. </p>
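<p>To make that concrete, here's a minimal sketch of a pipeline-triggered test run, using a GitHub Actions workflow for a Node.js project as an example (the file name and scripts are illustrative, not tied to any specific tool discussed below):</p>
<pre><code class="lang-yaml"># .github/workflows/tests.yml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci    # install dependencies from the lockfile
      - run: npm test  # the build fails if any test fails
</code></pre>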
<p>Let me now introduce you to four recommended automation tools you can use for continuous testing.</p>
<h2 id="heading-tools-for-continuous-testing">Tools for Continuous Testing</h2>
<h3 id="heading-1-testsigma">1. TestSigma</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-101.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="https://testsigma.com/">TestSigma</a> is a cloud-based automation testing tool for continuous testing. It has a low learning curve, as automated tests can be written in plain English, and requires no coding skills. Tests can also be extended with Selenium and JS-based custom functions for more advanced use cases.</p>
<p>TestSigma can be used for web applications, native mobile apps, regression, cross-browser and data-driven testing. It also features inbuilt seamless integrations with test management, bug reporting, CI/CD, and communication tools such as GitHub, Slack, Jira, BrowserStack, Jenkins, AWS, Bamboo, Azure DevOps, Circle CI, and so on.</p>
<p>TestSigma also uses AI to reduce maintenance effort and increase productivity, identifying affected tests and potential failures upfront to save execution time and cost, among other features. </p>
<p>The platform has a free tier, but to use all the features mentioned above, you’ll need to commit to a paid plan.</p>
<h3 id="heading-2-tricentis-tosca">2. Tricentis Tosca</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-102.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="https://www.tricentis.com/products/automate-continuous-testing-tosca/">Tosca</a> is another no-code continuous testing tool which makes it easy to learn. QA engineers with zero scripting knowledge can easily set up automated tests using a GUI.</p>
<p>Tosca is suitable for enterprise-level applications and is versatile because it supports and integrates seamlessly with over 160 technologies/languages. With Tosca, you can run tests on the web, mobile, and desktops with Windows OS (Mac and Linux are only possible with virtualization tools).</p>
<p>Tosca automatically creates and provisions on-demand test data to reduce the time it takes to provision and make reliable test data for test automation. </p>
<p>The platform offers free trials for a limited amount of time and custom pricing, which the sales team decides on based on your specific needs.</p>
<h3 id="heading-3-katalon-studio">3. Katalon Studio</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-103.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="https://www.katalon.com/">Katalon</a> is another comprehensive continuous testing tool built on top of the popular open-source Selenium and Appium. It can be used for testing web, API, mobile, and desktop applications across Windows, macOS, and Linux operating systems. </p>
<p>In fact, with Katalon, you can execute tests on all OSs, browsers, and devices, as well as on cloud, on-premise, and hybrid environments.</p>
<p>Katalon also provides other useful features like recording test steps, executing test cases, providing infrastructure, analytics reporting, and CI/CD integration with the most popular CI tools (like Jenkins, Bamboo, Azure, and CircleCI).</p>
<p>Katalon Studio is easy to get started with because it offers codeless test creation for beginners. For advanced usage, experts can extend automation capabilities using the plugins in the Katalon Store. </p>
<p>It also has extensive documentation featuring a well-organized library of tutorials alongside images and videos to help you out if you ever get stuck on something. It has a robust free tier and an enterprise tier for advanced usage.</p>
<h3 id="heading-4-watir">4. Watir</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-104.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="http://watir.com/">Watir</a> is another continuous testing automation tool powered by the Selenium framework, and it is open-source. Watir can only run tests for web applications on Windows and can only execute simple and easily maintainable tests.</p>
<p>It is not codeless, as scripts have to be written in Ruby, Java, .NET or Python using its sister software: Watij, WatiN, and Nerodia. Regardless, it's easy to get started with if you're already familiar with Ruby because it features extensive documentation. </p>
<p>Watir can also be integrated with a couple of CI tools such as Jenkins and GitHub.</p>
<p>Even though Watir seems limited, most teams find its simplicity appealing. It is prevalent within the Ruby community and is even used by large companies like Slack and Oracle.</p>
<h2 id="heading-how-to-choose-a-continuous-testing-tool">How to choose a continuous testing tool</h2>
<p>There are other excellent continuous testing tools available aside from the four that I have mentioned above. I favour no-code testing tools because they let teams set up and maintain automated tests much faster. </p>
<p>Regardless, here are a few things to consider before choosing a continuous testing tool:</p>
<ol>
<li><strong>Application types supported:</strong> Does the tool support your intended application type (for example, mobile, web, desktop)?</li>
<li><strong>Learning curve:</strong> How easy/difficult is it to use? Will you need to learn a new scripting language? Ideally, you should go for something with a low learning curve that you and your team can get started with in the shortest amount of time.</li>
<li><strong>Costs:</strong> Is the cost of the tool a feasible addition to your budget in the long run?</li>
<li><strong>Integration capabilities:</strong> Can it integrate seamlessly with your existing CI/CD pipeline?</li>
<li><strong>Scalability and reusability:</strong> Does the tool support scalability and reusability of test cases across multiple projects?</li>
<li><strong>Documentation and Community:</strong> How concise and rich is the documentation for the tool? You’re going to run into some mental blocks in the future, and you may not be able to get through without proper documentation and community support.</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>With the right tools, continuous testing eliminates the risks associated with frequent code releases by ensuring that only quality code is delivered to the end-user.</p>
<p>As I previously mentioned, the tools I have listed above are not an exhaustive list of all the continuous testing tool options. They're just the ones I recommend, and they may or may not be the right choice for you. </p>
<p>Do some further research, check out different tools, and settle for one that will integrate seamlessly into your current setup and meet your needs.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ What are Github Actions and How Can You Automate Tests and Slack Notifications? ]]>
                </title>
                <description>
                    <![CDATA[ Automation is a powerful tool. It both saves us time and can help reduce human error.  But automation can be tough and can sometimes prove to be costly. How can Github Actions help harden our code and give us more time to work on features instead of ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/what-are-github-actions-and-how-can-you-automate-tests-and-slack-notifications/</link>
                <guid isPermaLink="false">66b8e39047c23b7ae1ad0bdf</guid>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation testing  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ CI/CD ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Git ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                    <category>
                        <![CDATA[ slack ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Software Testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ tech  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Colby Fayock ]]>
                </dc:creator>
                <pubDate>Wed, 03 Jun 2020 14:45:00 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/05/github-actions.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Automation is a powerful tool. It both saves us time and can help reduce human error. </p>
<p>But automation can be tough and can sometimes prove to be costly. How can Github Actions help harden our code and give us more time to work on features instead of bugs?</p>
<ul>
<li><a class="post-section-overview" href="#heading-what-are-github-actions">What are Github Actions?</a></li>
<li><a class="post-section-overview" href="#heading-what-is-cicd">What is CI/CD?</a></li>
<li><a class="post-section-overview" href="#heading-what-are-we-going-to-build">What are we going to build?</a></li>
<li><a class="post-section-overview" href="#heading-part-0-setting-up-a-project">Part 0: Setting up a project</a></li>
<li><a class="post-section-overview" href="#heading-part-1-automating-tests">Part 1: Automating tests</a></li>
<li><a class="post-section-overview" href="#heading-part-2-post-new-pull-requests-to-slack">Part 2: Post new pull requests to Slack</a></li>
</ul>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/1n-jHHNSoTw" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
<h2 id="heading-what-are-github-actions">What are Github Actions?</h2>
<p><a target="_blank" href="https://github.com/features/actions">Actions</a> are a relatively new feature to <a target="_blank" href="https://github.com/">Github</a> that allow you to set up CI/CD workflows using a configuration file right in your Github repo.</p>
<p>Previously, if you wanted to set up any kind of automation with tests, builds, or deployments, you would have to look to services like <a target="_blank" href="https://circleci.com/">Circle CI</a> and <a target="_blank" href="https://travis-ci.org/">Travis</a> or write your own scripts. But with Actions, you have first-class support for powerful tooling to automate your workflow.</p>
<h2 id="heading-what-is-cicd">What is CI/CD?</h2>
<p>CI/CD stands for Continuous Integration and Continuous Deployment (or Continuous Delivery). They're both practices in software development that allow teams to build projects together quickly, efficiently, and ideally with fewer errors.</p>
<p>Continuous Integration is the idea that as different members of the team work on code on different git branches, the code is merged to a single working branch which is then built and tested with automated workflows. This helps to constantly make sure everyone's code is working properly together and is well-tested.</p>
<p>Continuous Deployment takes this a step further and takes this automation to the deployment level. Where with the CI process, you automate the testing and the building, Continuous Deployment will automate deploying the project to an environment. </p>
<p>The idea is that the code, once through any building and testing processes, is in a deployable state, so it should be able to be deployed.</p>
<h2 id="heading-what-are-we-going-to-build">What are we going to build?</h2>
<p>We're going to tackle two different workflows.</p>
<p>The first will be to simply run some automated tests that will prevent a pull request from being merged if it is failing. We won't walk through building the tests, but we'll walk through running tests that already exist.</p>
<p>In the second part, we'll set up a workflow that sends a message to slack with a link to a pull request whenever a new one is created. This can be super helpful when working on open source projects with a team and you need a way to keep track of requests.</p>
<h2 id="heading-part-0-setting-up-a-project">Part 0: Setting up a project</h2>
<p>For this guide, you can really work through any node-based project as long as it has tests you can run for Part 1.</p>
<p>If you'd like to follow along with a simpler example that I'll be using, I've set up a new project that you can clone with a single function that has two tests that are able to run and pass.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/function-with-test.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>A function with two tests</em></p>
<p>If you'd like to check out this code to get started, you can run:</p>
<pre><code class="lang-shell">git clone --single-branch --branch start git@github.com:colbyfayock/my-github-actions.git
</code></pre>
<p>Once you have that cloned locally and have installed the dependencies, you should be able to run the tests and see them pass!</p>
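<p>Concretely, that looks something like this (assuming you're using npm to install the dependencies):</p>
<pre><code class="lang-shell">cd my-github-actions
npm install
npm test
</code></pre>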
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/passing-tests.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Passing tests</em></p>
<p>Note that you'll need to add this project as a new repository on Github in order to follow along.</p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/6919b1b9beea4823fd28375f1864d233e23f2d26">Follow along with the commit!</a></p>
<h2 id="heading-part-1-automating-tests">Part 1: Automating tests</h2>
<p>Tests are an important part of any project that allow us to make sure we're not breaking existing code while we work. While they're important, they're also easy to forget about.</p>
<p>We can take human nature out of the equation and automate running our tests to make sure we can't proceed without fixing what we broke.</p>
<h3 id="heading-step-1-creating-a-new-action">Step 1: Creating a new action</h3>
<p>The good news is that Github actually makes it really easy to get this workflow started, as it comes as one of their pre-baked options.</p>
<p>We'll start by navigating to the <strong>Actions</strong> tab on our repository page.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-actions-dashboard.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github Actions starting page</em></p>
<p>Once there, we'll immediately see some starter workflows that Github provides for us to dive in with. Since we're using a node project, we can go ahead and click <strong>Set up this workflow</strong> under the <strong>Node.js</strong> workflow.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-new-nodejs-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a Node.js Github Action workflow</em></p>
<p>After the page loads, Github will land you on a new file editor that already has a bunch of configuration options added.</p>
<p>We're actually going to leave this "as is" for our first step. Optionally, you can change the name of the file to <code>tests.yml</code> or something you'll remember.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-create-new-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Adding a new Github Action workflow file</em></p>
<p>You can go ahead and click <strong>Start commit</strong> then either commit it directly to the <code>master</code> branch or add the change to a new branch. For this walkthrough, I'll be committing straight to <code>master</code>.</p>
<p>To see our new action run, we can again click on the <strong>Actions</strong> tab which will navigate us back to our new Actions dashboard.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-status.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Viewing Github Action workflow events</em></p>
<p>From there, click on <strong>Node.js CI</strong>, select the commit you just made, and you'll land on our new action's dashboard. You can then click one of the node versions in the sidebar via <strong>build (#.x)</strong>, click the <strong>Run npm test</strong> dropdown, and see the output of our tests being run (which, if you're following along with me, should pass!).</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-logs.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Viewing logs of a Github Action workflow</em></p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/10e397966572ed9975cac40f6ab5f41c1255a947">Follow along with the commit!</a></p>
<h3 id="heading-step-2-configuring-our-new-action">Step 2: Configuring our new action</h3>
<p>So what did we just do above? We'll walk through the configuration file and what we can customize.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-file.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github Action Node.js workflow file</em></p>
<p>Starting from the top, we specify our name:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Node.js</span> <span class="hljs-string">CI</span>
</code></pre>
<p>This can really be whatever you want. Whatever you pick should help you remember what it is. I'm going to customize this to "Tests" so I know exactly what's going on.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]
</code></pre>
<p>The <code>on</code> key is how we specify what events trigger our action. This can be a variety of things, including time-based triggers with <a target="_blank" href="https://en.wikipedia.org/wiki/Cron">cron</a>. But here, we're saying that we want this action to run any time someone pushes commits to <code>master</code> or someone creates a pull request targeting the <code>master</code> branch. We're not going to make a change here.</p>
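<p>As an example of a time-based trigger, a <code>schedule</code> entry with cron syntax would look something like this (the schedule itself is just an illustration):</p>
<pre><code class="lang-yaml">on:
  schedule:
    # cron fields: minute hour day-of-month month day-of-week
    - cron: '0 6 * * *'  # every day at 06:00 UTC
</code></pre>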
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
</code></pre>
<p>This next bit creates a new job called <code>build</code>. Here we're saying that we want to use the latest version of Ubuntu to run our tests on. <a target="_blank" href="https://ubuntu.com/">Ubuntu</a> is common, so you'll only want to customize this if you want to run it on a specific environment.</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">strategy:</span>
      <span class="hljs-attr">matrix:</span>
        <span class="hljs-attr">node-version:</span> [<span class="hljs-number">10.</span><span class="hljs-string">x</span>, <span class="hljs-number">12.</span><span class="hljs-string">x</span>, <span class="hljs-number">14.</span><span class="hljs-string">x</span>]
</code></pre>
<p>Inside of our job we specify a <a target="_blank" href="https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategy">strategy</a> matrix. This allows us to run the same tests on a few different variations. </p>
<p>In this instance, we're running the tests on 3 different versions of <a target="_blank" href="https://nodejs.org/en/">node</a> to make sure it works on all of them. This is definitely helpful to make sure your code is flexible and future proof, but if you're building and running your code on a specific node version, you're safe to change this to only that version.</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Use</span> <span class="hljs-string">Node.js</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v1</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">node-version:</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span> <span class="hljs-string">--if-present</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>Finally, we specify the steps we want our job to run. Breaking this down:</p>
<ul>
<li><code>uses: actions/checkout@v2</code>: In order for us to run our code, we need to have it available. This checks out our code on our job environment so we can use it to run tests.</li>
<li><code>uses: actions/setup-node@v1</code>: Since we're using node with our project, we'll need it set up on our environment. We're using this action to do that setup  for us for each version we've specified in the matrix we configured above.</li>
<li><code>run: npm ci</code>: If you're not familiar with <code>npm ci</code>, it's similar to running <code>npm install</code>, except that it installs the exact versions recorded in the <code>package-lock.json</code> file without performing any patch upgrades. So essentially, this installs our dependencies.</li>
<li><code>run: npm run build --if-present</code>: <code>npm run build</code> runs the build script in our project. The <code>--if-present</code> flag does what it sounds like and only runs this command if the build script is present. It doesn't hurt to leave this in, as it won't run without the script, but feel free to remove it since we're not building the project here.</li>
<li><code>run: npm test</code>: Finally, we run <code>npm test</code> to run our tests. This uses the <code>test</code> npm script set up in our <code>package.json</code> file.</li>
</ul>
<p>And with that, we've made a few tweaks. Once we commit those changes, our tests should run automatically and pass just like before!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-logs-npm-test.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Logs of passing tests in Github Action workflow</em></p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/087cd8e8592d1f2b520b6e44b70b0a242a9d2d72">Follow along with the commit!</a></p>
<h3 id="heading-step-3-testing-that-our-tests-fail-and-prevent-merges">Step 3: Testing that our tests fail and prevent merges</h3>
<p>Now that our tests are set up to automatically run, let's try to break it to see it work.</p>
<p>At this point, you can really do whatever you want to intentionally break the tests, but <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/1">here's what I did</a>:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/bad-changes-code-diff.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Code diff - https://github.com/colbyfayock/my-github-actions/pull/1</em></p>
<p>I'm intentionally returning different expected output so that my tests will fail. And they do!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-failing-checks.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Failing status checks on pull request</em></p>
<p>In my new pull request, my branch breaks the tests, so Github tells me my checks have failed. Notice, though, that the merge button is still green. So how can we actually prevent merges?</p>
<p>We can prevent pull requests from being merged by setting up a Protected Branch in our project settings.</p>
<p>First, navigate to <strong>Settings</strong>, then <strong>Branches</strong>, and click <strong>Add rule</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-add-protected-branch.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github branch protection rules</em></p>
<p>We'll then want to set the branch name pattern to <code>*</code>, which means all branches, check the <strong>Require status checks to pass before merging</strong> option, then select all of the status checks that we'd like to require to pass before merging.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-configure-protected-branch.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a branch protection rule in Github</em></p>
<p>Finally, hit <strong>Create</strong> at the bottom of the page.</p>
<p>And once you navigate back to the pull request, you'll notice that the messaging is a bit different and states that we need our statuses to pass before we can merge.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-failing-checks-cant-merge.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Failing tests preventing merge in pull request</em></p>
<p><em>Note: as an administrator of a repository, you'll still be able to merge, so this technically only prevents non-administrators from merging. But it will still give you clear warnings when the tests fail.</em></p>
<p>And with that, we have a new Github Action that runs our tests and prevents pull requests from merging if they fail.</p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/1">Follow along with the pull request!</a></p>
<p><em>Note: we won't be merging that pull request before continuing to Part 2.</em></p>
<h2 id="heading-part-2-post-new-pull-requests-to-slack">Part 2: Post new pull requests to Slack</h2>
<p>Now that we're preventing merge requests if they're failing, we want to post a message to our <a target="_blank" href="http://slack.com/">Slack</a> workspace whenever a new pull request is opened up. This will help us keep tabs on our repos right in Slack.</p>
<p>For this part of the guide, you'll need a Slack workspace in which you have permission to create a new developer app, along with the ability to create a new channel for the bot user that will be associated with that app.</p>
<h3 id="heading-step-1-setting-up-slack">Step 1: Setting up Slack</h3>
<p>There are a few things we're going to walk through as we set up Slack for our workflow:</p>
<ul>
<li>Create a new app for our workspace</li>
<li>Assign our bot permissions</li>
<li>Install our bot to our workspace</li>
<li>Invite our new bot to our channel</li>
</ul>
<p>To get started, we'll create a new app. Head over to the <a target="_blank" href="https://api.slack.com/apps">Slack API Apps dashboard</a>. If you haven't already, log in to your Slack account with the workspace you'd like to set this up with.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-create-new-app.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Creating a new Slack app</em></p>
<p>Now, click <strong>Create New App</strong>, where you'll be prompted to put in a name and select a workspace you want this app to be created for. I'm going to call my app "Gitbot", but you can choose whatever name makes sense for you. Then click <strong>Create App</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-add-name-new-app.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Configuring a new Slack app</em></p>
<p>Once created, navigate to the <strong>App Home</strong> link in the left sidebar. In order to use our bot, we need to assign it <a target="_blank" href="https://oauth.net/">OAuth</a> scopes so it has permission to work in our channel. Select <strong>Review Scopes to Add</strong> on that page.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-review-scopes.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Reviewing Slack app scopes</em></p>
<p>Scroll down and you'll see a <strong>Scopes</strong> section, and under that a <strong>Bot Token</strong> section. Here, click <strong>Add an OAuth Scope</strong>. Our bot doesn't need a ton of permissions, so add the <code>channels:join</code> and <code>chat:write</code> scopes and we should be good to go.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-add-scopes.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Adding scopes for a Slack app Bot Token</em></p>
<p>Now that we have our scopes, let's add our bot to our workspace. Scroll up on that same page to the top and you'll see a button that says <strong>Install App to Workspace</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-install-app-to-workspace.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Installing Slack app to a workspace</em></p>
<p>Once you click this, you'll be redirected to an authorization page. Here, you can see the scopes we selected for our bot. Next, click <strong>Allow</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-allow-workspace-permissions.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Allowing permission for Slack app to be installed to workspace</em></p>
<p>At this point, our Slack bot is ready to go. At the top of the <strong>OAuth &amp; Permissions</strong> page, you'll see a <strong>Bot User OAuth Access Token</strong>. This is what we'll use when setting up our workflow, so either copy and save this token or remember this location so you know how to find it later.</p>
<p><em>Note: this token is private - don't give this out, show it in a screencast, or let anyone see it!</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-oauth-token.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Copying OAuth Access Token for Slack bot user</em></p>
<p>Finally, we need to invite our Slack bot to our channel. Open up your workspace and either use an existing channel or create a new one for these notifications. Either way, enter the command <code>/invite @[botname]</code> to invite our bot to the channel.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-invite-bot-to-channel.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Inviting Slack bot user to channel</em></p>
<p>And once added, we're done with setting up Slack!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-bot-joined-channel.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Slack bot was added to channel</em></p>
<h3 id="heading-create-a-github-action-to-notify-slack">Create a Github Action to notify Slack</h3>
<p>Our next step will be somewhat similar to when we created our first Github Action. We'll create a workflow file which we'll configure to send our notifications.</p>
<p>While we can use our code editors to do this by creating a file in the <code>.github</code> directory, I'm going to use the Github UI.</p>
<p>First, let's navigate back to our <em>Actions</em> tab in our repository. Once there, select <strong>New workflow</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-new-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a new Github Action workflow</em></p>
<p>This time, we're going to create the workflow from scratch instead of starting from a pre-made Action. Select <strong>set up a workflow yourself</strong> at the top.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-set-up-new-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a Github Action workflow manually</em></p>
<p>Once the new page loads, you'll be dropped into a new template where we can start working. Here's what our new workflow will look like:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Slack</span> <span class="hljs-string">Notifications</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">notifySlack:</span>

    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Notify</span> <span class="hljs-string">slack</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-attr">SLACK_BOT_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.SLACK_BOT_TOKEN</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">abinoda/slack-action@master</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">args:</span> <span class="hljs-string">'{\"channel\":\"[Channel ID]\",\"blocks\":[{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"*Pull Request:* $<span class="hljs-template-variable">{{ github.event.pull_request.title }}</span>\"}},{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"*Who?:* $<span class="hljs-template-variable">{{ github.event.pull_request.user.login }}</span>\n*Request State:* $<span class="hljs-template-variable">{{ github.event.pull_request.state }}</span>\"}},{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"&lt;$<span class="hljs-template-variable">{{ github.event.pull_request.html_url }}</span>|View Pull Request&gt;\"}}]}'</span>
</code></pre>
<p>So what's happening in the above?</p>
<ul>
<li><code>name</code>: we're setting a friendly name for our workflow</li>
<li><code>on</code>: we want our workflow to trigger whenever a pull request is created that targets our <code>master</code> branch</li>
<li><code>jobs</code>: we're creating a new job called <code>notifySlack</code></li>
<li><code>jobs.notifySlack.runs-on</code>: we want our job to run on a basic setup of the latest Ubuntu</li>
<li><code>jobs.notifySlack.steps</code>: we really only have one step here - we're using a pre-existing Github Action called <a target="_blank" href="https://github.com/marketplace/actions/post-slack-message">Slack Action</a> and we're configuring it to publish a notification to our Slack</li>
</ul>
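<p>One thing worth knowing: by default, the <code>pull_request</code> trigger fires on the <code>opened</code>, <code>synchronize</code>, and <code>reopened</code> activity types, so every push to an open pull request would post another Slack message. If you'd rather get a single notification per pull request, you can restrict the trigger to just the <code>opened</code> type. A sketch:</p>
<pre><code class="lang-yaml">on:
  pull_request:
    types: [opened]
    branches: [ master ]
</code></pre>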
<p>There are two points here we'll need to pay attention to, the <code>env.SLACK_BOT_TOKEN</code> and the <code>with.args</code>.</p>
<p>In order for Github to communicate with Slack, we'll need a token. This is what we're setting in <code>env.SLACK_BOT_TOKEN</code>. We generated this token in the first step. Since we'll be using it in our workflow configuration, we'll need to <a target="_blank" href="https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#creating-encrypted-secrets-for-a-repository">add it as a Github Secret in our project</a>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-slack-token-secret.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github secrets including SLACK_BOT_TOKEN</em></p>
<p>The <code>with.args</code> property is what we use to configure the payload sent to the Slack API, which includes the channel ID (<code>channel</code>) and our actual message (<code>blocks</code>).</p>
<p>The payload in the arguments is stringified and escaped. For example, when expanded it looks like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"channel"</span>: <span class="hljs-string">"[Channel ID]"</span>,
  <span class="hljs-attr">"blocks"</span>: [{
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"section"</span>,
    <span class="hljs-attr">"text"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"mrkdwn"</span>,
      <span class="hljs-attr">"text"</span>: <span class="hljs-string">"*Pull Request:* ${{ github.event.pull_request.title }}"</span>
    }
  }, {
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"section"</span>,
    <span class="hljs-attr">"text"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"mrkdwn"</span>,
      <span class="hljs-attr">"text"</span>: <span class="hljs-string">"*Who?:*n${{ github.event.pull_request.user.login }}n*State:*n${{ github.event.pull_request.state }}"</span>
    }
  }, {
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"section"</span>,
    <span class="hljs-attr">"text"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"mrkdwn"</span>,
      <span class="hljs-attr">"text"</span>: <span class="hljs-string">"&lt;${{ github.event.pull_request._links.html.href }}|View Pull Request&gt;"</span>
    }
  }]
}
</code></pre>
<p><em>Note: this is just to show what the content looks like. In the actual workflow file, we need to use the stringified and escaped argument.</em></p>
<p>Back in our configuration file, the first thing we set is our channel ID. To find it, you'll need to use the Slack web interface. Once you open Slack in your browser, you'll find the channel ID in the URL:</p>
<pre><code>https:<span class="hljs-comment">//app.slack.com/client/[workspace ID]/[channel ID]</span>
</code></pre><p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-web-channel-id.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Channel ID in Slack web app URL</em></p>
<p>With that channel ID, you can modify our workflow configuration and replace <code>[Channel ID]</code> with that ID:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">with:</span>
  <span class="hljs-attr">args:</span> <span class="hljs-string">'{\"channel\":\"C014RMKG6H2\",...</span>
</code></pre>
<p>The rest of the arguments property is how we set up our message. It includes variables from the Github event that we use to customize what gets posted.</p>
<p>We won't go into tweaking that here, as what we already have will send a basic pull request message, but you can test out and build your own payload with Slack's <a target="_blank" href="https://app.slack.com/block-kit-builder/">Block Kit Builder</a>.</p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/e228b9899ef3da218d1a100d06a72259d45ea19e">Follow along with the commit!</a></p>
<h3 id="heading-test-out-our-slack-workflow">Test out our Slack workflow</h3>
<p>Now that our workflow is configured with our Slack app, we're finally ready to use our bot!</p>
<p>For this part, all we need to do is create a new pull request with any change we want. To test this out, I simply <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/2">created a new branch</a> where I added a sentence to the <code>README.md</code> file.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-test-pull-request.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Code diff - <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/2">https://github.com/colbyfayock/my-github-actions/pull/2</a></em></p>
<p>Once you <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/2">create that pull request</a>, similar to our tests workflow, Github will run our Slack workflow! You can see this running in the Actions tab just like before.</p>
<p>As long as you set everything up correctly, once the workflow runs, you should now have a new message in Slack from your new bot.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-github-notification.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Slack bot automated message about new pull request</em></p>
<p><em>Note: we won't be merging that pull request in.</em></p>
<h2 id="heading-what-else-can-we-do">What else can we do?</h2>
<h3 id="heading-customize-your-slack-notifications">Customize your Slack notifications</h3>
<p>The message I put together is simple. It tells us who created the pull request and gives us a link to it.</p>
<p>To customize the formatting and messaging, you can use Slack's <a target="_blank" href="https://app.slack.com/block-kit-builder/">Block Kit Builder</a> to create your own.</p>
<p>If you'd like to include additional details like the variables I used for the pull request, you can make use of Github's available <a target="_blank" href="https://help.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#contexts">contexts</a>. This lets you pull information about the environment and the job to customize your message.</p>
<p>I couldn't seem to find any sample payloads, so here's an example of the <code>github</code> context payload you would expect in the event.</p>
<p><a target="_blank" href="https://gist.github.com/colbyfayock/1710edb9f47ceda0569844f791403e7e">Sample github context</a></p>
<h3 id="heading-more-github-actions">More Github actions</h3>
<p>With our ability to create new custom workflows, there's not a lot we can't automate. Github even has a <a target="_blank" href="https://github.com/marketplace?type=actions">marketplace</a> where you can browse for existing Actions.</p>
<p>If you're feeling like taking it a step further, you can even create your own! This lets you set up scripts to configure a workflow to perform whatever tasks you need for your project.</p>
<h2 id="heading-join-in-the-conversation">Join in the conversation!</h2>
<div class="embed-wrapper">
        <blockquote class="twitter-tweet">
          <a href="https://twitter.com/colbyfayock/status/1268197100539514881"></a>
        </blockquote>
        <script defer="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></div>
<h2 id="heading-what-do-you-use-github-actions-for">What do you use Github actions for?</h2>
<p>Share with me on <a target="_blank" href="https://twitter.com/colbyfayock">Twitter</a>!</p>
<div id="colbyfayock-author-card">
  <p>
    <a href="https://twitter.com/colbyfayock">
      <img src="https://res.cloudinary.com/fay/image/upload/w_2000,h_400,c_fill,q_auto,f_auto/w_1020,c_fit,co_rgb:007079,g_north_west,x_635,y_70,l_text:Source%20Sans%20Pro_64_line_spacing_-10_bold:Colby%20Fayock/w_1020,c_fit,co_rgb:383f43,g_west,x_635,y_6,l_text:Source%20Sans%20Pro_44_line_spacing_0_normal:Follow%20me%20for%20more%20JavaScript%252c%20UX%252c%20and%20other%20interesting%20things!/w_1020,c_fit,co_rgb:007079,g_south_west,x_635,y_70,l_text:Source%20Sans%20Pro_40_line_spacing_-10_semibold:colbyfayock.com/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_68,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_145,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_222,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_295,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/v1/social-footer-card" alt="Follow me for more Javascript, UX, and other interesting things!" width="2000" height="400" loading="lazy">
    </a>
  </p>
  <ul>
    <li>
      <a href="https://twitter.com/colbyfayock">? Follow Me On Twitter</a>
    </li>
    <li>
      <a href="https://youtube.com/colbyfayock">?️ Subscribe To My Youtube</a>
    </li>
    <li>
      <a href="https://www.colbyfayock.com/newsletter/">✉️ Sign Up For My Newsletter</a>
    </li>
  </ul>
</div>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ The real difference between Continuous Integration and Continuous Deployment ]]>
                </title>
                <description>
                    <![CDATA[ By Jean-Paul Delimat There is plenty of content out there describing what Continuous Integration, Continuous Delivery, and Continuous Deployment are. But what purposes do these processes serve in the first place?  It is crucial to understand the prob... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/the-real-difference-between-ci-and-cd/</link>
                <guid isPermaLink="false">66d45f75ffe6b1f641b5fa11</guid>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Productivity ]]>
                    </category>
                
                    <category>
                        <![CDATA[ software development ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Sun, 01 Dec 2019 21:37:46 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2019/12/continuous-integration-and-delivery.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Jean-Paul Delimat</p>
<p>There is plenty of content out there describing what Continuous Integration, Continuous Delivery, and Continuous Deployment are. But what purposes do these processes serve in the first place? </p>
<p>It is crucial to understand the problems CI and CD solve to use them properly. This will allow your team to improve your process and avoid putting effort into chasing fancy metrics that do not bring any value to your process.</p>
<h2 id="heading-continuous-integration-is-a-team-problem">Continuous Integration is a team problem</h2>
<p>If you work in a team, chances are there are several developers working on the same repository. There is a main branch in the repository carrying the latest version of the code. Developers work on different things on different branches. Once someone is done with their change, they'll push or merge it to the main branch. Eventually the whole team will pull this change.</p>
<p>The scenario we want to avoid is that a faulty commit makes it to the main branch. Faulty means the code does not compile or the app won't start or is unusable. Why? Not because the app is broken or because all tests must always be green. That is not a problem–you can decide not to deploy that version and wait for a fix. </p>
<p>The problem is that your entire team is stuck. All the developers who pulled the faulty commit will spend 5 minutes wondering why it doesn't work. Several will probably try to find the faulty commit. Some will try to fix the issue by themselves, in parallel with the author of the faulty code.</p>
<p>This is a waste of time for your team. The worst part is that repeated incidents fuel a mistrust of the main branch and encourage developers to work apart.</p>
<blockquote>
<p>Continuous Integration is all about preventing the main branch from breaking so your team is not stuck. That's it. It is <strong>not</strong> about having all your tests green all the time and the main branch deployable to production at every commit.</p>
</blockquote>
<p>The process of Continuous Integration is independent of any tool. You could manually verify that the merge of your branch and the main branch works locally, and then only actually push the merge to the repository. But that would be very inefficient. That's why Continuous Integration is implemented using automated checks.</p>
<p>The checks ensure that, at the bare minimum:</p>
<ul>
<li>The app should build and start</li>
<li>Most critical features should be functional at all times (user signup/login journey and key business features)</li>
<li>Common layers of the application that all the developers rely on should be stable. This means unit tests on those parts.</li>
</ul>
<p>In practice, this means you need to pull in any unit test framework that works for you and secure the common layers of the application. Sometimes it's not that much code and can be done fairly quickly. You also need to add a "smoke test" verifying that the code compiles and that the application starts. This is especially important in technologies with heavy dependency injection like Java Spring or .NET Core. In large projects it is so easy to miswire your dependencies that verifying that the app always starts is a must.</p>
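<p>As a sketch, a bare-minimum CI check like this could look something like the following Github Actions workflow (the job and script names here are illustrative assumptions, not a prescription):</p>
<pre><code class="lang-yaml">name: CI

on:
  pull_request:
    branches: [ master ]

jobs:
  smoke-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      # Smoke test: verify the code compiles and the app can build
      - run: npm run build
      # Fast unit tests covering only the common layers
      - run: npm run test:unit
</code></pre>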
<blockquote>
<p>If you have hundreds or thousands of tests you don't need to run them all for each merge. It will take a lot of time and most tests probably verify "non team blocker" features.</p>
</blockquote>
<p>We'll see in the next sections how the process of Continuous Delivery will make good use of these many tests.</p>
<h3 id="heading-its-not-about-tools">It's not about tools</h3>
<p>Tools and automated checks are all fine. But if your developers only merge giant branches they've worked on for weeks, tools won't help you. The team will spend a good amount of time merging the branches and fixing the code incompatibilities that will eventually arise. That is as much a waste of time as being blocked by a faulty commit.</p>
<blockquote>
<p>Continuous Integration is not about tools. It is about working in small chunks and integrating your new code to the main branch and pulling frequently. </p>
</blockquote>
<p>Frequently means at least daily. Split the task you are working on into smaller tasks. Merge your code very often and pull very often. This way nobody works apart for more than a day or two, and problems don't have time to snowball.</p>
<p>A large task does not need to be all in one branch. It should never be. Techniques to merge work in progress to the main branch are called "branching by abstraction" and "feature toggles". See the blog post <a target="_blank" href="https://fire.ci/blog/how-to-get-started-with-continuous-integration/">How to get started with Continuous Integration</a>  for more details.</p>
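<p>A feature toggle can be as simple as a configuration file checked into the repository. A hypothetical sketch:</p>
<pre><code class="lang-yaml"># features.yml - flags for unfinished work merged to the main branch
features:
  new-checkout-flow: false   # merged daily, but hidden from users until ready
  improved-search: true      # finished and enabled
</code></pre>
<p>The unfinished code ships behind a disabled flag, so work can be merged frequently without exposing it to users.</p>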
<h3 id="heading-key-points-for-a-good-ci-build">Key points for a good CI build</h3>
<p>It's very simple. <strong>Keep it short. 3-7 minutes should be the max.</strong> It's not about CPU and resources. It is about developers' productivity. The first rule of productivity is focus. Do one thing, finish it, then move to the next thing. </p>
<p>Context switching is costly. Studies show it takes ~23 minutes to deeply refocus on something when you get disturbed. </p>
<p>Imagine you push your branch to merge it and start another task. You spend 15-20 minutes getting into it. The minute you're in the zone, you receive a "build failed" notification from the 20-minute CI build for the previous task. You go back to fix it and push again. You've easily lost more than 20 minutes moving back and forth.</p>
<blockquote>
<p>Multiply 20 minutes once or twice a day by the number of developers in your team... That's a lot of precious time wasted.</p>
</blockquote>
<p>Now imagine if the feedback came within 3 minutes. You probably wouldn't have started the new task at all. You would have proofread your code one more time or reviewed a PR while waiting. The failure notification would come and you would fix it. Then you could move on to the next task. That is the kind of focus your process should enable.</p>
<p>Keeping your CI build short is a trade-off. Tests that run longer or provide little value in the context of CI should be moved to the CD step. And yes, failures there also need to be fixed. But since they are not preventing anybody from doing their thing, you can pick up the fixes as a "next task" when you finish what you are doing. Just turn off the notifications while working and check every now and then. Keep the context switching to a minimum.</p>
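<p>In practice, this can mean moving the long-running suites into a separate workflow that runs after merges to the main branch, or on a schedule, instead of on every pull request. A sketch, again with illustrative script names:</p>
<pre><code class="lang-yaml">name: Full test suite

on:
  push:
    branches: [ master ]
  schedule:
    # also run nightly at 03:00 UTC
    - cron: '0 3 * * *'

jobs:
  long-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      # slower integration and end to end tests live here, not in the CI build
      - run: npm run test:e2e
</code></pre>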
<h2 id="heading-continuous-delivery-and-deployment-are-engineering-problems">Continuous Delivery and Deployment are engineering problems</h2>
<p>Let’s settle on the definitions to get that out of the way.</p>
<p><strong>Continuous Delivery</strong> is about being able to deploy any version of your code at all times. In practice it means the latest or next-to-latest version of your code. You don’t deploy automatically, usually because you don’t have to or are limited by your project lifecycle. But as soon as someone feels like it, a deployment can be done in a minimal amount of time. That someone can be the test/QA team that wants to test things out on a staging or pre-production environment. Or it can actually be time to roll out the code to production.</p>
<p>The idea of Continuous Delivery is to prepare artifacts as close as possible to what you want to run in your environment. These can be .jar or .war files if you are working with Java, or executables if you are working with .NET. These can also be folders of transpiled JS code or even Docker containers, whatever makes deployment shorter (i.e. you have pre-built as much as you can in advance).</p>
<p>By preparing artifacts, I don't mean turning code into artifacts. This is usually a few scripts and minutes of execution. Preparing means:</p>
<blockquote>
<p>Run all the tests you can to ensure that, once deployed, the code will actually work. Run unit tests, integration tests, end to end tests, and even performance tests if you can automate that. </p>
</blockquote>
<p>This way you can filter which versions of your main branch are actually production ready and which are not. The ideal test suite:</p>
<ul>
<li>Ensures that the application's key functionalities work. Ideally all of them</li>
<li>Ensures that no performance deal breaker has been introduced, so that when your new version hits your many users it has a chance to last</li>
<li>Dry runs any database updates your code needs, to avoid surprises</li>
</ul>
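<p>The dry-run point can be sketched as follows. This is a hypothetical helper, not the author's code: it assumes a <code>db</code> handle exposing <code>begin</code>/<code>rollback</code> and migrations with an <code>up</code> step. It applies the pending updates inside a transaction and always rolls back, so a broken schema change fails the CD build instead of a deployment:</p>

```javascript
// Hypothetical sketch: rehearse database updates without committing them.
async function dryRunMigrations(db, migrations) {
  const applied = [];
  await db.begin();            // open a transaction we will never commit
  try {
    for (const m of migrations) {
      await m.up(db);          // a broken migration throws here
      applied.push(m.name);
    }
    return { ok: true, applied };
  } catch (err) {
    return { ok: false, applied, error: err.message };
  } finally {
    await db.rollback();       // always roll back: this is only a rehearsal
  }
}
```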
<p>It does not need to be very fast. 30 minutes or 1 hour is acceptable.</p>
<p><strong>Continuous Deployment</strong> is the next step. You deploy the most up to date and production ready version of your code to some environment. Ideally production if you trust your CD test suite enough. </p>
<p>Note that, depending on the context, this is not always possible or worth the effort. Continuous Delivery is often enough to be productive, especially if you are working in a closed network and have limited environments you can deploy to. It can also be that the release cycle of your software prevents unplanned deploys.</p>
<p>Continuous Delivery and Continuous Deployment (let’s call them CD from now on) are not team problems. They are about finding the right balance between execution time, maintenance efforts and relevance of your test suite to be able to say "This version works as it should." </p>
<p>And it is a balance. If your tests last 30 hours that is a problem. See <a target="_blank" href="https://news.ycombinator.com/item?id=18442941">this epic post</a> about what the Oracle database test suite looks like. But if you spend so much time keeping your tests up to date with the latest code that it impedes the team's progress, that is not good either. Also if your test suite ensures pretty much nothing... it is basically useless.</p>
<p>In an ideal world we want one set of deployable artifacts per commit to the main branch. You can see we have a vertical scalability problem: the faster we move from code to artifacts, the more ready we are to deploy the newest version of the code.</p>
<h2 id="heading-whats-the-big-difference">What’s the big difference?</h2>
<p>Continuous Integration is a horizontal scalability problem. You want developers to merge their code often so the checks must be fast. Ideally within minutes to avoid developers switching context all the time with highly async feedback from the CI builds. </p>
<p>The more developers you have, the more computing power you need to run simple checks (build and test) on all the active branches.</p>
<blockquote>
<p><strong>A good CI build:</strong></p>
<p>Ensures that no code that breaks basic functionality or prevents other team members from working is introduced to the main branch, and</p>
<p>Is fast enough to provide feedback to developers within minutes to prevent context switching between tasks.</p>
</blockquote>
<p>Continuous Delivery and Deployment are vertical scalability problems. You have one rather complex operation to perform.</p>
<blockquote>
<p><strong>A good CD build:</strong></p>
<p>Ensures that as many features as possible are working properly.</p>
<p>The faster the better, but it is not a matter of speed. A 30-60 minute build is OK.</p>
</blockquote>
<p>A common misconception is to see CD as a horizontal scalability problem like CI: the faster you can move from code to artifacts, the more commits you can actually process, and the closer to the ideal scenario you can be. </p>
<p>But we don't need that. Producing artifacts for every commit and as fast as possible is usually overkill. You can very well approach CD on a best-effort basis: have a single CD build that just picks the latest commit to verify once the previous build finishes.</p>
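<p>A best-effort scheduler is tiny. In this sketch (illustrative, not from the article), commits that piled up while the previous build ran are collapsed into a single build of the newest one:</p>

```javascript
// Sketch of best-effort CD scheduling: when a build slot frees up,
// verify only the newest pending commit and drop the intermediate ones.
function nextBuild(pendingCommits) {
  if (pendingCommits.length === 0) return null;   // nothing to verify
  const latest = pendingCommits[pendingCommits.length - 1];
  pendingCommits.length = 0;                      // skip older commits entirely
  return latest;
}
```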
<p>Make no mistake about CD. It is really hard. Getting to sufficient test confidence to say your software is ready to be deployed automatically usually works on applications with a small surface area, like APIs or simple UIs. It is very difficult to achieve on a complex UI or a large monolith system.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Tools and principles used to execute CI and CD are often very similar. The goals are very different though. </p>
<p>Continuous Integration is a trade off between the speed of the feedback loop to developers and the relevance of the checks you perform (build and test). No code that would impede the team's progress should make it to the main branch. </p>
<p>Continuous Delivery or Deployment is about running checks as thorough as you can to catch issues in your code. Completeness of the checks is the most important factor. It is usually measured in terms of code coverage or functional coverage of your tests. Catching errors early on prevents broken code from getting deployed to any environment and saves the precious time of your test team.</p>
<p>Craft your CI and CD builds to achieve these goals and keep your team productive. No workflow is perfect. Problems will arise every now and then. Use them as lessons learned to strengthen your workflow every time they do.</p>
<p>Published on 27 Nov 2019 on the <a target="_blank" href="https://fire.ci/blog/">Fire CI Blog</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ A complete guide to end-to-end API testing with Docker ]]>
                </title>
                <description>
                    <![CDATA[ By Jean-Paul Delimat Testing is a pain in general. Some don't see the point. Some see it but think of it as an extra step slowing them down. Sometimes tests are there but very long to run or unstable. In this article you'll see how you can engineer t... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/end-to-end-api-testing-with-docker/</link>
                <guid isPermaLink="false">66d45f6936c45a88f96b7ceb</guid>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Developer Tools ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Wed, 13 Nov 2019 20:29:11 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2019/11/api-end-to-end-testing-with-docker-1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Jean-Paul Delimat</p>
<p>Testing is a pain in general. Some don't see the point. Some see it but think of it as an extra step slowing them down. Sometimes tests are there but take very long to run or are unstable. In this article you'll see how you can engineer tests for yourself with Docker. </p>
<p>We want fast, meaningful and reliable tests written and maintained with minimal effort. It means tests that are useful to you as a developer on a day-to-day basis. They should boost your productivity and improve the quality of your software. Having tests because everybody says  "you should have tests" is no good if it slows you down.</p>
<p>Let's see how to achieve this with not that much effort.</p>
<h2 id="heading-the-example-we-are-going-to-test">The example we are going to test</h2>
<p>In this article we are going to test an API built with Node/express and use chai/mocha for testing. I've chosen a JS'y stack because the code is super short and easy to read. The principles applied are valid for any tech stack. Keep reading even if JavaScript makes you sick.</p>
<p>The example will cover a simple set of CRUD endpoints for users. It's more than enough to grasp the concept and apply to the more complex business logic of your API.</p>
<p>We are going to use a pretty standard environment for the API:</p>
<ul>
<li>A Postgres database</li>
<li>A Redis cluster</li>
<li>Our API will use other external APIs to do its job</li>
</ul>
<p>Your API might need a different environment. The principles applied in this article will remain the same. You'll use different Docker base images to run whatever component you might need.</p>
<h2 id="heading-why-docker-and-in-fact-docker-compose">Why Docker? And in fact Docker Compose</h2>
<p>This section contains a lot of arguments in favour of using Docker for testing. You can skip it if you want to get to the technical part right away.</p>
<h2 id="heading-the-painful-alternatives">The painful alternatives</h2>
<p>To test your API in a close to production environment you have two choices. You can mock the environment at code level or run the tests on a real server with the database etc. installed. </p>
<p>Mocking everything at code level clutters the code and configuration of our API. It is also often not very representative of how the API will behave in production. Running the thing on a real server is infrastructure heavy. It is a lot of setup and maintenance, and it does not scale. With a shared database, you can run only one test run at a time to ensure runs do not interfere with each other.</p>
<p>Docker Compose allows us to get the best of both worlds. It creates "containerized" versions of all the external parts we use. It is mocking but on the outside of our code. Our API thinks it is in a real physical environment. Docker compose will also create an isolated network for all the containers for a given test run. This allows you to run several of them in parallel on your local computer or a CI host.</p>
<h2 id="heading-overkill">Overkill?</h2>
<p>You might wonder if it isn't overkill to perform end to end tests at all with Docker compose. What about just running unit tests instead?</p>
<p>For the last 10 years, large monolith applications have been split into smaller services (trending towards the buzzy "microservices"). A given API component relies on more external parts (infrastructure or other APIs). As services get smaller, integration with the infrastructure becomes a bigger part of the job. </p>
<p>You should keep a small gap between your production and your development environments. Otherwise problems will arise when going for production deploy. By definition these problems appear at the worst possible moment. They will lead to rushed fixes, drops in quality, and frustration for the team. Nobody wants that.</p>
<p>You might wonder if end to end tests with Docker Compose run longer than traditional unit tests. Not really. You'll see in the example below that we can easily keep the tests under 1 minute, and with a great benefit: the tests reflect the application behaviour in the real world. This is more valuable than knowing if your class somewhere in the middle of the app works OK or not. </p>
<p>Also, if you don't have any tests right now, starting from end to end gives you great benefits for little effort. You'll know all stacks of the application work together for the most common scenarios. That's already something! From there you can always refine a strategy to unit test critical parts of your application.</p>
<h2 id="heading-our-first-test">Our first test</h2>
<p>Let’s start with the easiest part: our API and the Postgres database. And let’s run a simple CRUD test. Once we have that framework in place, we can add more features both to our component and to the test.</p>
<p>Here is our minimal API with a GET/POST to create and list users:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">'express'</span>);
<span class="hljs-keyword">const</span> bodyParser = <span class="hljs-built_in">require</span>(<span class="hljs-string">'body-parser'</span>);
<span class="hljs-keyword">const</span> cors = <span class="hljs-built_in">require</span>(<span class="hljs-string">'cors'</span>);

<span class="hljs-keyword">const</span> config = <span class="hljs-built_in">require</span>(<span class="hljs-string">'./config'</span>);

<span class="hljs-keyword">const</span> db = <span class="hljs-built_in">require</span>(<span class="hljs-string">'knex'</span>)({
  <span class="hljs-attr">client</span>: <span class="hljs-string">'pg'</span>,
  <span class="hljs-attr">connection</span>: {
    <span class="hljs-attr">host</span> : config.db.host,
    <span class="hljs-attr">user</span> : config.db.user,
    <span class="hljs-attr">password</span> : config.db.password,
  },
});

<span class="hljs-keyword">const</span> app = express();

app.use(bodyParser.urlencoded({ <span class="hljs-attr">extended</span>: <span class="hljs-literal">false</span> }));
app.use(bodyParser.json());
app.use(cors());

app.route(<span class="hljs-string">'/api/users'</span>).post(<span class="hljs-keyword">async</span> (req, res, next) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> { email, firstname } = req.body;
    <span class="hljs-comment">// ... validate inputs here ...</span>
    <span class="hljs-keyword">const</span> userData = { email, firstname };

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> db(<span class="hljs-string">'users'</span>).returning(<span class="hljs-string">'id'</span>).insert(userData);
    <span class="hljs-keyword">const</span> id = result[<span class="hljs-number">0</span>];
    res.status(<span class="hljs-number">201</span>).send({ id, ...userData });
  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Error: Unable to create user: <span class="hljs-subst">${err.message}</span>. <span class="hljs-subst">${err.stack}</span>`</span>);
    <span class="hljs-keyword">return</span> next(err);
  }
});

app.route(<span class="hljs-string">'/api/users'</span>).get(<span class="hljs-function">(<span class="hljs-params">req, res, next</span>) =&gt;</span> {
  db(<span class="hljs-string">'users'</span>)
  .select(<span class="hljs-string">'id'</span>, <span class="hljs-string">'email'</span>, <span class="hljs-string">'firstname'</span>)
  .then(<span class="hljs-function"><span class="hljs-params">users</span> =&gt;</span> res.status(<span class="hljs-number">200</span>).send(users))
  .catch(<span class="hljs-function"><span class="hljs-params">err</span> =&gt;</span> {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Unable to fetch users: <span class="hljs-subst">${err.message}</span>. <span class="hljs-subst">${err.stack}</span>`</span>);
      <span class="hljs-keyword">return</span> next(err);
  });
});

<span class="hljs-keyword">try</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Starting web server..."</span>);

  <span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">8000</span>;
  app.listen(port, <span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server started on: <span class="hljs-subst">${port}</span>`</span>));
} <span class="hljs-keyword">catch</span>(error) {
  <span class="hljs-built_in">console</span>.error(error.stack);
}
</code></pre>
<p>Here are our tests written with chai. The tests create a new user and fetch it back. You can see that the tests are not coupled in any way with the code of our API. The <code>SERVER_URL</code> variable specifies the endpoint to test. It can be a local or a remote environment.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> chai = <span class="hljs-built_in">require</span>(<span class="hljs-string">"chai"</span>);
<span class="hljs-keyword">const</span> chaiHttp = <span class="hljs-built_in">require</span>(<span class="hljs-string">"chai-http"</span>);
<span class="hljs-keyword">const</span> should = chai.should();

<span class="hljs-keyword">const</span> SERVER_URL = process.env.APP_URL || <span class="hljs-string">"http://localhost:8000"</span>;

chai.use(chaiHttp);

<span class="hljs-keyword">const</span> TEST_USER = {
  <span class="hljs-attr">email</span>: <span class="hljs-string">"john@doe.com"</span>,
  <span class="hljs-attr">firstname</span>: <span class="hljs-string">"John"</span>
};

<span class="hljs-keyword">let</span> createdUserId;

describe(<span class="hljs-string">"Users"</span>, <span class="hljs-function">() =&gt;</span> {
  it(<span class="hljs-string">"should create a new user"</span>, <span class="hljs-function"><span class="hljs-params">done</span> =&gt;</span> {
    chai
      .request(SERVER_URL)
      .post(<span class="hljs-string">"/api/users"</span>)
      .send(TEST_USER)
      .end(<span class="hljs-function">(<span class="hljs-params">err, res</span>) =&gt;</span> {
        <span class="hljs-keyword">if</span> (err) <span class="hljs-keyword">return</span> done(err);
        res.should.have.status(<span class="hljs-number">201</span>);
        res.should.be.json;
        res.body.should.be.a(<span class="hljs-string">"object"</span>);
        res.body.should.have.property(<span class="hljs-string">"id"</span>);
        createdUserId = res.body.id;
        done();
      });
  });

  it(<span class="hljs-string">"should get the created user"</span>, <span class="hljs-function"><span class="hljs-params">done</span> =&gt;</span> {
    chai
      .request(SERVER_URL)
      .get(<span class="hljs-string">"/api/users"</span>)
      .end(<span class="hljs-function">(<span class="hljs-params">err, res</span>) =&gt;</span> {
        <span class="hljs-keyword">if</span> (err) <span class="hljs-keyword">return</span> done(err);
        res.should.have.status(<span class="hljs-number">200</span>);
        res.body.should.be.a(<span class="hljs-string">"array"</span>);

        <span class="hljs-keyword">const</span> user = res.body.pop();
        user.id.should.equal(createdUserId);
        user.email.should.equal(TEST_USER.email);
        user.firstname.should.equal(TEST_USER.firstname);
        done();
      });
  });
});
</code></pre>
<p>Good. Now to test our API let's define a Docker compose environment. A file called <code>docker-compose.yml</code> will describe the containers Docker needs to run.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.1'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">db:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">POSTGRES_USER:</span> <span class="hljs-string">john</span>
      <span class="hljs-attr">POSTGRES_PASSWORD:</span> <span class="hljs-string">mysecretpassword</span>
    <span class="hljs-attr">expose:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">5432</span>

  <span class="hljs-attr">myapp:</span>
    <span class="hljs-attr">build:</span> <span class="hljs-string">.</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">yarn</span> <span class="hljs-string">start</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">APP_DB_HOST:</span> <span class="hljs-string">db</span>
      <span class="hljs-attr">APP_DB_USER:</span> <span class="hljs-string">john</span>
      <span class="hljs-attr">APP_DB_PASSWORD:</span> <span class="hljs-string">mysecretpassword</span>
    <span class="hljs-attr">expose:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">8000</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">db</span>

  <span class="hljs-attr">myapp-tests:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">dockerize</span>
        <span class="hljs-string">-wait</span> <span class="hljs-string">tcp://db:5432</span> <span class="hljs-string">-wait</span> <span class="hljs-string">tcp://myapp:8000</span> <span class="hljs-string">-timeout</span> <span class="hljs-string">10s</span>
        <span class="hljs-string">bash</span> <span class="hljs-string">-c</span> <span class="hljs-string">"node db/init.js &amp;&amp; yarn test"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">APP_URL:</span> <span class="hljs-string">http://myapp:8000</span>
      <span class="hljs-attr">APP_DB_HOST:</span> <span class="hljs-string">db</span>
      <span class="hljs-attr">APP_DB_USER:</span> <span class="hljs-string">john</span>
      <span class="hljs-attr">APP_DB_PASSWORD:</span> <span class="hljs-string">mysecretpassword</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">db</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">myapp</span>
</code></pre>
<p>So what do we have here? There are three containers:</p>
<ul>
<li><strong>db</strong> spins up a fresh instance of PostgreSQL. We use the public Postgres image from Docker Hub. We set the database username and password. We tell Docker to expose port 5432, which the database listens on, so other containers can connect </li>
<li><strong>myapp</strong> is the container that will run our API. The <code>build</code> command tells Docker to actually build the container image from our source. The rest is like the db container: environment variables and ports</li>
<li><strong>myapp-tests</strong> is the container that will execute our tests. It will use the same image as myapp because the code will already be there, so there is no need to build it again. The command <code>node db/init.js &amp;&amp; yarn test</code> run in the container will initialize the database (create tables, etc.) and run the tests. We use dockerize to wait for all the required servers to be up and running. The <code>depends_on</code> option ensures that containers start in a certain order, but it does not ensure that the database inside the db container is actually ready to accept connections, nor that our API server is already up. </li>
</ul>
<p>The definition of the environment is about 20 lines of very easy-to-understand code. The only brainy part is the environment definition: user names, passwords and URLs must be consistent so the containers can actually work together.</p>
<p>One thing to notice is that Docker Compose sets the hostname of each container to its service name. So the database won't be available at <code>localhost:5432</code> but at <code>db:5432</code>. In the same way, our API will be served at <code>myapp:8000</code>. There is no localhost of any kind here. </p>
<p>This means that your API must support environment variables when it comes to environment definition. No hardcoded stuff. But that has nothing to do with Docker or this article. A configurable application is point 3 of the <a target="_blank" href="https://12factor.net/">12 factor app manifesto</a>, so you should be doing it already.</p>
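<p>The <code>config</code> module required by the server above can be a thin wrapper over environment variables. This is a sketch: the variable names match the <code>docker-compose.yml</code>, while the local-development defaults are hypothetical:</p>

```javascript
// Sketch of an environment-driven config, so the same image works on
// localhost and inside the Compose network without code changes.
function loadConfig(env = process.env) {
  return {
    db: {
      host: env.APP_DB_HOST || "localhost", // "db" inside Docker Compose
      user: env.APP_DB_USER || "postgres",  // hypothetical local default
      password: env.APP_DB_PASSWORD || "",
    },
  };
}
```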
<p>The very last thing we need to tell Docker is how to actually build the container <strong>myapp</strong>. We use a Dockerfile like below. The content is specific to your tech stack but the idea is to bundle your API into a runnable server. </p>
<p>The example below for our Node API installs Dockerize, installs the API dependencies and copies the code of the API inside the container (the server is written in raw JS so no need to compile it).</p>
<pre><code>FROM node AS base

# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0<span class="hljs-number">.6</span><span class="hljs-number">.0</span>
RUN wget https:<span class="hljs-comment">//github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \</span>
    &amp;&amp; tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    &amp;&amp; rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz

RUN mkdir -p ~/app

WORKDIR ~/app

COPY package.json .
COPY yarn.lock .

FROM base AS dependencies

RUN yarn

FROM dependencies AS runtime

COPY . .
</code></pre><p>Typically from the line <code>WORKDIR ~/app</code> and below you would run commands that would build your application.</p>
<p>And here is the command we use to run the tests:</p>
<pre><code>docker-compose up --build --abort-on-container-exit
</code></pre><p>This command tells Docker Compose to spin up the components defined in our <code>docker-compose.yml</code> file. The <code>--build</code> flag will trigger the build of the myapp container by executing the content of the <code>Dockerfile</code> above. The <code>--abort-on-container-exit</code> flag tells Docker Compose to shut down the environment as soon as one container exits. </p>
<p>That works well since the only component meant to exit is the test container <strong>myapp-tests</strong> after the tests are executed. As the cherry on top, the <code>docker-compose</code> command will exit with the same exit code as the container that triggered the exit. This means that we can check whether the tests succeeded or not from the command line. This is very useful for automated builds in a CI environment. </p>
<p>Isn't that the perfect test setup?</p>
<p>The full example is <a target="_blank" href="https://github.com/fire-ci/tuto-api-e2e-testing">here on GitHub</a>. You can clone the repository and run the docker compose command:</p>
<pre><code>docker-compose up --build --abort-on-container-exit
</code></pre><p>Of course you need Docker installed. Docker has the troublesome tendency of forcing you to sign up for an account just to download the thing. But you actually don't have to. Go to the release notes (<a target="_blank" href="https://docs.docker.com/docker-for-windows/release-notes/">link for Windows</a> and <a target="_blank" href="https://docs.docker.com/docker-for-mac/release-notes/">link for Mac</a>) and download not the latest version but the one right before. Those are direct download links.</p>
<p>The very first run of the tests will be longer than usual. This is because Docker will have to download the base images for your containers and cache a few things. The next runs will be much faster.</p>
<p>Logs from the run will look as below. You can see that Docker is cool enough to put logs from all the components on the same timeline. This is very handy when looking for errors.</p>
<pre><code>Creating tuto-api-e2e-testing_db_1    ... done
Creating tuto-api-e2e-testing_redis_1 ... done
Creating tuto-api-e2e-testing_myapp_1 ... done
Creating tuto-api-e2e-testing_myapp-tests_1 ... done
Attaching to tuto-api-e2e-testing_redis_1, tuto-api-e2e-testing_db_1, tuto-api-e2e-testing_myapp_1, tuto-api-e2e-testing_myapp-tests_1
db_1           | The files belonging to <span class="hljs-built_in">this</span> database system will be owned by user <span class="hljs-string">"postgres"</span>.
redis_1        | <span class="hljs-number">1</span>:M <span class="hljs-number">09</span> Nov <span class="hljs-number">2019</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">22.161</span> * Running mode=standalone, port=<span class="hljs-number">6379.</span>
myapp_1        | yarn run v1<span class="hljs-number">.19</span><span class="hljs-number">.0</span>
redis_1        | <span class="hljs-number">1</span>:M <span class="hljs-number">09</span> Nov <span class="hljs-number">2019</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">22.162</span> # WARNING: The TCP backlog setting <span class="hljs-keyword">of</span> <span class="hljs-number">511</span> cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value <span class="hljs-keyword">of</span> <span class="hljs-number">128.</span>
redis_1        | <span class="hljs-number">1</span>:M <span class="hljs-number">09</span> Nov <span class="hljs-number">2019</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">22.162</span> # Server initialized
db_1           | This user must also own the server process.
db_1           | 
db_1           | The database cluster will be initialized <span class="hljs-keyword">with</span> locale <span class="hljs-string">"en_US.utf8"</span>.
db_1           | The <span class="hljs-keyword">default</span> database encoding has accordingly been set to <span class="hljs-string">"UTF8"</span>.
db_1           | The <span class="hljs-keyword">default</span> text search configuration will be set to <span class="hljs-string">"english"</span>.
db_1           | 
db_1           | Data page checksums are disabled.
db_1           | 
db_1           | fixing permissions on existing directory /<span class="hljs-keyword">var</span>/lib/postgresql/data ... ok
db_1           | creating subdirectories ... ok
db_1           | selecting dynamic shared memory implementation ... posix
myapp-tests_1  | <span class="hljs-number">2019</span>/<span class="hljs-number">11</span>/<span class="hljs-number">09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">25</span> Waiting <span class="hljs-keyword">for</span>: tcp:<span class="hljs-comment">//db:5432</span>
myapp-tests_1  | <span class="hljs-number">2019</span>/<span class="hljs-number">11</span>/<span class="hljs-number">09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">25</span> Waiting <span class="hljs-keyword">for</span>: tcp:<span class="hljs-comment">//redis:6379</span>
myapp-tests_1  | <span class="hljs-number">2019</span>/<span class="hljs-number">11</span>/<span class="hljs-number">09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">25</span> Waiting <span class="hljs-keyword">for</span>: tcp:<span class="hljs-comment">//myapp:8000</span>
myapp_1        | $ node server.js
redis_1        | <span class="hljs-number">1</span>:M <span class="hljs-number">09</span> Nov <span class="hljs-number">2019</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">22.163</span> # WARNING you have Transparent Huge Pages (THP) support enabled <span class="hljs-keyword">in</span> your kernel. This will create latency and memory usage issues <span class="hljs-keyword">with</span> Redis. To fix <span class="hljs-built_in">this</span> issue run the command <span class="hljs-string">'echo never &gt; /sys/kernel/mm/transparent_hugepage/enabled'</span> <span class="hljs-keyword">as</span> root, and add it to your /etc/rc.local <span class="hljs-keyword">in</span> order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1           | selecting <span class="hljs-keyword">default</span> max_connections ... <span class="hljs-number">100</span>
myapp_1        | Starting web server...
myapp-tests_1  | <span class="hljs-number">2019</span>/<span class="hljs-number">11</span>/<span class="hljs-number">09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">25</span> Connected to tcp:<span class="hljs-comment">//myapp:8000</span>
myapp-tests_1  | <span class="hljs-number">2019</span>/<span class="hljs-number">11</span>/<span class="hljs-number">09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">25</span> Connected to tcp:<span class="hljs-comment">//db:5432</span>
redis_1        | <span class="hljs-number">1</span>:M <span class="hljs-number">09</span> Nov <span class="hljs-number">2019</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">22.164</span> * Ready to accept connections
myapp-tests_1  | <span class="hljs-number">2019</span>/<span class="hljs-number">11</span>/<span class="hljs-number">09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">25</span> Connected to tcp:<span class="hljs-comment">//redis:6379</span>
myapp_1        | Server started on: <span class="hljs-number">8000</span>
db_1           | selecting <span class="hljs-keyword">default</span> shared_buffers ... <span class="hljs-number">128</span>MB
db_1           | selecting <span class="hljs-keyword">default</span> time zone ... Etc/UTC
db_1           | creating configuration files ... ok
db_1           | running bootstrap script ... ok
db_1           | performing post-bootstrap initialization ... ok
db_1           | syncing data to disk ... ok
db_1           | 
db_1           | 
db_1           | Success. You can now start the database server using:
db_1           | 
db_1           |     pg_ctl -D /<span class="hljs-keyword">var</span>/lib/postgresql/data -l logfile start
db_1           | 
db_1           | initdb: warning: enabling <span class="hljs-string">"trust"</span> authentication <span class="hljs-keyword">for</span> local connections
db_1           | You can change <span class="hljs-built_in">this</span> by editing pg_hba.conf or using the option -A, or
db_1           | --auth-local and --auth-host, the next time you run initdb.
db_1           | waiting <span class="hljs-keyword">for</span> server to start...<span class="hljs-number">.2019</span><span class="hljs-number">-11</span><span class="hljs-number">-09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">24.328</span> UTC [<span class="hljs-number">41</span>] LOG:  starting PostgreSQL <span class="hljs-number">12.0</span> (Debian <span class="hljs-number">12.0</span><span class="hljs-number">-2.</span>pgdg100+<span class="hljs-number">1</span>) on x86_64-pc-linux-gnu, compiled by gcc (Debian <span class="hljs-number">8.3</span><span class="hljs-number">.0</span><span class="hljs-number">-6</span>) <span class="hljs-number">8.3</span><span class="hljs-number">.0</span>, <span class="hljs-number">64</span>-bit
db_1           | <span class="hljs-number">2019</span><span class="hljs-number">-11</span><span class="hljs-number">-09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">24.346</span> UTC [<span class="hljs-number">41</span>] LOG:  listening on Unix socket <span class="hljs-string">"/var/run/postgresql/.s.PGSQL.5432"</span>
db_1           | <span class="hljs-number">2019</span><span class="hljs-number">-11</span><span class="hljs-number">-09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">24.373</span> UTC [<span class="hljs-number">42</span>] LOG:  database system was shut down at <span class="hljs-number">2019</span><span class="hljs-number">-11</span><span class="hljs-number">-09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">23</span> UTC
db_1           | <span class="hljs-number">2019</span><span class="hljs-number">-11</span><span class="hljs-number">-09</span> <span class="hljs-number">21</span>:<span class="hljs-number">57</span>:<span class="hljs-number">24.383</span> UTC [<span class="hljs-number">41</span>] LOG:  database system is ready to accept connections
db_1           |  done
db_1           | server started
db_1           | CREATE DATABASE
db_1           | 
db_1           | 
db_1           | <span class="hljs-regexp">/usr/</span>local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d<span class="hljs-comment">/*
db_1           | 
db_1           | waiting for server to shut down....2019-11-09 21:57:24.907 UTC [41] LOG:  received fast shutdown request
db_1           | 2019-11-09 21:57:24.909 UTC [41] LOG:  aborting any active transactions
db_1           | 2019-11-09 21:57:24.914 UTC [41] LOG:  background worker "logical replication launcher" (PID 48) exited with exit code 1
db_1           | 2019-11-09 21:57:24.914 UTC [43] LOG:  shutting down
db_1           | 2019-11-09 21:57:24.930 UTC [41] LOG:  database system is shut down
db_1           |  done
db_1           | server stopped
db_1           | 
db_1           | PostgreSQL init process complete; ready for start up.
db_1           | 
db_1           | 2019-11-09 21:57:25.038 UTC [1] LOG:  starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1           | 2019-11-09 21:57:25.039 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1           | 2019-11-09 21:57:25.039 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1           | 2019-11-09 21:57:25.052 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1           | 2019-11-09 21:57:25.071 UTC [59] LOG:  database system was shut down at 2019-11-09 21:57:24 UTC
db_1           | 2019-11-09 21:57:25.077 UTC [1] LOG:  database system is ready to accept connections
myapp-tests_1  | Creating tables ...
myapp-tests_1  | Creating table 'users'
myapp-tests_1  | Tables created successfully
myapp-tests_1  | yarn run v1.19.0
myapp-tests_1  | $ mocha --timeout 10000 --bail
myapp-tests_1  | 
myapp-tests_1  | 
myapp-tests_1  |   Users
myapp-tests_1  | Mock server started on port: 8002
myapp-tests_1  |     ✓ should create a new user (151ms)
myapp-tests_1  |     ✓ should get the created user
myapp-tests_1  |     ✓ should not create user if mail is spammy
myapp-tests_1  |     ✓ should not create user if spammy mail API is down
myapp-tests_1  | 
myapp-tests_1  | 
myapp-tests_1  |   4 passing (234ms)
myapp-tests_1  | 
myapp-tests_1  | Done in 0.88s.
myapp-tests_1  | 2019/11/09 21:57:26 Command finished successfully.
tuto-api-e2e-testing_myapp-tests_1 exited with code 0</span>
</code></pre><p>We can see that <strong>db</strong> is the container that takes the longest to initialize. That makes sense: the tests only start once it's done. The total runtime on my laptop is 16 seconds, which is a lot compared to the 880ms spent actually executing the tests. In practice, tests that run in under 1 minute are gold because the feedback is almost immediate. The roughly 15 seconds of overhead is a buy-in that stays constant as you add more tests: you could add hundreds of tests and still keep execution time under 1 minute.</p>
<p>Voilà! We have our test framework up and running. In a real-world project, the next step would be to enhance the functional coverage of your API with more tests. Let's consider CRUD operations covered. It's time to add more elements to our test environment.</p>
<h2 id="heading-adding-a-redis-cluster">Adding a Redis cluster</h2>
<p>Let's add another element to our API environment to understand what it takes. Spoiler alert: it's not much.</p>
<p>Let's imagine that our API keeps user sessions in a Redis cluster. If you wonder why we would do that, imagine 100 instances of your API in production. Users hit one server or another based on round-robin load balancing, and every request needs to be authenticated. </p>
<p>Authentication requires user profile data to check for privileges and other application-specific business logic. One way to go is to make a round trip to the database every time the data is needed, but that is not very efficient. Using an in-memory database cluster makes the data available across all servers for the cost of a local variable read.</p>
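<p>To make the round-trip argument concrete, here is a minimal cache-aside read, the pattern the paragraph describes. This sketch is illustrative rather than the article's code: it runs synchronously with a plain <code>Map</code> standing in for the Redis client and a stub function standing in for the database query, whereas the real calls would be async.</p>

```javascript
// Cache-aside read, shown synchronously for clarity. In the real app
// `cache` is a Redis client and both calls are awaited.
const getUserProfile = (id, cache, loadFromDb) => {
  const cached = cache.get(id);
  if (cached) return JSON.parse(cached);   // hit: no database round trip

  const profile = loadFromDb(id);          // miss: one database query ...
  cache.set(id, JSON.stringify(profile));  // ... then populate the cache
  return profile;
};

// Demo with a Map standing in for Redis and a counter standing in for Postgres
const cache = new Map();
let dbCalls = 0;
const loadFromDb = (id) => {
  dbCalls += 1;
  return { id, email: "john@doe.com", firstname: "John" };
};

getUserProfile("42", cache, loadFromDb); // miss: hits the "database"
getUserProfile("42", cache, loadFromDb); // hit: served from the cache
console.log(dbCalls); // prints 1
```

<p>The second lookup never touches the database, which is exactly the saving the paragraph describes, multiplied by every authenticated request.</p>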
<p>This is how you enhance your Docker compose test environment with an additional service. Let’s add a Redis cluster from the official Docker image (I've only kept the new parts of the file):</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">db:</span>
    <span class="hljs-string">...</span>

  <span class="hljs-attr">redis:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">"redis:alpine"</span>
    <span class="hljs-attr">expose:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">6379</span>

  <span class="hljs-attr">myapp:</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">APP_REDIS_HOST:</span> <span class="hljs-string">redis</span>
      <span class="hljs-attr">APP_REDIS_PORT:</span> <span class="hljs-number">6379</span>
    <span class="hljs-string">...</span>
  <span class="hljs-attr">myapp-tests:</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">dockerize</span> <span class="hljs-string">...</span> <span class="hljs-string">-wait</span> <span class="hljs-string">tcp://redis:6379</span> <span class="hljs-string">...</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">APP_REDIS_HOST:</span> <span class="hljs-string">redis</span>
      <span class="hljs-attr">APP_REDIS_PORT:</span> <span class="hljs-number">6379</span>
      <span class="hljs-string">...</span>
    <span class="hljs-string">...</span>
</code></pre>
<p>You can see it's not much. We added a new container called <strong>redis</strong>, based on the official minimal <code>redis:alpine</code> image. We added the Redis host and port configuration to our API container, and we made the tests wait for it, like the other containers, before executing.</p>
<p>Let’s modify our application to actually use the Redis cluster:</p>
<pre><code><span class="hljs-keyword">const</span> redis = <span class="hljs-built_in">require</span>(<span class="hljs-string">'redis'</span>).createClient({
  <span class="hljs-attr">host</span>: config.redis.host,
  <span class="hljs-attr">port</span>: config.redis.port,
})

...

app.route(<span class="hljs-string">'/api/users'</span>).post(<span class="hljs-keyword">async</span> (req, res, next) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> { email, firstname } = req.body;
    <span class="hljs-comment">// ... validate inputs here ...</span>
    <span class="hljs-keyword">const</span> userData = { email, firstname };
    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> db(<span class="hljs-string">'users'</span>).returning(<span class="hljs-string">'id'</span>).insert(userData);
    <span class="hljs-keyword">const</span> id = result[<span class="hljs-number">0</span>];

    <span class="hljs-comment">// Once the user is created store the data in the Redis cluster</span>
    <span class="hljs-keyword">await</span> redis.set(id, <span class="hljs-built_in">JSON</span>.stringify(userData));

    res.status(<span class="hljs-number">201</span>).send({ id, ...userData });
  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Error: Unable to create user: <span class="hljs-subst">${err.message}</span>. <span class="hljs-subst">${err.stack}</span>`</span>);
    <span class="hljs-keyword">return</span> next(err);
  }
});
</code></pre><p>Let's now change our tests to check that the Redis cluster is populated with the right data. That's why the <strong>myapp-tests</strong> container also gets the Redis host and port configuration in <code>docker-compose.yml</code>.</p>
<pre><code>it(<span class="hljs-string">"should create a new user"</span>, <span class="hljs-function"><span class="hljs-params">done</span> =&gt;</span> {
  chai
    .request(SERVER_URL)
    .post(<span class="hljs-string">"/api/users"</span>)
    .send(TEST_USER)
    .end(<span class="hljs-function">(<span class="hljs-params">err, res</span>) =&gt;</span> {
      <span class="hljs-keyword">if</span> (err) <span class="hljs-keyword">throw</span> err;
      res.should.have.status(<span class="hljs-number">201</span>);
      res.should.be.json;
      res.body.should.be.a(<span class="hljs-string">"object"</span>);
      res.body.should.have.property(<span class="hljs-string">"id"</span>);
      res.body.should.have.property(<span class="hljs-string">"email"</span>);
      res.body.should.have.property(<span class="hljs-string">"firstname"</span>);
      res.body.id.should.not.be.null;
      res.body.email.should.equal(TEST_USER.email);
      res.body.firstname.should.equal(TEST_USER.firstname);
      createdUserId = res.body.id;

      redis.get(createdUserId, <span class="hljs-function">(<span class="hljs-params">err, cacheData</span>) =&gt;</span> {
        <span class="hljs-keyword">if</span> (err) <span class="hljs-keyword">throw</span> err;
        cacheData = <span class="hljs-built_in">JSON</span>.parse(cacheData);
        cacheData.should.have.property(<span class="hljs-string">"email"</span>);
        cacheData.should.have.property(<span class="hljs-string">"firstname"</span>);
        cacheData.email.should.equal(TEST_USER.email);
        cacheData.firstname.should.equal(TEST_USER.firstname);
        done();
      });
    });
});
</code></pre><p>See how easy this was. You can build a complex environment for your tests like you assemble Lego bricks.</p>
<p>We can see another benefit of this kind of containerized full-environment testing: the tests can actually look into the environment's components. Our tests can not only check that our API returns the proper response codes and data, they can also check that the data in the Redis cluster has the proper values. We could check the database content too.</p>
<h2 id="heading-adding-api-mocks">Adding API mocks</h2>
<p>A common pattern for API components is to call other API components.</p>
<p>Let's say our API needs to check for spammy user emails when creating a user. The check is done using a third party service:</p>
<pre><code><span class="hljs-keyword">const</span> validateUserEmail = <span class="hljs-keyword">async</span> (email) =&gt; {
  <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">`<span class="hljs-subst">${config.app.externalUrl}</span>/validate?email=<span class="hljs-subst">${email}</span>`</span>);
  <span class="hljs-keyword">if</span>(res.status !== <span class="hljs-number">200</span>) <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
  <span class="hljs-keyword">const</span> json = <span class="hljs-keyword">await</span> res.json();
  <span class="hljs-keyword">return</span> json.result === <span class="hljs-string">'valid'</span>;
}

app.route(<span class="hljs-string">'/api/users'</span>).post(<span class="hljs-keyword">async</span> (req, res, next) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> { email, firstname } = req.body;
    <span class="hljs-comment">// ... validate inputs here ...</span>
    <span class="hljs-keyword">const</span> userData = { email, firstname };

    <span class="hljs-comment">// We don't just create any user. Spammy emails should be rejected</span>
    <span class="hljs-keyword">const</span> isValidUser = <span class="hljs-keyword">await</span> validateUserEmail(email);
    <span class="hljs-keyword">if</span>(!isValidUser) {
      <span class="hljs-keyword">return</span> res.sendStatus(<span class="hljs-number">403</span>);
    }

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> db(<span class="hljs-string">'users'</span>).returning(<span class="hljs-string">'id'</span>).insert(userData);
    <span class="hljs-keyword">const</span> id = result[<span class="hljs-number">0</span>];
    <span class="hljs-keyword">await</span> redis.set(id, <span class="hljs-built_in">JSON</span>.stringify(userData));
    res.status(<span class="hljs-number">201</span>).send({ id, ...userData });
  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Error: Unable to create user: <span class="hljs-subst">${err.message}</span>. <span class="hljs-subst">${err.stack}</span>`</span>);
    <span class="hljs-keyword">return</span> next(err);
  }
});
</code></pre><p>Now we have a problem for testing: we can't create any users if the API that detects spammy emails is unavailable. And modifying our API to bypass this step in test mode would dangerously clutter the code.</p>
<p>Even if we could use the real third-party service, we don't want to. As a general rule, our tests should not depend on external infrastructure. First, because you will probably run your tests a lot as part of your CI process, and it's not that cool to consume another production API for this purpose. Second, the API might be temporarily down, failing your tests for the wrong reasons.</p>
<p>The right solution is to mock the external APIs in our tests. </p>
<p>No need for any fancy framework. We'll build a generic mock in vanilla JS in about 20 lines of code. This gives us the opportunity to control what the API returns to our component, and it lets us test error scenarios.</p>
<p>Now let’s enhance our tests.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
const bodyParser = require("body-parser");
const cors = require("cors");

...

const MOCK_SERVER_PORT = process.env.MOCK_SERVER_PORT || <span class="hljs-number">8002</span>;

<span class="hljs-comment">// Some object to encapsulate attributes of our mock server</span>
<span class="hljs-comment">// The mock stores all requests it receives in the `requests` property.</span>
<span class="hljs-keyword">const</span> mock = {
  <span class="hljs-attr">app</span>: express(),
  <span class="hljs-attr">server</span>: <span class="hljs-literal">null</span>,
  <span class="hljs-attr">requests</span>: [],
  <span class="hljs-attr">status</span>: <span class="hljs-number">404</span>,
  <span class="hljs-attr">responseBody</span>: {}
};

<span class="hljs-comment">// Define which response code and content the mock will be sending</span>
<span class="hljs-keyword">const</span> setupMock = <span class="hljs-function">(<span class="hljs-params">status, body</span>) =&gt;</span> {
  mock.status = status;
  mock.responseBody = body;
};

<span class="hljs-comment">// Start the mock server</span>
<span class="hljs-keyword">const</span> initMock = <span class="hljs-keyword">async</span> () =&gt; {
  mock.app.use(bodyParser.urlencoded({ <span class="hljs-attr">extended</span>: <span class="hljs-literal">false</span> }));
  mock.app.use(bodyParser.json());
  mock.app.use(cors());
  mock.app.get(<span class="hljs-string">"*"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
    mock.requests.push(req);
    res.status(mock.status).send(mock.responseBody);
  });

  mock.server = <span class="hljs-keyword">await</span> mock.app.listen(MOCK_SERVER_PORT);
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Mock server started on port: <span class="hljs-subst">${MOCK_SERVER_PORT}</span>`</span>);
};

<span class="hljs-comment">// Destroy the mock server</span>
<span class="hljs-keyword">const</span> teardownMock = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">if</span> (mock.server) {
    mock.server.close();
    <span class="hljs-keyword">delete</span> mock.server;
  }
};

describe(<span class="hljs-string">"Users"</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-comment">// Our mock is started before any test starts ...</span>
  before(<span class="hljs-keyword">async</span> () =&gt; <span class="hljs-keyword">await</span> initMock());

  <span class="hljs-comment">// ... killed after all the tests are executed ...</span>
  after(<span class="hljs-function">() =&gt;</span> {
    redis.quit();
    teardownMock();
  });

  <span class="hljs-comment">// ... and we reset the recorded requests between each test</span>
  beforeEach(<span class="hljs-function">() =&gt;</span> (mock.requests = []));

  it(<span class="hljs-string">"should create a new user"</span>, <span class="hljs-function"><span class="hljs-params">done</span> =&gt;</span> {
    <span class="hljs-comment">// The mock will tell us the email is valid in this test</span>
    setupMock(<span class="hljs-number">200</span>, { <span class="hljs-attr">result</span>: <span class="hljs-string">"valid"</span> });

    chai
      .request(SERVER_URL)
      .post(<span class="hljs-string">"/api/users"</span>)
      .send(TEST_USER)
      .end(<span class="hljs-function">(<span class="hljs-params">err, res</span>) =&gt;</span> {
        <span class="hljs-comment">// ... check response and redis as before</span>
        createdUserId = res.body.id;

        <span class="hljs-comment">// Verify that the API called the mocked service with the right parameters</span>
        mock.requests.length.should.equal(<span class="hljs-number">1</span>);
        mock.requests[<span class="hljs-number">0</span>].path.should.equal(<span class="hljs-string">"/api/validate"</span>);
        mock.requests[<span class="hljs-number">0</span>].query.should.have.property(<span class="hljs-string">"email"</span>);
        mock.requests[<span class="hljs-number">0</span>].query.email.should.equal(TEST_USER.email);
        done();
      });
  });
});
</code></pre>
<p>The tests now check that the external API has been hit with the proper data during the call to our API. </p>
<p>We can also add other tests checking how our API behaves based on the external API response codes:</p>
<pre><code class="lang-javascript">describe(<span class="hljs-string">"Users"</span>, <span class="hljs-function">() =&gt;</span> {
  it(<span class="hljs-string">"should not create user if mail is spammy"</span>, <span class="hljs-function"><span class="hljs-params">done</span> =&gt;</span> {
    <span class="hljs-comment">// The mock will tell us the email is NOT valid in this test ...</span>
    setupMock(<span class="hljs-number">200</span>, { <span class="hljs-attr">result</span>: <span class="hljs-string">"invalid"</span> });

    chai
      .request(SERVER_URL)
      .post(<span class="hljs-string">"/api/users"</span>)
      .send(TEST_USER)
      .end(<span class="hljs-function">(<span class="hljs-params">err, res</span>) =&gt;</span> {
        <span class="hljs-comment">// ... so the API should fail to create the user</span>
        <span class="hljs-comment">// We could test that the DB and Redis are empty here</span>
        res.should.have.status(<span class="hljs-number">403</span>);
        done();
      });
  });

  it(<span class="hljs-string">"should not create user if spammy mail API is down"</span>, <span class="hljs-function"><span class="hljs-params">done</span> =&gt;</span> {
    <span class="hljs-comment">// The mock will tell us the email checking service</span>
    <span class="hljs-comment">//  is down for this test ...</span>
    setupMock(<span class="hljs-number">500</span>, {});

    chai
      .request(SERVER_URL)
      .post(<span class="hljs-string">"/api/users"</span>)
      .send(TEST_USER)
      .end(<span class="hljs-function">(<span class="hljs-params">err, res</span>) =&gt;</span> {
        <span class="hljs-comment">// ... in that case also a user should not be created</span>
        res.should.have.status(<span class="hljs-number">403</span>);
        done();
      });
  });
});
</code></pre>
<p>How you handle errors from third-party APIs in your application is of course up to you, but you get the point.</p>
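<p>As a sketch of the two usual policies (the function and its parameters are illustrative, not part of the article's code): "fail closed" rejects the user whenever the checking service errors, which is the behavior the tests above expect, while "fail open" lets the user through when the service is down.</p>

```javascript
// Decide whether a user may be created given the spam-check response.
// `status` and `body` mimic the third-party API's answer; `failOpen`
// selects the policy when the service is unavailable (status >= 500).
const isEmailAllowed = (status, body, failOpen = false) => {
  if (status >= 500) return failOpen;  // service down: the policy decides
  if (status !== 200) return false;    // unexpected answer: reject
  return body.result === "valid";      // normal case: trust the verdict
};

console.log(isEmailAllowed(200, { result: "valid" }));   // true
console.log(isEmailAllowed(200, { result: "invalid" })); // false
console.log(isEmailAllowed(500, {}));                    // false (fail closed)
console.log(isEmailAllowed(500, {}, true));              // true  (fail open)
```

<p>Fail closed is the safer default for a spam check; fail open would make sense for a non-critical enrichment call where availability matters more than strictness.</p>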
<p>To run these tests, we need to tell the <strong>myapp</strong> container the base URL of the third-party service:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">myapp:</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">APP_EXTERNAL_URL:</span> <span class="hljs-string">http://myapp-tests:8002/api</span>
    <span class="hljs-string">...</span>

  <span class="hljs-attr">myapp-tests:</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MOCK_SERVER_PORT:</span> <span class="hljs-number">8002</span>
    <span class="hljs-string">...</span>
</code></pre>
<h2 id="heading-conclusion-and-a-few-other-thoughts">Conclusion and a few other thoughts</h2>
<p>Hopefully this article gave you a taste of what Docker compose can do for you when it comes to API testing. The full example is <a target="_blank" href="https://github.com/fire-ci/tuto-api-e2e-testing">here on GitHub</a>.</p>
<p>Using Docker Compose makes tests run fast in an environment close to production. It requires no adaptations to your component code: the only requirement is supporting configuration driven by environment variables.</p>
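<p>That environment-variable-driven configuration can be as simple as a single module with defaults. The structure below is an illustrative sketch: the <code>APP_REDIS_*</code> and <code>APP_EXTERNAL_URL</code> names mirror the compose file shown earlier, while the <code>APP_DB_*</code> names and the default values are assumptions.</p>

```javascript
// config.js: every tunable comes from the environment with a default, so
// docker-compose can point the app at the db/redis/mock containers without
// any code change. The APP_DB_* names and defaults here are illustrative.
const config = {
  db: {
    host: process.env.APP_DB_HOST || "localhost",
    port: Number(process.env.APP_DB_PORT || 5432),
  },
  redis: {
    host: process.env.APP_REDIS_HOST || "localhost",
    port: Number(process.env.APP_REDIS_PORT || 6379),
  },
  app: {
    // In production this points at the real third-party service;
    // in the compose file it points at the mock server.
    externalUrl: process.env.APP_EXTERNAL_URL || "http://localhost:8002/api",
  },
};

module.exports = config;
```

<p>The application code only ever reads <code>config</code>; the compose file flips the values per environment.</p>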
<p>The component logic in this example is very simple but the principles apply to any API. Your tests will just be longer or more complex. They also apply to any tech stack that can be put inside a container (that's all of them). And once you are there you are one step away from deploying your containers to production if need be.</p>
<p>If you have no tests right now, this is how I recommend you start: end-to-end testing with Docker Compose. It is so simple you could have your first test running in a few hours. Feel free to <a target="_blank" href="https://twitter.com/jpdelimat">reach out to me</a> if you have questions or need advice. I'd be happy to help.</p>
<p>I hope you enjoyed this article and will start testing your APIs with Docker Compose. Once you have the tests ready you can run them out of the box on our continuous integration platform <a target="_blank" href="https://fire.ci">Fire CI</a>.</p>
<h2 id="heading-one-last-idea-to-succeed-with-automated-testing">One last idea to succeed with automated testing</h2>
<p>When it comes to maintaining large test suites, the most important feature is that tests are easy to read and understand. This is key to motivating your team to keep the tests up to date. Complex test frameworks are unlikely to be used properly in the long run. </p>
<p>Regardless of your API's stack, you might want to consider using chai/mocha to write tests for it. It might seem unusual to have different stacks for runtime code and test code, but if it gets the job done ... As you can see from the examples in this article, testing a REST API with chai/mocha is as simple as it gets. The learning curve is close to zero. </p>
<p>So if you have no tests at all and have a REST API to test written in Java, Python, RoR, .NET or whatever other stack, you might consider giving chai/mocha a try.</p>
<p>If you wonder how to get started with continuous integration at all, I have written a broader guide about it: <a target="_blank" href="https://fire.ci/blog/how-to-get-started-with-continuous-integration/">How to get started with Continuous Integration</a></p>
<p>Originally published on the <a target="_blank" href="https://fire.ci/blog/">Fire CI Blog</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to split the deployment of your front end and back end with the help of Consumer Driven Contract Testing ]]>
                </title>
                <description>
                    <![CDATA[ By Mario Fernandez Consumer driven contract testing is a great way to improve the reliability of interconnected systems. Integration testing becomes way easier and more self contained. It opens the door for independent deployments, and leads to faste... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/split-frontend-backend-deployment-with-cdcs/</link>
                <guid isPermaLink="false">66d460f27df3a1f32ee7f893</guid>
                
                    <category>
                        <![CDATA[ CircleCI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ contract-testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Jest ]]>
                    </category>
                
                    <category>
                        <![CDATA[ pact ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 12 Nov 2019 08:06:54 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2019/11/simple-cdc-1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Mario Fernandez</p>
<p><a target="_blank" href="https://www.thoughtworks.com/de/radar/techniques/consumer-driven-contract-testing">Consumer driven contract testing</a> is a great way to improve the reliability of interconnected systems. Integration testing becomes way easier and more self contained. It opens the door for independent deployments, and leads to faster iterations and more granular feedback. Unlike your insurance, it doesn't have any fine print. This article is about setting it up in a delivery pipeline, in the context of doing <a target="_blank" href="https://continuousdelivery.com/">continuous delivery</a>.</p>
<p>I want to show how <em>Contract Tests</em> help split the deployment of the front end and the back end of a small application. I have a React client and a Spring Boot backend written in Kotlin.</p>
<h2 id="heading-what-is-a-contract-test">What is a Contract Test?</h2>
<p>I am not talking about <a target="_blank" href="https://en.wikipedia.org/wiki/Smart_contract">smart contracts</a>. There is no blockchain whatsoever in this article. Sorry for that (Contract Tests for Smart Contracts sounds like a conference talk that the world badly needs, though!).</p>
<p>In a nutshell, a Contract Test is a specification of the interactions between a consumer and a provider. In our case, the communication happens using REST. The consumer defines the actions sent to the provider and the responses that will be returned. In our case, the frontend is the consumer and the backend is the provider. A <em>contract</em> is generated. Both sides test against this contract.</p>
<p>It is not really about any particular technology. There are a bunch of different frameworks, but some simple scripts could do the trick.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2019/11/simple-cdc.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h3 id="heading-why-have-it-as-part-of-the-delivery-pipeline">Why have it as part of the delivery pipeline?</h3>
<p>First of all, running these tests continuously ensures that they keep working at all times. The big benefit, however, is that we can separate the deployment of the front end and back end. If both sides are fulfilling the contract, it is likely that they work together correctly. Thus, we can consider avoiding expensive integrated tests. They tend to work pretty badly anyways.</p>
<h2 id="heading-setting-up-some-contracts">Setting up some contracts</h2>
<p>There are two sides to set up, consumer and provider. The tests will run in the pipelines that build the front end and the back end, respectively. We are going to use the <a target="_blank" href="https://docs.pact.io/">Pact framework</a> for our examples, which is the tool that I am most familiar with. Because of that, I tend to use pact and contract interchangeably. Our pipelines are written for <a target="_blank" href="https://circleci.com/">CircleCI</a>, but they should be fairly easy to port to other CI tools.</p>
<h3 id="heading-the-consumer-side">The consumer side</h3>
<p>As mentioned, the consumer leads the creation of the contract. Having the client drive this might sound counterintuitive. Often, APIs are created before the clients that will use them. Flipping it around is a nice habit to get into. It forces you to really think in terms of what the client will actually do, instead of overengineering a super generic API that will never need most of its features. You should give it a try!</p>
<p>The pact is defined through interactions specified in unit tests. We specify what we expect to be sent to the back end, and then use the client code to trigger requests. Why? We can compare expectations against actual requests, and fail the tests if they don't match. </p>
<p>Let's have a look at an example. We are using <a target="_blank" href="https://jestjs.io/">Jest</a> to run the tests. We'll start with some initialization code:</p>
<pre><code class="lang-typescript">import path from 'path'
import { Pact } from '@pact-foundation/pact'

// a single mock provider instance, shared by the setup/verify/finalize hooks
const provider = new Pact({
  port: 8990,
  log: path.resolve(process.cwd(), 'logs', 'pact.log'),
  dir: path.resolve(process.cwd(), 'pacts'),
  spec: 2,
  consumer: 'frontend',
  provider: 'backend'
})

export default provider
</code></pre>
<p>Then we have the code for an actual test. The test consists of two parts. First we define the expected interaction. This is not very different from mocking an HTTP library with something like <a target="_blank" href="https://github.com/ctimmerm/axios-mock-adapter">axios-mock-adapter</a>. It specifies the request that we will send (URL, headers, body and so forth), and the response that we will get.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> interaction: InteractionObject = {
  state: <span class="hljs-string">'i have a list of recipes'</span>,
  uponReceiving: <span class="hljs-string">'a request to get recipes'</span>,
  withRequest: {
    method: <span class="hljs-string">'GET'</span>,
    path: <span class="hljs-string">'/rest/recipes'</span>,
    headers: {
      Accept: <span class="hljs-string">'application/json'</span>,
      <span class="hljs-string">'X-Requested-With'</span>: <span class="hljs-string">'XMLHttpRequest'</span>
    }
  },
  willRespondWith: {
    status: <span class="hljs-number">200</span>,
    headers: { <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json; charset=utf-8'</span> },
    body: [
      {
        id: <span class="hljs-number">1</span>,
        name: <span class="hljs-string">'pasta carbonara'</span>,
        servings: <span class="hljs-number">4</span>,
        duration: <span class="hljs-number">35</span>
      }
    ]
  }
}
</code></pre>
<p>Then we have the test itself, where we call the actual client code that will trigger the request. I like to encapsulate these requests in services that convert the raw response into the domain model that will be used by the rest of the app. Through some assertions, we make sure that the data that we are delivering from the service is exactly what we expect.</p>
<pre><code class="lang-typescript">it('works', async () =&gt; {
  // register the expected interaction with the mock provider
  await provider.addInteraction(interaction)

  const response = await recipeList()

  expect(response.data.length).toBeGreaterThan(0)
  expect(response.data[0]).toEqual({
    id: 1,
    name: 'pasta carbonara',
    servings: 4,
    duration: 35
  })
})
</code></pre>
<p>Note that even if <code>recipeList</code> is properly typed with <code>TypeScript</code>, that won't help us here. Types disappear at runtime, so if the method returns an invalid <code>Recipe</code> we won't notice unless we explicitly test for it.</p>
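<p>Since the compiler can't catch this, an explicit runtime check helps. Here is a minimal sketch of such a type guard (the <code>Recipe</code> shape is taken from the interaction above; the guard itself is not part of the original code):</p>
<pre><code class="lang-typescript">interface Recipe {
  id: number
  name: string
  servings: number
  duration: number
}

// Runtime check that a raw value actually has the Recipe shape.
// TypeScript types are erased at runtime, so without a check like
// this an invalid payload would flow through unnoticed.
function isRecipe(value: any): value is Recipe {
  if (typeof value !== 'object' || value === null) {
    return false
  }
  if (typeof value.id !== 'number') return false
  if (typeof value.name !== 'string') return false
  if (typeof value.servings !== 'number') return false
  if (typeof value.duration !== 'number') return false
  return true
}
</code></pre>
<p>A test could then assert <code>response.data.every(isRecipe)</code> on top of the value comparisons.</p>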
<p>Finally we need to define some extra hooks that will ensure that the interactions are verified. If interactions are missing, or don't match what was specified, the test will fail here. After that, all that remains is writing the pact to disk.</p>
<pre><code class="lang-typescript">beforeAll(<span class="hljs-function">() =&gt;</span> provider.setup())
afterEach(<span class="hljs-function">() =&gt;</span> provider.verify())
afterAll(<span class="hljs-function">() =&gt;</span> provider.finalize())
</code></pre>
<p>In the end, the pact gets generated as a JSON file, reflecting all the interactions that we have defined throughout all our tests.</p>
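<p>The generated file looks roughly like this (a trimmed sketch; the exact layout depends on the Pact specification version you configured):</p>
<pre><code class="lang-json">{
  "consumer": { "name": "frontend" },
  "provider": { "name": "backend" },
  "interactions": [
    {
      "description": "a request to get recipes",
      "providerState": "i have a list of recipes",
      "request": { "method": "GET", "path": "/rest/recipes" },
      "response": {
        "status": 200,
        "body": [
          { "id": 1, "name": "pasta carbonara", "servings": 4, "duration": 35 }
        ]
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}
</code></pre>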
<h4 id="heading-flexible-matching">Flexible matching</h4>
<p>Our pact thus far is specifying the exact values that it will get from the backend. That won't be maintainable in the long run. Certain things are inherently harder to pin down to exact values (for example, dates). </p>
<p>A pact that breaks constantly will lead to frustration. We are going through this whole process to make our life easier, not harder. We'll avoid that by using <a target="_blank" href="https://docs.pact.io/getting_started/matching">matchers</a>. We can be more flexible and define what things should look like, without having to provide exact values. Let's rewrite our previous body:</p>
<pre><code class="lang-typescript">willRespondWith: {
  status: <span class="hljs-number">200</span>,
  headers: { <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json; charset=utf-8'</span> },
  body: Matchers.eachLike({
    id: Matchers.somethingLike(<span class="hljs-number">1</span>),
    name: Matchers.somethingLike(<span class="hljs-string">'pasta carbonara'</span>),
    servings: Matchers.somethingLike(<span class="hljs-number">4</span>),
    duration: Matchers.somethingLike(<span class="hljs-number">35</span>)
  })
}
</code></pre>
<p>You can be more specific. You can set the expected length of a list, use regexes and a bunch of other things.</p>
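<p>As a fragment (assuming the same <code>Matchers</code> import from pact-js used above), a stricter version could look like this:</p>
<pre><code class="lang-typescript">// at least two recipes, names constrained by a regex
body: Matchers.eachLike(
  {
    id: Matchers.somethingLike(1),
    name: Matchers.term({
      generate: 'pasta carbonara',
      matcher: '^[a-z ]+$'
    }),
    servings: Matchers.somethingLike(4),
    duration: Matchers.somethingLike(35)
  },
  { min: 2 }
)
</code></pre>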
<h4 id="heading-integrating-it-in-the-pipeline">Integrating it in the pipeline</h4>
<p>The pact tests rely on an external process, and having multiple tests hitting it can lead to non-deterministic behavior. One solution is to run all the tests sequentially:</p>
<pre><code class="lang-bash">npm <span class="hljs-built_in">test</span> --coverage --runInBand
</code></pre>
<p>If you want to run the pact tests independently, you can define a separate task for them:</p>
<pre><code class="lang-json"><span class="hljs-string">"scripts"</span>: {
  <span class="hljs-attr">"pact"</span>: <span class="hljs-string">"jest --transform '{\"^.+\\\\.ts$\": \"ts-jest\"}' --testRegex '.test.pact.ts$' --runInBand"</span>
}
</code></pre>
<p>Which will become an extra step in our pipeline:</p>
<pre><code class="lang-yaml">jobs:
  check:
    working_directory: ~/app

    docker:
      - image: circleci/node:12.4

    steps:
      - checkout
      - run: npm install
      - run: npm run linter:js
      - run: npm test -- --coverage --runInBand
      - run: npm run pact
</code></pre>
<h4 id="heading-storing-the-pact">Storing the pact</h4>
<p>Our pact is a JSON file that we commit directly to the front end repository after running the tests locally. I've found that this works well enough; making the pipeline itself commit the pact to <code>git</code> does not seem to be necessary. We'll get to extending the pact in a moment.</p>
<h3 id="heading-the-provider-side">The provider side</h3>
<p>At this point we have a working pact that is being verified by the consumer. But that is only half of the equation. Without verification from the provider side, we haven't accomplished anything. Maybe even less than that, because we might get a false sense of security!</p>
<p>To do this, we are going to start the back end as a development server and run the pact against it. There is a Gradle plugin that takes care of this. We need to configure it and provide a way of finding the pact (which is stored in the front end repository). You can fetch the pact from the internet, or from a local file, whichever is more convenient.</p>
<pre><code class="lang-groovy">buildscript {
    dependencies {
        classpath 'au.com.dius:pact-jvm-provider-gradle_2.12:3.6.14'
    }
}

apply plugin: 'au.com.dius.pact'

pact {
    serviceProviders {
        api {
            port = 4003

            hasPactWith('frontend') {
                pactSource = url('https://path-to-the-pact/frontend-backend.json')
                stateChangeUrl = url("http://localhost:$port/pact")
            }
        }
    }
}
</code></pre>
<p>What remains is starting the server and running the pact against it, which we do with a small script:</p>
<pre><code class="lang-bash">goal_test-<span class="hljs-function"><span class="hljs-title">pact</span></span>() {
  <span class="hljs-built_in">trap</span> <span class="hljs-string">"stop_server"</span> EXIT

  goal_build
  start_server

  ./gradlew pactVerify
}

<span class="hljs-function"><span class="hljs-title">start_server</span></span>() {
  artifact=app.jar
  port=4003

  <span class="hljs-keyword">if</span> lsof -i -P -n | grep LISTEN | grep :<span class="hljs-variable">$port</span> &gt; /dev/null ; <span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Port[<span class="hljs-variable">${port}</span>] is busy. Server won't be able to start"</span>
    <span class="hljs-built_in">exit</span> 1
  <span class="hljs-keyword">fi</span>

  nohup java -Dspring.profiles.active=pact -jar ./build/libs/<span class="hljs-variable">${artifact}</span> &gt;/dev/null 2&gt;&amp;1 &amp;

  <span class="hljs-comment"># Wait for server to answer requests</span>
  until curl --output /dev/null --silent --fail http://localhost:<span class="hljs-variable">$port</span>/actuator/health; <span class="hljs-keyword">do</span>
    <span class="hljs-built_in">printf</span> <span class="hljs-string">'.'</span>
    sleep 3
  <span class="hljs-keyword">done</span>
}

<span class="hljs-function"><span class="hljs-title">stop_server</span></span>() {
  pkill -f <span class="hljs-string">'java -Dspring.profiles.active=pact -jar'</span>
}
</code></pre>
<h4 id="heading-fixtures">Fixtures</h4>
<p>If you are running your back end in development mode, it will have to deliver some data so that the contract is fulfilled. Even if we are not using exact matching, we have to return something, otherwise the pact can't be verified.</p>
<p>You can use mocks, but I've found that avoiding them as much as possible leads to more trustworthy results. Your app is closer to what will happen in production. So what other options are there? Remember that when we were defining interactions, we had a <code>state</code>. That's the cue for the provider. One way of using it is the <code>stateChangeUrl</code>. We can provide a special controller to initialize our back end based on the <code>state</code>:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">private</span> <span class="hljs-keyword">const</span> <span class="hljs-keyword">val</span> PATH = <span class="hljs-string">"/pact"</span>

<span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Pact</span></span>(<span class="hljs-keyword">val</span> state: String)

<span class="hljs-meta">@RestController</span>
<span class="hljs-meta">@RequestMapping(PATH, consumes = [MediaType.APPLICATION_JSON_VALUE])</span>
<span class="hljs-meta">@ConditionalOnExpression(<span class="hljs-meta-string">"\${pact.enabled:true}"</span>)</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">PactController</span></span>(<span class="hljs-keyword">val</span> repository: RecipeRepository) {
    <span class="hljs-meta">@PostMapping</span>
    <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">setup</span><span class="hljs-params">(<span class="hljs-meta">@RequestBody</span> body: <span class="hljs-type">Pact</span>)</span></span>: ResponseEntity&lt;Map&lt;String,String&gt;&gt; {
        <span class="hljs-keyword">when</span>(body.state) {
            <span class="hljs-string">"i have a list of recipes"</span> -&gt; initialRecipes()
            <span class="hljs-keyword">else</span> -&gt; doNothing()
        }

        <span class="hljs-keyword">return</span> ResponseEntity.ok(mapOf())
    }
}
</code></pre>
<p>Note that this controller is only active for a specific profile, and won't exist outside of it.</p>
<h4 id="heading-integrating-it-in-the-pipeline-1">Integrating it in the pipeline</h4>
<p>As with the consumer, we run the verification as part of our pipeline:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">2</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>

    <span class="hljs-attr">working_directory:</span> <span class="hljs-string">~/app</span>

    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">circleci/openjdk:8-jdk</span>

    <span class="hljs-attr">steps:</span>

      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">./go</span> <span class="hljs-string">linter-kt</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">./go</span> <span class="hljs-string">test-unit</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">./go</span> <span class="hljs-string">test-pact</span>
</code></pre>
<p>There is a slight difference, though. Our contract gets generated by the consumer. That means that a change in the front end could lead to a pact that does not verify properly anymore, even though no code was changed in the back end. So ideally, a change in the pact should trigger the back end pipeline as well. I haven't found a way to represent this elegantly in <em>CircleCI</em>, unlike say <a target="_blank" href="https://concourse-ci.org/">ConcourseCI</a>.</p>
<h2 id="heading-how-the-contract-influences-the-relationship-between-front-end-and-back-end">How the contract influences the relationship between front end and back end</h2>
<p>It's nice that we got this set up. <a target="_blank" href="https://en.wiktionary.org/wiki/never_change_a_running_system">Never touch a running system</a>, right? Well, we might! After all, quick change is why we invest in all this tooling. How would you introduce a change that requires extending the API?</p>
<ol>
<li>We start with the client. We want to define what the client will get that is not there yet. As we learned, we do that through a test in the front end that defines the expectation for the new route, or the new fields. That will create a new version of the pact.</li>
<li>Note that at this point the back end <em>does not</em> fulfill the pact: a new deployment of the back end will fail, and the <em>existing</em> back end does not fulfill it either. That means the change you introduce has to be backwards compatible, and the front end should not rely on it yet.</li>
<li>Now it's time to fulfill the new pact from the back end side. If this takes a long time, you will block your deployment process, which is not good. Consider doing smaller increments in that case. Anyways, you've got to implement the new functionality. The pact test will verify that your change is what's actually expected.</li>
<li>Now that the back end is providing the new functionality, you can freely integrate it in your front end.</li>
</ol>
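<p>As a sketch of the first step, the consumer could extend the expected response with a new field. Both the <code>rating</code> field and the interaction name below are hypothetical, purely for illustration:</p>
<pre><code class="lang-typescript">// Hypothetical extension of the earlier interaction: the client now
// also expects a rating for each recipe. Until the back end provides
// it, provider verification fails, signalling that step 3 is pending.
const extendedInteraction = {
  state: 'i have a list of recipes',
  uponReceiving: 'a request to get recipes with ratings',
  withRequest: {
    method: 'GET',
    path: '/rest/recipes'
  },
  willRespondWith: {
    status: 200,
    body: [
      { id: 1, name: 'pasta carbonara', servings: 4, duration: 35, rating: 5 }
    ]
  }
}
</code></pre>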
<p>This flow can get a bit awkward in the beginning. It is really important to work with the smallest quantum of functionality. You don't want to block your deployment process.</p>
<h2 id="heading-next-steps">Next steps</h2>
<p>For the integration between your own front end and back end I have found this setup to be sufficient in practice. However, as complexity grows, versioning will become important. You'll want to help multiple teams collaborate more easily. For that, we can use a <a target="_blank" href="https://docs.pact.io/pact_broker">broker</a>. This is a lot harder to implement, so you should ask yourself if you really need it. Don't fix problems that you don't have yet.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>To summarize, this is the setup we arrived at:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2019/11/pipelines-full.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Think about all the time you have spent writing tests to check that your back end is sending the right data. That is a lot more convenient to do with a contract. Moreover, releasing the front end and back end independently means being faster and shipping smaller pieces of functionality. It might feel scary at first, but you will realize that you are actually much more aware of what's going out.</p>
<p>Once you have adopted this for one service, there is no reason not to do it for all of them. I really don't miss running costly end-to-end test suites just to verify that my back end is working. Here is the code that I used in the examples for the <a target="_blank" href="https://github.com/sirech/cookery2-frontend">front end</a> and the <a target="_blank" href="https://github.com/sirech/cookery2-backend">back end</a>. It is a full running (albeit small) application. Good luck with your contracts!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to set up a lightweight, tool-agnostic CI/CD flow with GitHub Actions ]]>
                </title>
                <description>
<![CDATA[ Agnostic tooling is the clever notion that you should be able to run your code in various environments. With many continuous integration and continuous delivery (CI/CD) apps available, agnostic tooling gives developers a big advantage: portability... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/a-lightweight-tool-agnostic-ci-cd-flow-with-github-actions/</link>
                <guid isPermaLink="false">66bd8f04de1f3ee5a6a8f4c5</guid>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ tools ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Victoria Drake ]]>
                </dc:creator>
                <pubDate>Tue, 29 Oct 2019 12:13:14 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2019/10/cover-3.png" medium="image" />
                <content:encoded>
<![CDATA[ <p>Agnostic tooling is the clever notion that you should be able to run your code in various environments. With many continuous integration and continuous delivery (CI/CD) apps available, agnostic tooling gives developers a big advantage: portability.</p>
<p>Of course, having your CI/CD work <em>everywhere</em> is a tall order. Popular <a target="_blank" href="https://github.com/marketplace/category/continuous-integration">CI apps for GitHub repositories</a> alone use a multitude of configuration languages spanning <a target="_blank" href="https://groovy-lang.org/syntax.html">Groovy</a>, <a target="_blank" href="https://yaml.org/">YAML</a>, <a target="_blank" href="https://github.com/toml-lang/toml">TOML</a>, <a target="_blank" href="https://json.org/">JSON</a>, and more… all with differing syntax, of course. Porting workflows from one tool to another is more than a one-cup-of-coffee process.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2019/10/cover-2.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The introduction of <a target="_blank" href="https://github.com/features/actions">GitHub Actions</a> has the potential to add yet another tool to the mix; or, for the right set up, greatly simplify a CI/CD workflow.</p>
<p>Prior to this article, I accomplished my CD flow with several lashed-together apps. I used AWS Lambda to trigger site builds on a schedule. I had Netlify build on push triggers, as well as run image optimization, and then push my site to the public Pages repository. I used Travis CI in the public repository to test the HTML. All this  worked in conjunction with GitHub Pages, which actually hosts the site.</p>
<p>I’m now using the GitHub Actions beta to accomplish all the same tasks, with one <a target="_blank" href="https://victoria.dev/blog/a-portable-makefile-for-continuous-delivery-with-hugo-and-github-pages/">portable Makefile</a> of build instructions, and without any other CI/CD apps.</p>
<h2 id="heading-appreciating-the-shell">Appreciating the shell</h2>
<p>What do most CI/CD tools have in common? They run your workflow  instructions in a shell environment! This is wonderful, because that means that most CI/CD tools can do anything that you can do in a  terminal… and you can do pretty much <em>anything</em> in a terminal.</p>
<p>Especially for a contained use case like building my static site with a generator like Hugo, running it all in a shell is a no-brainer. To  tell the magic box what to do, we just need to write instructions.</p>
<p>While a shell script is certainly the most portable option, I use the still-very-portable <a target="_blank" href="https://en.wikipedia.org/wiki/Make_(software)">Make</a> to write my process instructions. This provides me with some advantages over simple shell scripting, like the use of variables and <a target="_blank" href="https://en.wikipedia.org/wiki/Make_(software)#Macros">macros</a>, and the modularity of <a target="_blank" href="https://en.wikipedia.org/wiki/Makefile#Rules">rules</a>.</p>
<p>I got into the <a target="_blank" href="https://victoria.dev/blog/a-portable-makefile-for-continuous-delivery-with-hugo-and-github-pages/">nitty-gritty of my Makefile in my last post</a>. Let’s look at how to get GitHub Actions to run it.</p>
<h2 id="heading-using-a-makefile-with-github-actions">Using a Makefile with GitHub Actions</h2>
<p>To our point on portability, my magic Makefile is stored right in the  repository root. Since it’s included with the code, I can run the Makefile locally on any system where I can clone the repository, provided I set the environment variables. Using GitHub Actions as my CI/CD tool is as straightforward as making Make go worky-worky.</p>
<p>I found the <a target="_blank" href="https://help.github.com/en/articles/workflow-syntax-for-github-actions">GitHub Actions workflow syntax guide</a> to be pretty straightforward, though also lengthy on options. Here’s the necessary set up for getting the Makefile to run.</p>
<p>The workflow file at <code>.github/workflows/make-master.yml</code> contains the following:</p>
<pre><code>name: make-master

<span class="hljs-attr">on</span>:
  push:
    branches:
      - master
  <span class="hljs-attr">schedule</span>:
    - cron: <span class="hljs-string">'20 13 * * *'</span>

<span class="hljs-attr">jobs</span>:
  build:
    runs-on: ubuntu-latest
    <span class="hljs-attr">steps</span>:
      - uses: actions/checkout@master
        <span class="hljs-attr">with</span>:
          fetch-depth: <span class="hljs-number">1</span>
      - name: Run Makefile
        <span class="hljs-attr">env</span>:
          TOKEN: ${{ secrets.TOKEN }}
        <span class="hljs-attr">run</span>: make all
</code></pre><p>I’ll explain the components that make this work.</p>
<h2 id="heading-triggering-the-workflow">Triggering the workflow</h2>
<p>Actions support multiple <a target="_blank" href="https://help.github.com/en/articles/events-that-trigger-workflows">triggers for a workflow</a>. Using the <code>on</code> syntax, I’ve defined two triggers for mine: a <a target="_blank" href="https://help.github.com/en/articles/workflow-syntax-for-github-actions#onpushpull_requestbranchestags">push event</a> to the <code>master</code> branch only, and a <a target="_blank" href="https://help.github.com/en/articles/events-that-trigger-workflows#scheduled-events-schedule">scheduled</a> <code>cron</code> job.</p>
<p>Once the <code>make-master.yml</code> file is in your repository, either of your triggers will cause Actions to run your Makefile. To see how  the last run went, you can also <a target="_blank" href="https://help.github.com/en/articles/configuring-a-workflow#adding-a-workflow-status-badge-to-your-repository">add a fun badge</a> to the README.</p>
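<p>Such a badge is a single line of Markdown in the README. A sketch with placeholder values (substitute your own OWNER and REPO; the workflow name matches the <code>name</code> field in the workflow file):</p>
<pre><code class="lang-markdown">![make-master](https://github.com/OWNER/REPO/workflows/make-master/badge.svg)
</code></pre>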
<h3 id="heading-one-hacky-thing">One hacky thing</h3>
<p>Because the Makefile runs on every push to <code>master</code>, I sometimes would get errors when the site build had no changes. When Git, via <a target="_blank" href="https://victoria.dev/blog/a-portable-makefile-for-continuous-delivery-with-hugo-and-github-pages/">my Makefile</a>, attempted to commit to the Pages repository, no changes were detected and the commit would fail annoyingly:  </p>
<pre><code>nothing to commit, working tree clean
On branch master
Your branch is up to date <span class="hljs-keyword">with</span> <span class="hljs-string">'origin/master'</span>.
nothing to commit, working tree clean
<span class="hljs-attr">Makefile</span>:<span class="hljs-number">62</span>: recipe <span class="hljs-keyword">for</span> target <span class="hljs-string">'deploy'</span> failed
<span class="hljs-attr">make</span>: *** [deploy] <span class="hljs-built_in">Error</span> <span class="hljs-number">1</span>
##[error]Process completed <span class="hljs-keyword">with</span> exit code <span class="hljs-number">2.</span>
</code></pre><p>I came across some solutions that proposed using <code>diff</code> to check if a commit should be made, but this may not work for <a target="_blank" href="https://github.com/benmatselby/hugo-deploy-gh-pages/issues/4">reasons</a>. As a workaround, I simply added the <a target="_blank" href="https://gohugo.io/functions/format/#use-local-and-utc">current UTC time</a> to my index page so that every build would contain a change to be committed.</p>
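<p>In a Hugo template, that stamp can be a one-liner. A sketch (where exactly it lands in your layout is up to you):</p>
<pre><code>{{/* stamp each build with the current UTC time, so every build has a change to commit */}}
{{ now.UTC.Format "2006-01-02T15:04:05" }}
</code></pre>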
<h2 id="heading-environment-and-variables">Environment and variables</h2>
<p>You can define the <a target="_blank" href="https://help.github.com/en/github/automating-your-workflow-with-github-actions/virtual-environments-for-github-hosted-runners#supported-runners-and-hardware-resources">virtual environment</a> for your workflow to run in using the <code>runs-on</code> syntax. The obvious choice for me is Ubuntu. Using <code>ubuntu-latest</code> gets me the most updated version, whatever that happens to be when you're reading this.</p>
<p>GitHub sets some <a target="_blank" href="https://help.github.com/en/github/automating-your-workflow-with-github-actions/using-environment-variables#default-environment-variables">default environment variables</a> for workflows. The <a target="_blank" href="https://github.com/actions/checkout"><code>actions/checkout</code> action</a> with <code>fetch-depth: 1</code> creates a copy of just the most recent commit of your repository in the directory given by the <code>GITHUB_WORKSPACE</code> variable. This allows the workflow to access the Makefile at <code>GITHUB_WORKSPACE/Makefile</code>. Without the checkout action, the Makefile won't be found, and you get an error that looks like this:</p>
<pre><code>make: *** No rule to make target <span class="hljs-string">'all'</span>.  Stop.
Running Makefile
##[error]Process completed <span class="hljs-keyword">with</span> exit code <span class="hljs-number">2.</span>
</code></pre><p>While there is a <a target="_blank" href="https://help.github.com/en/github/automating-your-workflow-with-github-actions/authenticating-with-the-github_token">default <code>GITHUB_TOKEN</code> secret</a>,  this is not the one I used. The default is only locally scoped to the  current repository. To be able to push to my separate GitHub Pages  repository, I created a <a target="_blank" href="https://github.com/settings/tokens">personal access token</a> scoped to <code>public_repo</code> and pass it in as the <code>secrets.TOKEN</code> encrypted variable. For a step-by-step, see <a target="_blank" href="https://help.github.com/en/github/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets">Creating and using encrypted secrets</a>.</p>
<h2 id="heading-portable-tooling">Portable tooling</h2>
<p>The nice thing about using a simple Makefile to define the bulk of my  CI/CD process is that it’s completely portable. I can run a Makefile  anywhere I have access to an environment, which is most CI/CD apps,  virtual instances, and, of course, on my local machine.</p>
<p>One of the reasons I like GitHub Actions is that getting my Makefile to run was pretty straightforward. I think the syntax is well done - easy to read, and intuitive when it comes to finding an option you’re looking for. For someone already using GitHub Pages, Actions provides a pretty seamless CD experience; and if that should ever change, I can run my Makefile elsewhere. ¯\_(ツ)_/¯</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ A portable Makefile for continuous delivery with Hugo and GitHub Pages ]]>
                </title>
                <description>
                    <![CDATA[ Fun fact: I first launched my GitHub Pages site 1,018 days ago. Since then, we’ve grown together. From early cringe-worthy commit messages, through eighty-six versions of Hugo, and up until last week, a less-than-streamlined multi-app continuous inte... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/a-portable-makefile-for-continuous-delivery-with-hugo-and-github-pages/</link>
                <guid isPermaLink="false">66bd8f07de1f3ee5a6a8f4c7</guid>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Productivity ]]>
                    </category>
                
                    <category>
                        <![CDATA[ website development, ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Victoria Drake ]]>
                </dc:creator>
                <pubDate>Mon, 21 Oct 2019 17:09:00 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2019/10/cover-1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p><img src="https://www.freecodecamp.org/news/content/images/2019/10/cover.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Fun fact: I first launched <a target="_blank" href="https://victoria.dev">my GitHub Pages site</a> 1,018 days ago.</p>
<p>Since then, we’ve grown together. From early cringe-worthy commit messages, through eighty-six versions of <a target="_blank" href="https://gohugo.io/">Hugo</a>, and up until last week, a less-than-streamlined multi-app continuous integration and deployment (CI/CD) workflow.</p>
<p>If you know me at all, you know I love to automate things. I’ve been  using a combination of AWS Lambda, Netlify, and Travis CI to  automatically build and publish this site. My workflow for the task  includes:</p>
<ul>
<li>Build with <a target="_blank" href="https://gohugo.io/">Hugo</a> on push to master, and on a schedule (Netlify and Lambda);</li>
<li>Optimize and resize images (Netlify);</li>
<li>Test with <a target="_blank" href="https://github.com/gjtorikian/html-proofer">HTMLProofer</a> (Travis CI); and</li>
<li>Deploy to my <a target="_blank" href="https://victoria.dev/blog/two-ways-to-deploy-a-public-github-pages-site-from-a-private-hugo-repository/">separate, public, GitHub Pages repository</a> (Netlify).</li>
</ul>
<p>Thanks to the introduction of GitHub Actions, I’m able to do all the above with just one portable <a target="_blank" href="https://en.wikipedia.org/wiki/Makefile">Makefile</a>.</p>
<p>Next week I’ll cover my Actions setup; today, I’ll take you through the nitty-gritty of my Makefile so you can write your own.</p>
<h2 id="heading-makefile-portability">Makefile portability</h2>
<p><a target="_blank" href="https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html">POSIX-standard-flavour Make</a> runs on every Unix-like system out there. <a target="_blank" href="https://en.wikipedia.org/wiki/Make_(software)#Derivatives">Make derivatives</a>, such as <a target="_blank" href="https://www.gnu.org/software/make/">GNU Make</a> and several flavours of BSD Make also run on Unix-like systems, though  their particular use requires installing the respective program. To  write a truly portable Makefile, mine follows the POSIX standard. (For a  more thorough summation of POSIX-compatible Makefiles, I found this  article helpful: <a target="_blank" href="https://nullprogram.com/blog/2017/08/20/">A Tutorial on Portable Makefiles</a>.) I run Ubuntu, so I’ve tested the portability aspect using the BSD Make programs <code>bmake</code>, <code>pmake</code>, and <code>fmake</code>.  Compatibility with non-Unix-like systems is a little more complicated,  since shell commands differ. With derivatives such as Nmake, it’s better  to write a separate Makefile with appropriate Windows commands.</p>
<p>While much of my particular use case could be achieved with shell  scripting, I find Make offers some worthwhile advantages. I enjoy the  ease of using variables and <a target="_blank" href="https://en.wikipedia.org/wiki/Make_(software)#Macros">macros</a>, and the modularity of <a target="_blank" href="https://en.wikipedia.org/wiki/Makefile#Rules">rules</a> when it comes to organizing my steps.</p>
<p>The writing of rules mostly comes down to shell commands, which is  the main reason Makefiles are as portable as they are. The best part is  that you can do pretty much <em>anything</em> in a terminal, and certainly handle all the workflow steps listed above.</p>
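<p>To make that concrete, here's a minimal, hedged sketch of a POSIX Makefile showing how a variable, a macro, and phony rules fit together. The targets and the <code>GREET</code> macro are invented for the example, not taken from my real Makefile:</p>
<pre><code class="lang-makefile">.POSIX:
DESTDIR=public

# A macro: defined once, reusable by any rule that needs it
GREET = echo "Building into $(DESTDIR)"

.PHONY: all
all: prepare build

.PHONY: prepare
prepare:
    mkdir -p $(DESTDIR)

.PHONY: build
build:
    $(GREET)
</code></pre>
<p>Each rule's recipe is just shell commands, which is what makes the approach portable.</p>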
<h2 id="heading-my-continuous-deployment-makefile">My continuous deployment Makefile</h2>
<p>Here’s the portable Makefile that handles my workflow. Yes, I put emojis in there. I’m a monster.</p>
<pre><code class="lang-makefile"><span class="hljs-section">.POSIX:</span>
DESTDIR=public
HUGO_VERSION=0.58.3

OPTIMIZE = find <span class="hljs-variable">$(DESTDIR)</span> -not -path <span class="hljs-string">"*/static/*"</span> \( -name '*.png' -o -name '*.jpg' -o -name '*.jpeg' \) -print0 | \
xargs -0 -P8 -n2 mogrify -strip -thumbnail '1000&gt;'

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: all</span>
<span class="hljs-section">all: get_repository clean get build test deploy</span>

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: get_repository</span>
<span class="hljs-section">get_repository:</span>
    @echo <span class="hljs-string">"? Getting Pages repository"</span>
    git clone https://github.com/victoriadrake/victoriadrake.github.io.git <span class="hljs-variable">$(DESTDIR)</span>

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: clean</span>
<span class="hljs-section">clean:</span>
    @echo <span class="hljs-string">"? Cleaning old build"</span>
    cd <span class="hljs-variable">$(DESTDIR)</span> &amp;&amp; rm -rf *

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: get</span>
<span class="hljs-section">get:</span>
    @echo <span class="hljs-string">"❓ Checking for hugo"</span>
    @if ! [ -x <span class="hljs-string">"$$(command -v hugo)"</span> ]; then\
        echo <span class="hljs-string">"? Getting Hugo"</span>;\
        wget -q -P tmp/ https://github.com/gohugoio/hugo/releases/download/v<span class="hljs-variable">$(HUGO_VERSION)</span>/hugo_extended_<span class="hljs-variable">$(HUGO_VERSION)</span>_Linux-64bit.tar.gz;\
        tar xf tmp/hugo_extended_<span class="hljs-variable">$(HUGO_VERSION)</span>_Linux-64bit.tar.gz -C tmp/;\
        sudo mv -f tmp/hugo /usr/bin/;\
        rm -rf tmp/;\
        hugo version;\
    fi

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: build</span>
<span class="hljs-section">build:</span>
    @echo <span class="hljs-string">"? Generating site"</span>
    hugo --gc --minify -d <span class="hljs-variable">$(DESTDIR)</span>

    @echo <span class="hljs-string">"? Optimizing images"</span>
    <span class="hljs-variable">$(OPTIMIZE)</span>

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: test</span>
<span class="hljs-section">test:</span>
    @echo <span class="hljs-string">"? Testing HTML"</span>
    docker run -v <span class="hljs-variable">$(GITHUB_WORKSPACE)</span>/<span class="hljs-variable">$(DESTDIR)</span>/:/mnt 18fgsa/html-proofer mnt --disable-external

<span class="hljs-meta"><span class="hljs-meta-keyword">.PHONY</span>: deploy</span>
<span class="hljs-section">deploy:</span>
    @echo <span class="hljs-string">"? Preparing commit"</span>
    @cd <span class="hljs-variable">$(DESTDIR)</span> \
    &amp;&amp; git config user.email <span class="hljs-string">"hello@victoria.dev"</span> \
    &amp;&amp; git config user.name <span class="hljs-string">"Victoria via GitHub Actions"</span> \
    &amp;&amp; git add . \
    &amp;&amp; git status \
    &amp;&amp; git commit -m <span class="hljs-string">"? CD bot is helping"</span> \
    &amp;&amp; git push -f -q https://<span class="hljs-variable">$(TOKEN)</span>@github.com/victoriadrake/victoriadrake.github.io.git master
    @echo <span class="hljs-string">"? Site is deployed!"</span>
</code></pre>
<p>Sequentially, this workflow:</p>
<ol>
<li>Clones the public Pages repository;</li>
<li>Cleans (deletes) the previous build files;</li>
<li>Downloads and installs the specified version of Hugo, if Hugo is not already present;</li>
<li>Builds the site;</li>
<li>Optimizes images;</li>
<li>Tests the built site with HTMLProofer, and</li>
<li>Prepares a new commit and pushes to the public Pages repository.</li>
</ol>
<p>If you’re familiar with the command line, most of this may look familiar. Here are a couple of bits that might warrant a little explanation.</p>
<h3 id="heading-checking-if-a-program-is-already-installed">Checking if a program is already installed</h3>
<p>I think this bit is pretty tidy:</p>
<pre><code class="lang-sh"><span class="hljs-keyword">if</span> ! [ -x <span class="hljs-string">"$<span class="hljs-subst">$(command -v hugo)</span>"</span> ]; <span class="hljs-keyword">then</span>\
...
<span class="hljs-keyword">fi</span>
</code></pre>
<p>I use a negated <code>if</code> conditional in conjunction with <code>command -v</code> to check if an executable (<code>-x</code>) called <code>hugo</code> exists. If one is not present, the script gets the specified version of Hugo and installs it. <a target="_blank" href="https://stackoverflow.com/a/677212">This Stack Overflow answer</a> has a nice summation of why <code>command -v</code> is a more portable choice than <code>which</code>.</p>
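<p>As a standalone sketch, the same check can be wrapped in a small helper function for reuse. The name <code>have_cmd</code> is mine for the example, not from the Makefile:</p>
<pre><code class="lang-sh">#!/bin/sh
# have_cmd NAME: succeed (exit 0) if NAME resolves to an executable on PATH.
# command -v is POSIX-specified, making it more portable than which.
have_cmd() {
    [ -x "$(command -v "$1")" ]
}

if ! have_cmd hugo; then
    echo "hugo is not installed"
fi
</code></pre>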
<h3 id="heading-image-optimization">Image optimization</h3>
<p>My Makefile uses <code>mogrify</code> to batch resize and compress  images in particular folders. It finds them automatically using the file  extension, and only modifies images that are larger than the target  size of 1000px in any dimension. I wrote more about the <a target="_blank" href="https://victoria.dev/blog/how-to-quickly-batch-resize-compress-and-convert-images-with-a-bash-one-liner/">batch-processing one-liner in this post</a>.</p>
<p>There are a few different ways to achieve this same task, one of which, theoretically, is to take advantage of Make’s <a target="_blank" href="https://en.wikipedia.org/wiki/Make_(software)#Suffix_rules">suffix rules</a> to run commands only on image files. I find the shell script to be more readable.</p>
<h3 id="heading-using-dockerized-htmlproofer">Using Dockerized HTMLProofer</h3>
<p>HTMLProofer is installed with <code>gem</code>, and uses Ruby and <a target="_blank" href="https://nokogiri.org/tutorials/ensuring_well_formed_markup.html">Nokogiri</a>, which adds up to a lot of installation time for a CI workflow. Thankfully, <a target="_blank" href="https://github.com/18F">18F</a> has a <a target="_blank" href="https://github.com/18F/html-proofer-docker">Dockerized version</a> that is much faster to implement. Its usage requires starting the container with the built site directory <a target="_blank" href="https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume">mounted as a data volume</a>, which is easily achieved by appending to the <code>docker run</code> command.</p>
<pre><code class="lang-sh">docker run -v /absolute/path/to/site/:/mounted-site 18fgsa/html-proofer /mounted-site
</code></pre>
<p>In my Makefile, I specify the absolute site path using the <a target="_blank" href="https://help.github.com/en/articles/virtual-environments-for-github-actions#environment-variables">default environment variable</a> <code>GITHUB_WORKSPACE</code>. I’ll dive into this and other GitHub Actions features in the next post.</p>
<p>In the meantime, happy Making!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Speed is all you need: how we set up continuous delivery ]]>
                </title>
                <description>
<![CDATA[ By Jože Rožanec. At Qlector, we are committed to developing and delivering high quality software and we take int... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/speed-is-all-you-need-how-we-set-up-continuous-delivery-e4d8010cb1c5/</link>
                <guid isPermaLink="false">66c35f4e258ebfc3dc8f1f9e</guid>
                
                    <category>
                        <![CDATA[ agile ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Productivity ]]>
                    </category>
                
                    <category>
                        <![CDATA[ technology ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Thu, 18 Apr 2019 17:58:53 +0000</pubDate>
                <media:content url="https://cdn-media-1.freecodecamp.org/images/1*-VJitDDm0wCJMYolyZwwwQ.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Jože Rožanec</p>
<p>At Qlector, we are committed to developing and delivering high quality software and we take into account the best engineering practices as listed in 12 factor apps.</p>
<p>In the following post, we describe how we introduced Continuous Delivery. By doing so, we’ve reduced waste and the…</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to make an iOS on-demand build system with Jenkins  and Fastlane ]]>
                </title>
                <description>
                    <![CDATA[ By Agam Mahajan This article is about creating iOS builds through Jenkins BOT, remotely, without the need of a developer. Before starting, I want to say that this is my first article. So feel free to leave a comment if something can be improved :) W... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-make-an-ios-on-demand-build-system-with-jenkins-and-fastlane-8eb1e02c73d1/</link>
                <guid isPermaLink="false">66c35351ddebf8cbc124cb9d</guid>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ iOS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ General Programming ]]>
                    </category>
                
                    <category>
                        <![CDATA[ technology ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 23 Oct 2018 01:11:09 +0000</pubDate>
                <media:content url="https://cdn-media-1.freecodecamp.org/images/1*RmcSmwPhUn8ljLiiwYxK0A.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Agam Mahajan</p>
<p>This article is about creating iOS builds through a Jenkins bot, remotely, without needing a developer.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*RmcSmwPhUn8ljLiiwYxK0A.png" alt="Image" width="690" height="710" loading="lazy"></p>
<p>Before starting, I want to say that this is my first article. So feel free to leave a comment if something can be improved :)</p>
<h4 id="heading-why-is-this-a-good-idea"><strong>Why is this a good idea?</strong></h4>
<p>When a developer completes a feature, it gets QA tested before being pushed to production. So a build has to be shared with the QA team with some test configurations.</p>
<p>Xcode (the IDE) takes a significant amount of time to compile and generate this build. This means that anyone who needs the build would have to install the IDE, clone the repository, create a signing identity and certificate, and then create the build themselves, or depend on the developer to create one for them.</p>
<p>During the build creation process, the IDE is unusable. This severely impacts the productivity of the developer. In my company, the average build time of an .ipa is around 20 minutes, and on average, a developer makes 2–3 builds daily.<br>This means about 5 working hours a week get wasted.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*GFEYrijn6zJapmjaN2V4dA.jpeg" alt="Image" width="620" height="465" loading="lazy">
<em>How much more time will it take to build?</em></p>
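<p>That 5-hour estimate checks out if we take the upper end of 3 builds a day:</p>
<pre><code class="lang-sh"># 3 builds/day x 20 min/build x 5 workdays = 300 minutes = 5 hours a week
echo $(( 3 * 20 * 5 / 60 ))
# prints 5
</code></pre>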
<p>But what if there was an automated system which could generate the builds on its own? This would free the developers from this responsibility. It would also make it possible for anyone to get a build easily.</p>
<p>Jenkins is one of the solutions to our problem.</p>
<p>Making builds easily available to testers and developers ensures that people are able to test features faster and ship to production more easily. This improves the productivity of development teams. It also improves the quality of products pushed to production.</p>
<h3 id="heading-lets-get-started-now"><strong>Let’s get started now.</strong></h3>
<h4 id="heading-prerequisites"><strong>Prerequisites</strong></h4>
<p>You will require:</p>
<ul>
<li>A macOS machine (a Mac is needed, since iOS builds require Xcode)</li>
<li>10 GB of drive space (for Jenkins)</li>
<li>Java 8 installed (either a JRE or Java Development Kit (JDK) is fine)<br><a target="_blank" href="http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html">http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html</a></li>
</ul>
<p><strong>Additional Plugins to be installed</strong></p>
<ul>
<li>homebrew</li>
<li>wget</li>
<li>RVM Plugin<br><a target="_blank" href="http://usabilityetc.com/articles/ruby-on-mac-os-x-with-rvm/">Installation guide</a><br><a target="_blank" href="https://rvm.io/rvm/security">https://rvm.io/rvm/security</a></li>
</ul>
<p>Make one branch with a file in it named <code>Jenkinsfile</code>, containing this sample code:<br><code>node {</code><br><code>    sh 'echo HelloWorld'</code><br><code>}</code><br>Let's name the branch <strong>jenkins-integration</strong>. We'll come back to this later.</p>
<ul>
<li>Install Xcode on your machine from the App Store</li>
<li>Install Fastlane on your machine. Jenkins will internally use fastlane commands to generate builds.</li>
</ul>
<p>Now let’s go through it step by step.</p>
<h3 id="heading-step-1-install-jenkins-on-your-machine"><strong>Step 1. Install Jenkins on your machine</strong></h3>
<p>You can install it on a MacBook or a Mac mini. A Mac mini is preferred, as it can be kept running at all times.</p>
<p>Download Jenkins -&amp;g<a target="_blank" href="https://jenkins.io/">t; https://jenkins.</a>io/</p>
<p>Run <strong>java -jar jenkins.war --httpPort=8080</strong> in the command line. If you’re getting an error in the terminal, try a different port (for example, 9090) as sometimes some ports are not available.</p>
<p>Browse to <a target="_blank" href="http://localhost:8080">http://localhost:8080</a> and follow the instructions to complete the installation.</p>
<p>Then add admin credentials and don’t forget them (as I did :P). Later you can go to <strong>Jenkins &gt; Manage Jenkins &gt; Manage Users</strong> and make changes if needed.</p>
<h3 id="heading-step-2-creating-your-first-pipeline">Step 2. Creating Your first Pipeline</h3>
<p>Create a new job and choose <strong>Pipeline Project</strong>.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*cN7MqM8LaClcPg29kVGq_Q.png" alt="Image" width="800" height="433" loading="lazy"></p>
<p>To check out your project, under the <strong>Pipeline</strong> section, in <strong>Definition</strong>, choose <strong>Pipeline script from SCM</strong>, and in <strong>SCM</strong> choose <strong>Git</strong>.</p>
<p>Then add your repo URL, and add the credentials if it's a private repo. In <strong>Branches to build</strong>, add <code>*/jenkins-integration</code>, the branch which we created earlier.</p>
<p>Make sure <strong>Script Path</strong> is <strong>Jenkinsfile</strong>, the file we created in our new branch. All the scripts will be written in this Jenkinsfile.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*l6wpHeIHZZ5ZuoTjcdJoLA.png" alt="Image" width="800" height="713" loading="lazy"></p>
<p>Click on Save and Jenkins will automatically scan your repo with the mentioned branch and will run the Jenkinsfile script.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*v7QhFuQlGH9YhjEruS52nw.png" alt="Image" width="800" height="444" loading="lazy"></p>
<p>Now we are ready to configure our Jenkinsfile to create builds.</p>
<h3 id="heading-step-3-add-parameters-to-the-job">Step 3. Add parameters to the Job</h3>
<p>User input is required for:</p>
<ul>
<li>branch</li>
<li>environment (test or prod)</li>
</ul>
<p>For that we need to configure our project to take input parameters for a job.</p>
<p>Go to the <strong>Configure</strong> section and check <strong>This project is parameterised</strong>.<br>Then select <strong>Add Parameter</strong> and add the parameters accordingly.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*EfmwPbDPa-YwWAZ-rEU2cw.png" alt="Image" width="800" height="586" loading="lazy"></p>
<p>When you click on save, you will see a new section on the left side -&gt; <strong>Build with Parameters</strong>. This will be the user interface for making builds.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*msne-k6C4ksZdj8NpbGdrA.png" alt="Image" width="800" height="293" loading="lazy"></p>
<p>These params will be used in our Jenkins script.</p>
<h3 id="heading-step-3-configure-jenkins-script">Step 4. Configure the Jenkins Script</h3>
<p>We'll create multiple steps in our Jenkinsfile, each having one responsibility, and Jenkins will render a nice UI for them when the job is built.</p>
<p>Go to your Jenkinsfile and replace the script with the following:</p>
<p>First, check out the branch through the parameter which we added earlier. Add your repo and GitHub token.</p>
<p>Now, the GitHub token should not be visible to others. To do this, go to <strong>Manage Jenkins -&gt; Configure System -&gt; Global properties</strong> and add <strong>githubToken</strong> as an environment variable.</p>
<p>Then invoke the script to change the environment.</p>
<p>Next, invoke fastlane to clean (remove derived data, clean, delete .dsym files etc).</p>
<p>If code signing is required, do that next using <strong>ad-hoc</strong>. You can use <strong>development</strong> or <strong>app store</strong> based on your needs.</p>
<p>Next, create builds using the <strong>gym</strong> command in fastlane.</p>
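<p>Putting the steps above together, a scripted-pipeline Jenkinsfile might be sketched roughly like this. This is an illustrative outline under my own assumptions: the repo URL, the <code>set_environment.sh</code> script, and the exact fastlane invocations are placeholders, not the precise commands from this setup:</p>
<pre><code class="lang-groovy">node {
    stage('Checkout') {
        // BRANCH comes from the job's build parameters
        git branch: params.BRANCH, url: 'https://github.com/your-org/your-ios-app.git'
    }
    stage('Set environment') {
        // ENVIRONMENT is the test/prod parameter added earlier
        sh "./scripts/set_environment.sh ${params.ENVIRONMENT}"
    }
    stage('Clean') {
        // Remove derived data before building
        sh 'fastlane run clear_derived_data'
    }
    stage('Build') {
        // gym builds and signs the .ipa (ad-hoc signing here)
        sh 'fastlane gym --export_method ad-hoc'
    }
}
</code></pre>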
<h3 id="heading-step-4-run-the-job"><strong>Step 5. Run the Job</strong></h3>
<p>Now our script is ready. Go to Jenkins and open <strong>Build with Parameters.</strong></p>
<p>It will start to run the script and will create a nice UI with multiple steps as mentioned in the Jenkinsfile.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*atL19HWAh9PkfnyxjfcsMg.png" alt="Image" width="800" height="504" loading="lazy"></p>
<p>When the job is completed, go to the project <strong>Users/agammahajan/.jenkins/workspace/iOS_Build_Systems</strong><br>and you will see that the .ipa has been created. Voilà!</p>
<p>Now you can share this build with others. You can use the Slack plugin to upload the builds to Slack if you want.</p>
<h4 id="heading-wrapping-up">Wrapping up</h4>
<p>So to conclude: we can see how easy it is to set up an automated bot that enables anyone to trigger builds in just two steps: <strong>pick a branch -&gt; pick an environment</strong> -&gt; done.</p>
<p>This has helped me and my fellow developers improve productivity and ship faster. It has also helped the QA team, so that they don’t have to depend on developers every time they need to test something. I hope it benefits you and your company also.</p>
<p>From here, the <strong>possibilities</strong> are endless.</p>
<ol>
<li>You can make scheduled jobs to generate nightly builds.</li>
<li>Upload builds directly to the App Store.</li>
<li>Cache the builds, so builds with the same configuration are not generated again.</li>
<li>Distribute the .ipa in-house for OTA (over-the-air) installation.</li>
<li>Make a CI/CD pipeline to run automated tests on every commit and keep the code production-ready.</li>
</ol>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
