<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Continuous Integration - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Continuous Integration - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Fri, 08 May 2026 22:32:28 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/continuous-integration/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Automate Branch-Specific Netlify Configurations with a Bash Script: A Step-by-Step Guide ]]>
                </title>
                <description>
                    <![CDATA[ When you’re working on a project with multiple environments – like staging and production – for your backend APIs and frontend deployments, you’ll want to make sure you have the correct configuration and commands for each branch in your repository. T... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-automate-branch-specific-netlify-configurations-with-a-bash-script/</link>
                <guid isPermaLink="false">6760688b00f110abd3d0f655</guid>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Netlify ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Web Development ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Bash ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Francis Ihejirika ]]>
                </dc:creator>
                <pubDate>Mon, 16 Dec 2024 17:51:07 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733871988108/cde4ea9b-705c-40e0-9730-09dbeebdfbae.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>When you’re working on a project with multiple environments – like staging and production – for your backend APIs and frontend deployments, you’ll want to make sure you have the correct configuration and commands for each branch in your repository.</p>
<p>This can be daunting in situations where multiple developers are actively working on a codebase, making changes to different branches, or managing multiple branch-specific configurations.</p>
<p>With every pull request or change pushed to a branch, you’ll need to review every line of code that’s been changed, added, or removed before deciding what gets merged. Configuration files are not exempt from this review, and they’re especially error-prone: a single change can affect your entire Continuous Integration setup.</p>
<p>When changes get made to the staging or production branch and a build is triggered, you’ll want to ensure that the correct resources attached to a branch are maintained. In some cases, you may need to define different redirect rules for each respective client, custom build commands, or other settings for each branch.</p>
<p>In this article, I’ll walk through how to manage branch-specific configurations including redirects for multiple branches automatically, using a simple bash script. I’ll also show you how to safely merge context-specific rules for your staging and production branches on Netlify.</p>
<h2 id="heading-what-well-cover">What we’ll cover:</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-project-structure-and-scenario">Project Structure and Scenario</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-are-redirectsrewrites">What are Redirects/Rewrites?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-netlify-processes-redirects">How Netlify Processes Redirects</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-using-the-redirects-file-syntax">Using the _redirects file syntax</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-using-the-netlifytoml-configuration-file-syntax">Using the netlify.toml configuration file syntax</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-the-problem-managing-multiple-netlifytoml-files-for-different-branches">The Problem: Managing Multiple netlify.toml Files for Different Branches</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-write-the-script-to-automatically-create-our-configuration-files">How to Write the Script to Automatically Create Our Configuration File(s)</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-sample-netlifytoml-file">Sample netlify.toml file</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-1-create-the-script-folder-and-add-the-script-file">Step 1: Create the script folder and add the script file</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-modify-packagejson-to-include-the-script-command">Step 2: Modify package.json to include the script command</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-deploy-our-client-to-netlify">How to Deploy Our Client to Netlify</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-first-deployment-of-your-project-to-netlify">First deployment of your project to Netlify</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-subsequent-deployments-how-to-set-up-branch-deployments">Subsequent Deployments / How to Set Up Branch Deployments</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-step-1-set-up-environment-variables-in-netlify-for-each-branch-context-production-staging-and-so-on">Step 1: Set Up Environment Variables in Netlify for each branch context — production, staging, and so on</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-trigger-a-new-deploy">Step 2: Trigger a new deploy</a></p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-inspect-your-deployments">Inspect Your Deployments</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-project-structure-and-scenario">Project Structure and Scenario</h2>
<p>Consider a situation where you have two separate servers deployed for a project: one to serve requests to a staging environment (deployed to Render), and another to the production environment (deployed to Google Cloud Run).</p>
<p>You also have two separate client deployments on Netlify, each with their respective API_BASE_URLs, that are served by their respective servers as illustrated below:</p>
<p><img src="https://cdn-images-1.medium.com/max/1200/1*Zat3jiq5BCucEzDHKp8yuA.png" alt="Illustration showing branches of a project repository - development, staging and production - each with its own server and client" width="600" height="400" loading="lazy"></p>
<p>The image below is a <code>sample-project</code> repository, with <code>api</code> and <code>client</code> folders/directories within it. This is an overview of the structure in each of the branches outlined above. Each directory contains its own <code>package.json</code> file, is treated as an independent component, and can be deployed to two separate services.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*Vkh8EyIA5qamhoJOz2ksSg.png" alt="A project structure for a sample project, including directories and files for both backend and frontend. " width="600" height="400" loading="lazy"></p>
<p>In your frontend deployment for each of the clients, all your requests made to endpoints that begin with <code>/api/v1/</code> are routed to the server. Other routes remain within the frontend to direct you to pages within the client. So you’re required to write the correct rules to guide your client on how to process these requests. These are called redirect rules or rewrites.</p>
<h2 id="heading-what-are-redirectsrewrites">What are Redirects/Rewrites?</h2>
<p>Redirects, or rewrites, are rules you can create to have certain URLs automatically go to a new location anywhere on the internet (source: <a target="_blank" href="https://wpengine.com/">WP Engine</a>). They’re also generally known as <strong>URL forwarding</strong>, and you can use them anywhere – across entire websites, sections of a website, or an entire web application.</p>
<p>In web applications, redirects are often utilized to determine how to process requests. Web hosting platforms such as Netlify and Vercel use them as well, giving developers the option to determine how their web applications process requests.</p>
<h2 id="heading-how-netlify-processes-redirects">How Netlify Processes Redirects</h2>
<p>Netlify has two possible ways to specify redirect rules. You can do it using the <code>_redirects</code> file syntax or using the <code>netlify.toml</code> configuration file syntax. They achieve the same goal, but the <code>netlify.toml</code> syntax gives you more options and capabilities.</p>
<h3 id="heading-using-the-redirects-file-syntax">Using the <code>_redirects</code> file syntax</h3>
<p>If you opt to use the redirect syntax, you should simply create a <code>_redirects</code> file in the public folder of your client app, and specify the redirect rules within it. That’s as easy as it gets. Below is an example of a redirect rule within the file:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733944577546/2f21a9b9-6843-4900-a6fe-5573a087b3d9.png" alt="Sample Netlify _redirects file showing usage syntax and redirect rules" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>The above rule can be interpreted as:</p>
<ol>
<li><p>Send all requests that match <code>/api/v1</code> to the API URL specified, and return a 200 success status code. The asterisk (*) after <code>/api/v1/</code>, as seen in <code>/api/v1/*</code>, matches any extension of the original URL, and <code>:splat</code> in the API URL is a placeholder that gets replaced with whatever the asterisk matched. For example, a request to a <code>/api/v1/users</code> route in your frontend will be redirected to <code>https://your-api-base-url.com/api/v1/users</code>.</p>
</li>
<li><p>Serve all other routes through the <code>index.html</code> file. This is required so that you don’t encounter broken pages when you navigate to other pages and then try to return using the “back” button.</p>
</li>
</ol>
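<p>The rules in the screenshot above can be written out in a plain-text <code>_redirects</code> file like this (the API URL is a placeholder – substitute your own):</p>
<pre><code class="lang-plaintext"># Proxy API requests to the backend; :splat receives whatever * matched
/api/v1/*  https://your-api-base-url.com/api/v1/:splat  200

# Serve everything else from the single-page app
/*  /index.html  200
</code></pre>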
<h3 id="heading-using-the-netlifytoml-configuration-file-syntax">Using the <code>netlify.toml</code> configuration file syntax</h3>
<p>The <code>netlify.toml</code> configuration file gives you a lot more flexibility when specifying redirect rules, including but not limited to matching the original request route, the required destination, the preferred status code response, header rules, signatures, country restrictions, roles and more.</p>
<p>This is a sample <code>netlify.toml</code> file sourced from <a target="_blank" href="https://docs.netlify.com/routing/redirects/#syntax-for-the-netlify-configuration-file">Netlify’s documentation</a>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733947216566/f64670b4-9d28-4c50-a753-1deb27dfc646.png" alt="Sample netlify.toml file showing configuration" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
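<p>To give a flavor of those extra options, here’s a small <code>netlify.toml</code> fragment based on the documented syntax (the paths and conditions are made-up examples):</p>
<pre><code class="lang-toml"># A forced 301 redirect that only applies to certain visitors
[[redirects]]
  from = "/old-path"
  to = "/new-path"
  status = 301
  force = true
  conditions = {Language = ["en"], Country = ["US"]}
</code></pre>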
<p><strong>Quick Note:</strong> using the <code>_redirects</code> file to redirect certain requests to our API is perfectly fine. But putting our API URL in plain text in the <code>_redirects</code> file can be a security risk if the API_BASE_URL is supposed to be private. Any file in the public folder is exactly what it sounds like – public – and anyone can access it.</p>
<p>If the destination URLs you want in your app are public, feel free to use the <code>_redirects</code> file syntax. But if you prefer to keep your URL(s) private, using a <code>netlify.toml</code> configuration file in combination with environment variables is generally a better idea.</p>
<h2 id="heading-the-problem-managing-multiple-netlifytoml-files-for-different-branches">The Problem: Managing Multiple <code>netlify.toml</code> Files for Different Branches</h2>
<p>When you use the <code>netlify.toml</code> file to define your build commands and environment-specific settings, every change pushed to your repository or opened as a pull request forces you to manually ignore or edit the <code>netlify.toml</code> in each branch. This quickly becomes tedious and error-prone.</p>
<p>In addition, we want to avoid hardcoding our API URLs in either the <code>_redirects</code> or <code>netlify.toml</code> file in our codebase, for security reasons. Instead, we will use environment variables, set in the Netlify UI, for the production and staging contexts.</p>
<p>To avoid the above problems, we will use a small script in our codebase to dynamically generate the correct <code>netlify.toml</code> files for each branch. This approach eliminates conflicts and removes the need for manual intervention when switching between branches or handling pull requests.</p>
<h2 id="heading-how-to-write-the-script-to-automatically-create-our-configuration-files">How to Write the Script to Automatically Create Our Configuration File(s)</h2>
<h3 id="heading-sample-netlifytoml-file">Sample <code>netlify.toml</code> file</h3>
<p>Below is a screenshot of a sample <code>netlify.toml</code> file we are trying to achieve for each build. You can see that all our requests that match <code>api/v1/</code> in our codebase will be routed to our API.</p>
<p>You could have your API endpoint requests structured differently, for example <code>/api/your-endpoint</code> – just make sure to adjust the script accordingly. In this sample project, we use <code>api/v1/your-endpoint</code> as our structure.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*oj_oJDA7lnC9we2zuQHm4w.png" alt="Netlify configuration file showing build commands and redirect rules" width="600" height="400" loading="lazy"></p>
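<p>In text form, the file generated by the script below looks like this (with a placeholder URL standing in for the <code>API_BASE_URL</code> environment variable):</p>
<pre><code class="lang-toml">[build]
  command = "npm install &amp;&amp; npm run build"
  base = "client"
  publish = "dist"

[[redirects]]
  from = "/api/v1/*"
  to = "https://your-api-base-url.com/:splat"
  status = 200
  force = true

[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200
</code></pre>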
<h3 id="heading-step-1-create-the-script-folder-and-add-the-script-file">Step 1: Create the script folder and add the script file</h3>
<p>In the <code>client</code> directory, create a <code>scripts/</code> directory and a <code>configure-netlify.sh</code> script file inside it. You’ll need to do this for each branch in your repo; the content remains the same.</p>
<p>Open the <code>configure-netlify.sh</code> script file and paste the following content into it:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment"># Ensure API_BASE_URL is set</span>
<span class="hljs-keyword">if</span> [ -z <span class="hljs-string">"<span class="hljs-variable">$API_BASE_URL</span>"</span> ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Error: API_BASE_URL environment variable is not set."</span>
  <span class="hljs-built_in">exit</span> 1  <span class="hljs-comment"># Exit the script to stop the deployment</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Using API endpoint: <span class="hljs-variable">$API_BASE_URL</span>"</span>

<span class="hljs-comment"># Define the desired Netlify configuration</span>
NETLIFY_CONFIG=<span class="hljs-string">"
[build]
  command = \"npm install &amp;&amp; npm run build\"
  base = \"client\"
  publish = \"dist\"

[[redirects]]
  from = \"/api/v1/*\"
  to = \"<span class="hljs-variable">$API_BASE_URL</span>/:splat\"
  status = 200
  force = true

[[redirects]]
  from = \"/*\"
  to = \"/index.html\"
  status = 200
"</span>

<span class="hljs-comment"># Create or update the netlify.toml file</span>
<span class="hljs-keyword">if</span> [ ! -f <span class="hljs-string">"netlify.toml"</span> ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Creating netlify.toml file..."</span>
<span class="hljs-keyword">else</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Updating existing netlify.toml file..."</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$NETLIFY_CONFIG</span>"</span> &gt; netlify.toml

<span class="hljs-comment"># Confirm successful configuration</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"netlify.toml file has been configured successfully!"</span>
</code></pre>
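<p>You can smoke-test the generation logic locally before wiring it into Netlify. Here’s a condensed sketch of the same idea (the URL below is a hypothetical example value):</p>
<pre><code class="lang-bash">#!/bin/bash
# Condensed local smoke test of the generation logic above.
# The URL is a hypothetical example value, not a real endpoint.
export API_BASE_URL="https://staging-api.example.com"

# Same guard as the full script: fail fast if the variable is unset
if [ -z "$API_BASE_URL" ]; then
  echo "Error: API_BASE_URL environment variable is not set."
  exit 1
fi

# Write a minimal redirect rule; tee also echoes it to stdout for inspection
printf '[[redirects]]\n  from = "/api/v1/*"\n  to = "%s/:splat"\n  status = 200\n' "$API_BASE_URL" | tee netlify.toml
</code></pre>
<p>Running it should leave a <code>netlify.toml</code> containing <code>to = "https://staging-api.example.com/:splat"</code>, confirming the substitution works before you push.</p>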
<p>The script does the following:</p>
<ol>
<li><p>It checks the environment variables to ensure that the <code>API_BASE_URL</code> is set. If this isn’t set, it exits the build and causes it to fail. We did this to ensure that you don’t mistakenly create a successful deployment but with invalid URLs in production.</p>
</li>
<li><p>It then creates the content of the <code>netlify.toml</code> file as shown in the sample above. If your API endpoints use a different structure from <code>api/v1/your-endpoint</code>, you can adjust the script to fit your desired structure.</p>
</li>
<li><p>It checks if there already exists a <code>netlify.toml</code> file. If it doesn’t exist, it creates one and writes the content into it. If it exists, it updates it with the correct content during the build, using the <code>API_BASE_URL</code> set in the environment variables.</p>
</li>
</ol>
<h3 id="heading-step-2-modify-packagejson-to-include-the-script-command">Step 2: Modify <code>package.json</code> to include the script command</h3>
<p>To integrate this script with your build process, we will add a script command to the <code>package.json</code> file to call this script before running the actual build.</p>
<p>Add the <code>configure-netlify</code> command within the scripts section of your <code>package.json</code> file: <code>"configure-netlify": "bash scripts/configure-netlify.sh"</code>.</p>
<p>Update your build command to run the script before running the actual build: <code>"build": "npm run configure-netlify &amp;&amp; vite build"</code>.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*Sds0AS4Poe80pc9D9YkBvQ.png" alt="Image showing updated package.json file with custom configure-netlify command and updated build command" width="600" height="400" loading="lazy"></p>
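<p>In text form, the relevant <code>scripts</code> section of <code>package.json</code> looks something like this (assuming a Vite project, as in this example):</p>
<pre><code class="lang-json">{
  "scripts": {
    "configure-netlify": "bash scripts/configure-netlify.sh",
    "build": "npm run configure-netlify &amp;&amp; vite build"
  }
}
</code></pre>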
<p>Don’t forget to save these changes and push them to your remote repository.</p>
<h2 id="heading-how-to-deploy-our-client-to-netlify">How to Deploy Our Client to Netlify</h2>
<p>When deploying our client to Netlify, we are given three options:</p>
<ol>
<li><p>importing an existing project (that is, a project that exists on a git repository service such as GitHub or GitLab),</p>
</li>
<li><p>importing from a template, or</p>
</li>
<li><p>manually deploying a static site using the Netlify drop (drag and drop) interface.</p>
</li>
</ol>
<p>For the configuration in our repository to work as expected during the build process, you’ll need to import from an existing project, such as a GitHub repository. The drag-and-drop interface won’t work here. If you must use it, opt for the <code>_redirects</code> file syntax to define your redirects instead.</p>
<h3 id="heading-first-deployment-of-your-project-to-netlify">First deployment of your project to Netlify</h3>
<p>When deploying your project for the first time, you are given the option of deploying only one branch initially. You can only add and specify other options, such as other branches, in subsequent deployments.</p>
<p>To deploy your project, take the following steps:</p>
<ol>
<li><p>Sign in to Netlify at <a target="_blank" href="http://netlify.com">netlify.com</a></p>
</li>
<li><p>Click "Add new site" &gt; "Import an existing project" &gt; "Deploy with GitHub"</p>
</li>
<li><p>Click "Configure Netlify on GitHub" &gt; Search for your repository &gt; Select it</p>
</li>
<li><p>Enter a unique site name for your project</p>
</li>
<li><p>Configure deploy settings. Here you are required to select the preferred branch to deploy. For the initial deployment, we will deploy the <code>main</code> branch which we use as the production branch.</p>
<ul>
<li><p>Branch: main/master</p>
</li>
<li><p>Build command: <code>npm run build</code></p>
</li>
<li><p>Publish directory: <code>dist</code> (select the directory where your static files live. In this sample project, they’re exported into the <code>dist</code> directory. Some tools export into <code>build</code>.)</p>
</li>
</ul>
</li>
<li><p>Enter the environment variables for your project. Don’t forget to enter the <code>API_BASE_URL</code> from your server – the bash script requires it and will fail the build without it.</p>
</li>
<li><p>Click "Deploy site"</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733951997499/f329f2e6-b977-4b1f-a6ea-6b20610dc0d2.png" alt="Netlify deployment screen showing optional project build settings" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Your project should deploy correctly, and you’ll be able to see the <code>netlify.toml</code> configuration generated by the script by inspecting the deployment details at the bottom of the successful deployment page.</p>
<p>You can download this file to your local machine to see the configuration generated. It should match the sample <code>netlify.toml</code> file above. You can also test that it works using your generated site link.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733952720930/97ccee2f-e93b-4205-94fa-a8ab32dd37c2.png" alt="Netlify screen showing deploy log and static files after successful deployment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-subsequent-deployments-how-to-set-up-branch-deployments">Subsequent Deployments / How to Set Up Branch Deployments</h2>
<h3 id="heading-step-1-set-up-environment-variables-in-netlify-for-each-branch-context-production-staging-and-so-on">Step 1: Set Up Environment Variables in Netlify for each branch context  — production, staging, and so on</h3>
<p>When your project has been deployed successfully, you can set up deployments for your staging branch. To edit the configurations, you’ll need to:</p>
<ol>
<li><p>Navigate to the list of your sites</p>
</li>
<li><p>Select your successfully deployed site</p>
</li>
<li><p>Click on “site configuration” on the left menu</p>
</li>
<li><p>Select “environment variables” &gt; click the “Add a variable” button.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733953093253/26948920-70a7-47bc-8f53-4cb19a9d8543.png" alt="Netlify site configuration page of successfully deployed site, showing environment variables" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You will be given the option of adding variables individually or importing an entire <code>.env</code> file. You can choose either option. In the image below, I’ve selected “Import from a <code>.env</code> file”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734124727631/1bb20e6b-1232-4a79-bc18-2df2440cb641.png" alt="Environment variables screen showing options available to add a variable - using a single variable entry or multi entry from a .env file" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Since our production site, deployed from the <code>main</code> branch with the production environment variables, is already live, you’ll need to:</p>
<ol>
<li><p>Uncheck the production branch (so you don’t overwrite the variables of the initially deployed main branch – be careful not to mix up environment variables between branches)</p>
</li>
<li><p>Select “branch deploys”</p>
</li>
<li><p>Copy and paste the content of your .env file in the input section</p>
</li>
<li><p>Don’t forget to add the <code>API_BASE_URL</code> environment variable for your staging environment</p>
</li>
</ol>
<p>Note that when you select branch deploys, the environment variables imported here apply to all branch deploys apart from the production branch. You can customize your contexts further by selecting a custom branch, but that’s a different approach that may require additional changes to your <code>netlify.toml</code> configuration file or the bash script.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733953419262/2f62d3d6-2549-4a35-aa0b-7a02225bd630.png" alt="Environment variables screen with available contexts for deployment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If you decide to add each environment variable individually, you are given a similar option, as seen below. Ensure that you select the correct context for each branch.</p>
<p><strong>Don’t use the same values for all contexts.</strong> As seen in the image below, selecting “<em>Different value for each deploy context</em>” lets you define the values for each one. In this case, we define the values for branch deploys. Your production value should already exist from the initial deployment.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*699HAdcahAzATFCDYlbqpw.png" alt="Environment variable insertion screen for single variable, showing value options for different contexts" width="600" height="400" loading="lazy"></p>
<p>When all the variables have been imported, you can inspect them to confirm that they have been correctly imported by selecting the dropdown on the right beside each variable and inspecting their values.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733954618431/66e90f42-4e3d-4c5b-95ec-a6ae03207498.png" alt="Environmental variables set for deployed web application" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-2-trigger-a-new-deploy">Step 2: Trigger a new deploy</h3>
<p>When all your environment variables have been imported for the different contexts – production and staging in this case – navigate to “deploys” on the left panel of your screen. Then hit the “Trigger deploy” button, clear the cache, and initiate a new deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733954838853/79685cf6-54a5-4495-8777-914fcc46950f.png" alt="79685cf6-54a5-4495-8777-914fcc46950f" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-inspect-your-deployments">Inspect Your Deployments</h2>
<p>You can confirm that your script works as expected by selecting any of the deployments and expanding the “building” step in the deploy log. You’ll see the command run, along with the output and the API URL for that deployment, as defined by its context.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733955355311/8268c1f3-c9cb-4b98-8094-59b7dd2d5b13.png" alt="Deploy log for successfully deployed web application, showing values logged by automation script during build" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By following the steps in this guide – using the script and updated build command in each branch of your repo – every push will cause Netlify to automatically generate or update the <code>netlify.toml</code> file for that branch. This ensures that the correct configuration and environment variables for each environment are used at build time.</p>
<p>Your script remains the same across all the branches. This lets you focus on other code changes while your script handles the correct configuration for you safely and easily.</p>
<p>Push changes to any branch to see this in action.</p>
<p>Feel free to connect with me on <a target="_blank" href="https://x.com/@francisihej">Twitter</a> (@francisihej) or <a target="_blank" href="https://linkedin.com/in/francis-ihejirika">LinkedIn</a>!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ The CI/CD Handbook: Learn Continuous Integration and Delivery with GitHub Actions, Docker, and Google Cloud Run ]]>
                </title>
                <description>
                    <![CDATA[ Hey everyone! 🌟 If you’re in the tech space, chances are you’ve come across terms like Continuous Integration (CI), Continuous Delivery (CD), and Continuous Deployment. You’ve probably also heard about automation pipelines, staging environments, pro... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/learn-continuous-integration-delivery-and-deployment/</link>
                <guid isPermaLink="false">6751d2f856661d3d5a501466</guid>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                    <category>
                        <![CDATA[ CI/CD ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Prince Onukwili ]]>
                </dc:creator>
                <pubDate>Thu, 05 Dec 2024 16:21:12 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734119999570/cfbf3375-1e95-41df-b5b0-8fbb8b827f59.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Hey everyone! 🌟 If you’re in the tech space, chances are you’ve come across terms like <strong>Continuous Integration (CI)</strong>, <strong>Continuous Delivery (CD)</strong>, and <strong>Continuous Deployment</strong>. You’ve probably also heard about automation pipelines, staging environments, production environments, and concepts like testing workflows.</p>
<p>These terms might seem complex or interchangeable at first glance, leaving you wondering: What do they actually mean? How do they differ from one another? 🤔</p>
<p>In this handbook, I’ll break down these concepts in a clear and approachable way, drawing on relatable analogies to make each term easier to understand. 🧠💡 Beyond just theory, we’ll dive into a hands-on tutorial where you’ll learn how to set up a CI/CD workflow step by step.</p>
<p>Together, we’ll:</p>
<ul>
<li><p>Set up a Node.js project. ✨</p>
</li>
<li><p>Implement automated tests using Jest and Supertest. 🛠️</p>
</li>
<li><p>Set up a CI/CD workflow using GitHub Actions, triggered on pushes, pull requests, or new releases. ⚙️</p>
</li>
<li><p>Build and publish a Docker image of your application to Docker Hub. 📦</p>
</li>
<li><p>Deploy your application to a staging environment for testing. 🚀</p>
</li>
<li><p>Finally, roll it out to a production environment, making it live! 🌐</p>
</li>
</ul>
<p>By the end of this guide, not only will you understand the difference between CI/CD concepts, but you’ll also have practical experience in building your own automated pipeline. 😃</p>
<h3 id="heading-table-of-contents">Table of Contents</h3>
<ol>
<li><p><a class="post-section-overview" href="#heading-what-is-continuous-integration-deployment-and-delivery"><strong>What is Continuous Integration, Deployment, and Delivery?</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-differences-between-continuous-integration-continuous-delivery-and-continuous-deployment"><strong>Differences Between Continuous Integration, Continuous Delivery, and Continuous Deployment</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-a-nodejs-project-with-a-web-server-and-automated-tests"><strong>How to Set Up a Node.js Project with a Web Server and Automated Tests</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-create-a-github-repository-to-host-your-codebase"><strong>How to Create a GitHub Repository to Host Your Codebase</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-the-ci-and-cd-workflows-within-your-project"><strong>How to Set Up the CI and CD Workflows Within Your Project</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-set-up-a-docker-hub-repository-for-the-projects-image-and-generate-an-access-token-for-publishing-the-image"><strong>Set Up a Docker Hub Repository for the Project's Image and Generate an Access Token for Publishing the Image</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-a-google-cloud-account-project-and-billing-account"><strong>Create a Google Cloud Account, Project, and Billing Account</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-a-google-cloud-service-account-to-enable-deployment-of-the-nodejs-application-to-google-cloud-run-via-the-cd-pipeline"><strong>Create a Google Cloud Service Account to Enable Deployment of the Node.js Application to Google Cloud Run via the CD Pipeline</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-the-staging-branch-and-merge-the-feature-branch-into-it-continuous-integration-and-continuous-delivery"><strong>Create the Staging Branch and Merge the Feature Branch into It (Continuous Integration and Continuous Delivery)</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-merge-the-staging-branch-into-the-main-branch-continuous-integration-and-continuous-deployment"><strong>Merge the Staging Branch into the Main Branch (Continuous Integration and Continuous Deployment)</strong></a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion"><strong>Conclusion</strong></a></p>
</li>
</ol>
<h2 id="heading-what-is-continuous-integration-deployment-and-delivery"><strong>What is Continuous Integration, Deployment, and Delivery?</strong> 🤔</h2>
<h3 id="heading-continuous-integration-ci"><strong>Continuous Integration (CI)</strong></h3>
<p>Imagine you’re part of a team of six developers, all working on the same project. Without a proper system, chaos would ensue.</p>
<p>Let’s say Mr. A is building a new login feature, Mrs. B is fixing a bug in the search bar, and Mr. C is tweaking the dashboard UI—all at the same time. If everyone is editing the same "folder" or codebase directly, things could go horribly wrong: <em>"Hey! Who just broke the app?!"</em> 😱</p>
<p>To keep everything in order, teams use <strong>Version Control Systems (VCS)</strong> like GitHub, GitLab, or BitBucket. Think of it as a digital workspace where everyone can safely collaborate without stepping on each other’s toes. 🗂️✨</p>
<p>Here’s how Continuous Integration fits into this process step-by-step:</p>
<h4 id="heading-1-the-main-branch-the-general-folder">1. <strong>The Main Branch: The General Folder</strong> ✨</h4>
<p>At the heart of every project is the <strong>main branch</strong>—the ultimate source of truth. It contains the stable codebase that powers your live app. It’s where every team member contributes their work, but with one important rule: only tested and approved code gets merged here. 🚀</p>
<h4 id="heading-2-feature-branches-personal-workspaces">2. <strong>Feature Branches: Personal Workspaces</strong> 🔨</h4>
<p>When someone like Mr. A wants to work on a new feature, they create a <strong>feature branch</strong>. This branch is essentially a personal copy of the main branch where they can tinker, write code, and test without affecting others. Mrs. B and Mr. C are also working on their own branches. Everyone’s experiments stay neatly organized. 🧪💡</p>
<h4 id="heading-3-merging-changes-the-ci-workflow">3. <strong>Merging Changes: The CI Workflow</strong> 🎉</h4>
<p>When Mr. A is satisfied with his feature, he doesn’t just shove it into the main branch—CI ensures it’s done safely:</p>
<ul>
<li><p><strong>Automated Tests</strong>: Before merging, CI tools automatically run tests on Mr. A’s code to check for bugs or errors. Think of it as a bouncer guarding the main branch, ensuring no bad code gets in. 🕵️‍♂️</p>
</li>
<li><p><strong>Build Verification</strong>: The feature branch code is also "built" (converted into a deployable version of the app) to confirm it works as intended.</p>
</li>
</ul>
<p>Once these checks pass, Mr. A’s feature branch is merged into the main branch. This frequent merging of changes is what we call <strong>Continuous Integration</strong>.</p>
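<p>The branch-and-merge flow above boils down to a few everyday Git commands. Here’s a minimal, self-contained sketch (it uses a throwaway repository and an illustrative branch name; in a real project you’d run the <code>checkout</code> and <code>commit</code> steps inside your existing clone):</p>

```bash
# Throwaway repository just for demonstration
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Mr. A creates a personal feature branch to work in isolation
git checkout -q -b feature/login

# ...edit files, then commit the feature work...
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Add login feature"

# Confirm we are on the feature branch, not main
git branch --show-current   # → feature/login
```

<p>Pushing the branch (<code>git push -u origin feature/login</code>) and opening a pull request is what hands the code over to CI, which runs the automated tests before the merge into the main branch is allowed.</p>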
<h3 id="heading-continuous-delivery-cd">Continuous Delivery (CD)</h3>
<p>Continuous Delivery (CD) often gets mixed up with Continuous Deployment, and while they share similarities, they serve distinct purposes in the development lifecycle. Let’s break it down! 🧐</p>
<h4 id="heading-the-need-for-a-staging-area">The Need for a <code>Staging</code> Area 🌉</h4>
<p>In the Continuous Integration (CI) process we discussed above, we primarily dealt with <strong>feature branches</strong> and the <strong>main branch</strong>. But directly merging changes from feature branches into the main branch (which powers the live product) can be risky. Why? 🛑</p>
<p>While automated tests and builds catch many errors, they’re not foolproof. Some edge cases or bugs might slip through unnoticed. This is where the <strong>staging branch</strong> and <strong>staging environment</strong> come into play! 🎭</p>
<p>Think of the staging branch as a “trial run.” Before unleashing changes to real customers, the codebase from feature branches is merged into the staging branch and deployed to a <strong>staging environment</strong>. This environment is an exact replica of the production environment, but it’s used exclusively by the <strong>Quality Assurance (QA) team</strong> for testing.</p>
<p>The QA team takes the role of a “test driver,” running the platform through its paces just as a real user would. They check for usability issues, edge cases, or bugs that automated tests might miss, and provide feedback to developers for fixes. 🚦 If everything passes, the codebase is cleared for deployment to production.</p>
<h4 id="heading-continuous-delivery-in-action">Continuous Delivery in Action 📦</h4>
<p>The process of merging changes into the staging branch and deploying them to the <strong>staging environment</strong> is what we call <strong>Continuous Delivery</strong>. 🛠️ It ensures that the application is always in a deployable state, ready for the next step in the pipeline.</p>
<p>Unlike Continuous Deployment (which we’ll discuss later), Continuous Delivery doesn’t automatically push changes to production (live platform). Instead, it pauses to let humans—namely the QA team or stakeholders—decide when to proceed. This adds an extra layer of quality assurance, reducing the chances of errors making it to the live product. 🕵️‍♂️</p>
<h3 id="heading-continuous-deployment-cd">Continuous Deployment (CD)</h3>
<p>Continuous Deployment (CD) takes automation to its peak. While it shares similarities with Continuous Delivery, the key difference lies in the <strong>final step</strong>: there’s no manual approval required. The final process of merging the codebase and deploying it live for end users happens automatically, instead of being triggered manually by QA testers or a team lead.</p>
<p>Let’s explore what makes Continuous Deployment so powerful (and a little scary)! 😅</p>
<h4 id="heading-the-last-mile-of-the-cicd-pipeline">The Last Mile of the CI/CD Pipeline 🛣️</h4>
<p>Imagine you’ve gone through the rigorous process of Continuous Integration: teammates have merged their feature branches, automated tests were run, and the codebase was successfully deployed to the staging environment during Continuous Delivery.</p>
<p>Now, you’re confident that the application is free of bugs and ready to shine in the production environment—the live version of your platform used by real customers.</p>
<p>In <strong>Continuous Deployment</strong>, this final step of deploying changes to the live environment happens <strong>automatically</strong>. The pipeline triggers whenever specific events occur, such as:</p>
<ul>
<li><p>A <strong>Pull Request (PR)</strong> is merged into the <strong>main branch</strong>.</p>
</li>
<li><p>A new <strong>release version</strong> is created.</p>
</li>
<li><p>A <strong>commit</strong> is pushed directly to the production branch (though this is rare for most teams).</p>
</li>
</ul>
<p>Once triggered, the pipeline springs into action, building, testing, and finally deploying the updated codebase to the production environment. 📡</p>
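<p>In GitHub Actions terms, these triggers are declared in a workflow’s <code>on</code> block. A minimal sketch of the two most common cases (the branch name is illustrative):</p>

```yaml
on:
  push:
    branches:
      - main            # runs when a PR is merged (or a commit is pushed) to main
  release:
    types: [published]  # runs when a new release version is published
```

<p>Note that merging a pull request produces a push event on the base branch, so the <code>push</code> trigger above covers the “PR merged into main” case.</p>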
<h2 id="heading-differences-between-continuous-integration-continuous-delivery-and-continuous-deployment"><strong>Differences Between Continuous Integration, Continuous Delivery, and Continuous Deployment</strong> 🔍</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Aspect</td><td>Continuous Integration (CI)</td><td>Continuous Delivery (CD)</td><td>Continuous Deployment (CD)</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Primary Focus</strong></td><td>Merging feature branches into the main/general codebase or into the staging codebase.</td><td>Deploying the tested code to a staging environment for QA testing and approval.</td><td>Automatically deploying the code to the live production environment.</td></tr>
<tr>
<td><strong>Automation Level</strong></td><td>Automates testing and building processes for feature branches.</td><td>Automates deployment to staging/test environments after successful testing.</td><td>Fully automates the deployment to production with no manual approval.</td></tr>
<tr>
<td><strong>Testing Scope</strong></td><td>Automated tests run on feature branches to ensure code quality before merging into the main or staging branch.</td><td>Includes automated tests before deployment to staging and allows QA testers to perform manual testing in a controlled environment.</td><td>May include automated tests as a final check, ensuring the production environment is stable before deployment.</td></tr>
<tr>
<td><strong>Branch Involved</strong></td><td>Feature branches merging into the main/general or staging branch.</td><td>Staging branch used as an intermediate step before merging into the main branch.</td><td>Main/general branch deployed directly to production.</td></tr>
<tr>
<td><strong>Environment Target</strong></td><td>Ensures integration and testing within a local environment or build pipeline.</td><td>Deploys to staging/test environments where QA testers validate features.</td><td>Deploys to production/live environment accessed by end users.</td></tr>
<tr>
<td><strong>Key Goal</strong></td><td>Prevent integration conflicts and ensure new changes don’t break the existing codebase.</td><td>Provide a stable, near-production environment for thorough QA testing before final deployment.</td><td>Ensure that new features and updates reach users as soon as possible with minimal delays.</td></tr>
<tr>
<td><strong>Approval Process</strong></td><td>No approval needed. Feature branches are tested and merged upon passing criteria.</td><td>QA team or lead provides feedback/approval before changes are merged into the main branch for production.</td><td>No manual approval. Deployment is entirely automated.</td></tr>
<tr>
<td><strong>Example Trigger</strong></td><td>A developer merges a feature branch into the main branch.</td><td>The staging branch passes automated tests (during PR) and is ready for deployment to the testing environment.</td><td>A new release is created or a pull request is merged into the main branch, triggering an automatic production deployment.</td></tr>
</tbody>
</table>
</div><p>Now that we’ve untangled the mysteries of Continuous Integration, Continuous Delivery, and Continuous Deployment, it’s time to roll up our sleeves and put theory into practice 😁.</p>
<h2 id="heading-how-to-set-up-a-nodejs-project-with-a-web-server-and-automated-tests"><strong>How to Set Up a Node.js Project with a Web Server and Automated Tests</strong> ✨</h2>
<p>In this hands-on section, we’ll build a Node.js web server with automated tests using Jest. From there, we’ll create a CI/CD pipeline with GitHub Actions that automates testing for every <strong>pull request to the staging and main branches</strong>. Finally, we’ll publish an image of our application to Docker Hub and deploy the image to <strong>Google Cloud Run</strong>, first to a staging environment for testing and later to the production environment for live use.</p>
<p>Ready to bring your project to life? Let’s get started! 🚀✨</p>
<h3 id="heading-step-1-install-nodejs">Step 1: Install Node.js 📥</h3>
<p>To get started, you’ll need to have <strong>Node.js</strong> installed on your machine. Node.js provides the JavaScript runtime we’ll use to create our web server.</p>
<ol>
<li><p>Visit <a target="_blank" href="https://nodejs.org/en/download/package-manager">https://nodejs.org/en/download/package-manager</a></p>
</li>
<li><p>Choose your operating system (Windows, macOS, or Linux) and download the installer.</p>
</li>
<li><p>Follow the installation instructions to complete the setup.</p>
</li>
</ol>
<p>To verify that Node.js was installed successfully, open your terminal and run <code>node -v</code>. This should display the installed version of Node.js.</p>
<h3 id="heading-step-2-clone-the-starter-repository">Step 2: Clone the Starter Repository 📂</h3>
<p>The next step is to grab the starter code from GitHub. If you don’t have Git installed, you can download it at <a target="_blank" href="https://git-scm.com/downloads">https://git-scm.com/downloads</a>. Choose your OS and follow the instructions to install Git. Once you’re set, it’s time to clone the repository.</p>
<p>Run the following command in your terminal to clone the boilerplate code:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> --single-branch --branch initial https://github.com/onukwilip/ci-cd-tutorial
</code></pre>
<p>This will download the project files from the <code>initial</code> branch, which contains the starter template for our Node.js web server.</p>
<p>Navigate into the project directory:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ci-cd-tutorial
</code></pre>
<h3 id="heading-step-3-install-dependencies">Step 3: Install Dependencies 📦</h3>
<p>Once you’re in the project directory, install the required dependencies for the Node.js project. These are the packages that power the application:</p>
<pre><code class="lang-bash">npm install --force
</code></pre>
<p>This will download and set up all the libraries specified in the project. Alright, dependencies installed? You’re one step closer!</p>
<h3 id="heading-step-4-run-automated-tests">Step 4: Run Automated Tests ✅</h3>
<p>Before diving into the code, let’s confirm that the automated tests are functioning correctly. Run:</p>
<pre><code class="lang-bash">npm <span class="hljs-built_in">test</span>
</code></pre>
<p>You should see two successful test results in your terminal. This indicates that the starter project is correctly configured with working automated tests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733074280408/93b4ea86-1dfa-42eb-a163-b97c19c2a053.png" alt="Successful test run" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-5-start-the-web-server">Step 5: Start the Web Server 🌐</h3>
<p>Finally, let’s start the web server and see it in action. Run the following command:</p>
<pre><code class="lang-bash">npm start
</code></pre>
<p>Wait for the application to start running. Open your browser and visit <a target="_blank" href="http://localhost:5000/">http://localhost:5000</a>. 🎉 You should see the starter web server up and running, ready for your CI/CD magic:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733074667521/7b80bb21-1f43-430e-8a56-2bff8b81ddad.png" alt="Successful project run" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-how-to-create-a-github-repository-to-host-your-codebase"><strong>How to Create a GitHub Repository to Host Your Codebase 📂</strong></h2>
<h3 id="heading-step-1-sign-in-to-github">Step 1: Sign In to GitHub</h3>
<ol>
<li><p><strong>Go to GitHub</strong>: Open your browser and visit GitHub - <a target="_blank" href="https://github.com/">https://github.com</a>.</p>
</li>
<li><p><strong>Sign In</strong>: Click on the <strong>Sign In</strong> button in the top-right corner and enter your username and password to log in, OR create an account if you don’t have one by clicking the <strong>Sign up</strong> button.</p>
</li>
</ol>
<h3 id="heading-step-2-create-a-new-repository">Step 2: Create a New Repository</h3>
<p>Once you're signed in, on the main GitHub page, you’ll see a "+" sign in the top-right corner next to your profile picture. Click on it, and select <strong>“New repository”</strong> from the dropdown.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733130465203/dac28dee-74da-4fd4-8a96-bc90aef01207.png" alt="New GitHub repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now it’s time to set the repository details. You’ll include:</p>
<ul>
<li><p><strong>Repository Name</strong>: Choose a name for your repository. For example, you can call it <code>ci-cd-tutorial</code>.</p>
</li>
<li><p><strong>Description</strong> (Optional): You can add a short description, like “A tutorial project for CI/CD with Docker and GitHub Actions.”</p>
</li>
<li><p><strong>Visibility</strong>: Choose whether you want your repository to be <strong>public</strong> (accessible by anyone) or <strong>private</strong> (only accessible by you and those you invite). For the sake of this tutorial, make it <strong>public</strong>.</p>
</li>
<li><p><strong>Do Not Check the “Add a README file” Box</strong>: <strong>Important</strong>: make sure you <strong>do not check</strong> the option to <strong>Add a README file</strong>. Checking it would automatically create a <code>README.md</code> file in your repository, which could cause conflicts later when you push your local files. We'll add a README manually later if needed.</p>
</li>
</ul>
<p>After filling out the details, click on <strong>“Create repository”</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733130890582/04e09ac8-0ee6-4d26-a9f2-007c0e6ca08f.png" alt="Create GitHub repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-3-change-the-remote-destination-and-push-to-your-new-repository">Step 3: Change the Remote Destination and Push to Your New Repository</h3>
<h4 id="heading-update-the-remote-repository-url"><strong>Update the Remote Repository URL</strong>:</h4>
<p>Since you've already cloned the codebase from my repository, you need to update the remote destination to point to your newly created GitHub repository.</p>
<p>Copy your repository URL (the URL of the page you were redirected to after creating the repository). It should look similar to this: <code>https://github.com/&lt;username&gt;/&lt;repo-name&gt;</code>.</p>
<p>Open your terminal in the project directory and run the following commands:</p>
<pre><code class="lang-bash">git remote set-url origin &lt;your-repo-url&gt;
</code></pre>
<p>Replace <code>&lt;your-repo-url&gt;</code> with your GitHub repository URL which you copied earlier.</p>
<h4 id="heading-rename-the-current-branch-to-main"><strong>Rename the Current Branch to</strong> <code>main</code>:</h4>
<p>If your branch is named something other than <code>main</code>, you can rename it to <code>main</code> using:</p>
<pre><code class="lang-bash">git branch -M main
</code></pre>
<h4 id="heading-push-to-your-new-repository"><strong>Push to Your New Repository</strong>:</h4>
<p>Finally, commit any changes you’ve made and push your local repository to the new remote GitHub repository by running:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">'Created boilerplate'</span>
git push -u origin main
</code></pre>
<p>Now your local codebase is linked to your new GitHub repository, and the files are successfully pushed there. You can verify by visiting your repository on GitHub.</p>
<h2 id="heading-how-to-set-up-the-ci-and-cd-workflows-within-your-project">How to Set Up the CI and CD Workflows Within Your Project ⚙️</h2>
<p>Now it’s time to create the <strong>CI and CD workflows</strong> for our project! These workflows won’t run on your local PC but will be automatically triggered and executed in the cloud once you push your changes to the remote repository. GitHub Actions will detect these workflows and run them based on the triggers you define.</p>
<h3 id="heading-step-1-prepare-the-workflow-directory">Step 1: Prepare the Workflow Directory 📂</h3>
<p>Before adding the CI/CD pipelines, it's a good practice to first create a feature branch. This step mirrors the workflow commonly used in teams, where new features or changes are made in separate branches before they are merged into the main codebase.</p>
<p>To create and switch to a new branch, run the following command:</p>
<pre><code class="lang-bash">git checkout -b feature/ci-cd-pipeline
</code></pre>
<p>This will create a new branch called <code>feature/ci-cd-pipeline</code> and switch to it. Now, you can safely add and test the CI/CD workflows without affecting the main branch.</p>
<p>Once you finish, you’ll be able to merge this feature branch back into <code>main</code> or <code>staging</code> as part of the pull request process.</p>
<p>In the project’s root directory, create a folder named <code>.github</code>. Inside <code>.github</code>, create another folder called <code>workflows</code>.</p>
<p>Any YAML file placed in the <code>.github/workflows</code> directory is automatically recognized as a GitHub Actions workflow. These workflows will execute based on specific triggers, such as pull requests, pushes, or releases.</p>
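<p>From the project’s root directory, both folders can be created in a single command:</p>

```bash
# Create the nested .github/workflows directory in one step
mkdir -p .github/workflows
```

<p>The <code>-p</code> flag creates the parent <code>.github</code> folder and the nested <code>workflows</code> folder together, and doesn’t complain if either already exists.</p>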
<h3 id="heading-step-2-create-the-continuous-integration-workflow">Step 2: Create the Continuous Integration Workflow 🚀</h3>
<p>We’ll now create a CI workflow that automatically tests the application whenever a pull request is made to the <code>main</code> or <code>staging</code> branches.</p>
<p>First, inside the <code>workflows</code> directory, create a file named <code>ci-pipeline.yml</code>.</p>
<p>Paste the following code into the file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CI</span> <span class="hljs-string">Pipeline</span> <span class="hljs-string">to</span> <span class="hljs-string">staging/production</span> <span class="hljs-string">environment</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">test:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup,</span> <span class="hljs-string">test,</span> <span class="hljs-string">and</span> <span class="hljs-string">build</span> <span class="hljs-string">project</span>
    <span class="hljs-attr">env:</span>
      <span class="hljs-attr">PORT:</span> <span class="hljs-number">5001</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span> <span class="hljs-string">application</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">application</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "Run command to build the application if present"
          npm run build --if-present</span>
</code></pre>
<h4 id="heading-explanation-of-the-ci-workflow">Explanation of the CI Workflow</h4>
<p>Here’s a breakdown of each section in the workflow:</p>
<ol>
<li><p><code>name: CI Pipeline to staging/production environment</code>: This is the title of your workflow. It helps you identify this pipeline in GitHub Actions.</p>
</li>
<li><p><code>on</code>: The <code>on</code> parameter determines the events that trigger your workflow. When the workflow YAML file is pushed to the remote GitHub repository, GitHub Actions automatically registers the workflow using the triggers configured in the <code>on</code> field. These triggers act as event listeners that tell GitHub when to execute the workflow.</p>
<p> <strong>For example:</strong></p>
<p> If we set <code>pull_request</code> as the value for the <code>on</code> parameter and specify the branches we want to monitor using the <code>branches</code> key, GitHub sets up event listeners for pull requests to those branches.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">on:</span>
   <span class="hljs-attr">pull_request:</span>
     <span class="hljs-attr">branches:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
</code></pre>
<p> This configuration means that GitHub will trigger the workflow whenever a pull request is made to the <code>main</code> or <code>staging</code> branches.</p>
<p> <strong>Multiple Triggers</strong>:<br> You can define multiple event listeners in the <code>on</code> parameter. For instance, in addition to pull requests, you can add a listener for push events.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">on:</span>
   <span class="hljs-attr">pull_request:</span>
     <span class="hljs-attr">branches:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
   <span class="hljs-attr">push:</span>
     <span class="hljs-attr">branches:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
</code></pre>
<p> This configuration ensures that the workflow is triggered when:</p>
<ul>
<li><p>A pull request is made to either the <code>main</code> or <code>staging</code> branch.</p>
</li>
<li><p>A push is made directly to the <code>main</code> branch.</p>
</li>
</ul>
</li>
</ol>
<p>    📘 <strong>Learn more about triggers:</strong> Check out the <a target="_blank" href="https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows">official GitHub documentation here</a>.</p>
<ol start="3">
<li><p><code>jobs</code>: The <code>jobs</code> section outlines the specific tasks (or jobs) that the workflow will execute. Each job is an independent unit of work that runs on a separate virtual machine (VM). This isolation ensures a clean, unique environment for every job, avoiding potential conflicts between tasks.</p>
<p> <strong>Key Points About Jobs:</strong></p>
<ol>
<li><p><strong>Clean VM for Each Job</strong>: When GitHub Actions runs a workflow, it assigns a dedicated VM instance to each job. This means the environment is reset for every job, ensuring there’s no overlap or interference between tasks.</p>
</li>
<li><p><strong>Multiple Jobs</strong>: Workflows can have multiple jobs, each responsible for a specific task. For example:</p>
<ul>
<li><p>A <strong>Test</strong> job to install dependencies and run automated tests.</p>
</li>
<li><p>A <strong>Build</strong> job to compile the application.</p>
</li>
</ul>
</li>
<li><p><strong>Job Organization</strong>: Jobs can be organized to run:</p>
<ul>
<li><p><strong>Sequentially</strong>: Ensures one job is completed before the next starts; for example, the Test job must finish before the Build job can begin. This sequential flow mimics the "pipeline" structure.</p>
</li>
<li><p><strong>Simultaneously</strong>: Multiple jobs can run in parallel to save time, especially if the jobs are independent of one another.</p>
</li>
</ul>
</li>
<li><p><strong>Single Job in This Workflow</strong>: In our current workflow, there is only one job, <code>test</code>, which:</p>
<ul>
<li><p>Installs dependencies.</p>
</li>
<li><p>Runs automated tests.</p>
</li>
<li><p>Builds the application.</p>
</li>
</ul>
</li>
</ol>
</li>
</ol>
<p>    📘 <strong>Learn more about jobs:</strong> Dive into the <a target="_blank" href="https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/using-jobs-in-a-workflow">GitHub Actions jobs documentation here</a>.</p>
<ol start="4">
<li><p><code>runs-on: ubuntu-latest</code>: Specifies the operating system the job will run on. GitHub provides pre-configured virtual environments, and we’re using the latest Ubuntu image.</p>
</li>
<li><p><code>env</code>: Sets environment variables for the job. Here, we define the <strong>PORT</strong> variable used by our application.</p>
</li>
<li><p><strong>Steps</strong>: Steps define the individual actions to execute within a job:</p>
<ul>
<li><p><code>Checkout</code>: Uses the <code>actions/checkout</code> action to clone the repository’s code (the branch that triggered the workflow) into the virtual machine. This step ensures the pipeline has access to the project files.</p>
</li>
<li><p><code>Install dependencies</code>: Runs <code>npm ci</code> to install the required Node.js packages.</p>
</li>
<li><p><code>Test application</code>: Runs the automated tests using the <code>npm test</code> command. This validates the codebase for errors or failing test cases.</p>
</li>
<li><p><code>Build application</code>: Builds the application if a build script is defined in the <code>package.json</code>. The <code>--if-present</code> flag ensures this step doesn’t fail if no build script is present.</p>
</li>
</ul>
</li>
</ol>
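<p>Put together, the steps described above boil down to a job shaped like this (a condensed sketch of the CI configuration, using the same <code>PORT</code> value as the CD pipeline):</p>
<pre><code class="lang-yaml">jobs:
  test:
    runs-on: ubuntu-latest
    env:
      PORT: 5001
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install dependencies
        run: npm ci
      - name: Test application
        run: npm test
      - name: Build application
        run: npm run build --if-present
</code></pre>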
<p>Now that we’ve completed the CI pipeline, which runs on pull requests to the <code>main</code> or <code>staging</code> branches, let’s move on to setting up the <strong>Continuous Delivery (CD)</strong> and <strong>Continuous Deployment</strong> pipelines. 🚀</p>
<h3 id="heading-step-3-the-continuous-delivery-and-deployment-workflow">Step 3: The Continuous Delivery and Deployment Workflow</h3>
<p><strong>First, create the Pipeline File</strong>:<br>In the <code>.github/workflows</code> folder, create a new file called <code>cd-pipeline.yml</code>. This file will define the workflows for automating delivery and deployment.</p>
<p><strong>Next, paste the configuration</strong>:<br>Copy and paste the following configuration into the <code>cd-pipeline.yml</code> file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CD</span> <span class="hljs-string">Pipeline</span> <span class="hljs-string">to</span> <span class="hljs-string">Google</span> <span class="hljs-string">Cloud</span> <span class="hljs-string">Run</span> <span class="hljs-string">(staging</span> <span class="hljs-string">and</span> <span class="hljs-string">production)</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">staging</span>
  <span class="hljs-attr">workflow_dispatch:</span> {}
  <span class="hljs-attr">release:</span>
    <span class="hljs-attr">types:</span> <span class="hljs-string">published</span>

<span class="hljs-attr">env:</span>
  <span class="hljs-attr">PORT:</span> <span class="hljs-number">5001</span>
  <span class="hljs-attr">IMAGE:</span> <span class="hljs-string">${{vars.IMAGE}}:${{github.sha}}</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">test:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup,</span> <span class="hljs-string">test,</span> <span class="hljs-string">and</span> <span class="hljs-string">build</span> <span class="hljs-string">project</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span> <span class="hljs-string">application</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">needs:</span> <span class="hljs-string">test</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Setup</span> <span class="hljs-string">project,</span> <span class="hljs-string">Authorize</span> <span class="hljs-string">GitHub</span> <span class="hljs-string">Actions</span> <span class="hljs-string">to</span> <span class="hljs-string">GCP</span> <span class="hljs-string">and</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub,</span> <span class="hljs-string">and</span> <span class="hljs-string">deploy</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Authenticate</span> <span class="hljs-string">for</span> <span class="hljs-string">GCP</span>
        <span class="hljs-attr">id:</span> <span class="hljs-string">gcp-auth</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">google-github-actions/auth@v0</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">credentials_json:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.GCP_SERVICE_ACCOUNT</span> <span class="hljs-string">}}</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Cloud</span> <span class="hljs-string">SDK</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">google-github-actions/setup-gcloud@v0</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Authenticate</span> <span class="hljs-string">for</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub</span>
        <span class="hljs-attr">id:</span> <span class="hljs-string">docker-auth</span>
        <span class="hljs-attr">env:</span>
          <span class="hljs-attr">D_USER:</span> <span class="hljs-string">${{secrets.DOCKER_USER}}</span>
          <span class="hljs-attr">D_PASS:</span> <span class="hljs-string">${{secrets.DOCKER_PASSWORD}}</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          docker login -u $D_USER -p $D_PASS
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">tag</span> <span class="hljs-string">Image</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          docker build -t ${{env.IMAGE}} .
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">the</span> <span class="hljs-string">image</span> <span class="hljs-string">to</span> <span class="hljs-string">Docker</span> <span class="hljs-string">hub</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          docker push ${{env.IMAGE}}
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Enable</span> <span class="hljs-string">the</span> <span class="hljs-string">Billing</span> <span class="hljs-string">API</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          gcloud services enable cloudbilling.googleapis.com --project=${{secrets.GCP_PROJECT_ID}}
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">GCP</span> <span class="hljs-string">Run</span> <span class="hljs-bullet">-</span> <span class="hljs-string">Production</span> <span class="hljs-string">environment</span> <span class="hljs-string">(If</span> <span class="hljs-string">a</span> <span class="hljs-string">new</span> <span class="hljs-string">release</span> <span class="hljs-string">was</span> <span class="hljs-string">published</span> <span class="hljs-string">from</span> <span class="hljs-string">the</span> <span class="hljs-string">main</span> <span class="hljs-string">branch)</span>
        <span class="hljs-attr">if:</span> <span class="hljs-string">github.event_name</span> <span class="hljs-string">==</span> <span class="hljs-string">'release'</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">github.event.action</span> <span class="hljs-string">==</span> <span class="hljs-string">'published'</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">github.event.release.target_commitish</span> <span class="hljs-string">==</span> <span class="hljs-string">'main'</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          gcloud run deploy ${{vars.GCR_PROJECT_NAME}} \
          --region ${{vars.GCR_REGION}} \
          --image ${{env.IMAGE}} \
          --platform "managed" \
          --allow-unauthenticated \
          --tag production
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">GCP</span> <span class="hljs-string">Run</span> <span class="hljs-bullet">-</span> <span class="hljs-string">Staging</span> <span class="hljs-string">environment</span>
        <span class="hljs-attr">if:</span> <span class="hljs-string">github.event_name</span> <span class="hljs-string">==</span> <span class="hljs-string">'push'</span> <span class="hljs-string">||</span> <span class="hljs-string">github.event_name</span> <span class="hljs-string">==</span> <span class="hljs-string">'workflow_dispatch'</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "Deploying to staging environment"
          # Deploy the service to the staging environment
          gcloud run deploy ${{vars.GCR_STAGING_PROJECT_NAME}} \
          --region ${{vars.GCR_REGION}} \
          --image ${{env.IMAGE}} \
          --platform "managed" \
          --allow-unauthenticated \
          --tag staging</span>
</code></pre>
<p>The <strong>CD pipeline</strong> configuration combines Continuous Delivery and Continuous Deployment workflows into a single file for simplicity. It builds on the concepts of CI/CD we discussed earlier, automating testing, building, and deploying the application to Google Cloud Run.</p>
<h4 id="heading-explanation-of-the-cd-pipeline">Explanation of the CD pipeline:</h4>
<ol>
<li><h4 id="heading-workflow-triggers-on">Workflow Triggers (<code>on</code>)</h4>
</li>
</ol>
<ul>
<li><p><code>push</code>: Workflow triggers on pushes to the <code>staging</code> branch.</p>
</li>
<li><p><code>workflow_dispatch</code>: Enables manual execution of the workflow via the GitHub Actions interface.</p>
</li>
<li><p><code>release</code>: Triggers when a new release is published.<br>  Example: When a release is published from the <code>main</code> branch, the app deploys to the production environment.</p>
</li>
</ul>
<ol start="2">
<li><p><strong>Job 1 – Testing the Codebase:</strong> The first job in the pipeline, Test, ensures the codebase is functional and error-free before proceeding with delivery or deployment.</p>
</li>
<li><p><strong>Job 2 – Building and Deploying the Application:</strong> Aha! Moment ✨: These jobs run sequentially. 😃 The <strong>Build</strong> job begins only after the <strong>Test</strong> job is completed successfully. It prepares the application for deployment and manages the actual deployment process.</p>
<p> Here's what happens:</p>
<ul>
<li><p><strong>Authorization for GCP and Docker Hub</strong>: The workflow authenticates with both Google Cloud Platform (GCP) and Docker Hub. For GCP, it uses the <code>google-github-actions/auth@v0</code> action to handle service account credentials stored as secrets. Similarly, it logs into Docker Hub with stored credentials to enable image uploads.</p>
</li>
<li><p><strong>Build and Push Docker Image</strong>: The application is built into a Docker image and tagged with a unique identifier (<code>${{env.IMAGE}}</code>). This image is then pushed to Docker Hub, making it accessible for deployment.</p>
</li>
<li><p><strong>Deploy to Google Cloud Run</strong>: Based on the event that triggered the workflow, the application is <strong>deployed to either the staging or production environment</strong> in Google Cloud Run. A <strong>push</strong> to the <code>staging</code> branch deploys to the staging environment (Continuous Delivery), while a <strong>release</strong> from the <code>main</code> branch deploys to production (Continuous Deployment).</p>
</li>
</ul>
</li>
</ol>
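<p>To make the Build job's Docker steps concrete, here is roughly what they expand to at runtime, with hypothetical values substituted for the workflow expressions:</p>
<pre><code class="lang-bash"># Hypothetical values for ${{vars.IMAGE}} and ${{github.sha}}
IMAGE="myuser/my-node-app:3f2a9c1"

# Log in, build, tag, and push: mirroring the workflow's run steps
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin
docker build -t "$IMAGE" .
docker push "$IMAGE"
</code></pre>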
<p>To ensure the security and flexibility of our pipeline, we rely on external variables and secrets rather than hardcoding sensitive information directly into the workflow file.</p>
<p>Why? Workflow configuration files are part of your repository and accessible to anyone with access to the codebase. If sensitive data, like API keys or passwords, is exposed here, it can be easily compromised. 😨</p>
<p>Instead, we use GitHub’s <strong>Secrets</strong> to securely store and access this information. Secrets allow us to define variables that are encrypted and only accessible by our workflows. For example:</p>
<ul>
<li><p><strong>DockerHub Credentials</strong>: We’ll add a Docker username and access token to the repository’s secrets. These are essential for authenticating with DockerHub to upload the built Docker images.</p>
</li>
<li><p><strong>Google Cloud Service Account Key</strong>: This key will grant the pipeline the necessary permissions to deploy the application on <strong>Google Cloud Run</strong> securely.</p>
</li>
</ul>
<p>We'll set up these variables and secrets incrementally as we proceed, ensuring each step is fully secure and functional. 🎯</p>
<h2 id="heading-set-up-a-docker-hub-repository-for-the-projects-image-and-generate-an-access-token-for-publishing-the-image"><strong>Set Up a Docker Hub Repository for the Project's Image and Generate an Access Token for Publishing the Image</strong> 📦</h2>
<p>Before we dive into the steps, let’s quickly go over what we’re about to do. In this section, you’ll learn how to create a Docker Hub repository, which acts like an online storage space for your application’s container image.</p>
<p>Think of a container image as a snapshot of your application, ready to be deployed anywhere. To ensure smooth and secure access, we’ll also generate a special access token, a kind of revocable password that our CI/CD pipeline can use to upload your app’s image to Docker Hub. Let’s get started! 🚀</p>
<h3 id="heading-step-1-sign-up-for-docker-hub">Step 1: Sign Up for Docker Hub</h3>
<p>Here are the steps to follow to sign up for Docker Hub:</p>
<ol>
<li><p><strong>Go to the Docker Hub website</strong>: Open your web browser and visit Docker Hub - <a target="_blank" href="https://hub.docker.com/">https://hub.docker.com/</a>.</p>
</li>
<li><p><strong>Create an account</strong>: On the Docker Hub homepage, you’ll see a button labelled <strong>"Sign Up"</strong> in the top-right corner. Click on it.</p>
</li>
<li><p><strong>Fill in your details</strong>: You'll be asked to provide a few details like your username, email address, and password. Choose a strong password that you can remember.</p>
</li>
<li><p><strong>Agree to the terms</strong>: You’ll need to check a box to agree to Docker’s terms of service. After that, click <strong>“Sign Up”</strong> to create your account.</p>
</li>
<li><p><strong>Verify your email</strong>: Docker Hub will send you an email to verify your account. Open that email and click on the verification link to complete your account creation.</p>
</li>
</ol>
<h3 id="heading-step-2-sign-in-to-docker-hub">Step 2: Sign In to Docker Hub</h3>
<p>After verifying your email, go back to Docker Hub, and click on <strong>"Sign In"</strong> at the top right. Then you can use the credentials you just created to log in.</p>
<h3 id="heading-step-3-generate-an-access-token-for-the-cicd-pipeline">Step 3: Generate an Access Token (for the CI/CD pipeline)</h3>
<p>Now that you have an account, you can create an access token. This token will allow your GitHub Actions workflow to securely sign into Docker Hub and upload Docker images.</p>
<p>Once you’re logged into Docker Hub, click on your profile picture (or avatar) in the top right corner. This will open a menu. From the menu, click “Account Settings”.</p>
<p>Then in the left-hand menu of your account settings, scroll to the <strong>"Security"</strong> tab. This section is where you manage your tokens and passwords.</p>
<p>Now you’ll need to create a new access token. In the Security tab, you’ll see a link labelled <strong>“Personal access tokens”</strong> – click on it. Click the button labelled <strong>“Generate new token”</strong>.</p>
<p>You’ll be asked to give your token a description. You can name it something like "GitHub Actions CI/CD" so that you know what it's for.</p>
<p>After giving it a description, click on the <strong>“Access permissions”</strong> dropdown and select <strong>“Read &amp; Write”</strong> or <strong>“Read, Write, Delete”</strong>. Then click <strong>“Generate”</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733129374816/c725f041-c0ef-49a0-b8ef-ca62acafc1ee.png" alt="Create Docker access token" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now, you need to copy the credentials. After clicking the generate button, Docker Hub will create an access token. <strong>Immediately copy this token along with your username</strong> and save it somewhere safe, like in a file (don’t worry, we’ll add it to our GitHub secrets). You won’t be able to see this token again, so make sure you save it!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733133363382/33dbf334-a7ec-4151-8639-5368c3ccaedb.png" alt="Copy Docker username + access token" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-4-add-the-token-to-github-as-a-secret">Step 4: Add the Token to GitHub as a Secret</h3>
<p>To do this, open your GitHub repository where the codebase is hosted. In the GitHub repo, click on the <strong>Settings</strong> tab (located near the top of your repo page).</p>
<p>Then on the left sidebar, scroll down and click on <strong>“Secrets and Variables”</strong>, then choose <strong>“Actions”</strong>.</p>
<ol>
<li><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733133003023/75c3bd35-1a5b-46fa-845a-0f4fd8305d53.png" alt="Open GitHub Actions Secrets" class="image--center mx-auto" width="600" height="400" loading="lazy"></li>
</ol>
<p>Here are the steps to create and manage your new secret:</p>
<ol>
<li><p><strong>Add a new secret</strong>: Click on the <strong>“New repository secret”</strong> button.</p>
</li>
<li><p><strong>Set up the secret</strong>:</p>
<ul>
<li><p>In the <strong>Name</strong> field, type <code>DOCKER_PASSWORD</code>.</p>
</li>
<li><p>In the <strong>Value</strong> field, paste the access token you copied earlier.</p>
</li>
</ul>
</li>
<li><p><strong>Save the secret</strong>: Finally, click <strong>Add secret</strong> to save your Docker access token securely in GitHub.</p>
</li>
</ol>
<p>Then you’ll repeat the process for your Docker username. Create a new secret called <code>DOCKER_USER</code> and add your Docker username that you copied earlier.</p>
<p>And that’s it! Now your CI/CD pipeline can use this token to securely log in to Docker Hub and upload images automatically when triggered. 🎉</p>
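<p>If you prefer the terminal, the <a target="_blank" href="https://cli.github.com/">GitHub CLI</a> can set the same secrets (this assumes <code>gh</code> is installed, authenticated, and run from inside the repository):</p>
<pre><code class="lang-bash">gh secret set DOCKER_USER --body "your-docker-username"
gh secret set DOCKER_PASSWORD --body "the-access-token-you-copied"
</code></pre>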
<h3 id="heading-step-5-creating-the-dockerfile-for-the-project"><strong>Step 5: Creating the Dockerfile for the Project</strong></h3>
<p>Before you can build and publish the Docker image to Docker Hub, you need to create a <code>Dockerfile</code> that contains the necessary instructions to build your application.</p>
<p>Follow the steps below to create the <code>Dockerfile</code> in the root folder of your project:</p>
<ol>
<li><p>Navigate to your project’s root folder.</p>
</li>
<li><p>Create a new file named <code>Dockerfile</code>.</p>
</li>
<li><p>Open the <strong>Dockerfile</strong> in a text editor and paste the following content into it:</p>
</li>
</ol>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> node:<span class="hljs-number">18</span>-slim

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-keyword">COPY</span><span class="bash"> package.json .</span>

<span class="hljs-keyword">RUN</span><span class="bash"> npm install -f</span>

<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">5001</span>

<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"npm"</span>, <span class="hljs-string">"start"</span>]</span>
</code></pre>
<h4 id="heading-explanation-of-the-dockerfile">Explanation of the Dockerfile:</h4>
<ul>
<li><p><code>FROM node:18-slim</code>: This sets the base image for the Docker container, which is a slim version of the official Node.js image based on version 18.</p>
</li>
<li><p><code>WORKDIR /app</code>: Sets the working directory for the application inside the container to <code>/app</code>.</p>
</li>
<li><p><code>COPY package.json .</code>: Copies the <code>package.json</code> file into the working directory.</p>
</li>
<li><p><code>RUN npm install -f</code>: Installs the project dependencies using <code>npm</code>. The <code>-f</code> (<code>--force</code>) flag makes <code>npm</code> proceed even when it would otherwise refuse, such as on peer-dependency conflicts.</p>
</li>
<li><p><code>COPY . .</code>: Copies the rest of the project files into the container.</p>
</li>
<li><p><code>EXPOSE 5001</code>: This tells Docker to expose port <code>5001</code>, which is the port our app will run on inside the container.</p>
</li>
<li><p><code>CMD ["npm", "start"]</code>: This sets the default command to start the application when the container is run, using <code>npm start</code>.</p>
</li>
</ul>
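<p>Before relying on the pipeline, you can sanity-check the image locally (assuming Docker is installed, and using a hypothetical image name):</p>
<pre><code class="lang-bash">docker build -t my-node-app .
docker run --rm -p 5001:5001 my-node-app
# then open http://localhost:5001 in your browser
</code></pre>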
<h2 id="heading-create-a-google-cloud-account-project-and-billing-account"><strong>Create a Google Cloud Account, Project, and Billing Account</strong> ☁️</h2>
<p>In this section, we’re laying the foundation for deploying our application to Google Cloud. First, we’ll set up a Google Cloud account (don’t worry, it’s free to get started!). Then, we’ll create a new project where all the resources for your app will live.</p>
<p>Finally, we’ll enable billing so you can unlock the cloud services needed for deployment. Think of this as setting up your workspace in the cloud—organized, ready, and secure! Let’s dive in! ☁️</p>
<h3 id="heading-step-1-create-or-sign-in-to-a-google-cloud-account">Step 1: Create or Sign in to a Google Cloud Account 🌐</h3>
<p>First, go to <a target="_blank" href="https://console.cloud.google.com">Google Cloud Console</a>. If you don’t have a Google Cloud account, you’ll need to create one.</p>
<p>To do this, click on <strong>Get Started for Free</strong> and follow the steps to set up your account (you’ll need to provide payment information, but Google offers $300 in free credits to get started). If you already have a Google account, simply sign in using your credentials.</p>
<p>Once you’ve signed in, you’ll be taken to your Google Cloud dashboard. This is where you can manage all your cloud projects and resources.</p>
<h3 id="heading-step-2-create-a-new-google-cloud-project">Step 2: Create a New Google Cloud Project 🏗️</h3>
<p>At the top left of the Google Cloud Console, you’ll see a drop-down menu beside the Google Cloud logo. Click on this drop-down to display your current projects.</p>
<p>Now it’s time to create a new project. In the top-left corner of the pop-up modal, click on the <strong>New Project</strong> button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733134260252/6769909a-cf9c-4c91-9d79-7676500f3981.webp" alt="Create Google Cloud Project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You’ll be redirected to a page where you’ll need to provide some basic details for your new project. So now enter the following information:</p>
<ul>
<li><p><strong>Project Name:</strong> Enter a name of your choice for the project (for example, <code>gcr-ci-cd-project</code>).</p>
</li>
<li><p><strong>Location:</strong> Select a location for your project. You can leave it as the default "No organization" if you're just getting started.</p>
</li>
</ul>
<p>Once you've entered the project name, click the <strong>Create</strong> button. Google Cloud will now start creating your new project. It may take a few seconds.</p>
<h3 id="heading-step-3-access-your-new-project">Step 3: Access Your New Project 🛠️</h3>
<p>After a few seconds, you’ll be redirected to your <strong>Google Cloud dashboard</strong>.</p>
<p>Click on the drop-down menu beside the Google Cloud logo again, and you should now see your newly created project listed in the modal where you can select it.</p>
<p>Then click on the project name (for example, <code>gcr-ci-cd-project</code>) to enter your project’s dashboard.</p>
<h3 id="heading-step-4-link-a-billing-account-to-your-project">Step 4: Link a Billing Account to Your Project 💳</h3>
<p>To access the billing page, in the Google Cloud Console, find the <strong>Navigation Menu</strong> (the three horizontal lines) at the top left of the screen. Click on it to open a list of options. Scroll down and click on <strong>Billing</strong>. This will take you to the billing section of your Google Cloud account.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733134747962/745c8a0e-13c5-4dde-849b-303c1200f495.png" alt="Navigate to Google Cloud Billing dashboard/section " class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If you haven't set up a billing account yet, you'll be prompted to do so. Click on the <strong>"Link a billing account"</strong> button to start the process.</p>
<p>Now you can create a new billing account (if you don’t have one). You’ll be redirected to a page where you can either select an existing billing account or create a new one. If you don't already have a billing account, click on <strong>"Create a billing account"</strong>.</p>
<p>Provide the necessary details, including:</p>
<ul>
<li><p><strong>Account name</strong> (for example, "Personal Billing Account" or your business name).</p>
</li>
<li><p><strong>Country</strong>: Choose the country where your business or account is based.</p>
</li>
<li><p><strong>Currency</strong>: Choose the currency in which you want to be billed.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733135153425/1287ab53-e9c5-45b5-a09d-3d3a13840ca4.png" alt="Create Google Cloud billing account" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ul>
<p>Next, enter your payment information (credit card or bank account details). Google Cloud will verify your payment method, so make sure the information is correct.</p>
<p>Read and agree to the Google Cloud Terms of Service and Billing Account Terms. Once you’ve done this, click <strong>"Start billing"</strong> to finish setting up your billing account.</p>
<p>After setting up your billing account, you’ll be taken to a page that asks you to <strong>link</strong> it to your project. Select the billing account you just created or an existing billing account you want to use. Click <strong>Set Account</strong> to link the billing account to your project.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733337276189/b80702dd-2ff6-42db-a325-c2082e8059e5.png" alt="Link Google Cloud billing account to project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>After you’ve linked your billing account to your project, you should see a confirmation message indicating that billing has been successfully enabled for your project.</p>
<p>You can always verify this by returning to the Billing section in the Google Cloud Console, where you’ll see your billing account listed.</p>
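<p>For reference, the same project and billing setup can be done from the <code>gcloud</code> CLI (a sketch, assuming a recent <code>gcloud</code> version; project IDs must be globally unique, and the billing account ID shown is a placeholder):</p>
<pre><code class="lang-bash">gcloud projects create gcr-ci-cd-project --name="gcr-ci-cd-project"
gcloud billing accounts list     # note your billing account ID
gcloud billing projects link gcr-ci-cd-project \
  --billing-account=0X0X0X-0X0X0X-0X0X0X
</code></pre>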
<h2 id="heading-create-a-google-cloud-service-account-to-enable-deployment-of-the-nodejs-application-to-google-cloud-run-via-the-cd-pipeline"><strong>Create a Google Cloud Service Account to Enable Deployment of the Node.js Application to Google Cloud Run via the CD Pipeline</strong> 🚀</h2>
<h3 id="heading-why-do-we-need-a-service-account-and-key">Why Do We Need a Service Account and Key? 🤔</h3>
<p>A <strong>service account</strong> allows our CI/CD pipeline to authenticate and interact with Google Cloud services programmatically. By assigning specific roles (permissions), we ensure the service account can only perform tasks related to deployment, such as managing Google Cloud Run.</p>
<p>The <strong>service account key</strong> is a JSON file containing the credentials used for authentication. We securely store this key as a GitHub secret to protect sensitive information.</p>
<h3 id="heading-step-1-open-the-service-accounts-page">Step 1: Open the Service Accounts Page</h3>
<p>Here are the steps you can follow to set up your service account and get your key:</p>
<p>First, visit the Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>. Ensure you’ve selected the correct project (for example, <code>gcr-ci-cd-project</code>). To change projects, click the drop-down menu next to the Google Cloud logo at the top-left corner and select your project.</p>
<p>Then navigate to the Navigation Menu (three horizontal lines in the top-left corner) and click on <strong>IAM &amp; Admin &gt; Service Accounts</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733147553088/e3647442-ca8e-4197-ab5f-91cee5a6d6b0.png" alt="Navigate to Google Cloud IAM - Service Account" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-2-create-a-new-service-account">Step 2: Create a New Service Account</h3>
<p>Click on the "Create Service Account" button. This will open a form where you’ll define your service account details.</p>
<p>Next, enter the Service Account details:</p>
<ul>
<li><p><strong>Name</strong>: Enter a descriptive name (for example, <code>ci-cd-sa</code>).</p>
</li>
<li><p><strong>ID</strong>: This will auto-fill based on the name.</p>
</li>
<li><p><strong>Description</strong>: Add a description to help identify its purpose, such as “Used for deploying Node.js app to Cloud Run.”</p>
</li>
<li><p>Click <strong>Create and Continue</strong> to proceed.</p>
</li>
</ul>
<h3 id="heading-step-3-assign-necessary-roles-permissions">Step 3: Assign Necessary Roles (Permissions)</h3>
<p>On the next screen, you’ll assign roles to the service account. Add the following roles one by one:</p>
<ul>
<li><p><strong>Cloud Run Admin</strong>: Allows management of Cloud Run services.</p>
</li>
<li><p><strong>Service Account User</strong>: Grants the ability to use service accounts.</p>
</li>
<li><p><strong>Service Usage Admin</strong>: Enables control over enabling APIs.</p>
</li>
<li><p><strong>Viewer</strong>: Provides read-only access to view resources.</p>
</li>
</ul>
<p>To add a role:</p>
<ul>
<li><p>Click on <strong>"Select a Role"</strong>.</p>
</li>
<li><p>Use the search bar to type the role name (for example, "Cloud Run Admin") and select it.</p>
</li>
<li><p>Repeat for all four roles.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733147870701/393833c9-c320-49e3-8743-dbc0d739b99b.png" alt="Create Google Cloud Service Account - Add role to a service account during creation" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Your screen should look similar to this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733147949148/c509c810-767d-4900-aa44-a737cc1c8dc1.png" alt="Create a Google Cloud service account (SA) - Done assigning all roles to SA" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>After assigning the roles, click <strong>Continue</strong>.</p>
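<p>The same service account and role assignments can also be scripted with <code>gcloud</code> (a sketch; the role IDs correspond to the four roles listed above):</p>
<pre><code class="lang-bash">PROJECT_ID="gcr-ci-cd-project"
SA="ci-cd-sa@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud iam service-accounts create ci-cd-sa \
  --project="$PROJECT_ID" \
  --display-name="CI/CD deploy account"

# Cloud Run Admin, Service Account User, Service Usage Admin, Viewer
for ROLE in roles/run.admin roles/iam.serviceAccountUser \
            roles/serviceusage.serviceUsageAdmin roles/viewer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:$SA" --role="$ROLE"
done
</code></pre>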
<h3 id="heading-step-4-skip-granting-users-access-to-the-service-account">Step 4: Skip Granting Users Access to the Service Account</h3>
<p>On the next screen, you’ll see an option to grant additional users access to this service account. Click <strong>Done</strong> to complete the creation process.</p>
<h3 id="heading-step-5-generate-a-service-account-key">Step 5: Generate a Service Account Key 🔑</h3>
<p>You should now see your newly created service account in the list. Find the row for your service account (for example, <code>ci-cd-sa</code>) and click the three vertical dots under the “Actions” column. Select <strong>"Manage Keys"</strong> from the drop-down menu.</p>
<p>To add a new key:</p>
<ul>
<li><p>Click on <strong>"Add Key" &gt; "Create New Key"</strong>.</p>
</li>
<li><p>In the pop-up dialog, select <strong>JSON</strong> as the key type.</p>
</li>
<li><p>Click <strong>Create</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733148120618/c7014982-ae7d-40ed-bbfb-0c8f5c4b8090.png" alt="Create Google Cloud service account key" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ul>
<p>After you click <strong>Create</strong>, a JSON key file is automatically downloaded to your computer. This file contains the credentials needed to authenticate with Google Cloud.</p>
<p>Make sure you keep the key secure and store it in a safe location. Don’t share it – treat it as sensitive information.</p>
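<p>Alternatively, if you have the <code>gcloud</code> CLI authenticated, you can generate the key without going through the console. The account email below is a placeholder based on the example names used earlier:</p>
<pre><code class="lang-bash"># Creates a JSON key file for the service account (placeholder email)
gcloud iam service-accounts keys create ci-cd-sa-key.json \
  --iam-account "ci-cd-sa@gcr-ci-cd-project.iam.gserviceaccount.com"
</code></pre>
<p>The same warning applies: the resulting <code>ci-cd-sa-key.json</code> file is a credential and should never be committed to your repository.</p>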
<h3 id="heading-step-6-add-the-service-account-key-to-github-secrets">Step 6: Add the Service Account Key to GitHub Secrets 🔒</h3>
<p>Start by opening the downloaded JSON file using a text editor (like Notepad or VS Code). Then select and copy the entire contents of the file.</p>
<p>Then navigate to the repository you created for this project on GitHub. Click on the <strong>Settings</strong> tab at the top of the repository. Scroll down and find the <strong>Secrets and variables &gt; Actions</strong> section.</p>
<p>Now you need to add a new secret. Click the <strong>"New repository secret"</strong> button. In the <strong>Name</strong> field, enter <code>GCP_SERVICE_ACCOUNT</code>. In the <strong>Value</strong> field, paste the JSON content you copied earlier. Click <strong>Add secret</strong> to save it.</p>
<p>Do the same for the <code>GCP_PROJECT_ID</code> secret, but now add your Google Project ID as the value. To get your project ID, follow these steps:</p>
<ol>
<li><p><strong>Navigate to the Google Cloud Console</strong>: Open Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>.</p>
</li>
<li><p><strong>Locate the Project Dropdown</strong>: At the top-left of the screen, next to the <strong>Google Cloud logo</strong>, you will see a drop-down that shows the name of your current project.</p>
</li>
<li><p><strong>View the Project ID</strong>: Click the drop-down, and you'll see a list of all your projects. Your <strong>Project ID</strong> will be displayed next to the project name. It is a unique identifier used by Google Cloud.</p>
</li>
<li><p><strong>Copy the Project ID</strong>: Copy the <strong>Project ID</strong> that is displayed, and add it as the value of the <code>GCP_PROJECT_ID</code> secret.</p>
</li>
</ol>
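<p>If you use the GitHub CLI (<code>gh</code>), both secrets can be added from your terminal instead of the web UI. This sketch assumes the key file name from the previous step and the example project ID – substitute your own:</p>
<pre><code class="lang-bash"># Run from inside your local clone of the repository
gh secret set GCP_SERVICE_ACCOUNT &lt; ci-cd-sa-key.json
gh secret set GCP_PROJECT_ID --body "gcr-ci-cd-project"
</code></pre>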
<h3 id="heading-step-7-adding-external-variables-to-the-github-repository">Step 7: Adding External Variables to the GitHub Repository 🔧</h3>
<p>Before proceeding with deployment, we need to define some external variables that were referenced in the CD workflow. These variables ensure that the pipeline knows critical details about your Google Cloud Run services and Docker container registry.</p>
<p>Here are the steps you’ll need to follow to do this:</p>
<ol>
<li><p>First, go to your repository on GitHub.</p>
</li>
<li><p>Click the <strong>Settings</strong> tab at the top of the repository. Scroll down to <strong>Secrets and variables &gt; Actions</strong>.</p>
</li>
<li><p>Click on the <strong>Variables</strong> tab next to <strong>Secrets</strong>. Click <strong>"New repository variable"</strong> for each variable. Then you’ll need to define these variables:</p>
<ul>
<li><p><code>GCR_PROJECT_NAME</code>: Set this to the name of your Cloud Run service for the production/live environment. For example, <code>gcr-ci-cd-app</code>.</p>
</li>
<li><p><code>GCR_STAGING_PROJECT_NAME</code>: Set this to the name of your Cloud Run service for the staging/test environment. For example, <code>gcr-ci-cd-staging</code>.</p>
</li>
<li><p><code>GCR_REGION</code>: Enter the region where you’d like to deploy the services. For this tutorial, set it to <code>us-central1</code>.</p>
</li>
<li><p><code>IMAGE</code>: Specify the name of the Docker Hub repository where the built image will be pushed. For example, <code>&lt;dockerhub-username&gt;/ci-cd-tutorial-app</code>.</p>
</li>
</ul>
</li>
<li><p>After entering each variable name and value, click <strong>Add variable</strong>.</p>
</li>
</ol>
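<p>As with the secrets, the GitHub CLI offers a quicker route for these variables. The values below are the examples from the list above – replace them with your own:</p>
<pre><code class="lang-bash"># Run from inside your local clone of the repository
gh variable set GCR_PROJECT_NAME --body "gcr-ci-cd-app"
gh variable set GCR_STAGING_PROJECT_NAME --body "gcr-ci-cd-staging"
gh variable set GCR_REGION --body "us-central1"
gh variable set IMAGE --body "&lt;dockerhub-username&gt;/ci-cd-tutorial-app"
</code></pre>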
<h3 id="heading-enabling-the-service-usage-api-on-the-google-cloud-project">Enabling the Service Usage API on the Google Cloud Project 🌐</h3>
<p>To deploy your application, the <strong>Service Usage API</strong> must be enabled in your Google Cloud project. This API allows you to manage Google Cloud services programmatically, including enabling/disabling APIs and monitoring their usage.</p>
<p>Follow these steps to enable it:</p>
<ol>
<li><p>First, visit the Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>.</p>
</li>
<li><p>Then make sure you’re in the correct project. Click the project drop-down menu near the <strong>Google Cloud logo</strong> at the top-left corner. Select <code>gcr-ci-cd-project</code>, or whatever name you gave your project, from the list of projects.</p>
</li>
<li><p>Next you’ll need to access the API library. Open the <strong>Navigation Menu</strong> (three horizontal lines in the top-left corner). Select <strong>APIs &amp; Services &gt; Library</strong> from the menu.</p>
</li>
<li><p>In the API Library, use the search bar to search for <strong>"Service Usage API"</strong>.</p>
</li>
<li><p>Click on the <strong>Service Usage API</strong> from the search results. On the API’s details page, click <strong>Enable</strong>.</p>
</li>
<li><p>To verify, go to <strong>APIs &amp; Services &gt; Enabled APIs &amp; Services</strong> in the Google Cloud Console. Confirm that the <strong>Service Usage API</strong> appears in the list of enabled APIs.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733150269757/00a4e20b-72ac-4bd4-b05f-af6e61600e09.png" alt="Enable the Google Cloud &quot;Service Usage API&quot; in the project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ol>
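<p>The same API can be enabled from the command line. Assuming your project uses the example ID from this tutorial:</p>
<pre><code class="lang-bash"># Enable the Service Usage API for the project
gcloud services enable serviceusage.googleapis.com \
  --project gcr-ci-cd-project

# Verify it now appears among the enabled services
gcloud services list --enabled --project gcr-ci-cd-project | grep serviceusage
</code></pre>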
<h2 id="heading-create-the-staging-branch-and-merge-the-feature-branch-into-it-continuous-integration-and-continuous-delivery"><strong>Create the Staging Branch and Merge the Feature Branch into It (Continuous Integration and Continuous Delivery) 🌟</strong></h2>
<p>When changes from the <code>feature/ci-cd-pipeline</code> branch are merged into the <code>staging</code> branch, we complete the <strong>Continuous Integration (CI)</strong> process, and the workflow <code>ci-pipeline.yml</code> will run. This ensures that the changes made in the feature branch are tested and integrated into a shared branch.</p>
<p>Once the pull request (PR) is merged into <code>staging</code>, the <strong>Continuous Delivery (CD)</strong> pipeline automatically triggers, deploying the application to the staging environment. This simulates how updates are tested in a safe environment before being pushed to production.</p>
<h3 id="heading-create-the-staging-branch-on-the-remote-repository">Create the <code>staging</code> Branch on the Remote Repository</h3>
<p>To enable the CI/CD pipeline, we’ll first create a <code>staging</code> branch on the remote GitHub repository. This branch will serve as the test environment where changes are deployed before they reach the production environment.</p>
<p>To create the <code>staging</code> branch directly on GitHub, follow these steps:</p>
<ol>
<li><p>First, navigate to your repository on GitHub. Open your web browser and go to the GitHub repository where you want to create the new <code>staging</code> branch.</p>
</li>
<li><p>Then, switch to the <code>main</code> branch. On the top of the repository page, locate the <strong>Branch</strong> dropdown (usually labelled as <code>main</code> or the current branch name). Click on the dropdown and make sure you are on the <code>main</code> branch.</p>
</li>
<li><p>Next, create the <code>staging</code> branch. In the same dropdown where you see the <code>main</code> branch, type <code>staging</code> into the text box. Once you start typing, GitHub will offer you the option to create a new branch called <code>staging</code>. Select the <strong>Create branch: staging</strong> option from the dropdown.</p>
</li>
<li><p>Finally, verify the branch. After creating the <code>staging</code> branch, GitHub will automatically switch to it. You should now see <code>staging</code> in the branch dropdown, confirming the new branch was created.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733152232155/e6215137-5e3b-474b-88f8-af03269eccc2.png" alt="Create a new Staging branch in the GitHub repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ol>
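<p>If you prefer working from your terminal, the same branch can be created and published with plain Git commands, run from your local clone of the repository:</p>
<pre><code class="lang-bash">git checkout main          # start from the main branch
git pull origin main       # make sure it is up to date
git checkout -b staging    # create the staging branch locally
git push -u origin staging # publish it to GitHub
</code></pre>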
<h3 id="heading-merge-your-feature-branch-into-the-staging-branch-via-a-pull-request-pr"><strong>Merge Your Feature Branch into the Staging Branch via a Pull Request (PR)</strong></h3>
<p>This process combines both Continuous Integration (CI) and Continuous Delivery (CD). You will commit changes from your feature branch, push them to the remote feature branch, and then open a PR to merge those changes into the <code>staging</code> branch. Here's how to do it:</p>
<h4 id="heading-step-1-commit-local-changes-on-your-feature-branch"><strong>Step 1: Commit Local Changes on Your Feature Branch</strong></h4>
<p>First, you’ll want to make sure that you are on the correct branch (the feature branch) by running:</p>
<pre><code class="lang-bash">git status
</code></pre>
<p>If you are not on the <code>feature/ci-cd-pipeline</code> branch, switch to it by running:</p>
<pre><code class="lang-bash">git checkout feature/ci-cd-pipeline
</code></pre>
<p>Now, it’s time to stage the changes you made for the commit:</p>
<pre><code class="lang-bash">git add .
</code></pre>
<p>This stages all changes, including new files, modified files, and deleted files.</p>
<p>Next, commit your changes with a clear and descriptive message:</p>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"Set up CI/CD pipelines for the project"</span>
</code></pre>
<p>Then you can verify your commit by running:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">log</span>
</code></pre>
<p>This will display your most recent commits, and you should see the commit message you just added.</p>
<h4 id="heading-step-2-push-your-feature-branch-changes-to-the-remote-repository"><strong>Step 2: Push Your Feature Branch Changes to the Remote Repository</strong></h4>
<p>After committing your changes, push them to the remote repository:</p>
<pre><code class="lang-bash">git push origin feature/ci-cd-pipeline
</code></pre>
<p>This pushes your local changes on the <code>feature/ci-cd-pipeline</code> branch to the remote GitHub repository.</p>
<p>Once the push is successful, visit your GitHub repository in a web browser, and confirm that the <code>feature/ci-cd-pipeline</code> branch is updated with your new commit.</p>
<h4 id="heading-step-3-create-a-pull-request-to-merge-the-feature-branch-into-staging"><strong>Step 3: Create a Pull Request to Merge the Feature Branch into Staging</strong></h4>
<p>Go to your repository on GitHub and ensure that you are on the main page of the repository.</p>
<p>You should see an alert at the top of the page suggesting you create a pull request for the recently pushed branch (<code>feature/ci-cd-pipeline</code>). Click the <strong>Compare &amp; Pull Request</strong> button next to the alert.</p>
<p>Now, it’s time to choose the base and compare branches. On the PR creation page, make sure the <strong>base</strong> branch is set to <code>staging</code> (this is the branch you want to merge your changes into). The <strong>compare</strong> branch should already be set to <code>feature/ci-cd-pipeline</code> (the branch you just pushed). If they’re not selected correctly, use the dropdowns to change them.</p>
<p>You’ll want to come up with a good PR description for this. Write a clear title and description for the pull request, explaining what changes you're merging and why. For example:</p>
<ul>
<li><p><strong>Title</strong>: "Merge CI/CD setup changes from feature branch"</p>
</li>
<li><p><strong>Description</strong>: "This pull request adds the CI/CD pipelines for GitHub Actions and Docker Hub integration to the project. It includes the configurations for both CI and CD workflows."</p>
</li>
</ul>
<p>Now GitHub will show a list of all the changes that will be merged. Take a moment to review them and ensure everything looks correct.</p>
<p>If all looks good after reviewing, click on the <strong>Create pull request</strong> button. This will create the PR and notify team members (if any) that changes are ready to be reviewed and merged.</p>
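<p>For reference, the same pull request can also be opened with the GitHub CLI. The title and body below are just the example values suggested above:</p>
<pre><code class="lang-bash">gh pr create \
  --base staging \
  --head feature/ci-cd-pipeline \
  --title "Merge CI/CD setup changes from feature branch" \
  --body "Adds the CI/CD pipelines for GitHub Actions and Docker Hub integration."
</code></pre>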
<p>Wait a few seconds, and you should see a message indicating that all the checks have passed. Click on the link with the description "<strong>CI Pipeline to staging/production environment...</strong>". This should direct you to the Continuous Integration workflow, where you can view the steps that ran.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733153444873/6ecdb277-0a45-44ec-981c-c7ee671cd2f0.png" alt="Create a new pull request (PR) from the feature to the staging branch" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733153637817/e12fefde-9259-41a3-9bd1-63b5da1d88ea.png" alt="CI workflow run from PR (feature to staging branch)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h4 id="heading-the-continuous-integration-ci-process">The Continuous Integration (CI) Process</h4>
<p>The CI process begins when a Pull Request is made to the <code>staging</code> branch. It triggers the GitHub Actions workflow defined in the <code>.github/workflows/ci-pipeline.yml</code> file. The workflow runs the necessary steps to set up the environment, install dependencies, and build the Node.js application.</p>
<p>It then runs automated tests (using <code>npm test</code>) to ensure that the changes do not break any functionality in the codebase. If all these steps are completed successfully, the CI pipeline confirms that the feature branch is stable and ready to be merged into the <code>staging</code> branch for further testing and deployment.</p>
<h4 id="heading-step-4-merge-the-pull-request"><strong>Step 4: Merge the Pull Request</strong></h4>
<p>If your team or collaborators are part of the project, they may review your PR. This step may involve discussing any changes or improvements. If everything looks good, a reviewer will merge the PR.</p>
<p>Once the PR has been reviewed and approved, you can merge the PR. To do this, just click on the <strong>Merge pull request</strong> button. Choose <strong>Confirm merge</strong> when prompted.</p>
<p>After merging, you can go to the <code>staging</code> branch to verify that the changes were successfully merged.</p>
<h3 id="heading-navigating-to-the-actions-page-after-merging-the-pr"><strong>Navigating to the Actions Page After Merging the PR</strong></h3>
<p>Once you have successfully merged your pull request from the <code>feature/ci-cd-pipeline</code> branch into the <code>staging</code> branch, the Continuous Delivery (CD) pipeline will be triggered. To view the progress of the CD pipeline, navigate to the <strong>Actions</strong> tab in your GitHub repository. Here's how to do it:</p>
<ol>
<li><p>Go to your GitHub repository.</p>
</li>
<li><p>At the top of the page, you will see the <strong>Actions</strong> tab next to the <strong>Code</strong> tab. Click on it.</p>
</li>
<li><p>On the Actions page, you will see a list of workflows that have been triggered. Look for the one labelled <strong>CD Pipeline to Google Cloud Run (staging and production)</strong>. It should appear as a new run after the PR merge.</p>
</li>
<li><p>Click on the workflow run to view its progress and see the detailed logs for each step.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733154575368/96e236a2-ae66-494b-b544-f96955a18ac9.png" alt="Continuous Delivery workflow from merge to staging (feature to staging)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733159329441/cb7e26a9-7a20-4b1b-9869-e00facc695c1.png" alt="Continuous Delivery workflow Jobs from merge to staging (feature to staging)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733160506355/4682afe3-bb04-405d-af4e-fd9bd3494659.png" alt="Continuous Delivery workflow steps from merge to staging (feature to staging)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This will allow you to monitor the status of the CD pipeline and check if there are any issues during deployment.</p>
<p>If you look at the CD steps and workflow, you'll see that the step to deploy the application to the <strong>production</strong> environment was skipped, while the step to deploy to the <strong>staging</strong> environment was executed.</p>
<h4 id="heading-continuous-delivery-cd-pipeline-whats-going-on"><strong>Continuous Delivery (CD) pipeline – what’s going on:</strong></h4>
<p>The <strong>Continuous Delivery (CD) Pipeline</strong> automates the process of deploying the application to Google Cloud Run (testing environment). This workflow is triggered by a push to the <code>staging</code> branch, which happens after the changes from the feature branch are merged into <code>staging</code>. It can also be manually triggered via <code>workflow_dispatch</code> or upon a new release being published.</p>
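<p>In GitHub Actions terms, those three triggers correspond to an <code>on:</code> block like the following sketch – your actual workflow file may name branches or event types slightly differently:</p>
<pre><code class="lang-yaml">on:
  push:
    branches:
      - staging          # runs after a merge into staging
  release:
    types: [published]   # runs when a release is published (production deploy)
  workflow_dispatch:     # allows manual runs from the Actions tab
</code></pre>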
<p>The pipeline consists of multiple stages:</p>
<ol>
<li><p><strong>Test Job:</strong> The pipeline begins by setting up the environment and running tests using the <code>npm test</code> command. If the tests pass, the process moves forward.</p>
</li>
<li><p><strong>Build Job:</strong> The next step builds the Docker image of the Node.js application, tags it, and then pushes it to Docker Hub.</p>
</li>
<li><p><strong>Deployment to GCP:</strong> After the image is pushed, the workflow authenticates to Google Cloud and deploys the application. If the event is a published release (created from the <code>main</code> branch), the application is deployed to the production environment. If the event is a push to <code>staging</code>, the app is deployed to the staging environment.</p>
</li>
</ol>
<p>The CD process ensures that any changes made to the <code>staging</code> branch are automatically tested, built, and deployed to the staging environment, ready for further validation. When a release is published, it will trigger deployment to production, ensuring your app is always up to date.</p>
<h3 id="heading-accessing-the-deployed-application-in-the-staging-environment-on-google-cloud-run">Accessing the Deployed Application in the Staging Environment on Google Cloud Run 🌐</h3>
<p>Once the deployment to Google Cloud Run is successfully completed, you'll want to access your application running in the <strong>staging</strong> environment. Follow these steps to find and visit your deployed application:</p>
<h4 id="heading-1-navigate-to-the-google-cloud-console">1. <strong>Navigate to the Google Cloud Console</strong></h4>
<p>Open the Google Cloud Console in your browser by visiting <a target="_blank" href="https://console.cloud.google.com">https://console.cloud.google.com</a>. If you're not already signed in, make sure you log in with your Google account.</p>
<h4 id="heading-2-go-to-the-cloud-run-dashboard">2. <strong>Go to the Cloud Run Dashboard</strong></h4>
<p>In the Google Cloud Console, use the Search bar at the top or navigate through the left-hand menu: Go to <strong>Cloud Run</strong> (you can type this into the search bar, or find it under <strong>Products &amp; services</strong> &gt; <strong>Compute</strong> &gt; <strong>Cloud Run</strong>). Click on <strong>Cloud Run</strong> to open the Cloud Run dashboard.</p>
<h4 id="heading-3-select-your-staging-service">3. <strong>Select Your Staging Service</strong></h4>
<p>In the <strong>Cloud Run dashboard</strong>, you should see a list of all your services deployed across various environments. Find the service associated with the staging environment. The name should be similar to what you defined in your workflow (for example, <code>gcr-ci-cd-staging</code>).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733159635861/4ac895d2-5071-4d3f-9ed1-5af2bcca8835.png" alt="Google Cloud Run service for the staging environment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h4 id="heading-4-access-the-service-url">4. <strong>Access the Service URL</strong></h4>
<p>Once you've selected your staging service, you’ll be taken to the <strong>Service details page</strong>. This page provides all the important information about your deployed service.<br>On this page, look for the <strong>URL</strong> section under the <strong>Service URL</strong> heading. The URL will look something like: <code>https://gcr-ci-cd-staging-&lt;unique-id&gt;.run.app</code>.</p>
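<p>You can also fetch the service URL directly from your terminal. The service name and region below match the example values used in this tutorial:</p>
<pre><code class="lang-bash"># Prints only the deployed service's URL
gcloud run services describe gcr-ci-cd-staging \
  --region us-central1 \
  --format 'value(status.url)'
</code></pre>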
<h4 id="heading-5-visit-the-application">5. <strong>Visit the Application</strong></h4>
<p>Click on the <strong>Service URL</strong>, and it will open your staging environment in a new tab in your browser. You can now interact with your application as if it were live, but in the <strong>staging environment</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733160050763/b097e647-bf6d-442e-87df-fc7d82d3585c.png" alt="Google Cloud Run service URL for the staging environment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-merge-the-staging-branch-into-the-main-branch-continuous-integration-and-continuous-deployment"><strong>Merge the Staging Branch into the Main Branch (Continuous Integration and Continuous Deployment) 🌐</strong></h2>
<p>In this section, we'll take the updates in the staging branch, merge them into the main branch, and trigger the CI/CD pipeline. This process not only ensures your changes are production-ready but also deploys them to the production/live environment. 🚀</p>
<h3 id="heading-step-1-push-local-changes-and-open-a-pull-request">Step 1: Push Local Changes and Open a Pull Request</h3>
<p><strong>Why?</strong> The first step involves merging the staging branch into the main branch. Just like in the previous Continuous Delivery process, this ensures the integration of thoroughly tested updates.</p>
<p>Here’s how to do it:</p>
<p>First, visit the GitHub repository where your project is hosted.</p>
<p>Then go to the <strong>Pull Requests</strong> tab. Click <strong>New Pull Request</strong>. Choose <strong>main</strong> as the base branch (the branch you’re merging into) and <strong>staging</strong> as the compare branch. Add a clear title and description for the Pull Request, explaining why these updates are ready for production deployment.</p>
<h3 id="heading-step-2-continuous-integration-ci-pipeline-execution">Step 2: Continuous Integration (CI) Pipeline Execution</h3>
<p>Once you open the pull request, the <strong>Continuous Integration (CI)</strong> pipeline will automatically execute to validate that the changes are still stable when integrated into the <strong>main branch</strong>.</p>
<h4 id="heading-pipeline-steps">Pipeline Steps:</h4>
<ul>
<li><p><strong>Code Checkout</strong>: The workflow fetches the latest code from the <strong>main branch</strong>.</p>
</li>
<li><p><strong>Dependency Installation</strong>: The pipeline installs all required dependencies.</p>
</li>
<li><p><strong>Testing</strong>: Automated tests are run to validate the application's stability.</p>
</li>
</ul>
<h3 id="heading-step-3-create-a-new-release">Step 3: Create a New Release</h3>
<p>The Continuous Deployment (CD) workflow to deploy to the production environment is triggered by the creation of a new release from the main branch.</p>
<p>Let’s walk through the steps to create a release.</p>
<p>On your GitHub repository page, click on the <strong>Releases</strong> section (located under the <strong>Code</strong> tab).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733338781623/c21e7f03-5381-47f9-8807-b5a3360245ad.png" alt="Navigate to the Release page in the GitHub repo" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Next, click <strong>Draft a new release</strong>. Set the <strong>Target</strong> branch to <strong>main</strong>. Enter a <strong>Tag version</strong> (for example, <code>v1.0.0</code>) following semantic versioning. Add a <strong>Release title</strong> and an optional description of the changes.</p>
<p>Then, click <strong>Publish Release</strong> to finalize.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733161473858/6e14214c-31fb-49b3-9dff-a719b9ec1d40.png" alt="Create a new release in the GitHub repo" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
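<p>The release can also be created with the GitHub CLI, which tags, titles, and publishes it in one step. The tag and title here are the example values from above, and the notes text is just a placeholder:</p>
<pre><code class="lang-bash">gh release create v1.0.0 \
  --target main \
  --title "v1.0.0" \
  --notes "First production release of the CI/CD tutorial app"
</code></pre>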
<h4 id="heading-why-run-the-continuous-deployment-pipeline-on-release-instead-of-on-push">Why run the Continuous Deployment pipeline on release instead of on push? 🤔</h4>
<p>In our setup, we decided not to trigger the Continuous Deployment (CD) pipeline every time changes are pushed to the main branch. Instead, we trigger it only when a new release is created. This gives the team more control over when updates are deployed to the production environment.</p>
<p>Imagine a scenario where developers are working on new features—they may push changes to the main branch as part of their regular workflow, but these features might not be complete or ready for users yet. Automatically deploying every push could accidentally expose unfinished features to your users, which can be confusing or disruptive.</p>
<p>By requiring a release to trigger the deployment, the team gets a chance to finalize and polish all changes before they go live.</p>
<p>For example, developers can test new features in the staging environment, fix any issues, and merge those changes into the main branch without worrying about them immediately appearing in production. This workflow ensures that only well-tested and complete features make their way to your end users.</p>
<p>Ultimately, this approach helps maintain a smooth user experience. Instead of seeing half-built features or unexpected changes, users only see updates that are ready and functional. It also gives the team the flexibility to push changes to the main branch frequently—preventing merge conflicts and making collaboration easier—while keeping control over what gets deployed live. 🚀</p>
<h3 id="heading-step-4-navigate-to-the-actions-page">Step 4: Navigate to the Actions Page</h3>
<p>After the release is published, the CD pipeline for the production environment is triggered. To monitor it, repeat the process you used for the Continuous Delivery workflow:</p>
<ol>
<li><p><strong>Go to the GitHub Actions tab</strong>: In your GitHub repository, click on the <strong>Actions</strong> tab.</p>
</li>
<li><p><strong>Locate the deployment workflow</strong>: Look for the <strong>CD Pipeline to Google Cloud Run (staging and production)</strong> workflow. You’ll notice that the workflow has been triggered on the <strong>main branch</strong> by the release event.</p>
</li>
<li><p><strong>Open the workflow details</strong>: Click on the workflow to view detailed steps, logs, and statuses for each part of the deployment process.</p>
</li>
</ol>
<p>This time, the Continuous Deployment workflow deploys the application to the <strong>production</strong>/<strong>live</strong> environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733164741827/303cd415-5bb9-4149-aa5d-7088d0eab582.png" alt="Continuous Deployment workflow from merge to main (staging to main)" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-5-access-the-live-application">Step 5: Access the Live Application</h3>
<p>Once the deployment is complete, go to Google Cloud Console at <a target="_blank" href="https://console.cloud.google.com">https://console.cloud.google.com</a>.</p>
<p>Navigate to <strong>Cloud Run</strong> from the menu. Select the service corresponding to the <strong>production environment</strong> (for example, <code>gcr-ci-cd-app</code>).</p>
<p>Locate the <strong>Service URL</strong> in the service details page. Open the URL in your browser to access the live application.</p>
<p>And now, congratulations – you’re done!</p>
<h2 id="heading-conclusion">Conclusion 🌟</h2>
<p>In this article, we explored how to build and automate a CI/CD pipeline for a Node.js application, using GitHub Actions, Docker Hub, and Google Cloud Run.</p>
<p>We set up workflows to handle Continuous Integration by testing and integrating code changes and Continuous Delivery to deploy those changes to a staging environment. We also containerized our app using Docker and deployed it seamlessly to Google Cloud Run.</p>
<p>Finally, we implemented Continuous Deployment, ensuring updates to the production environment happen only when a release is created from the main branch.</p>
<p>This approach gives teams the flexibility to push and test incomplete features without impacting end users. By following these steps, you've built a robust pipeline that makes deploying your application smoother, faster, and more reliable.</p>
<h3 id="heading-study-further">Study Further 📚</h3>
<p>If you would like to learn more about Continuous Integration, Delivery, and Deployment you can check out the courses below:</p>
<ul>
<li><p><a target="_blank" href="https://www.coursera.org/learn/continuous-integration-and-continuous-delivery-ci-cd"><strong>Continuous Integration and Continuous Delivery (CI/CD) (from IBM on Coursera)</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.udemy.com/course/github-actions-the-complete-guide/?couponCode=CMCPSALE24"><strong>GitHub Actions - The Complete Guide (from Udemy)</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.freecodecamp.org/news/what-is-ci-cd/"><strong>Learn CI/CD by building a project (freeCodeCamp tutorial)</strong></a></p>
</li>
</ul>
<h3 id="heading-about-the-author">About the Author 👨‍💻</h3>
<p>Hi, I’m Prince! I’m a software engineer passionate about building scalable applications and sharing knowledge with the tech community.</p>
<p>If you enjoyed this article, you can learn more about me by exploring more of my blogs and projects on my <a target="_blank" href="https://www.linkedin.com/in/prince-onukwili-a82143233/">LinkedIn profile</a>. You can find my <a target="_blank" href="https://www.linkedin.com/in/prince-onukwili-a82143233/details/publications/">LinkedIn articles here</a>. And you can <a target="_blank" href="https://prince-onuk.vercel.app/achievements#articles">visit my website</a> to read more of my articles as well. Let’s connect and grow together! 😊</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up a CI/CD Pipeline with Husky and GitHub Actions ]]>
                </title>
                <description>
                    <![CDATA[ CI/CD is a core practice in the modern software development ecosystem. It helps agile teams deliver high-quality software in short release cycles. In this tutorial, you'll learn what CI/CD is, and I'll help you set up a CI/CD pipeline using Husky and... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-set-up-a-ci-cd-pipeline-with-husky-and-github-actions/</link>
                <guid isPermaLink="false">66bccaed4a4c0beb784641ce</guid>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Viviana Yanez ]]>
                </dc:creator>
                <pubDate>Mon, 15 Jul 2024 17:46:34 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/07/how-to-set-a-cicd-pipeline-1.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>CI/CD is a core practice in the modern software development ecosystem. It helps agile teams deliver high-quality software in short release cycles.</p>
<p>In this tutorial, you'll learn what CI/CD is, and I'll help you set up a CI/CD pipeline using Husky and GitHub Actions in a Next.js application. </p>
<p>This tutorial assumes that you already have knowledge of React and Next.js or other modern JavaScript frameworks. You will also need a GitHub account, and basic knowledge of Git will be very helpful. </p>
<p>If you already have a working web app that is not built with Next.js, you might still find this article useful. All the concepts and most of the configurations will work with little adaptation in apps created with other frameworks.</p>
<h2 id="heading-heres-what-well-cover">Here's What We'll Cover:</h2>
<ol>
<li><a class="post-section-overview" href="#heading-what-is-cicd">What is CI/CD?</a><br>– <a class="post-section-overview" href="#heading-what-is-ci">What is CI?</a><br>– <a class="post-section-overview" href="#heading-what-is-cd">What is CD?</a><br>– <a class="post-section-overview" href="#heading-what-is-a-cicd-pipeline-and-what-are-its-benefits">What is a CI/CD pipeline and what are its benefits?</a></li>
<li><a class="post-section-overview" href="#heading-how-to-set-up-a-cicd-pipeline">How to Set Up a CI/CD Pipeline</a><br>– <a class="post-section-overview" href="#heading-step-1-set-up-a-nextjs-app">Step 1: Set Up a Next.js App with Vitest</a><br>– <a class="post-section-overview" href="#heading-step-2-set-a-git-hook">Step 2: Set a Git Hook</a><br>– <a class="post-section-overview" href="#heading-step-3-create-a-github-actions-workflow">Step 3: Create a GitHub Actions Workflow</a><br>– <a class="post-section-overview" href="#heading-step-4-deploy-the-project">Step 4: Deploy the Project</a></li>
<li><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></li>
</ol>
<h2 id="heading-what-is-cicd">What is CI/CD?</h2>
<p>Continuous Integration/Continuous Delivery or Continuous Deployment (CI/CD) is a practice that involves automating the process of building, testing, and deploying software.</p>
<p>Its main benefit is speeding up the entire development process. It also increases productivity by ensuring smooth code integration and the adoption of coding standards and security best practices, and it shortens the feedback cycle through early issue detection, among other advantages explained below.</p>
<p>CI/CD is an essential tool in today’s software development practices, enabling teams to deliver high-quality software quickly, efficiently, and reliably.</p>
<p>Let’s learn more about it in detail.</p>
<h3 id="heading-what-is-ci">What is CI?</h3>
<p><strong>Continuous Integration</strong> is a software development practice in which developers on a team merge code changes into a central repository multiple times a day. </p>
<p>Instead of working in isolated dev environments and merging only at a set point in time, developers frequently integrate their changes to an application into a shared branch or “trunk”.</p>
<h3 id="heading-what-is-cd">What is CD?</h3>
<p>The CD in CI/CD usually refers to <strong>Continuous Delivery</strong>. It's a practice that, on top of CI, automates the software integration, testing, and release process. The automation stops just before deploying to production, where a human-controlled step is needed.</p>
<p>But CD can also refer to <strong>Continuous Deployment</strong>, which adds automation to the step of releasing software to a production environment.</p>
<p>Even though CD usually refers to Continuous Delivery, both terms are sometimes used interchangeably. The difference between them is the amount of automation implemented in a project.</p>
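<p>To make the difference concrete: with Continuous Delivery, the pipeline typically pauses for a manual approval right before the production deploy. In GitHub Actions (which we'll set up later in this tutorial), this is often modeled with a protected environment. The following sketch assumes a "production" environment with required reviewers has been configured in the repository settings, and that a <code>run-linter-and-tests</code> job exists in the same workflow:</p>
<pre><code class="lang-yaml"># Continuous Delivery: GitHub pauses this job until a reviewer approves,
# because it targets a protected "production" environment.
# Removing the protection rule would turn this into Continuous Deployment.
deploy:
  needs: run-linter-and-tests
  runs-on: ubuntu-latest
  environment: production
  steps:
    - run: echo "Deploying to production..."
</code></pre>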
<h3 id="heading-what-is-a-cicd-pipeline-and-what-are-its-benefits">What is a CI/CD pipeline and what are its benefits?</h3>
<p>When put together, these two practices create a CI/CD pipeline. Adding CI/CD to your project brings the following benefits:</p>
<ul>
<li>Faster development: reduces the time required to deliver new features by automating the build, test, and deploy steps.</li>
<li>Enhanced collaboration: encourages frequent code integrations and reduces integration conflicts.</li>
<li>Improved code quality: enforces the adoption of coding standards and best practices throughout the codebase.</li>
<li>Early detection of issues: shortens the feedback cycle, as problems are caught earlier.</li>
<li>Increased productivity: frees developers from repetitive manual tasks.</li>
</ul>
<p>These are some of the reasons why CI/CD is a core practice in modern software development and why it is such an important topic to learn about. The following steps will guide you through the process of setting up a CI/CD pipeline for your project.</p>
<h2 id="heading-how-to-set-up-a-cicd-pipeline">How to Set Up a CI/CD Pipeline</h2>
<h3 id="heading-step-1-set-up-a-nextjs-app">Step 1: Set Up a Next.js App</h3>
<p>If you already have a working web app, you can skip this step and go directly to <a class="post-section-overview" href="#heading-step-2-set-a-git-hook">Step 2</a>.</p>
<p>Otherwise, let's set up a basic Next.js app with the default ESLint configuration and Vitest, and push it to a GitHub repo.</p>
<h4 id="heading-create-a-nextjs-app">Create a Next.js app</h4>
<p>Navigate into the directory where you want to create the new project folder, then run the following command in your terminal:</p>
<pre><code class="lang-bash">npx create-next-app@latest
</code></pre>
<p>When prompted with the installation options, make sure you choose to use ESLint in your project. This will ensure that ESLint is properly installed and a <code>lint</code> script is created in the package.json. </p>
<p>Wait for <code>create-next-app</code> to create the folder and install the project dependencies. Once it's done, navigate into the new folder and start the dev server:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> &lt;your-project-name&gt;
npm run dev
</code></pre>
<h4 id="heading-set-up-vitest">Set up Vitest</h4>
<p>Let's add Vitest to the project, along with some automated tests to run in the CI/CD pipeline.</p>
<p>First, install <code>vitest</code> and the dev dependencies needed:</p>
<pre><code class="lang-bash">npm install -D vitest @vitejs/plugin-react jsdom @testing-library/react
</code></pre>
<p>Create a <code>vitest.config.js</code> file (or <code>vitest.config.ts</code> if using TypeScript) with the following content:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> { defineConfig } <span class="hljs-keyword">from</span> <span class="hljs-string">'vitest/config'</span>
<span class="hljs-keyword">import</span> react <span class="hljs-keyword">from</span> <span class="hljs-string">'@vitejs/plugin-react'</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineConfig({
  <span class="hljs-attr">plugins</span>: [react()],
  <span class="hljs-attr">test</span>: {
    <span class="hljs-attr">environment</span>: <span class="hljs-string">'jsdom'</span>,
  },
})
</code></pre>
<p>And finally, add the <code>test</code> script to the package.json:</p>
<pre><code> <span class="hljs-string">"test"</span>: <span class="hljs-string">"vitest --no-watch"</span>
</code></pre><p>Note that I added the <code>--no-watch</code> option to the test script. This prevents Vitest from starting in its default watch mode in the dev environment.</p>
<p>Now, you can add tests for your project. If you don't know how to start, you can check out <a target="_blank" href="https://nextjs.org/docs/app/building-your-application/testing/vitest#creating-your-first-vitest-unit-test">this guide</a> for some examples.</p>
<h4 id="heading-push-the-project-to-github">Push the project to GitHub</h4>
<p>Log in to your GitHub account and create a new repository. Once you are done, you can connect the local repo with the one you just created by adding it as the remote. Then push the changes:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">"first commit"</span>
git remote add origin git@github.com:&lt;your-user-name&gt;/&lt;your-repo-name&gt;.git
git push origin main
</code></pre>
<p>You should now be ready to continue to the interesting part of this tutorial. :)</p>
<h3 id="heading-step-2-set-a-git-hook">Step 2: Set a Git Hook</h3>
<p>A Git hook is a script that runs automatically at certain points in the Git lifecycle, such as before a commit or a push. In this case, we will be using Husky.</p>
<p><a target="_blank" href="https://typicode.github.io/husky/">Husky</a> is a tool that manages Git hooks for you, allowing you to maintain code quality by executing tasks upon committing or pushing. You can run various checks before making a commit with new changes, such as linting the code and running automated tests.</p>
<p>By implementing these checks, you can avoid wasting time and resources by catching issues in advance before triggering the GitHub Actions workflow.  </p>
<p>Let’s start by adding Husky to the project with the following command:</p>
<pre><code class="lang-bash">npm install --save-dev husky
</code></pre>
<p>Next, let’s set up the project using the Husky init command:</p>
<pre><code class="lang-bash">npx husky init
</code></pre>
<p>After running this command, you will notice that a pre-commit file was created under <code>.husky</code>. Also, a <code>prepare</code> script was added to the package.json.</p>
<p>If you open the pre-commit file inside <code>.husky</code>, you will find the following content:</p>
<pre><code class="lang-bash">npm <span class="hljs-built_in">test</span>
</code></pre>
<p>As its name suggests, this file contains the code that executes before completing a commit. With everything set up as described, tests will run each time you attempt to create a new commit and new commits will be added only if all tests pass. </p>
<h4 id="heading-adding-more-git-hooks">Adding more git hooks</h4>
<p>Now, let’s change the content in the pre-commit file so the code linter also executes before creating a new commit. </p>
<p>You can open your preferred code editor and add <code>npm run lint</code> (or the corresponding ESLint script if you’re not using Next.js) in a new line in the pre-commit file. Alternatively, you can simply run the following command from the root folder of your project:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"npm run lint"</span> &gt;&gt; ./.husky/pre-commit
</code></pre>
<p>Now, each time you attempt to make a new commit, the tests and the linter will run, and the commit will be created only if all tests pass and no errors are found in the code.</p>
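<p>At this point, the <code>.husky/pre-commit</code> file should contain both commands, one per line:</p>
<pre><code class="lang-bash">npm test
npm run lint
</code></pre>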
<h4 id="heading-setting-up-lint-staged">Setting up lint-staged</h4>
<p>You can go one step further and include a tool called <a target="_blank" href="https://github.com/lint-staged/lint-staged">lint-staged</a>. This tool will be especially useful if your project is large, because it allows you to run the Git hooks only for staged files. In this case, it will lint only the files that will be committed, avoiding wasting time by linting the entire project.</p>
<p>To start using lint-staged, let's add it as a dev dependency to the project:</p>
<pre><code class="lang-bash">npm install --save-dev lint-staged
</code></pre>
<p>There are <a target="_blank" href="https://github.com/lint-staged/lint-staged?tab=readme-ov-file#configuration">different ways to configure lint-staged</a>, and you can choose the one that best suits your needs. I will add a <code>lint-staged</code> script and a <code>lint-staged</code> configuration object to the package.json of my project with the following content:</p>
<pre><code class="lang-js">  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint",
    "test": "vitest --no-watch",
    "lint-staged": "lint-staged",
    "prepare": "husky"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": [
      "eslint --fix"
    ]
  },
</code></pre>
<p>Now, I can replace <code>npm run lint</code> with <code>npm run lint-staged</code> in the pre-commit file.</p>
<p>Each time I make a new commit, any <code>js</code>, <code>jsx</code>, <code>ts</code>, or <code>tsx</code> staged files will be linted and, if there are fixable issues, they will be automatically fixed.</p>
<p>Let's test that the pre-commit hook is working as expected by:</p>
<ol>
<li>Running <code>git add .</code></li>
<li>Running <code>git commit</code></li>
<li>Waiting for the linter to run and entering a commit message when prompted</li>
<li>Running <code>git log</code> to confirm that the commit was properly created</li>
</ol>
<p>If you want, you can add more checks to your pre-commit file to fit your project's needs. For example, you could run a tool like Prettier to automatically format your code, or <a target="_blank" href="https://commitlint.js.org/">commitlint</a> to lint your commit messages.</p>
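<p>For instance, a lint-staged configuration that also runs Prettier might look like this (a sketch that assumes Prettier is already installed as a dev dependency):</p>
<pre><code class="lang-js">"lint-staged": {
  "*.{js,jsx,ts,tsx}": [
    "eslint --fix",
    "prettier --write"
  ],
  "*.{json,css,md}": "prettier --write"
}
</code></pre>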
<p>Now, let’s move on to setting up a GitHub Actions workflow for the project. </p>
<h3 id="heading-step-3-create-a-github-actions-workflow">Step 3: Create a GitHub Actions Workflow</h3>
<p>With the first part complete, we can move on to the next step. Here, you will add a GitHub Actions workflow to ensure the smooth integration of changes into the entire project.</p>
<h4 id="heading-github-actions-basics">GitHub Actions Basics</h4>
<p>GitHub Actions is a CI/CD platform that allows you to automate the building, testing, and deployment of your project. It also lets you perform actions when certain activities happen in your repository, such as opening a pull request or creating an issue.</p>
<p>GitHub Actions are configured through workflows defined in YAML files. These workflows typically run when triggered by an event in the repository, but they can also be scheduled or run manually.</p>
<p>Workflows are located in the <code>.github/workflows</code> folder and run different jobs. Each job includes a set of steps that run in order on the same runner or server. A step can be either a shell script or an action (a reusable piece of code that helps reduce repetitive code in your workflows). </p>
<p>Let's put all this together by creating the first workflow.</p>
<h4 id="heading-creating-a-workflow-to-execute-when-you-push-to-main-branch">Creating a workflow to execute when you push to main branch</h4>
<p>First, create a <code>.github/workflows/</code> folder under your project root. Then create a <code>run-test.yml</code> file inside it. You will be adding content to this file to create a CI workflow.</p>
<p>The first line is optional and gives the workflow a name, which will appear in the "Actions" tab of the GitHub repo:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">linter</span> <span class="hljs-string">and</span> <span class="hljs-string">tests</span> <span class="hljs-string">on</span> <span class="hljs-string">push</span>
</code></pre>
<p>Then, you will use the <code>on</code> key to define the event or events that will trigger the workflow run. This can be an event in your repo or a time schedule. In this case, let's set it to run each time a push to the repo happens:</p>
<pre><code class="lang-yml"><span class="hljs-attr">on:</span>
  <span class="hljs-string">push</span>
</code></pre>
<p>You can also set options below the <code>on</code> keyword to limit the execution of a workflow to some branch or files – for example to run only on push to main branch:</p>
<pre><code class="lang-yml"><span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
</code></pre>
<p>Below this, you will add the <code>jobs</code> key. It groups all the jobs in the workflow, followed by the name of the first job, in this case <code>run-linter-and-tests</code>. </p>
<p>The lines below that define workflow properties, configuring it to run on the latest version of an Ubuntu Linux runner and grouping all the steps that run on this job.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">run-linter-and-tests:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Lint</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">lint</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>As mentioned before, each step can be either a shell script or an action. You can see the difference between the first and the second step in the previous code. </p>
<p>The first step uses the <code>uses</code> keyword to specify that it will run the <code>actions/checkout</code> action. This action checks out the repository onto the runner so the workflow can use the repository code. The second step, <code>Install dependencies</code>, uses the <code>run</code> keyword to tell the job to execute the <code>npm i</code> command on the runner.</p>
<p>This is the complete resulting file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run linter and tests on push</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-string">push</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">run-linter-and-tests:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Lint</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">lint</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>Let's commit the changes and push them to the GitHub repository.</p>
<p>Now, each time you push to your repository, the workflow will trigger. If you click on the "Actions" tab in your GitHub repository navigation bar, you will find a list of all the runs from all your workflows, along with their complete logs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/07/Screenshot-2024-07-03-at-12.13.05-1.png" alt="Image" width="600" height="400" loading="lazy">
<em>"Actions" tab in a GitHub repository navigation bar</em></p>
<p>Also, you will see that in the GitHub repository's "Code" tab, a green checkmark appears next to the last commit message. This means that workflows ran and finished successfully. </p>
<p>While jobs are still running, you'll see a yellow dot, and a red cross appears when a workflow finishes with an error.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/07/Screenshot_.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h4 id="heading-adding-a-second-workflow-to-run-when-a-pr-is-created">Adding a second workflow to run when a PR is created</h4>
<p>Each repository can have one or more workflows, so let's add a second workflow that runs each time a PR is created. It will generate the code coverage report each time a PR is opened in the repo.</p>
<p>First, create and checkout a new <code>add-wf</code> branch:</p>
<pre><code class="lang-bash">git checkout -b add-wf
</code></pre>
<p>Then, create a new YAML file under the <code>.github/workflows</code> directory and start adding some content to it.</p>
<p>First, let's add the name and when to run the workflow with the <code>on</code> keyword:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Coverage</span> <span class="hljs-string">on</span> <span class="hljs-string">PR</span>
<span class="hljs-attr">on:</span> <span class="hljs-string">pull_request</span>
</code></pre>
<p>After that, you will use the <code>jobs</code> keyword to describe the jobs to run. Let's define the first one, <code>build-and-run-coverage</code>, to run on an <code>ubuntu-latest</code> runner:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-run-coverage:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
</code></pre>
<p>Now, let's add <code>steps</code> for this job:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span> <span class="hljs-string">and</span> <span class="hljs-string">coverage</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">coverage</span>
</code></pre>
<p>Following is the complete resulting code:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Coverage</span> <span class="hljs-string">on</span> <span class="hljs-string">PR</span>
<span class="hljs-attr">on:</span> <span class="hljs-string">pull_request</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-run-coverage:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span> <span class="hljs-string">and</span> <span class="hljs-string">coverage</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">coverage</span>
</code></pre>
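<p>Note that this workflow assumes a <code>coverage</code> script exists in your package.json, which we haven't created yet. One way to set it up (assuming you want Vitest's V8 coverage provider) is:</p>
<pre><code class="lang-bash"># install the coverage provider as a dev dependency
npm install -D @vitest/coverage-v8

# add a "coverage" script that runs Vitest once with coverage enabled
npm pkg set scripts.coverage="vitest run --coverage"
</code></pre>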
<p>Now, you can push the change to your GitHub repo:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">'add a wf to run on opened PR'</span>
git push origin add-wf
</code></pre>
<p>Now you can open a PR against your <code>main</code> branch and wait for the workflow to complete.</p>
<h5 id="heading-comment-coverage-report-in-the-pr">Comment coverage report in the PR</h5>
<p>As mentioned earlier in this article, actions are reusable pieces of code that avoid repetitive code in the workflow. One cool thing about them is that there are many already written by the community that you can use in your workflows, saving lots of time.</p>
<p>To complete the workflow we created, let's add a new step that uses an action to report coverage results as a comment on the pull request.</p>
<p>First, let's add the <code>permissions</code> keyword to ensure the workflow has the right access to read the repository contents and to create comments on pull requests:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">permissions:</span>
      <span class="hljs-attr">contents:</span> <span class="hljs-string">read</span>
      <span class="hljs-attr">pull-requests:</span> <span class="hljs-string">write</span>
</code></pre>
<p>Then, let's use the <a target="_blank" href="https://github.com/marketplace/actions/vitest-coverage-report">Vitest Coverage Report</a> action by adding a <code>step</code> into the <code>build-and-run-coverage</code> job:</p>
<pre><code class="lang-yaml">      - name: Report Coverage
        uses: davelosert/vitest-coverage-report-action@v2
</code></pre>
<p>The final <code>yaml</code> file will look like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Coverage</span> <span class="hljs-string">on</span> <span class="hljs-string">PR</span>
<span class="hljs-attr">on:</span> <span class="hljs-string">pull_request</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-run-coverage:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">permissions:</span>
      <span class="hljs-attr">contents:</span> <span class="hljs-string">read</span>
      <span class="hljs-attr">pull-requests:</span> <span class="hljs-string">write</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">test</span> <span class="hljs-string">and</span> <span class="hljs-string">coverage</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">coverage</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Report</span> <span class="hljs-string">Coverage</span>
        <span class="hljs-attr">uses:</span>  <span class="hljs-string">davelosert/vitest-coverage-report-action@v2</span>
</code></pre>
<p>There is one more step to ensure everything works as expected. You must add the <code>json-summary</code> reporter to the Vitest configuration:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> { defineConfig } <span class="hljs-keyword">from</span> <span class="hljs-string">"vitest/config"</span>;
<span class="hljs-keyword">import</span> react <span class="hljs-keyword">from</span> <span class="hljs-string">"@vitejs/plugin-react"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineConfig({
  plugins: [react()],
  test: {
    environment: <span class="hljs-string">"jsdom"</span>,
    coverage: {
      provider: <span class="hljs-string">"v8"</span>,
      extension: [<span class="hljs-string">".tsx"</span>],
      reporter: [<span class="hljs-string">'text'</span>, <span class="hljs-string">'json-summary'</span>, <span class="hljs-string">'json'</span>],
    },
  },
});
</code></pre>
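<p>Optionally, recent versions of Vitest also let you fail the coverage run when coverage drops below a minimum, via a <code>thresholds</code> option in the same <code>coverage</code> block. The 80% values below are arbitrary examples, not part of the original setup:</p>
<pre><code class="lang-ts">coverage: {
  provider: "v8",
  extension: [".tsx"],
  reporter: ['text', 'json-summary', 'json'],
  // Fail the run if any metric falls below 80%
  thresholds: {
    lines: 80,
    branches: 80,
    functions: 80,
    statements: 80,
  },
},
</code></pre>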
<p>Now, make some changes in your project and add corresponding tests to check if the workflow is working as expected. </p>
<p>Once you push your changes to the GitHub repo, open a PR against the main branch of your project. After the workflows finish running, you should see a comment showing the coverage result:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/07/Screen-Shot-2024-07-12-at-19.18.05.png" alt="Image" width="600" height="400" loading="lazy">
<em>Coverage Report in a pull request comment</em></p>
<h3 id="heading-step-4-deploy-the-project">Step 4: Deploy the Project</h3>
<p>As a last step in this tutorial, let's deploy the project on <a target="_blank" href="https://vercel.com/">Vercel</a>. You will set up an automatic deployment through Git that will trigger a redeploy each time new changes are pushed or merged into the main branch.</p>
<p>First, log in to your Vercel account, or create one if you don't already have one. Then, in your dashboard, click on "Add New Project" and click on the "Import" button next to your repository name in the "Import Git Repository" section. </p>
<p>If you don't see your repository listed, it may be due to your GitHub app permissions configuration. You can manage them in your settings section in your GitHub account.</p>
<p>Finally, choose a name for the project in the "Configure Project" section and click on the "Deploy" button. You can now see the deploy details by clicking on the "Deployment" link.</p>
<p>Vercel automatic deployments ensure that the deployed project is always updated with the latest changes. They also have the benefit of <a target="_blank" href="https://vercel.com/docs/deployments/preview-deployments">Preview Deployments</a>, a preview URL that lets you test new features in advance of merging changes into production.</p>
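<p>If you prefer working from the terminal, you can also trigger deployments manually with the Vercel CLI. This is optional and separate from the Git-based flow described above:</p>
<pre><code class="lang-bash">npx vercel login   # authenticate once
npx vercel         # create a preview deployment of the current directory
npx vercel --prod  # deploy to production
</code></pre>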
<p>If you have followed along with the tutorial, you have now finished the CD part of the CI/CD pipeline for your project. You can be sure that any code pushed to the main branch is linted and tested, and once all checks pass, it is automatically deployed to production.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this guide, you learned about the importance of CI/CD in today’s software development ecosystem and its main benefits. You also took your first steps in this area by creating your own CI/CD pipeline for your project, learning how to use Husky and GitHub Actions.</p>
<p>Now, you can keep learning more about these tools and improve your CI/CD pipeline by customizing it to better fit your project's needs.</p>
<p>I hope you were able to gain some new knowledge and enjoyed following along. Thanks for reading!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Secure Your Web Server with Continuous Integration Using NGINX and CircleCI ]]>
                </title>
                <description>
                    <![CDATA[ Web servers are responsible for delivering web pages and various resources to clients through the internet. They can exist either as software or hardware components. But unfortunately, they often become targets for hackers and malicious individuals s... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/secure-web-server-with-continuous-integration-using-nginx-and-circleci/</link>
                <guid isPermaLink="false">66d45d5e230dff016690579b</guid>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ nginx ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Security ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Fri, 19 Jan 2024 16:46:59 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/01/feature-image.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Web servers are responsible for delivering web pages and various resources to clients through the internet. They can exist either as software or hardware components.</p>
<p>But unfortunately, they often become targets for hackers and malicious individuals seeking to exploit any vulnerabilities to compromise data and disrupt functionality. As a result, you'll need to prioritize the security of your web server by updating it and implementing safeguards against threats.</p>
<p>To enhance the security of your web server, one effective approach is to use <a target="_blank" href="https://www.freecodecamp.org/news/what-is-ci-cd/">Continuous Integration</a> (CI). CI is a DevOps technique that allows the automated merging of code modifications from software engineers into a single repository. This practice enhances code quality, minimizes bugs, and speeds up code delivery.</p>
<p>By using CI, you can automate the testing, building, and deployment processes for your web servers’ code and configuration. You can also ensure that your web server consistently maintains a stable state.</p>
<p>In this tutorial, I'll guide you through the process of strengthening the security of your web server by using two popular and powerful tools: <a target="_blank" href="https://www.freecodecamp.org/news/nginx/">NGINX</a> and CircleCI.</p>
<p>NGINX, which is an open source web server, provides a range of features and modules that can greatly enhance the security of your web server. These include SSL/TLS encryption, security headers, and support for HTTP/2.</p>
<p>On the other hand, CircleCI offers both cloud-based and self-hosted options for Continuous Integration (CI) and Continuous Delivery (CD), enabling seamless deployment processes.</p>
<p>By following this guide, you will learn how to:</p>
<ul>
<li><p>Configure NGINX to use SSL/TLS encryption and security headers</p>
</li>
<li><p>Create a GitHub repository and push your NGINX configuration files to it</p>
</li>
<li><p>Create a CircleCI project and link it to your GitHub repository</p>
</li>
<li><p>Create a CircleCI configuration file and define your CI pipeline</p>
</li>
<li><p>Test and deploy your web server with CircleCI</p>
</li>
</ul>
<h3 id="heading-here-is-what-well-cover">Here is what we'll cover:</h3>
<ul>
<li><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></li>
<li><a class="post-section-overview" href="#heading-step-1-configure-nginx-to-use-ssltls-encryption">Step 1: Configure NGINX to Use SSL/TLS Encryption</a></li>
<li><a class="post-section-overview" href="#heading-step-2-configure-nginx-to-include-security-headers">Step 2: Configure NGINX to Include Security Headers</a></li>
<li><a class="post-section-overview" href="#heading-step-3-create-a-github-repository-and-push-your-nginx-configuration">Step 3: Create a GitHub Repository and Push Your NGINX Configuration</a></li>
<li><a class="post-section-overview" href="#heading-step-4-create-a-circleci-project-and-link-it-to-your-github-repository">Step 4: Create a CircleCI Project and Link it to Your GitHub Repository</a></li>
<li><a class="post-section-overview" href="#heading-step-5-create-a-circleci-configuration-file-and-define-your-ci-pipeline">Step 5: Create a CircleCI Configuration File and Define Your CI Pipeline</a></li>
<li><a class="post-section-overview" href="#heading-step-6-test-and-deploy-your-web-server-with-circleci">Step 6: Test and Deploy Your Web Server with CircleCI</a></li>
</ul>
<p>Let's get started!</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you start this guide, you need to ensure you have the following:</p>
<ul>
<li><p>A web server running NGINX. If you don't have one, you can follow this <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04">guide</a> to install NGINX on Ubuntu 20.04. You can also use any other operating system or cloud provider that supports NGINX.</p>
</li>
<li><p>A GitHub account. If you don't have one, you can sign up for free <a target="_blank" href="https://github.com/join">here</a>.</p>
</li>
<li><p>A CircleCI account. If you don't have one, you can sign up for free <a target="_blank" href="https://circleci.com/signup/">here</a>. You will also need to link your GitHub account to your CircleCI account.</p>
</li>
<li><p>Some basic knowledge of <a target="_blank" href="https://www.freecodecamp.org/news/learn-web-development-with-this-free-20-hour-course/">web development</a> and <a target="_blank" href="https://www.freecodecamp.org/news/helpful-linux-commands-you-should-know/">Linux commands</a>. You should be familiar with the concepts of web servers, SSL/TLS encryption, security headers, and CI. You should also be comfortable with using the command line and editing configuration files.</p>
</li>
</ul>
<p>Once you have these, you are ready to proceed with the next steps.</p>
<h2 id="heading-step-1-configure-nginx-to-use-ssltls-encryption">Step 1: Configure NGINX to Use SSL/TLS Encryption</h2>
<p>SSL/TLS encryption secures the transmission of data between your web server and its clients, safeguarding the information against interception or manipulation. It also plays a role in verifying the identity and trustworthiness of your web server.</p>
<p>You need an SSL/TLS certificate to use SSL/TLS for your web server. An SSL/TLS certificate contains information about your web server, such as its domain name, owner, and public key. The validity of the certificate is verified through the unique digital signature from a Certificate Authority (CA).</p>
<p>You can either purchase an SSL/TLS certificate from a commercial CA, such as DigiCert, Symantec, or GlobalSign, or you can just get one for free from a non-profit CA, such as Let's Encrypt. You can also create your own self-signed certificate, but this is not recommended for production use, as it will not be trusted by most browsers and clients.</p>
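<p>For reference, this is roughly how you would generate a self-signed certificate with OpenSSL (the file paths here are just examples, and again, this is not recommended for production):</p>
<pre><code class="lang-bash">sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/nginx-selfsigned.key \
  -out /etc/ssl/certs/nginx-selfsigned.crt
</code></pre>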
<p>In this guide, you will use Let's Encrypt to get a free SSL/TLS certificate for your web server. To use Let's Encrypt, you need to install a software client on your web server that can communicate with the CA and perform the necessary tasks.</p>
<p>One of the most common and recommended clients for Let's Encrypt is Certbot. Certbot is a command-line tool that can automatically request, install, and renew certificates for your web server. It can also configure your web server to use the certificates and enable HTTPS.</p>
<p>To install Certbot on your web server, run the following commands:</p>
<pre><code class="lang-bash">sudo apt update
sudo apt install certbot python3-certbot-nginx
</code></pre>
<p>After installing Certbot, use it to request and install a certificate for your web server. You need to provide your domain name and your email address for the certificate.</p>
<p>To request and install a certificate for your web server, run the following command:</p>
<pre><code class="lang-bash">sudo certbot --nginx -d yourdomain.com
</code></pre>
<p>Replace yourdomain.com with your actual domain name.</p>
<p>Follow the prompts and answer the questions. Certbot will automatically verify your domain ownership, obtain a certificate, and install it on your web server. It will also ask you whether you want to redirect all HTTP traffic to HTTPS. Choose option 2 to enable the redirection.</p>
<p>After the process is completed, you will see a message like this:</p>
<p><img src="https://i.ibb.co/wBVfh1R/carbon-1.png" alt="certbot-success message" width="600" height="400" loading="lazy"></p>
<p>You have now successfully configured NGINX to use SSL/TLS encryption with a certificate from Let's Encrypt. You can now access your web server using HTTPS and see the lock icon in your browser.</p>
<p><img src="https://i.ibb.co/b3pqJBB/secureverification2.png" alt="Lock Icon" width="600" height="400" loading="lazy"></p>
<p>You can also test your web server security using online tools, such as SSL Labs. You should see a grade of A or higher.</p>
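<p>You can also inspect the certificate your server presents directly from the command line with OpenSSL:</p>
<pre><code class="lang-bash"># Show the subject, issuer, and validity dates of the served certificate
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com &lt;/dev/null 2&gt;/dev/null \
  | openssl x509 -noout -subject -issuer -dates
</code></pre>
<p>Replace yourdomain.com with your actual domain name, as before.</p>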
<p>Note: Let's Encrypt certificates are valid for 90 days. Certbot can automatically renew them for you before they expire.</p>
<p>To enable automatic renewal, you need to create a cron job or a systemd timer that runs the following command at least once per day:</p>
<pre><code class="lang-bash">sudo certbot renew
</code></pre>
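<p>For example, you can open the root user's crontab with <code>sudo crontab -e</code> and add an entry like the following (the exact schedule is up to you; twice a day is a common choice):</p>
<pre><code class="lang-bash"># Attempt certificate renewal at 03:00 and 15:00 every day
0 3,15 * * * certbot renew --quiet
</code></pre>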
<p>You can also test the renewal process manually by running the following command:</p>
<pre><code class="lang-bash">sudo certbot renew --dry-run
</code></pre>
<p>This will perform a trial run without making any changes.</p>
<p>If you encounter any errors or issues, you can check the <a target="_blank" href="https://eff-certbot.readthedocs.io/en/latest/">Certbot documentation</a> or the <a target="_blank" href="https://community.letsencrypt.org/">Let's Encrypt community forum</a> for help.</p>
<h2 id="heading-step-2-configure-nginx-to-include-security-headers">Step 2: Configure NGINX to Include Security Headers</h2>
<p>Security headers instruct the browser to apply certain security policies or restrictions when handling your web content. They can prevent or mitigate common web attacks such as cross-site scripting (XSS), clickjacking, and content injection.</p>
<p>In this step, you will add four security headers to your NGINX configuration: X-Frame-Options, X-Content-Type-Options, X-XSS-Protection, and Content-Security-Policy.</p>
<h3 id="heading-x-frame-options">X-Frame-Options</h3>
<p>The X-Frame-Options header tells the browser whether or not to allow your web page to be displayed in a frame, iframe, embed, or object element. This can help you prevent clickjacking attacks, where an attacker overlays a hidden frame on top of your web page and tricks the user into clicking on it.</p>
<p>There are three possible values for this header:</p>
<ul>
<li><p>DENY: This value prevents your web page from being displayed in any frame.</p>
</li>
<li><p>SAMEORIGIN: This value allows your web page to be displayed in a frame only if the frame is from the same origin as your web page.</p>
</li>
<li><p>ALLOW-FROM URI: This value allows your web page to be displayed in a frame only if the frame is from the specified URI.</p>
</li>
</ul>
<p>To enable the X-Frame-Options header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> X-Frame-Options <span class="hljs-string">"SAMEORIGIN"</span>;
</code></pre>
<p>This will allow your web page to be displayed in a frame only if the frame is from the same origin as your web page. You can change the value to DENY or ALLOW-FROM uri according to your needs.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
<h3 id="heading-x-content-type-options">X-Content-Type-Options</h3>
<p>The X-Content-Type-Options header instructs the browser not to perform MIME-type sniffing, a feature that tries to guess the content type of a resource by analyzing its content or file extension instead of trusting the declared Content-Type.</p>
<p>By using this header, you can safeguard against content injection attacks, in which an attacker uploads a file with a misleading content type to exploit the browser’s interpretation of it.</p>
<p>There is only one possible value for this header:</p>
<ul>
<li>nosniff: This value prevents the browser from performing MIME type sniffing.</li>
</ul>
<p>To enable the X-Content-Type-Options header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> X-Content-Type-Options <span class="hljs-string">"nosniff"</span>;
</code></pre>
<p>This will prevent the browser from performing MIME type sniffing on your web resources.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
<h3 id="heading-x-xss-protection">X-XSS-Protection</h3>
<p>The X-XSS-Protection header tells the browser to enable or disable its built-in XSS filter, which can detect and block some types of XSS attacks. This can help you prevent XSS attacks, where an attacker injects malicious code into your web page that executes in the browser. Note that modern browsers such as Chrome and Edge have removed this filter, so the header mainly benefits users of older browsers.</p>
<p>There are three possible values for this header:</p>
<ul>
<li><p>0: This value disables the XSS filter.</p>
</li>
<li><p>1: This value enables the XSS filter and sanitizes the page if an XSS attack is detected.</p>
</li>
<li><p>1; mode=block: This value enables the XSS filter and blocks the page if an XSS attack is detected.</p>
</li>
</ul>
<p>To enable the X-XSS-Protection header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> X-XSS-Protection <span class="hljs-string">"1; mode=block"</span>;
</code></pre>
<p>This will enable the XSS filter and block the page if an XSS attack is detected. You can change the value to 0 or 1 according to your needs.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
<h3 id="heading-content-security-policy">Content-Security-Policy</h3>
<p>The Content-Security-Policy header tells the browser to enforce a set of rules that restrict what sources and types of content can be loaded and executed on your web page. This can help you prevent XSS, content injection, and other types of attacks that rely on loading malicious or untrusted content.</p>
<p>The value of this header is a complex policy that consists of multiple directives and values. Each directive specifies a type of content and a list of sources that are allowed or disallowed for that content.</p>
<p>For example, the following policy allows scripts and styles only from the same origin, and images from the same origin or yourdomain.com:</p>
<pre><code class="lang-nginx">Content-Security-Policy: default-<span class="hljs-attribute">src</span> <span class="hljs-string">'none'</span>; script-<span class="hljs-attribute">src</span> <span class="hljs-string">'self'</span>; style-<span class="hljs-attribute">src</span> <span class="hljs-string">'self'</span>; img-<span class="hljs-attribute">src</span> <span class="hljs-string">'self'</span> yourdomain.com;
</code></pre>
<p>The Content-Security-Policy header is very powerful and flexible, but also very complicated and error-prone. You need to carefully design and test your policy to ensure that it does not break your web functionality or introduce new vulnerabilities. You can use tools like CSP Evaluator or CSP Scanner to check and improve your policy.</p>
<p>To enable the Content-Security-Policy header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> Content-Security-Policy <span class="hljs-string">"default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' yourdomain.com;"</span>;
</code></pre>
<p>This will enforce the policy described above. You can change the policy according to your needs.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
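<p>Putting it all together, the security-header part of your server block should now look something like this (the rest of the block, including the SSL directives added by Certbot, is omitted here):</p>
<pre><code class="lang-nginx">server {
    # ... listen, server_name, ssl_certificate, and other directives ...

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    add_header X-XSS-Protection "1; mode=block";
    add_header Content-Security-Policy "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' yourdomain.com;";
}
</code></pre>
<p>You can validate the syntax with <code>sudo nginx -t</code> before restarting, and then confirm the headers are being sent with <code>curl -I https://yourdomain.com</code>.</p>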
<h2 id="heading-step-3-create-a-github-repository-and-push-your-nginx-configuration">Step 3: Create a GitHub Repository and Push Your NGINX Configuration</h2>
<p>To create a GitHub repository and push your NGINX configuration files to it, follow these steps:</p>
<h3 id="heading-create-a-github-repository">Create a GitHub repository</h3>
<p>First, log in to your GitHub account and go to the GitHub homepage.</p>
<p>Click on the plus icon in the top right corner of the page, and select "New repository" from the dropdown menu.</p>
<p><img src="https://i.ibb.co/vkWX0BJ/dropdownmenu-edited.png" alt="Dropdown Menu" width="600" height="400" loading="lazy"></p>
<p>On the next page, enter a name for your repository in the "Repository name" field. This should be a short and descriptive name that accurately reflects the contents of the repository. For example, you can name it "nginx-config".</p>
<p>In the "Description" field, you can enter a longer description of the repository if you want. This is optional, but it can be helpful to provide more information about the purpose of the repository.</p>
<p>For example, you can write "A repository for storing and managing my NGINX configuration files".</p>
<p>You can set the visibility to whatever you prefer. If you want others to be able to see your work, set it to "Public". Otherwise, set it to "Private".</p>
<p>Leave the "Initialize this repository with a README" option unchecked, as you want to create an empty repository.</p>
<p><img src="https://i.ibb.co/SQVJbqh/settingupnewrepo.png" alt="Settting up New Repository" width="600" height="400" loading="lazy"></p>
<p>Click on the "Create repository" button to create the repository.</p>
<p>Your new empty repository will be created and you will be taken to the repository page.</p>
<h3 id="heading-push-your-nginx-configuration-files-to-the-github-repository">Push your NGINX configuration files to the GitHub repository</h3>
<p>On your web server, navigate to the directory where your NGINX configuration files are located. By default, this is /etc/nginx on most Linux distributions.</p>
<p>Initialize a new Git repository in this directory by running the following command:</p>
<pre><code class="lang-bash">git init
</code></pre>
<p>This will create a new .git directory in the current directory, which will be used to store all the version control information for your project.</p>
<p>Add all the configuration files that you want to include in the repository by running the following command:</p>
<pre><code class="lang-bash">git add .
</code></pre>
<p>This will add all the files in the current directory and its subdirectories to the repository. You can also specify individual files to be added by replacing the (.) with the file names, separated by spaces.</p>
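<p>A word of caution: /etc/nginx, or directories referenced from it, can contain private key material. Before committing, consider excluding secrets with a .gitignore file. The patterns below are only examples and depend on where your keys actually live:</p>
<pre><code class="lang-bash"># .gitignore - keep private keys and certificates out of version control
*.key
*.pem
ssl/
</code></pre>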
<p>Then commit the files to the repository by running the following command:</p>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"Initial commit"</span>
</code></pre>
<p>This will create the first commit in the repository, which will include all the files that were added in the previous step. The -m flag is used to specify a commit message, which should briefly describe the changes that were made in this commit.</p>
<p>Go back to your GitHub repository page and copy the URL of your repository. You can find it under the "Code" section. It should look something like this:</p>
<p><img src="https://i.ibb.co/GThsRSV/code-Button.png" alt="github-url" width="600" height="400" loading="lazy"></p>
<p>On your web server, add the URL of your GitHub repository as a remote for your Git repository by running the following command:</p>
<pre><code class="lang-bash">git remote add origin https://github.com/username/nginx-config.git
</code></pre>
<p>Replace username with your GitHub username and nginx-config with your repository name. The origin is the name of the remote, which you can change to anything you want.</p>
<p>Push your local Git repository to the GitHub repository by running the following command:</p>
<pre><code class="lang-bash">git push -u origin master
</code></pre>
<p>This will push your master branch, which is the default branch in Git, to the origin remote, which is the GitHub repository that you created. The -u flag is used to set the upstream for your branch, which means that you can use git push or git pull without specifying the remote or the branch in the future.</p>
<p>Enter your GitHub username and password when prompted. If you have enabled two-factor authentication, you will need to use a personal access token instead of your password. You can generate one from your GitHub settings page.</p>
<p>You have successfully created a GitHub repository and pushed your NGINX configuration files to it. You can now view and manage your configuration files on GitHub.</p>
<h2 id="heading-step-4-create-a-circleci-project-and-link-it-to-your-github-repository">Step 4: Create a CircleCI Project and Link it to Your GitHub Repository</h2>
<p>CircleCI is a platform that offers cloud-based and self-hosted options for continuous integration and delivery. It allows you to create and run pipelines that automate and streamline your web server deployment and update process.</p>
<p>To use CircleCI, you need to create a CircleCI project and link it to your GitHub repository. This will enable CircleCI to access your code and configuration files, and trigger builds whenever you push to GitHub.</p>
<p>To create a CircleCI project and link it to your GitHub repository, follow these steps:</p>
<h3 id="heading-sign-up-for-circleci-and-connect-your-github-account">Sign up for CircleCI and connect your GitHub account</h3>
<p>Start by logging in to your CircleCI account or sign up for free <a target="_blank" href="https://circleci.com/login/">here</a>.</p>
<p>On the CircleCI dashboard, click on the "Create Project" button in the top right corner of the page.</p>
<p><img src="https://i.ibb.co/wyPsNjk/project-button.png" alt="Create project Button" width="600" height="400" loading="lazy"></p>
<p>On the next page, select "GitHub" as your version control provider and click on the "Connect with GitHub" button.</p>
<p><img src="https://i.ibb.co/MhywfHF/choosing-github.png" alt="Choosing Github" width="600" height="400" loading="lazy"></p>
<p>On the next page, authorize CircleCI to access your GitHub account by clicking on the "Authorize circleci" button.</p>
<p>On the next page, enter a “Project Name” and follow the remaining instructions to successfully create a CircleCI project.</p>
<p><img src="https://i.ibb.co/MkYZBTF/Project-Name.png" alt="Enter Project Name" width="600" height="400" loading="lazy"></p>
<h3 id="heading-create-a-circleci-project-and-link-it-to-your-github-repository">Create a CircleCI project and link it to your GitHub repository</h3>
<p>Next, create a new SSH key pair in your terminal:</p>
<pre><code class="lang-bash">ssh-keygen -t ed25519 -f ~/.ssh/project_key -C email@example.com
</code></pre>
<p>Then copy the private key that was generated (pbcopy is macOS-only; on Linux, you can open ~/.ssh/project_key in an editor and copy its contents instead):</p>
<pre><code class="lang-bash">pbcopy &lt; ~/.ssh/project_key
</code></pre>
<p>Next, enter it in the private key field:</p>
<p><img src="https://i.ibb.co/0Qs2TNJ/Private-SSH-Key.png" alt="Enter private SSH Key" width="600" height="400" loading="lazy"></p>
<p>You will see a list of your GitHub repositories. Find the repository that you created in the previous step and click on the "Create Project" button next to it.</p>
<p>Now you will see a list of templates for different languages and frameworks. Since you are using Python and Flask, select the "Python" template and click on the "Use this config" button.</p>
<p>On the next page, you will see the generated CircleCI configuration file (config.yml) that defines your pipeline. You can review and edit the file if you want, or leave it as it is for now. Click on the "Start building" button to create the project and link it to your GitHub repository.</p>
<p>Your new CircleCI project will be created and linked to your GitHub repository.</p>
<p>You have now successfully created a CircleCI project and linked it to your GitHub repository. You can now configure and run your pipeline on CircleCI.</p>
<h2 id="heading-step-5-create-a-circleci-configuration-file-and-define-your-ci-pipeline">Step 5: Create a CircleCI Configuration File and Define Your CI Pipeline</h2>
<p>A CircleCI configuration file is a YAML file that defines your CI pipeline. A CI pipeline is a sequence of jobs that run whenever you push changes to your GitHub repository. Each job consists of steps that perform specific tasks, such as running commands, installing dependencies, or deploying your web server.</p>
<p>In this step, you will create a CircleCI configuration file and define your CI pipeline. You will use the Python template that you selected in the previous step as a starting point, and modify it to suit your needs. I'll also explain what each step in the pipeline does and how it helps to automate and secure your web server deployment.</p>
<h3 id="heading-create-a-circleci-configuration-file">Create a CircleCI configuration file</h3>
<p>On your web server, navigate to the directory where your NGINX configuration files are located. By default, this is /etc/nginx on most Linux distributions.</p>
<p>Create a new directory called .circleci in this directory by running the following command:</p>
<pre><code class="lang-bash">mkdir .circleci
</code></pre>
<p>This is where you will store your CircleCI configuration file.</p>
<p>Then create a new file called config.yml in the .circleci directory by running the following command:</p>
<pre><code class="lang-bash">touch .circleci/config.yml
</code></pre>
<p>This is your CircleCI configuration file.</p>
<p>Open the config.yml file with your preferred text editor and paste the following code:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">2.1</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/python:3.9</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            pip install -r requirements.txt
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            pytest
</span>  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">machine:</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">ubuntu-2004:202101-01</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">add_ssh_keys:</span>
          <span class="hljs-attr">fingerprints:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"YOUR_FINGERPRINT"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">Nginx</span> <span class="hljs-string">configuration</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            scp -r nginx root@YOUR_IP:/etc
            ssh root@YOUR_IP "systemctl restart nginx"
</span>
<span class="hljs-attr">workflows:</span>
  <span class="hljs-attr">version:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">build-and-deploy:</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">deploy:</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
          <span class="hljs-attr">filters:</span>
            <span class="hljs-attr">branches:</span>
              <span class="hljs-attr">only:</span> <span class="hljs-string">main</span>
</code></pre>
<p>This is your CircleCI configuration file that defines your CI pipeline. Each part of the file is explained in the next section.</p>
<p>Finally, save and close the file.</p>
<h3 id="heading-define-your-ci-pipeline">Define your CI pipeline</h3>
<p>Let's go through each part of the config.yml file and see what it does.</p>
<ul>
<li><p>Line 1: This indicates the version of the CircleCI platform you are using. 2.1 is the most recent version.</p>
</li>
<li><p>Line 3: The jobs level contains a collection of children, representing your jobs. You specify the names for these jobs, for example, build, test, deploy.</p>
</li>
<li><p>Line 6: This is the Docker image. The example shows cimg/python:3.9, which is a CircleCI-provided image that contains Python 3.9 and other common tools.</p>
</li>
<li><p>Line 9: The run directive executes a shell command or script. You can specify a name and a command for each run directive.</p>
</li>
<li><p>Line 11: The command attribute is a list of shell commands that you want to execute. In this case, you are installing the dependencies for your web application using pip.</p>
</li>
<li><p>Line 12: This is another run directive that runs the tests for your web application using pytest.</p>
</li>
<li><p>Line 13: deploy is the second child in the jobs collection. This job is responsible for deploying your NGINX configuration to your web server.</p>
</li>
<li><p>Line 14: This specifies that you are using a machine executor for this job. A machine executor provides a full virtual machine with root access and various tools installed.</p>
</li>
<li><p>Line 15: This is the machine image. The example shows ubuntu-2004:202101-01, which is a CircleCI-provided image that contains Ubuntu 20.04 and other common tools.</p>
</li>
<li><p>Line 16: The steps collection is a list of run directives and other commands that you want to execute in this job.</p>
</li>
<li><p>Line 18: The add_ssh_keys command adds your SSH keys to the machine. You need to provide the fingerprints of the keys that you want to use. You can generate and add SSH keys from your CircleCI settings page.</p>
</li>
<li><p>Line 21: The command attribute is a list of shell commands that you want to execute. In this case, you are using SCP to copy your NGINX configuration files from the machine to your web server, and SSH to restart the NGINX service on your web server. You need to replace YOUR_FINGERPRINT with the fingerprint of your SSH key, and YOUR_IP with the IP address of your web server.</p>
</li>
<li><p>Line 24: This indicates the version of the workflow syntax you are using. 2 is the most recent version.</p>
</li>
<li><p>Line 29: The requires attribute specifies the dependencies of this job. In this case, you are saying that the deploy job requires the build job to finish successfully before running.</p>
</li>
<li><p>Line 30: The filters attribute specifies the conditions for running this job. In this case, you are saying that the deploy job should only run on the main branch of your GitHub repository.</p>
</li>
</ul>
<h3 id="heading-push-your-circleci-configuration-file-to-your-github-repository">Push your CircleCI configuration file to your GitHub repository</h3>
<p>On your web server, add, commit, and push your CircleCI configuration file to your GitHub repository by running the following commands:</p>
<pre><code class="lang-bash">git add .circleci/config.yml
git commit -m <span class="hljs-string">"Add CircleCI config file"</span>
git push origin main
</code></pre>
<p>This will trigger a new build on CircleCI and run your CI pipeline.</p>
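<p>As a side note, you can catch syntax and schema mistakes in this file before pushing by validating it locally with the CircleCI CLI (this assumes you have the CLI installed):</p>
<pre><code class="lang-bash"># Validate .circleci/config.yml from the repository root
circleci config validate
</code></pre>
<p>The command reports whether the configuration file is valid.</p>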
<p>Go to your CircleCI dashboard and click on the build-and-deploy workflow.</p>
<p>You can click on each job to see the details and logs of the steps.</p>
<p>Wait for the workflow to finish.</p>
<p>You have successfully created a CircleCI configuration file and defined your CI pipeline. You can now automate and secure your web server deployment with CircleCI. You can also modify and improve your configuration file according to your needs.</p>
<h2 id="heading-step-6-test-and-deploy-your-web-server-with-circleci">Step 6: Test and Deploy Your Web Server with CircleCI</h2>
<p>Now that you have created a CircleCI project and a configuration file for your CI pipeline, you can test and deploy your web server with CircleCI. You can trigger and monitor your CI pipeline from the CircleCI web app or the command line. You can also verify that your web server is deployed and secured correctly by using online tools or by accessing your web server using HTTPS.</p>
<h3 id="heading-trigger-and-monitor-your-ci-pipeline-from-the-circleci-web-app">Trigger and monitor your CI pipeline from the CircleCI web app</h3>
<p>To trigger and monitor your CI pipeline from the CircleCI web app, follow these steps:</p>
<ul>
<li><p>Go to the CircleCI dashboard.</p>
</li>
<li><p>On the dashboard, you will see a list of your projects and pipelines. Find the project that you created in the previous step and click on it.</p>
</li>
<li><p>On the project page, you will see a list of your branches and workflows. Find the branch that you pushed your CircleCI configuration file to in the previous step and click on the build-and-deploy workflow.</p>
</li>
<li><p>On the workflow page, you will see a graphical representation of your pipeline, showing the status and duration of each job and step. You can click on each job or step to see the details and logs of the commands that were executed.</p>
</li>
<li><p>Wait for the workflow to finish. If everything goes well, you will see a green check mark next to each job and step, indicating that they were successful.</p>
</li>
</ul>
<p>You have successfully triggered and monitored your CI pipeline from the CircleCI web app. You can also trigger and monitor your CI pipeline from the command line.</p>
<h3 id="heading-trigger-and-monitor-your-ci-pipeline-from-the-command-line">Trigger and monitor your CI pipeline from the command line</h3>
<p>To trigger and monitor your CI pipeline from the command line, follow these steps:</p>
<ul>
<li><p>On your web server, navigate to the directory where your NGINX configuration files are located. By default, this is /etc/nginx on most Linux distributions.</p>
</li>
<li><p>Make some changes to your configuration files, such as adding or removing security headers, and save them.</p>
</li>
<li><p>Add, commit, and push your changes to your GitHub repository by running the following commands:</p>
</li>
</ul>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">"Update Nginx configuration"</span>
git push origin main
</code></pre>
<p>This will trigger a new build on CircleCI and run your CI pipeline.</p>
<p>To monitor your CI pipeline from the command line, you can use the CircleCI CLI, which is a tool that allows you to interact with CircleCI from your terminal. You can install the CircleCI CLI by following the instructions on the official website.</p>
<p>After installing the CircleCI CLI, you can use the <code>circleci</code> command to perform various actions, such as listing your projects, pipelines, workflows, jobs, and artifacts. You can also use the --help flag to see the available options and arguments for each command.</p>
<p>To monitor your CI pipeline from the command line, you can use the circleci pipeline command to list and describe your pipelines.</p>
<p>For example, you can run the following command to list the pipelines for your project:</p>
<pre><code class="lang-bash">circleci pipeline list --org-slug &lt;VCS&gt;/&lt;your-vcs-org-or-username&gt; --project-slug &lt;VCS&gt;/&lt;your-repo-name&gt;
</code></pre>
<p>Replace <code>&lt;VCS&gt;</code> with either gh or bb depending on your version control system. Replace <code>&lt;your-vcs-org-or-username&gt;</code> with your GitHub or Bitbucket organization or username. Replace <code>&lt;your-repo-name&gt;</code> with your repository name. You will see something like this:</p>
<p><img src="https://i.ibb.co/JKqc0cn/carbon-5.png" alt="Command Output" width="600" height="400" loading="lazy"></p>
<p>You can use the pipeline ID or number to describe a specific pipeline and see its details, such as the status, workflows, jobs, and steps. For example, you can run the following command to describe the latest pipeline for your project:</p>
<pre><code class="lang-bash">circleci pipeline describe --org-slug &lt;VCS&gt;/&lt;your-vcs-org-or-username&gt; --project-slug &lt;VCS&gt;/&lt;your-repo-name&gt; --pipeline-number &lt;number&gt;
</code></pre>
<p>Replace <code>&lt;number&gt;</code> with the pipeline number that you want to describe.</p>
<p>Wait for the pipeline to finish. If everything goes well, you will see a success message for each job and step, indicating that they were successful.</p>
<p>You have successfully triggered and monitored your CI pipeline from the command line. You can also verify that your web server is deployed and secured correctly.</p>
<h3 id="heading-verify-that-your-web-server-is-deployed-and-secured-correctly">Verify that your web server is deployed and secured correctly</h3>
<p>To verify that your web server is deployed and secured correctly, you can use online tools or access your web server using HTTPS. Here are some examples:</p>
<p>To verify that your web server is using the latest Nginx configuration that you pushed to GitHub, you can use a tool like <a target="_blank" href="https://curl.se/">curl</a> or <a target="_blank" href="https://www.gnu.org/software/wget/">wget</a> to make a request to your web server and inspect the response headers.</p>
<p>For example, you can run the following command to see the security headers that your web server is sending:</p>
<pre><code class="lang-bash">curl -I https://www.yourdomain.com
</code></pre>
<p>Replace yourdomain.com with your actual domain name.</p>
<p>You can compare the headers with the ones that you configured in your NGINX configuration file and see if they match.</p>
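<p>If you want to script that comparison, the idea can be sketched as follows. This is a minimal sketch: the <code>printf</code> stands in for real <code>curl -sI https://www.yourdomain.com</code> output, and the header names and values are illustrative examples, not taken from this article's configuration:</p>
<pre><code class="lang-bash"># Check that expected security headers are present in a response.
# In practice, capture real headers with: headers=$(curl -sI https://www.yourdomain.com)
headers=$(printf 'strict-transport-security: max-age=31536000\nx-frame-options: SAMEORIGIN\n')
for h in strict-transport-security x-frame-options; do
  if echo "$headers" | grep -qi "^$h:"; then
    echo "OK: $h"
  else
    echo "MISSING: $h"
  fi
done
</code></pre>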
<p>To verify that your web server is using the SSL/TLS certificate that you installed with Certbot, you can use a tool like <a target="_blank" href="https://www.ssllabs.com/ssltest/">SSL Labs</a> or <a target="_blank" href="https://www.htbridge.com/ssl/">HTBridge</a> to scan your web server and check its SSL/TLS configuration and rating. You can check the details of your certificate, such as the issuer, validity, and chain. You can also check the grade of your SSL/TLS configuration, which should be A or higher.</p>
<p>To verify that your web server is accessible and functional using HTTPS, you can simply open your web browser and visit your web server using HTTPS. For example, you can go to https://www.yourdomain.com and see your web page.</p>
<p>You can check the lock icon in your browser, which indicates that your connection is secure. You can also click on the icon and see the details of your certificate and connection.</p>
<p><img src="https://i.ibb.co/dGwr57V/certificate-viewer.png" alt="Viewing Certificate" width="600" height="400" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, you have learned how to secure your web server using NGINX and CircleCI. Used together, they provide a powerful solution for ensuring the continuous security of your web applications.</p>
<p>Stay ahead of the curve by integrating these technologies into your workflow, and empower your team to deliver secure and reliable web services.</p>
<p>Please don't forget to share this guide with your colleagues, friends, and online communities if you find it insightful.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ What is CI/CD? Learn Continuous Integration/Continuous Deployment by Building a Project ]]>
                </title>
                <description>
                    <![CDATA[ Hi everyone! In this article you're going to learn about CI/CD (continuous integration and continuous deployment). We're going to review what this practice is about, how it compares to the previous approach in the software development industry, and f... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/what-is-ci-cd/</link>
                <guid isPermaLink="false">66d45f2a706b9fb1c166b947</guid>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ German Cocca ]]>
                </dc:creator>
                <pubDate>Fri, 07 Apr 2023 19:31:18 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2023/03/jj-ying-4XvAZN8_WHo-unsplash.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Hi everyone! In this article you're going to learn about CI/CD (continuous integration and continuous deployment).</p>
<p>We're going to review what this practice is about, how it compares to the previous approach in the software development industry, and finally see a practical example of how we can implement it in our projects.</p>
<p>Let's go!</p>
<h1 id="heading-table-of-contents">Table of Contents</h1>
<ul>
<li><p><a class="post-section-overview" href="#heading-intro">Intro</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-cicd-works">How CI/CD works</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-the-key-benefits-of-cicd">The key benefits of CI/CD</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-tools-for-cicd">Tools for CI/CD</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-a-cicd-pipeline-with-github-actions">How to set up a CI/CD pipeline with GitHub Actions</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-initializing-the-project">Initializing the project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-are-github-actions-and-how-do-they-work">What are GitHub Actions and how do they work?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-our-workflow">Setting up our workflow</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-the-magic">The magic</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-wrapping-up">Wrapping up</a></p>
</li>
</ul>
<h1 id="heading-intro">Intro</h1>
<p>Continuous Integration and Continuous Delivery (CI/CD) is a software development approach that aims to improve the speed, efficiency, and reliability of software delivery. This approach involves frequent code integration, automated testing, and continuous deployment of software changes to production.</p>
<p>Before the adoption of CI/CD in the software development industry, the common approach was a traditional, <strong>waterfall model</strong> of software development.</p>
<p>In this approach, developers worked in silos, with each stage of the software development life cycle completed in sequence. The process typically involved gathering requirements, designing the software, coding, testing, and deployment.</p>
<p><strong>The disadvantages of this traditional approach include:</strong></p>
<ol>
<li><p><strong>Slow Release Cycles:</strong> Since each stage of the software development life cycle was completed in sequence, the release cycle was slow, which made it difficult to respond quickly to changing customer needs.</p>
</li>
<li><p><strong>High Failure Rates:</strong> Software projects were prone to failure due to a lack of automated testing, which meant that developers had to rely on manual testing, leading to errors and bugs in the code.</p>
</li>
<li><p><strong>Limited Collaboration:</strong> The traditional approach did not encourage collaboration between developers, testers, and other stakeholders, which made it difficult to identify and fix issues.</p>
</li>
<li><p><strong>High Cost:</strong> The manual nature of software development meant that it was expensive, with high costs associated with testing, debugging, and fixing errors.</p>
</li>
<li><p><strong>Limited Agility:</strong> Since the traditional approach was linear, it was not possible to make changes to the software quickly or respond to customer needs in real-time.</p>
</li>
</ol>
<p>CI/CD emerged as a solution to these disadvantages, by introducing a more agile and collaborative approach to software development. CI/CD enables teams to work together, integrating their code changes frequently, and automating the testing and deployment process.</p>
<h1 id="heading-how-cicd-works">How CI/CD Works</h1>
<p>CI/CD is an automated process that involves frequent code integration, automated testing, and continuous deployment of software changes to production.</p>
<p>Let's explain each step in a little more detail:</p>
<h3 id="heading-code-integration">Code Integration</h3>
<p>The first step in the CI/CD pipeline is code integration. In this step, developers commit their code changes to a remote repository (like <a target="_blank" href="https://github.com/">GitHub</a>, <a target="_blank" href="https://about.gitlab.com/">GitLab</a> or <a target="_blank" href="https://bitbucket.org/product/">BitBucket</a>), where the code is integrated with the main codebase.</p>
<p>This step aims to ensure that the code changes are compatible with the rest of the codebase and do not break the build.</p>
<h3 id="heading-automated-testing">Automated Testing</h3>
<p>Once the code is integrated, the next step is automated testing. Automated testing involves running a suite of tests to ensure that the code changes are functional, meet the expected quality standards, and are free of defects.</p>
<p>This step helps identify issues early in the development process, allowing developers to fix them quickly and efficiently.</p>
<p>If you're not familiar with the topic of testing, you can refer <a target="_blank" href="https://www.freecodecamp.org/news/test-a-react-app-with-jest-testing-library-and-cypress/">to this article I wrote a while ago</a>.</p>
<h3 id="heading-continuous-deployment">Continuous Deployment</h3>
<p>After the code changes pass the automated testing step, the next step is continuous deployment. In this step, the code changes are automatically deployed to a staging environment for further testing.</p>
<p>This step aims to ensure that the software is continuously updated with the latest code changes, delivering new features and functionality to users quickly and efficiently.</p>
<h3 id="heading-production-deployment">Production Deployment</h3>
<p>The final step in the CI/CD pipeline is production deployment. In this step, the software changes are released to end-users. This step involves monitoring the production environment, ensuring that the software is running smoothly, and identifying and fixing any issues that arise.</p>
<p>The four steps of a CI/CD pipeline work together to ensure that software changes are tested, integrated, and deployed to production automatically. This automation helps to reduce errors, increase efficiency, and improve the overall quality of the software.</p>
<p>By adopting a CI/CD pipeline, development teams can achieve faster release cycles, reduce the risk of software defects, and improve the user experience.</p>
<p>Keep in mind that the pipeline stages might look different depending on the specific project or company. For example, some teams might or might not use automated testing, some might or might not have a "staging" environment, and so on.</p>
<p>The key parts that make up the CI/CD practice are integration and deployment. This means that the code has to be continually integrated in a remote repository, and that this code has to be continually deployed to a given environment after each integration.</p>
<h1 id="heading-the-key-benefits-of-cicd">The Key Benefits of CI/CD</h1>
<p>The key benefits of CI/CD include:</p>
<ol>
<li><p><strong>Faster Release Cycles:</strong> By automating the testing and deployment process, CI/CD enables teams to release software more frequently, responding quickly to customer needs.</p>
</li>
<li><p><strong>Improved Quality:</strong> Automated testing ensures that software changes do not introduce new bugs or issues, improving the overall quality of the software.</p>
</li>
<li><p><strong>Increased Collaboration:</strong> Frequent code integration and testing require developers to work closely together, leading to better collaboration and communication.</p>
</li>
<li><p><strong>Reduced Risk:</strong> Continuous deployment allows developers to identify and fix issues quickly, reducing the risk of major failures and downtime.</p>
</li>
<li><p><strong>Cost-Effective:</strong> CI/CD reduces the amount of manual work required to deploy software changes, saving time and reducing costs.</p>
</li>
</ol>
<p>In summary, CI/CD emerged as a solution to the limitations of the traditional, linear approach to software development. By introducing a more agile and collaborative approach to software development, CI/CD enables teams to work together, release software more frequently, and respond quickly to customer needs.</p>
<h1 id="heading-tools-for-cicd">Tools for CI/CD</h1>
<p>There are several tools available for implementing CI/CD pipelines in software development. Each tool has its unique features, pros, and cons. Here are some of the most commonly used tools in CI/CD pipelines today:</p>
<h3 id="heading-jenkins">Jenkins</h3>
<p><a target="_blank" href="https://www.jenkins.io/">Jenkins</a> is an open-source automation server that is widely used in CI/CD pipelines. It is highly customizable and supports a wide range of plugins, making it suitable for various development environments. Some of its key features include:</p>
<p><strong>Pros:</strong></p>
<ul>
<li><p>Highly customizable with a wide range of plugins</p>
</li>
<li><p>Supports integration with various tools and technologies</p>
</li>
<li><p>Provides detailed reporting and analytics</p>
</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li><p>Requires some technical expertise to set up and maintain</p>
</li>
<li><p>Can be resource-intensive, especially for large projects</p>
</li>
<li><p>Lack of a centralized dashboard for managing multiple projects</p>
</li>
</ul>
<p>If you want to learn more about Jenkins, <a target="_blank" href="https://www.freecodecamp.org/news/learn-jenkins-by-building-a-ci-cd-pipeline/">here's a full course for you</a>.</p>
<h3 id="heading-travis-ci">Travis CI</h3>
<p><a target="_blank" href="https://www.travis-ci.com/">Travis CI</a> is a cloud-based CI/CD platform that provides automated testing and deployment for software projects. It supports several programming languages and frameworks, making it suitable for various development environments. Some of its key features include:</p>
<p><strong>Pros:</strong></p>
<ul>
<li><p>Easy to set up and use</p>
</li>
<li><p>Cloud-based, so there's no need to set up and maintain infrastructure</p>
</li>
<li><p>Supports a wide range of programming languages and frameworks</p>
</li>
<li><p>Provides detailed reporting and analytics</p>
</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li><p>Limited customization options</p>
</li>
<li><p>Not suitable for large projects with complex requirements</p>
</li>
<li><p>Limited support for on-premise installations</p>
</li>
</ul>
<p>Here's a helpful tutorial about <a target="_blank" href="https://www.freecodecamp.org/news/learn-how-to-automate-deployment-on-github-pages-with-travis-ci/">how to automate deployment on GitHub Pages with Travis CI</a>.</p>
<h3 id="heading-github-actions">GitHub Actions</h3>
<p><a target="_blank" href="https://github.com/features/actions">GitHub Actions</a> is a powerful CI/CD tool that allows developers to automate workflows, run tests, and deploy code directly from their GitHub repositories.</p>
<p><strong>Pros:</strong></p>
<ul>
<li><p>Integrated with GitHub</p>
</li>
<li><p>Easy to use</p>
</li>
<li><p>Provides large ecosystem and good documentation</p>
</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li><p>Limited build minutes</p>
</li>
<li><p>Complex YAML syntax</p>
</li>
</ul>
<p>Side comment: I mention GitHub Actions here because it's a popular tool – but keep in mind that other online repository providers like GitLab and BitBucket also offer very similar features.</p>
<h3 id="heading-built-in-cicd-features-by-hosts">Built-in CI/CD features by hosts</h3>
<p>Popular hosts such as <a target="_blank" href="https://vercel.com/">Vercel</a> or <a target="_blank" href="https://www.netlify.com/">Netlify</a> have built-in CI/CD features that allow you to link an online repository to a given site, and deploy to that site after a given event occurs in that repo.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Very simple to set up and use</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Limited customization options</li>
</ul>
<p>Each of these tools has its unique features, pros, and cons. The choice of tool will depend on the specific requirements of your project, your team's technical expertise, and your budget.</p>
<p>Here's a tutorial about <a target="_blank" href="https://www.freecodecamp.org/news/how-to-deploy-your-front-end-app/">how to deploy a front-end app with Netlify</a>. And in this tutorial, you'll <a target="_blank" href="https://www.freecodecamp.org/news/how-to-build-a-jamstack-site-with-next-js-and-vercel-jamstack-handbook/">learn how to use Vercel to deploy a Next.js app</a>.</p>
<h1 id="heading-how-to-set-up-a-cicd-pipeline-with-github-actions">How to Set Up a CI/CD Pipeline with GitHub Actions</h1>
<p>Cool, so now that we have a clear idea of what CI/CD is, let's see how we can implement a simple example with an actual project using <a target="_blank" href="https://github.com/features/actions">GitHub Actions</a>.</p>
<h2 id="heading-initializing-the-project">Initializing the Project</h2>
<p>We'll start with a very basic React app built with <a target="_blank" href="https://www.freecodecamp.org/news/how-to-build-a-react-app-different-ways/#what-is-vite">Vite</a>. You can create one by running <code>yarn create vite</code> in your console.</p>
<p>We'll focus on the CI/CD pipeline here, so there will be no complexity in the actual app code. But just to have an idea, the <code>app.jsx</code> component will have this code:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> <span class="hljs-string">'./App.css'</span>;

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">App</span>(<span class="hljs-params"></span>) </span>{

    <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">'App'</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Vite + Reactooooo<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
    );
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> App;
</code></pre>
<p>And then we'll have a test file that will check for that text to render:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { describe, expect, it } <span class="hljs-keyword">from</span> <span class="hljs-string">'vitest'</span>;
<span class="hljs-keyword">import</span> { render, screen } <span class="hljs-keyword">from</span> <span class="hljs-string">'./utils/test-utils/test-utils.jsx'</span>;

<span class="hljs-keyword">import</span> App <span class="hljs-keyword">from</span> <span class="hljs-string">'src/App.jsx'</span>;

describe(<span class="hljs-string">'App'</span>, <span class="hljs-keyword">async</span> () =&gt; {
    it(<span class="hljs-string">'should render while authenticating'</span>, <span class="hljs-function">() =&gt;</span> {
        render(<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">App</span> /&gt;</span></span>);

        expect(screen.getByText(<span class="hljs-string">'Vite + Reactooooo'</span>)).toBeInTheDocument();
    });
});
</code></pre>
<p>This test will run each time we run the <code>yarn test</code> command.</p>
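<p>For <code>yarn test</code> to work like this, vitest needs a little configuration. Here's a minimal sketch of what that could look like in <code>vite.config.js</code> – the <code>setupFiles</code> path is a hypothetical file that imports <code>@testing-library/jest-dom</code>, which is what provides matchers like <code>toBeInTheDocument()</code>:</p>
<pre><code class="lang-javascript">import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
    plugins: [react()],
    test: {
        // jsdom simulates a browser DOM for the tests
        environment: 'jsdom',
        // hypothetical setup file that imports @testing-library/jest-dom
        setupFiles: './src/setupTests.js',
    },
});
</code></pre>
<p>You'd also need a <code>"test": "vitest"</code> entry in the <code>scripts</code> section of <code>package.json</code>, since <code>yarn test</code> runs that script.</p>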
<p>The next step is to push our code to a GitHub repo. Then let's talk a bit more about what GitHub Actions are and how they work.</p>
<h2 id="heading-what-are-github-actions-and-how-do-they-work">What are GitHub Actions and How Do They Work?</h2>
<p>GitHub Actions is a CI/CD (Continuous Integration/Continuous Deployment) service provided by GitHub. It allows developers to automate workflows by defining custom scripts, known as "actions", that can be triggered by events such as pushes to a repository, pull requests, or issues.</p>
<p>Actions are defined in a YAML file, also known as a "workflow", which specifies the steps required to complete a task. GitHub Actions workflows can run on Linux, Windows, and macOS environments and support a wide range of programming languages and frameworks.</p>
<p>When an event triggers a GitHub Actions workflow, the service creates a fresh environment, installs dependencies, and runs the defined steps in the order specified. This can include tasks such as building, testing, packaging, and deploying code.</p>
<p>GitHub Actions also provides several built-in actions that can be used to simplify common tasks, such as checking out code, building and testing applications, publishing releases, and deploying to popular cloud providers like AWS, Azure, and Google Cloud.</p>
<p>GitHub Actions workflows can be run on a schedule, manually, or automatically when a specific event occurs, such as a pull request being opened or a new commit being pushed to a branch.</p>
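<p>Each of those trigger options maps onto the <code>on:</code> key of a workflow file. As a hedged sketch (the cron expression and branch name are just examples):</p>
<pre><code class="lang-yaml"># Possible triggers for a workflow
on:
  push:
    branches: [main]      # run on every push to main
  pull_request:           # run when a pull request is opened or updated
  workflow_dispatch:      # allow manual runs from the Actions tab
  schedule:
    - cron: '0 6 * * 1'   # run every Monday at 06:00 UTC
</code></pre>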
<h2 id="heading-setting-up-our-workflow">Setting Up Our Workflow</h2>
<p>Great, so as we've seen, GitHub Actions is a feature that allows us to define workflows for our projects. These workflows are nothing but a series of tasks or steps that will execute on GitHub's cloud after a given event that we declare.</p>
<p>The way GitHub reads and executes these workflows is by automatically reading files within the <code>.github/workflows</code> directory in the root of our project. These workflow files should have the <code>.yaml</code> extension and use the <a target="_blank" href="https://www.redhat.com/en/topics/automation/what-is-yaml">YAML</a> syntax.</p>
<p>To create a new workflow we just have to create a new YAML file within that directory. We'll call ours <code>prod.yaml</code> since we'll use it to deploy the production branch of our project.</p>
<p>Keep in mind a single project can have many different workflows that run different tasks on different occasions. For example, we could have a workflow for dev and staging branches as well, as those environments could require different tasks to execute and will probably deploy on different sites.</p>
<p>After creating this file, let's drop the following code in it:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Name of our workflow</span>
<span class="hljs-attr">name:</span> <span class="hljs-string">Production</span> <span class="hljs-string">deploy</span>

<span class="hljs-comment"># Trigger the workflow on push to the main branch</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-comment"># List of jobs</span>
<span class="hljs-comment"># A "job" is a set of steps that are executed on the same runner</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-comment"># Name of the job</span>
  <span class="hljs-attr">test-and-deploy-to-netlify:</span>
    <span class="hljs-comment"># Operating system to run on</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-comment"># List of steps that make up the job</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-comment"># Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>

    <span class="hljs-comment"># Setup Node.js environment</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Use</span> <span class="hljs-string">Node.js</span> <span class="hljs-number">16.</span><span class="hljs-string">x</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v2</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">node-version:</span> <span class="hljs-string">'16.x'</span>

    <span class="hljs-comment"># Install dependencies</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">yarn</span> <span class="hljs-string">install</span>

    <span class="hljs-comment"># Run tests</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">yarn</span> <span class="hljs-string">test</span>

    <span class="hljs-comment"># Deploy to Netlify</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Netlify</span> <span class="hljs-string">Deploy</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">jsmrcaga/action-netlify-deploy@v2.0.0</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-comment"># Auth token to use with netlify</span>
        <span class="hljs-attr">NETLIFY_AUTH_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.NETLIFY_AUTH_TOKEN</span> <span class="hljs-string">}}</span>
        <span class="hljs-comment"># Your Netlify site id</span>
        <span class="hljs-attr">NETLIFY_SITE_ID:</span>  <span class="hljs-string">${{</span> <span class="hljs-string">secrets.NETLIFY_SITE_ID</span> <span class="hljs-string">}}</span>
        <span class="hljs-comment"># Directory where built files are stored</span>
        <span class="hljs-attr">build_directory:</span> <span class="hljs-string">'./dist'</span>
        <span class="hljs-comment"># Command to install dependencies</span>
        <span class="hljs-attr">install_command:</span> <span class="hljs-string">yarn</span> <span class="hljs-string">install</span>
        <span class="hljs-comment"># Command to build static website</span>
        <span class="hljs-attr">build_command:</span> <span class="hljs-string">yarn</span> <span class="hljs-string">build</span>
</code></pre>
<p>So our workflow has the following tasks declared:</p>
<ol>
<li><p>"Checkout code" step that checks out the latest commit on the current branch.</p>
</li>
<li><p>"Use Node.js 16.x" step that sets up the Node.js environment to version 16.x.</p>
</li>
<li><p>"Install dependencies" step that installs the project dependencies using the Yarn package manager.</p>
</li>
<li><p>"Run tests" step that runs the project's tests using the Yarn package manager.</p>
</li>
<li><p>"Netlify Deploy" step that deploys the project to Netlify using the jsmrcaga/action-netlify-deploy action. This step uses the Netlify authentication token and site ID secrets stored in the GitHub repository's secrets. The build directory, install command, and build command are also specified.</p>
</li>
</ol>
<p>You probably noticed that several steps (the checkout, the Node.js setup, and the Netlify deploy) have the keyword <code>uses</code>. This keyword allows you to use actions or workflows developed by other GitHub users, and it's one of the best features of GitHub Actions.</p>
<p>What's great is that by using these third-party actions we can execute complex tasks, such as deploying to an external host or building complex cloud infrastructure, without writing every single line of the necessary code ourselves.</p>
<p>As these tasks tend to be repetitive and frequently executed in many projects, we can just use a workflow developed by an official company account (such as Azure or AWS) or an independent open-source developer. Think of it as using a third-party library: the same idea, taken to CI/CD workflows. Very convenient.</p>
<p>Another important thing to mention here is that in GitHub Actions workflows, tasks run <strong>sequentially</strong>, one after the other. And if a given task fails or throws an error, <strong>the next one won't execute</strong>. This is important because if we have a problem when installing our dependencies or a test fails, we don't want that code to be deployed.</p>
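<p>This fail-fast behavior is the same short-circuiting you get from chaining shell commands with <code>&amp;&amp;</code>. Here's a minimal sketch in plain shell (illustrative only, not part of the workflow):</p>

```shell
# Each "step" prints its name and then runs a command; chaining with &&
# stops the pipeline at the first step that fails, like GitHub Actions does.
run_step() {
  # $1 = step name, $2 = command to run for that step
  echo "step: $1"
  "$2"
}

run_step "install" true &&
run_step "test" false &&
run_step "deploy" true ||
echo "pipeline stopped: deploy never ran"

# "step: deploy" is never printed because the "test" step failed
```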
<p>Before we can push this code and see how the magic works, we first need to create a site on Netlify and get the <strong>NETLIFY_AUTH_TOKEN</strong> and <strong>NETLIFY_SITE_ID</strong>. This is quite straightforward even if you don't have previous experience with Netlify, so give it a try and Google a bit if you can't figure it out. ;)</p>
<p>Once you have these two tokens, you need to declare them as repository secrets in your GitHub repo. You can do this from the "settings" tab:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/04/image-32.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><em>Configure both Netlify secret tokens in your repo</em></p>
<p>With this in place, now our <code>prod.yaml</code> file will be able to read these two tokens and execute the Netlify deploy action.</p>
<h2 id="heading-the-magic">The Magic</h2>
<p>Now that we have everything in place, let's push our code and see how it goes.</p>
<p>After pushing, if we go to the "Actions" tab of our repo, on the left we'll see a list of all the workflows in our repo, and on the right a list of each execution of the selected workflow. Since our workflow executes after each push, we should see a new execution each time we push.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/04/image-33.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><em>A workflow execution</em></p>
<p>When an execution has a yellow light to the left of it, it means it's still running (executing tasks). A green light means it finished executing successfully, and if the light is red, you know something went wrong, haha...</p>
<p>After clicking on the execution we can see a list of the workflow's jobs (we only had a single one).</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/04/image-34.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><em>The workflow's jobs</em></p>
<p>And after clicking on the job we can see a list of the job's tasks.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/04/image-35.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><em>The tasks of the job</em></p>
<p>Each task is expandable, and within it we can see the logs corresponding to that task's execution. This is quite useful for debugging purposes. ;)</p>
<p>Now if we go to our previously set up Netlify site, we should see our app up and running!</p>
<p>And now that we have our CI/CD pipeline in place, we can deploy our app after each push to the main branch, all without lifting another finger. =D</p>
<h1 id="heading-wrapping-up"><strong>Wrapping Up</strong></h1>
<p>CI/CD is a software development approach that provides several benefits to software development teams, including faster time-to-market, improved quality, increased collaboration, reduced risk, and cost-effectiveness.</p>
<p>By automating the software delivery pipeline, teams can quickly deploy new features and bug fixes, while reducing the risk of major failures and downtime.</p>
<p>With the availability of several CI/CD tools, it has become easier for teams to implement this approach and improve their software delivery process.</p>
<p>Well everyone, as always, I hope you enjoyed the article and learned something new.</p>
<p>If you want, you can also follow me on <a target="_blank" href="https://www.linkedin.com/in/germancocca/">LinkedIn</a> or <a target="_blank" href="https://twitter.com/CoccaGerman">Twitter</a>. See you in the next one!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/giphy-1.gif" alt="Image" width="600" height="400" loading="lazy"></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Setup a CI/CD Pipeline for a Next.js App using AWS ]]>
                </title>
                <description>
                    <![CDATA[ Hello Everyone! Deploying a web application is a challenging task (at least for me), especially when it comes to keeping it updated. It can take up a lot of time and energy if it has to be deployed manually every time you make a change.  But I recent... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/ci-cd-pipeline-for-nextjs-app-with-aws/</link>
                <guid isPermaLink="false">66ba109d90067134b63982bf</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Next.js ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Arunachalam B ]]>
                </dc:creator>
                <pubDate>Mon, 27 Mar 2023 21:53:46 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2023/03/Deploy-Next.js-using-AWS--1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Hello Everyone! Deploying a web application is a challenging task (at least for me), especially when it comes to keeping it updated. It can take up a lot of time and energy if it has to be deployed manually every time you make a change. </p>
<p>But I recently discovered a way to automate the deployment process for Next.js apps using AWS CodeDeploy and CodePipeline. It made my life so much easier, and I'm excited to share it with you.</p>
<p>In this tutorial, I'll guide you through the process of setting up auto-deployment for your Next.js app using the AWS services CodePipeline and CodeDeploy. By the end of it, you'll be able to save a lot of time by deploying your app automatically every time you push the code.</p>
<p>Let's get started!</p>
<h2 id="heading-table-of-contents">Table of Contents:</h2>
<ol>
<li><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></li>
<li><a class="post-section-overview" href="#heading-how-to-deploy-the-nextjs-app-to-aws-ec2">How to Deploy the Next.js App to AWS EC2</a></li>
<li><a class="post-section-overview" href="#heading-how-to-run-the-nextjs-app-in-production-mode">How to Run the Next.js App in Production Mode</a></li>
<li><a class="post-section-overview" href="#heading-how-to-run-a-nextjs-app-forever-when-the-console-is-closed">How to Run a Next.js App Forever When the Console is Closed</a></li>
<li><a class="post-section-overview" href="#heading-what-is-codedeploy">What is CodeDeploy?</a></li>
<li><a class="post-section-overview" href="#heading-how-to-setup-auto-deployment-using-codepipeline-and-codedeploy">How to Setup Auto-Deployment using CodePipeline and CodeDeploy</a></li>
<li><a class="post-section-overview" href="#heading-how-to-attach-the-iam-role-to-ec2">How to Attach the IAM Role to EC2</a></li>
<li><a class="post-section-overview" href="#heading-how-to-create-the-codepipeline">How to Create the CodePipeline</a></li>
<li><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ol>
<li>EC2 machine running Ubuntu</li>
<li>Very basic knowledge of EC2 and IAM AWS Services</li>
</ol>
<h2 id="heading-how-to-deploy-the-nextjs-app-to-aws-ec2">How to Deploy the Next.js App to AWS EC2</h2>
<p>To start simple, let's manually deploy the sample Next.js boilerplate app "hello-world" to EC2. The steps are almost the same for all Next.js applications.</p>
<h3 id="heading-login-to-ec2">Login to EC2</h3>
<p>Login to the EC2 machine which you've created using the below command:</p>
<pre><code>ssh -i /path/key-pair-name.pem instance-user-name@instance-IP-address
</code></pre><p>When you try to log into EC2, this is the common error that most people will encounter (I got it, too):</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-216.png" alt="Image" width="600" height="400" loading="lazy">
<em>Permissions 0664 for <code>.pem</code> file is too open error</em></p>
<p>This error means that the <code>.pem</code> file should be read-protected: only you (the file's owner) should be able to read it. So you have to set the file permission to <code>400</code>. Run the following command to achieve that:</p>
<pre><code>chmod <span class="hljs-number">400</span> key-pair-name.pem
</code></pre><p>By default, EC2 instances come with almost no software installed. Once you've logged into EC2, install Node.js. There's an excellent <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-ubuntu-20-04">article</a> published by DigitalOcean that I use every time I have to install Node on a server.</p>
<p>I have uploaded the <a target="_blank" href="https://github.com/5minslearn/deploy_nextjs_app">boilerplate repo</a> to Github. You can clone the repo by running the following command:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/5minslearn/deploy_nextjs_app.git
</code></pre>
<p>Navigate to the project and install the dependencies by running the below commands.</p>
<p>A quick note here. I'm a big fan of yarn for its lightning-fast dependency management. But I see most people use <code>npm</code> to manage their dependencies. If you like to use <code>npm</code>, you can replace <code>yarn install</code> with <code>npm install</code> in the below commands.</p>
<p>If you like to go with <code>yarn</code>, install yarn by following this <a target="_blank" href="https://classic.yarnpkg.com/lang/en/docs/install/#debian-stable">tutorial</a> first.</p>
<pre><code>cd deploy_nextjs_app
yarn install
</code></pre><p>Let's run the application:</p>
<pre><code>yarn dev
</code></pre><p>Hit "http://ec2-public-ip-address:3000/" on your browser and you should be able to see the following page:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-217.png" alt="Image" width="600" height="400" loading="lazy">
<em>Next.js Hello World App</em></p>
<p>There's another common issue that most people face here which we'll look at next.</p>
<h3 id="heading-how-to-fix-the-timeout-error-ec2">How to fix the timeout error (EC2)</h3>
<p>"Oh, my God! My site has been loading for a long time and finally it's throwing a timeout error. What could be the issue? Where did I make a mistake?"</p>
<p>If this happens to you, then you can follow the below steps to fix it.</p>
<p>This issue occurs if your server does not expose port 3000. Remember, by default Next.js apps run on port 3000. But you have to allow port 3000 in the Security Group of your EC2 console so it can be accessed from your browser.</p>
<p>Login to your AWS console, select your EC2 instance, and then select the Security Group option. Click on the "Edit inbound rules" button. Add port 3000 to the list as shown in the below screenshot. Then hit the "Save rules" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-218.png" alt="Image" width="600" height="400" loading="lazy">
<em>Adding port 3000 to a security group</em></p>
<p>Visit the link "http://ec2-public-ip-address:3000/", and you'll be amazed to see that your page loads like magic.</p>
<p>So far, we've just run our app in development mode and verified that it's working.</p>
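<p>As a side note, once the app is listening you can script a quick smoke test from your own machine instead of checking the browser. Here's a small sketch (a hypothetical helper, assuming <code>curl</code> is installed):</p>

```shell
# Poll a URL a few times and report whether it ever responded.
check_url() {
  url=$1
  tries=${2:-5}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -s silences progress output; curl exits non-zero if it can't connect
    if curl -s -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example (replace with your instance's public IP):
# check_url "http://ec2-public-ip-address:3000/" 10 && echo "app is up"
```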
<h2 id="heading-how-to-run-the-nextjs-app-in-production-mode">How to Run the Next.js App in Production Mode</h2>
<p>To deploy the app in Production Mode, you have to build your app first. Run <code>yarn build</code> to build the app and <code>yarn start</code> to start the app in production mode.</p>
<pre><code>yarn build
yarn start
</code></pre><p>Hit "http://ec2-public-ip-address:3000/" again and this time you'll see that your app loads faster than before.</p>
<p>Apps running in production mode will always be faster than the ones running in development mode, because production builds are optimized for performance. </p>
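<p>Both commands are thin wrappers over the Next.js CLI. In a typical Next.js project, including boilerplates like this one, the <code>scripts</code> section of <code>package.json</code> looks roughly like this (check your own repo for the exact entries):</p>
<pre><code class="lang-json">{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  }
}
</code></pre>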
<h2 id="heading-how-to-run-a-nextjs-app-forever-when-the-console-is-closed">How to Run a Next.js App Forever When the Console is Closed</h2>
<p>So, you have your app running now. But you might notice that it blocks you from closing your terminal and exiting the server connection. If you do so, your site will go down. That's where PM2 comes into play. </p>
<p>Basically, PM2 is a process manager that helps keep Node applications alive all the time. It runs in the background managing Node applications for you.</p>
<p>Install PM2 using the following command:</p>
<pre><code>sudo yarn <span class="hljs-built_in">global</span> add pm2
</code></pre><p>After PM2 installation, run the below command to run and manage your app in the background:</p>
<pre><code>pm2 start yarn --name [name-<span class="hljs-keyword">of</span>-your-app] -- start -p [port-number]
</code></pre><p>Replace <code>[name-of-your-app]</code> with your app name and <code>[port-number]</code> with 3000. Here's an example command,</p>
<pre><code>pm2 start yarn --name next_hello_world_app -- start -p <span class="hljs-number">3000</span>
</code></pre><p>Hit "http://ec2-public-ip-address:3000/" and you'll again be amazed to see your app up and running.</p>
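<p>If you'd rather not retype that one-off command, PM2 also supports an ecosystem file. Here's a minimal sketch, assuming the same app name and port used above:</p>
<pre><code class="lang-javascript">// ecosystem.config.js (sketch)
module.exports = {
  apps: [
    {
      name: "next_hello_world_app", // same name as in the pm2 start command
      script: "yarn",
      args: "start -p 3000", // runs "yarn start -p 3000"
    },
  ],
};
</code></pre>
<p>With this file in the project root, <code>pm2 start ecosystem.config.js</code> launches the app with the same settings.</p>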
<p>It's always a best practice to save the PM2 process. When you reboot your instance, your PM2 instances will be lost. In order to restore it to its old state, you have to save the PM2 process. Here's the command for that:</p>
<pre><code>pm2 save
</code></pre><p>Here's the command to restore your PM2 instances on reboot (don't execute this now, we'll come back to this shortly):</p>
<pre><code>pm2 resurrect
</code></pre><p>We have successfully deployed the Next.js app manually. But remember, every time you make a code change and want to see the changes on your site, you have to login into EC2, pull the latest changes, build the app, and restart the app. </p>
<p>This will consume a lot of time and I'm too lazy to do it. So let's automate this in the next step!</p>
<p>Before setting up automatic deployment you have to know how CodeDeploy works.</p>
<h2 id="heading-what-is-codedeploy">What is CodeDeploy?</h2>
<p>CodeDeploy lets you deploy your application automatically to any number of EC2 instances. We need to prepare two items before beginning this process:</p>
<ol>
<li>The CodeDeploy Agent must be installed on the EC2 instance. It continuously polls CodeDeploy and deploys any new changes that are available.</li>
<li>A file called <code>appspec.yml</code> must be present in the root folder. This file describes the steps to be followed for the deployment.</li>
</ol>
<p>There is awesome <a target="_blank" href="https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html">documentation</a> by AWS to help you install CodeDeploy Agent. Please follow each and every step to install CodeDeploy Agent on your EC2 machine.</p>
<p>To verify that the CodeDeploy Agent is installed, run the below command. If you see <em>active (running)</em>, kudos to you! The CodeDeploy Agent was installed successfully.</p>
<pre><code>sudo service codedeploy-agent status
</code></pre><p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-219.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodeDeploy Agent running status</em></p>
<p>Now let's create the <code>appspec.yml</code> file. I've written the deployment instructions in the <code>deploy.sh</code> file, so <code>appspec.yml</code> only needs to run that script. If you want to learn more about <code>appspec.yml</code>, check out the AWS official <a target="_blank" href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure.html">documentation</a>.</p>
<p>Create a file called <code>appspec.yml</code> and add the following contents:</p>
<pre><code>version: <span class="hljs-number">0.0</span>
<span class="hljs-attr">os</span>: linux
<span class="hljs-attr">hooks</span>:
  ApplicationStart:
    - location: deploy.sh
      <span class="hljs-attr">timeout</span>: <span class="hljs-number">300</span>
      <span class="hljs-attr">runas</span>: ubuntu
</code></pre><p>I hope you understand the instructions in the above file. If not, here's a super simple explanation: I'm telling the CodeDeploy Agent that my instance runs Linux, and instructing it to run the <code>deploy.sh</code> file as the <code>ubuntu</code> user with the timeout set to 300 seconds. </p>
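<p>For reference, <code>appspec.yml</code> supports several lifecycle hooks besides <code>ApplicationStart</code>, such as <code>BeforeInstall</code>, <code>AfterInstall</code>, and <code>ValidateService</code>. Here's a sketch using hypothetical script names (not part of this repo):</p>
<pre><code class="lang-yaml">version: 0.0
os: linux
hooks:
  # Runs before the revision's files are copied to their final location
  BeforeInstall:
    - location: scripts/stop_app.sh
      timeout: 120
      runas: ubuntu
  # Runs once the files are in place
  ApplicationStart:
    - location: deploy.sh
      timeout: 300
      runas: ubuntu
</code></pre>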
<p>Here's my <code>deploy.sh</code> file:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
cd /path/to/project/on/EC2 
git pull origin master
yarn install &amp;&amp;
yarn build &amp;&amp;
pm2 restart [name]
</code></pre><p>This file contains instructions to navigate to the project folder on EC2, pull the latest code from source control, install dependencies, build the project, and restart the project instance.</p>
<p>This file is already available in the repo. No action for you here. Now it's time to set up automatic deployment.</p>
<h2 id="heading-how-to-setup-auto-deployment-using-codepipeline-and-codedeploy">How to Setup Auto-Deployment using CodePipeline and CodeDeploy</h2>
<p>Two IAM roles have to be created to set up auto-deployment. Things get a bit more complicated from here, so to keep it simple, I've attached screenshots with the relevant items highlighted in red boxes.</p>
<h3 id="heading-create-an-iam-role-for-codedeploy">Create an IAM Role for CodeDeploy</h3>
<p>You have to create this role to deploy the code every time you push. </p>
<p>Navigate to IAM in the AWS Console by searching for "IAM" in the search bar at the top. Click Roles on the left pane and Click the "Create role" button at the top right.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-220.png" alt="Image" width="600" height="400" loading="lazy">
<em>Create IAM role</em></p>
<p>Choose AWS service in Trusted entity types and choose CodeDeploy in the Use cases section and proceed to the next step.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-221.png" alt="Image" width="600" height="400" loading="lazy">
<em>IAM role for CodeDeploy</em></p>
<p>Now, you can see that the AWSCodeDeployRole policy is the only policy available, and it'll be chosen by default in this (Permissions) step. Let's proceed to the next section. No action for you here. </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-222.png" alt="Image" width="600" height="400" loading="lazy">
<em>AWSCodeDeploy Permission</em></p>
<p>Enter a name for your IAM role. You should choose a meaningful name to identify this in the future. I'm calling it <em>service-role-for-code-deploy</em>. Review the permission in the JSON and click the Create role button at the bottom.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-223.png" alt="Image" width="600" height="400" loading="lazy">
<em>AWSCodeDeploy Permission Review</em></p>
<h3 id="heading-create-an-iam-role-for-ec2">Create an IAM role for EC2</h3>
<p>Let's create the next role. This role is for EC2. Choose AWS service in the Trusted entity type, EC2 in the Common use cases section, and choose CodeDeploy in Use cases for other AWS services. Click Next to proceed to the next section.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-224.png" alt="Image" width="600" height="400" loading="lazy">
<em>IAM role for EC2</em></p>
<p>There are a lot of policies available for EC2 and CodeDeploy. In the Add permissions section, search for <em>codedeploy</em> (No space between code and deploy) and select "AmazonEC2RoleForCodeDeploy" and proceed to the next step.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-225.png" alt="Image" width="600" height="400" loading="lazy">
<em>Adding AmazonEC2RoleForCodeDeploy permission</em></p>
<p>No change in this step. Review it, give your role a meaningful name (I'm naming mine "code-deploy-role-for-ec2"), and click the "Create role" button.</p>
<h2 id="heading-how-to-attach-the-iam-role-to-ec2">How to Attach the IAM Role to EC2</h2>
<p>Once the IAM role for EC2 is created, we have to attach it to the EC2 instance.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-226.png" alt="Image" width="600" height="400" loading="lazy">
<em>EC2 instance before attaching the IAM role</em></p>
<p>To attach the IAM role to the EC2 instance, open your EC2 instance, click on the "Actions" button on the top right, and select "Security" in the drop-down. Then select "Modify IAM role".</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-227.png" alt="Image" width="600" height="400" loading="lazy">
<em>Modify IAM role for EC2 instance</em></p>
<p>Select the IAM role which you created last (code-deploy-role-for-ec2) and click the "Update IAM role" button. Reboot the EC2 for the changes to take effect.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-228.png" alt="Image" width="600" height="400" loading="lazy">
<em>Update IAM role for EC2 instance</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-229.png" alt="Image" width="600" height="400" loading="lazy">
<em>EC2 instance after attaching IAM role</em></p>
<p>After rebooting the EC2 instance, log in with SSH and run the <code>pm2 resurrect</code> command to restore the PM2 processes. Failing to do this may land you on a "PM2 process or namespace not found" error while running automatic deployment. </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-230.png" alt="Image" width="600" height="400" loading="lazy">
<em>PM2 process restore</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-231.png" alt="Image" width="600" height="400" loading="lazy">
<em>PM2 process or namespace not found error</em></p>
<h3 id="heading-how-to-create-the-codedeploy-application">How to Create the CodeDeploy Application</h3>
<p>In the AWS Console, search "CodeDeploy" in the search bar at the top. Select "Applications" in the left pane. Click on the "Create application" button on the top right.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-232.png" alt="Image" width="600" height="400" loading="lazy">
<em>Navigate to CodeDeploy in AWS Console</em></p>
<p>Enter the Application name, choose the "EC2/On-premises" compute platform, and click the "Create application" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-233.png" alt="Image" width="600" height="400" loading="lazy">
<em>Create CodeDeploy application</em></p>
<p>Once it's done, you'll automatically be redirected to the Deployment groups section. We have to create a deployment group. Click on the "Create deployment group" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-234.png" alt="Image" width="600" height="400" loading="lazy">
<em>Create CodeDeploy Deployment group</em></p>
<p>Enter the deployment group name, select the service role (1st created role) you created, and select the deployment type as in-place:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-235.png" alt="Image" width="600" height="400" loading="lazy">
<em>Create CodeDeploy Deployment group</em></p>
<p>In the Environment configuration section, select "Amazon EC2 instances" and select the key as Name. Enter your EC2 instance name in the value.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-236.png" alt="Image" width="600" height="400" loading="lazy">
<em>Code Deployment Group Environment configuration</em></p>
<p>In the Agent configuration section, select Never, as we installed CodeDeployAgent already. Select "CodeDeployDefault.AllAtOnce" in the Deployment settings section. Leave the "Enable load balancing" checkbox unchecked. Finally, click the "Create a deployment group" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-237.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodeDeploy deployment group configurations</em></p>
<h2 id="heading-how-to-create-the-codepipeline">How to Create the CodePipeline</h2>
<p>AWS CodePipeline helps you to automate your release pipelines for fast and reliable application and infrastructure updates. Now it's time to create our CodePipeline. In the AWS Console, search for "CodePipeline" in the search bar.</p>
<p>Select "Pipelines" in the left pane and click on "Create pipeline" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-238.png" alt="Image" width="600" height="400" loading="lazy">
<em>Create CodePipeline</em></p>
<p>Enter the Pipeline name and Role name. Remember, we created roles for EC2 and CodeDeploy, but not for CodePipeline. AWS creates that one by default from here. </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-239.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodePipeline settings</em></p>
<h3 id="heading-add-source-stage">Add Source Stage</h3>
<p>In this step, we have to connect our repo with CodePipeline to deploy the changes immediately after the code is pushed.</p>
<p>We'll be using GitHub as our source. Choose GitHub (version 2) in the source provider, and click on the "Connect to GitHub" button. This will open up a new pop-up window. Click on the "Connect to GitHub" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-240.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodePipeline adding source stage</em></p>
<p>This will take you to the GitHub authorization page where you have to sign into your GitHub account. Click on the "Install a new app" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-241.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodePipeline Github Authorization</em></p>
<p>Choose "Only select repositories" and choose your repository below that.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-242.png" alt="Image" width="600" height="400" loading="lazy">
<em>Installing AWS connector for GitHub</em></p>
<p>Once installed, it will prompt you for the password. Click the "Connect" button once you're done with your authentication.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-243.png" alt="Image" width="600" height="400" loading="lazy">
<em>Connecting GitHub to AWS</em></p>
<p>After connecting to GitHub, select the repository name and branch name. To start the CodePipeline on code change, it's important to select the checkbox "Start the pipeline on source code change" – otherwise auto-deployment will not happen. For "Output artifact format", select "CodePipeline default" and click the "Next" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-244.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodePipeline - Select source code repo</em></p>
<p>The next step is to add the build stage, but since we're deploying a simple app, we don't need one. Enterprise teams typically build their apps with the AWS CodeBuild service. Let's keep things simple and skip the build stage. </p>
<p>If you want me to write about CodeBuild, let me know – I'll try to cover it in an upcoming tutorial. </p>
<h3 id="heading-add-deploy-stage">Add Deploy Stage</h3>
<p>In the deployment stage step, choose "AWS CodeDeploy" for the "Deploy provider" and select the region where you created the above CodeDeploy application. Then select the "Application name" and "Deployment group" that we created in the previous steps and click the "Next" button.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-245.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodePipeline - Adding Deployment stage</em></p>
<p>The last step is "Review". Review everything carefully and click on the "Create pipeline" button. Once the pipeline is created it will start the deployment process. If you followed all the above steps, the pipeline should read "Succeeded" on your very first build.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-246.png" alt="Image" width="600" height="400" loading="lazy">
<em>Pipeline Succeeded</em></p>
<h3 id="heading-how-to-verify-auto-deployment">How to Verify Auto-Deployment</h3>
<p>Now let's verify if the Auto-deployment works properly. This is the Home page of our project:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-247.png" alt="Image" width="600" height="400" loading="lazy">
<em>Next.js Hello World App</em></p>
<p>Let's change the text from "Hello World" to "Welcome to 5minslearn" and push the code to GitHub.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-248.png" alt="Image" width="600" height="400" loading="lazy">
<em>Git code diff</em></p>
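<p>The change above boils down to a one-line text substitution followed by a commit and push. Here's a runnable sketch of that edit in a scratch directory – the file path and contents are illustrative placeholders, not the actual repo layout:</p>

```shell
# Demo in a scratch directory – path and file contents are hypothetical
mkdir -p /tmp/nextjs-demo/pages
echo 'Hello World' > /tmp/nextjs-demo/pages/index.js

# Swap the heading text, mirroring the change in the article
sed -i 's/Hello World/Welcome to 5minslearn/' /tmp/nextjs-demo/pages/index.js
cat /tmp/nextjs-demo/pages/index.js
# → Welcome to 5minslearn

# In the real repo you would then commit and push, which triggers the pipeline:
# git add . && git commit -m "Update heading" && git push origin main
```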
<p>Here we go! The pipeline triggered automatically and the changes were deployed successfully.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-249.png" alt="Image" width="600" height="400" loading="lazy">
<em>CodeDeploy getting triggered automatically on code changes in Git</em></p>
<p>Now head to "http://ec2-public-ip-address:3000/" and you'll see the below page:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/03/image-250.png" alt="Image" width="600" height="400" loading="lazy">
<em>Next.js App after auto deployment</em></p>
<p>Congrats! 🎉 We successfully completed setting up auto-deployment for a Next.js app.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, we learned how to deploy Next.js manually on EC2 and set up auto-deployment using AWS services such as CodeDeploy and CodePipeline.</p>
<p>Hope you enjoyed reading this article! If you are stuck at any point feel free to drop your queries to my <a target="_blank" href="mailto:arun@gogosoon.com">email</a>. I'll be happy to help you!</p>
<p>If you wish to learn more about AWS, subscribe to my <a target="_blank" href="https://5minslearn.gogosoon.com/?ref=fcc_nextjs_deployment">newsletter</a> (<a target="_blank" href="https://5minslearn.gogosoon.com/?ref=fcc_nextjs_deployment">https://5minslearn.gogosoon.com/</a>) and follow me on social media. </p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ MERN App Development – How to Build a CI/CD Pipeline with Jenkins ]]>
                </title>
                <description>
                    <![CDATA[ By Rakesh Potnuru As you continue to develop your software, you must also continue to integrate it with previous code and deploy it to servers.  Manually doing this is a time-consuming process that can occasionally result in errors. So we need to do ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/automate-mern-app-deployment-with-jenkins/</link>
                <guid isPermaLink="false">66d460c5d14641365a05095d</guid>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Express ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Jenkins ]]>
                    </category>
                
                    <category>
                        <![CDATA[ mongo ]]>
                    </category>
                
                    <category>
                        <![CDATA[ node ]]>
                    </category>
                
                    <category>
                        <![CDATA[ React ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Wed, 08 Mar 2023 17:42:07 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2023/03/CICD-Pipeline-with-Jenkins.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Rakesh Potnuru</p>
<p>As you continue to develop your software, you must also continue to integrate it with previous code and deploy it to servers. </p>
<p>Manually doing this is a time-consuming process that can occasionally result in errors. So we need to do this in a continuous and automated manner – which is what you will learn in this article.</p>
<p>We'll go over how you can improve your MERN (MongoDB, Express, React, and Node.js) app development process by setting up a CI/CD pipeline with Jenkins. You'll see how to automate deployment for faster, more efficient releases.</p>
<h2 id="heading-lets-get-started">Let's Get Started</h2>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li>Basic understanding of MERN stack technologies.</li>
<li>Basic understanding of Docker.</li>
<li>Get source code from <a target="_blank" href="https://github.com/itsrakeshhq/productivity-app">GitHub</a></li>
</ul>
<h2 id="heading-the-problem">The Problem</h2>
<p>Consider this <a target="_blank" href="https://github.com/itsrakeshhq/productivity-app">productivity app</a> – it's a MERN project that we are going to use in this article. There are numerous steps we must complete, from building the application to pushing it to the Docker hub. </p>
<p>First, we must run tests with a command to determine whether all tests pass or not. If all tests pass, we build the Docker images and then push those images to Docker Hub. If your application is extremely complex, you may need to take additional steps. </p>
<p>Now, imagine that we're doing everything manually, which takes time and can lead to mistakes.</p>
<p><img src="https://i.imgur.com/iWAmMm4.jpg" alt="Waiting for deployment without devops meme" width="600" height="400" loading="lazy">
<em>Waiting for deployment without devops meme</em></p>
<h2 id="heading-the-solution">The Solution</h2>
<p>To address this problem, we can create a CI/CD <strong>Pipeline</strong>. So, whenever you add a feature or fix a bug, this pipeline gets triggered. This automatically performs all of the steps from testing to deploying.</p>
<h2 id="heading-what-is-cicd-and-why-is-it-important">What is CI/CD and Why is it Important?</h2>
<p><strong>C</strong>ontinuous <strong>I</strong>ntegration and <strong>C</strong>ontinuous <strong>D</strong>eployment is a series of steps performed to automate software integration and deployment. CI/CD is the heart of DevOps.</p>
<p><img src="https://i.imgur.com/uMFtPwJ.png" alt="ci cd steps" width="600" height="400" loading="lazy">
<em>CI/CD steps</em></p>
<p>From development to deployment, our MERN app goes through four major stages: testing, building Docker images, pushing to a registry, and deploying to a cloud provider. All of this is done manually by running various commands. And we need to do this every time a new feature is added or a bug is fixed. </p>
<p>But this will significantly reduce developer productivity, which is why CI/CD can be so helpful in automating this process. In this article, we will cover the steps up until pushing to the registry.</p>
<p><img src="https://i.imgur.com/g2omESy.png" alt="ci cd meme" width="600" height="400" loading="lazy">
<em>CI/CD meme</em></p>
<h2 id="heading-the-project">The Project</h2>
<p>The project we are going to use in this tutorial is a very simple full-stack MERN application.</p>
<p><img src="https://i.imgur.com/GSvRlQ0.gif" alt="project demo" width="600" height="400" loading="lazy">
<em>Project demo</em></p>
<p>It contains two microservices.</p>
<ol>
<li>Frontend</li>
<li>Backend</li>
</ol>
<p>You can learn more about the project <a target="_blank" href="https://blog.itsrakesh.co/lets-build-and-deploy-a-full-stack-mern-web-application">here</a>.</p>
<p>Both of these applications contain a Dockerfile. You can learn how to dockerize a MERN application <a target="_blank" href="https://blog.itsrakesh.co/dockerizing-your-mern-stack-app-a-step-by-step-guide">here</a>.</p>
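<p>If you're curious what those Dockerfiles might contain, here's a minimal illustrative sketch for a Node-based service. This is an assumption-laden sketch, not the repo's exact files – the base image, port, and start command may differ in the actual project:</p>

```dockerfile
# Illustrative sketch of a Dockerfile for a Node service (not the repo's exact file)
FROM node:16-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the source and start the app
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```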
<h2 id="heading-what-is-jenkins">What is Jenkins?</h2>
<p>To run a CI/CD pipeline, we need a CI/CD server. This is where all of the steps written in a pipeline run. </p>
<p>There are numerous services available on the market, including GitHub Actions, Travis CI, Circle CI, GitLab CI/CD, AWS CodePipeline, Azure DevOps, and Google Cloud Build. Jenkins is one of the most popular CI/CD tools, and it's what we'll use here.</p>
<h2 id="heading-how-to-set-up-jenkins-server-on-azure">How to Set Up Jenkins Server on Azure</h2>
<p>Because Jenkins is open source and doesn't offer a hosted cloud solution, we must either run it locally or self-host it on a cloud provider. Running locally can be difficult, particularly for Windows users, so I've chosen to self-host it on Azure for this demo.</p>
<p>If you want to run Jenkins locally or self-host it somewhere other than Azure, follow <a target="_blank" href="https://www.jenkins.io/doc/book/installing/">these</a> guides by Jenkins, then skip this section and proceed to the <strong>How to Configure Jenkins</strong> section.</p>
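<p>As an aside, if you already have Docker installed and just want to experiment locally, running the official Jenkins image is a quick option (the rest of this guide assumes the Azure VM setup below):</p>

```shell
# Run Jenkins LTS locally in Docker; the dashboard will be at http://localhost:8080
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```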
<p>First, you'll need to sign in to your <a target="_blank" href="https://Azure.microsoft.com?wt.mc_id=studentamb_90351">Azure</a> account (Create one if you don't have one already). Open Azure Cloud Shell.</p>
<p><img src="https://i.imgur.com/IN6RXAe.png" alt="opening azure cloud shell" width="600" height="400" loading="lazy">
<em>Opening Azure Cloud Shell</em></p>
<p>Then create a directory called <code>jenkins</code> to store all the Jenkins config, and switch to that directory:</p>
<pre><code class="lang-bash">mkdir jenkins
<span class="hljs-built_in">cd</span> jenkins
</code></pre>
<p>Create a file called <code>cloud-init-jenkins.txt</code> and open it with nano or vim:</p>
<pre><code class="lang-bash">touch cloud-init-jenkins.txt
nano cloud-init-jenkins.txt
</code></pre>
<p>Then paste this code into it:</p>
<pre><code class="lang-bash"><span class="hljs-comment">#cloud-config</span>
package_upgrade: <span class="hljs-literal">true</span>
runcmd:
  - sudo apt install openjdk-11-jre -y
  - wget -qO - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
  - sh -c <span class="hljs-string">'echo deb https://pkg.jenkins.io/debian-stable binary/ &gt; /etc/apt/sources.list.d/jenkins.list'</span>
  - sudo apt-get update &amp;&amp; sudo apt-get install jenkins -y
  - sudo service jenkins restart
</code></pre>
<p>We'll pass this cloud-init file to the virtual machine when we create it. It first installs OpenJDK, which Jenkins requires to run, then adds the Jenkins apt repository, installs Jenkins, and restarts the Jenkins service.</p>
<p>Next, create a resource group. (A resource group in Azure is like a container that holds all the related resources of a project in one group. Learn more about resource groups <a target="_blank" href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group?wt.mc_id=studentamb_90351">here</a>.)</p>
<pre><code class="lang-bash">az group create --name jenkins-rg --location centralindia
</code></pre>
<p><strong>Note:</strong> make sure to change the location to the one closest to you.</p>
<p>Now, create a virtual machine.</p>
<pre><code class="lang-bash">az vm create \
--resource-group jenkins-rg \
--name jenkins-vm \
--image UbuntuLTS \
--admin-username <span class="hljs-string">"azureuser"</span> \
--generate-ssh-keys \
--public-ip-sku Standard \
--custom-data cloud-init-jenkins.txt
</code></pre>
<p>You can verify the VM installation with this command:</p>
<pre><code class="lang-bash">az vm list -d -o table --query <span class="hljs-string">"[?name=='jenkins-vm']"</span>
</code></pre>
<p>Don't be confused by the <code>--query</code> flag – this command simply filters for our VM and displays the JSON data in a tabular format for easy verification.</p>
<p>Jenkins server runs on port <code>8080</code>, so we need to expose this port on our VM. You can do that like this:</p>
<pre><code class="lang-bash">az vm open-port \
--resource-group jenkins-rg \
--name jenkins-vm  \
--port 8080 --priority 1010
</code></pre>
<p>Now we can access the Jenkins dashboard in the browser with the URL <code>http://&lt;your-vm-ip&gt;:8080</code>. Use this command to get the VM IP address:</p>
<pre><code class="lang-bash">az vm show \
--resource-group jenkins-rg \
--name jenkins-vm -d \
--query [publicIps] \
--output tsv
</code></pre>
<p>You can now see the Jenkins application in your browser.</p>
<p><img src="https://i.imgur.com/Sy1Glar.png" alt="jenkins dashboard" width="600" height="400" loading="lazy">
<em>Jenkins dashboard</em></p>
<p>As you'll notice, Jenkins is asking us for the admin password, which is automatically generated during installation.</p>
<p>But first let's SSH into our virtual machine where Jenkins is installed.</p>
<pre><code class="lang-bash">ssh azureuser@&lt;ip_address&gt;
</code></pre>
<p>Now, type in the below command to get the password:</p>
<pre><code class="lang-bash">sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
<p>Copy the password, paste it into the setup page, and click <strong>Continue</strong>.</p>
<h2 id="heading-how-to-configure-jenkins">How to Configure Jenkins</h2>
<p>First, you'll need to click <strong>Install suggested plugins</strong>. It will take some time to install all the plugins.</p>
<p><img src="https://i.imgur.com/vDaaqE3.png" alt="installing suggested plugins" width="600" height="400" loading="lazy">
<em>Installing suggested plugins</em></p>
<p>An admin user is needed to restrict access to Jenkins. So go ahead and create one. After finishing, click <strong>Save and continue</strong>.</p>
<p><img src="https://i.imgur.com/qqkwQN6.png" alt="create an admin user" width="600" height="400" loading="lazy">
<em>Create an admin user</em></p>
<p>Now you will be presented with the Jenkins dashboard.</p>
<p>The first step is to install the "Blue Ocean" plugin. Jenkins' default interface is quite dated, which can make it difficult for some people to use. The Blue Ocean plugin provides a modern interface for some Jenkins components (like creating a pipeline).</p>
<p>To install plugins, go to <strong>Manage Jenkins</strong> -&gt; click <strong>Manage Plugins</strong> under "System Configuration" -&gt; <strong>Available plugins</strong>. Search for "Blue Ocean" -&gt; check the box and click <strong>Download now and install after restart</strong>.</p>
<p><img src="https://i.imgur.com/dAKBLiq.png" alt="blue ocean" width="600" height="400" loading="lazy">
<em>Blue ocean</em></p>
<p>Great, we're all set. Now let's create a pipeline.</p>
<h2 id="heading-how-to-write-a-jenkinsfile">How to Write a Jenkinsfile</h2>
<p>To create a pipeline, we need a <strong>Jenkinsfile</strong>. This file contains all the pipeline configurations – stages, steps, and so on. A Jenkinsfile is to Jenkins what a Dockerfile is to Docker.</p>
<p>A Jenkinsfile uses <strong>Groovy</strong> syntax, which is simple enough that you can understand most of it just by reading it.</p>
<p>Let's start by writing:</p>
<pre><code class="lang-groovy">pipeline {

}
</code></pre>
<p>The word 'agent' should be the first thing you mention in the pipeline. An agent is similar to a container or environment in which jobs run. You can use multiple agents to run jobs in parallel. You can find more information about Jenkins agents <a target="_blank" href="https://www.jenkins.io/doc/book/using/using-agents/">here</a>.</p>
<pre><code class="lang-groovy">pipeline {
    agent any
}
</code></pre>
<p>Here we are telling Jenkins to use any available agent.</p>
<p>We have a total of 5 stages in our pipeline:</p>
<p><img src="https://i.imgur.com/ezvdElo.png" alt="ci cd pipeline stages" width="600" height="400" loading="lazy">
<em>CI/CD pipeline stages</em></p>
<h3 id="heading-stage-1-checkout-code">Stage 1: Checkout code</h3>
<p>Different CI/CD tools use different naming conventions. In Jenkins, these are referred to as stages. In each stage we write various steps.</p>
<p>Our first stage is checking out code from a source code management system (in our case, GitHub).</p>
<pre><code class="lang-groovy">pipeline {
    agent any

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
    }
}
</code></pre>
<p>Commit the changes and push to your GitHub repo.</p>
<p>Since we haven't created any pipelines yet, let's do that now.</p>
<p>Before we begin, we must ensure that Git is installed on our system. If you followed my previous steps to install Jenkins on an Azure VM, Git is already installed. </p>
<p>You can test it by running the following command (make sure you are still SSHed into the VM):</p>
<pre><code class="lang-bash">git --version
</code></pre>
<p>If it isn't already installed, you can do so with:</p>
<pre><code class="lang-bash">sudo apt install git
</code></pre>
<p>Open blue ocean. Click <strong>Create new pipeline</strong>.</p>
<p><img src="https://i.imgur.com/FNffT6p.png" alt="creating new pipeline" width="600" height="400" loading="lazy">
<em>Creating new pipeline</em></p>
<p>Then select your source code management system. If you chose GitHub, you must provide an access token for Jenkins to access your repository. I recommend clicking on <strong>Create an access token here</strong> because it is a template with all of the necessary permissions. Then click <strong>Connect</strong>.</p>
<p><img src="https://i.imgur.com/H9TUsHV.png" alt="selecting scm" width="600" height="400" loading="lazy">
<em>Selecting scm</em></p>
<p>After that, a pipeline will be created. Since our repository already contains a Jenkinsfile, Jenkins automatically detects it and runs the stages and steps we mentioned in the pipeline.</p>
<p>If everything went well, the entire page will turn green. (Other colors: <strong>blue</strong> indicates that the pipeline is running, <strong>red</strong> indicates that something went wrong in the pipeline, and <strong>gray</strong> indicates that we stopped the pipeline.)</p>
<p><img src="https://i.imgur.com/FtvJlND.png" alt="stage one successful" width="600" height="400" loading="lazy">
<em>Stage one successful</em></p>
<h3 id="heading-stage-2-run-frontend-tests">Stage 2: Run frontend tests</h3>
<p>In general, CI/CD pipelines contain some tests that need to run before deploying, so I added simple tests to both the frontend and backend. Let's start with the frontend tests.</p>
<pre><code class="lang-groovy">stage('Client Tests') {
    steps {
        dir('client') {
            sh 'npm install'
            sh 'npm test'
        }
    }
}
</code></pre>
<p>We're changing the directory to <code>client/</code> because that's where the frontend code lives. Then we install the dependencies with <code>npm install</code> and run the tests with <code>npm test</code> in a shell.</p>
<p>Again, before we restart the pipeline, we have to make sure Node and npm are installed. Install them with these commands on the virtual machine:</p>
<pre><code class="lang-bash">curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
</code></pre>
<p>After that, run the following:</p>
<pre><code class="lang-bash">sudo apt-get install -y nodejs
</code></pre>
<p>Now, commit the code and restart the pipeline.</p>
<p><img src="https://i.imgur.com/OWYcdDu.png" alt="run client tests" width="600" height="400" loading="lazy">
<em>Run client tests</em></p>
<h3 id="heading-stage-3-run-backend-tests">Stage 3: Run backend tests</h3>
<p>Now do the same thing for the backend tests.</p>
<p>But there is one thing we need to do before we proceed. If you take a look at the codebase and <code>activity.test.js</code>, we are using a few environment variables. So let's add these environment variables in Jenkins.</p>
<h4 id="heading-how-to-add-environment-variables-in-jenkins">How to add environment variables in Jenkins</h4>
<p>To add environment variables, go to <strong>Manage Jenkins</strong> -&gt; click <strong>Manage Credentials</strong> under "Security" -&gt;  <strong>System</strong> -&gt; <strong>Global credentials (unrestricted)</strong> -&gt; click <strong>+ Add Credentials</strong>.</p>
<p>For <strong>Kind</strong> select "Secret text", leave <strong>Scope</strong> at its default, enter the secret value in <strong>Secret</strong>, and give it an <strong>ID</strong>. The ID is what we'll reference when using these environment variables in the Jenkinsfile.</p>
<p>Add the following env variables:</p>
<p><img src="https://i.imgur.com/xGjg2mG.png" alt="environment variables" width="600" height="400" loading="lazy">
<em>Environment variables</em></p>
<p>Then in the Jenkinsfile, use these env variables:</p>
<pre><code class="lang-groovy">environment {
    MONGODB_URI = credentials('mongodb-uri')
    TOKEN_KEY = credentials('token-key')
    EMAIL = credentials('email')
    PASSWORD = credentials('password')
}
</code></pre>
<p>Add a stage that installs dependencies, exports these variables, and runs the tests. (Note that each <code>sh</code> step runs in its own shell, so the exports below don't persist between steps – it's the <code>environment</code> block above that actually makes the variables available to <code>npm test</code>.)</p>
<pre><code class="lang-groovy">stage('Server Tests') {
    steps {
        dir('server') {
            sh 'npm install'
            sh 'export MONGODB_URI=$MONGODB_URI'
            sh 'export TOKEN_KEY=$TOKEN_KEY'
            sh 'export EMAIL=$EMAIL'
            sh 'export PASSWORD=$PASSWORD'
            sh 'npm test'
        }
    }
}
</code></pre>
<p>Again, commit the code and restart the pipeline.</p>
<p><img src="https://i.imgur.com/hpjMUyT.png" alt="run server tests" width="600" height="400" loading="lazy">
<em>Run server tests</em></p>
<h3 id="heading-stage-4-build-docker-images">Stage 4: Build Docker images</h3>
<p>Now, we have to specify a step to build the Docker images from the Dockerfiles.</p>
<p>Before we proceed, install Docker in the VM (if you don't already have it installed).</p>
<p>To install Docker:</p>
<pre><code class="lang-bash">sudo apt install docker.io
</code></pre>
<p>Add the user <code>jenkins</code> to the <code>docker</code> group so that Jenkins can access the Docker daemon – otherwise you'll get a permission denied error.</p>
<pre><code class="lang-bash">sudo usermod -a -G docker jenkins
</code></pre>
<p>Then restart the <code>jenkins</code> service.</p>
<pre><code class="lang-bash">sudo systemctl restart jenkins
</code></pre>
<p>Add a stage in the Jenkinsfile.</p>
<pre><code class="lang-groovy">stage('Build Images') {
    steps {
        sh 'docker build -t rakeshpotnuru/productivity-app:client-latest client'
        sh 'docker build -t rakeshpotnuru/productivity-app:server-latest server'
    }
}
</code></pre>
<p>Commit the code and restart the pipeline.</p>
<p><img src="https://i.imgur.com/USh63SD.png" alt="build docker images" width="600" height="400" loading="lazy">
<em>Build docker images</em></p>
<h3 id="heading-stage-5-push-images-to-the-registry">Stage 5: Push images to the registry</h3>
<p>As a final stage, we will push the images to Docker Hub.</p>
<p>Before that, add your Docker Hub username and password to the Jenkins credentials manager, but for <strong>Kind</strong> choose "Username with password".</p>
<p><img src="https://i.imgur.com/ue0MMKM.png" alt="username with password type credential" width="600" height="400" loading="lazy">
<em>Username with password type credential</em></p>
<p>Add the final stage, where we log in and push the images to Docker Hub.</p>
<pre><code class="lang-groovy">stage('Push Images to DockerHub') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'dockerhub', passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
            sh 'docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD'
            sh 'docker push rakeshpotnuru/productivity-app:client-latest'
            sh 'docker push rakeshpotnuru/productivity-app:server-latest'
        }
    }
}
</code></pre>
<p><img src="https://i.imgur.com/copfIou.png" alt="push images to dockerhub" width="600" height="400" loading="lazy">
<em>Push images to dockerhub</em></p>
<p>Here is the complete Jenkinsfile:</p>
<pre><code class="lang-groovy">// This is a Jenkinsfile. It is a script that Jenkins will run when a build is triggered.
pipeline {
    // Telling Jenkins to run the pipeline on any available agent.
    agent any

    // Setting environment variables for the build.
    environment {
        MONGODB_URI = credentials('mongodb-uri')
        TOKEN_KEY = credentials('token-key')
        EMAIL = credentials('email')
        PASSWORD = credentials('password')
    }

    // This is the pipeline. It is a series of stages that Jenkins will run.
    stages {
        // This state is telling Jenkins to checkout the source code from the source control management system.
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        // This stage is telling Jenkins to run the tests in the client directory.
        stage('Client Tests') {
            steps {
                dir('client') {
                    sh 'npm install'
                    sh 'npm test'
                }
            }
        }

        // This stage is telling Jenkins to run the tests in the server directory.
        stage('Server Tests') {
            steps {
                dir('server') {
                    sh 'npm install'
                    sh 'export MONGODB_URI=$MONGODB_URI'
                    sh 'export TOKEN_KEY=$TOKEN_KEY'
                    sh 'export EMAIL=$EMAIL'
                    sh 'export PASSWORD=$PASSWORD'
                    sh 'npm test'
                }
            }
        }

        // This stage is telling Jenkins to build the images for the client and server.
        stage('Build Images') {
            steps {
                sh 'docker build -t rakeshpotnuru/productivity-app:client-latest client'
                sh 'docker build -t rakeshpotnuru/productivity-app:server-latest server'
            }
        }

        // This stage is telling Jenkins to push the images to DockerHub.
        stage('Push Images to DockerHub') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub', passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
                    sh 'docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD'
                    sh 'docker push rakeshpotnuru/productivity-app:client-latest'
                    sh 'docker push rakeshpotnuru/productivity-app:server-latest'
                }
            }
        }
    }
}
</code></pre>
<p><img src="https://i.imgur.com/NQxFXhO.png" alt="pipeline ran successfully" width="600" height="400" loading="lazy">
<em>Pipeline ran successfully</em></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In summary, let's review what we've covered:</p>
<ul>
<li>We explored the significance of implementing Continuous Integration and Continuous Deployment (CI/CD) in software development.</li>
<li>We delved into the fundamentals of Jenkins and acquired knowledge on how to deploy a Jenkins server on the Azure cloud platform.</li>
<li>We customized Jenkins to meet our specific requirements.</li>
<li>Lastly, we wrote a Jenkinsfile and built a pipeline utilizing the user-friendly interface of Jenkins Blue Ocean.</li>
</ul>
<p>That's all for now! Thanks for reading 🙂.</p>
<p>Connect with me on <a target="_blank" href="https://twitter.com/rakesh_at_tweet">Twitter</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Setup a CI/CD Pipeline with GitHub Actions and AWS ]]>
                </title>
                <description>
                    <![CDATA[ By Nyior Clement In this article, we'll learn how to set up a CI/CD pipeline with GitHub Actions and AWS. I've divided the guide into three parts to help you work through it: First, we'll cover some important terminology so you're not lost in a bunch... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-setup-a-ci-cd-pipeline-with-github-actions-and-aws/</link>
                <guid isPermaLink="false">66d4608bc7632f8bfbf1e46b</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 18 Jan 2022 21:26:24 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2021/12/2220.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Nyior Clement</p>
<p>In this article, we'll learn how to set up a CI/CD pipeline with GitHub Actions and AWS. I've divided the guide into three parts to help you work through it:</p>
<p>First, we'll cover some important terminology so you're not lost in a bunch of big buzzwords.</p>
<p>Second, we'll set up continuous integration so we can automatically run builds and tests.</p>
<p>And finally, we'll set up continuous delivery so we can automatically deploy our code to AWS.</p>
<p>Alright, that was a lot. Let's start by diving into each of these terms so you understand exactly what we're doing here.</p>
<h2 id="heading-part-one-demystifying-the-hefty-buzzwords">Part One: Demystifying the Hefty Buzzwords</h2>
<p>The key to making sense of the title of this piece lies in understanding the terms CI/CD Pipeline, GitHub Actions, and AWS.</p>
<p>If you already have a strong grasp of what these terms are, you can just skip down to Part 2.</p>
<h3 id="heading-what-is-a-cicd-pipeline">What is a CI/CD Pipeline?</h3>
<p>A CI/CD Pipeline is simply a development practice. It tries to answer this one question: <em>How can we ship quality features to our production environment faster?</em> In other words, how can we hasten the feature release process without compromising on quality?</p>
<p>How does the CI/CD Pipeline help us hasten the feature release process, you might ask? </p>
<p>The diagram below depicts a typical feature delivery cycle with or without the CI/CD pipeline.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/12/Activity-diagram.jpeg" alt="Image" width="600" height="400" loading="lazy">
<em>the feature release process. Source: Author</em></p>
<p>Without the CI/CD Pipeline, each step in the diagram above will be performed manually by the developer. In essence, to build the source code, someone on your team has to manually run the command to initiate the build process. Same thing with running tests and deployment.</p>
<p>The CI/CD approach is a radical shift from the manual way of doing things. It is based entirely on the premise that we can speed up the feature release process considerably if we automate steps 3-6 in the diagram above. </p>
<p>With the CI/CD Pipeline, we set up a mechanism that automatically starts the build process, runs the tests, deploys to the User Acceptance Testing (UAT) environment, and finally to the production environment each time a member of the team pushes their change to the shared repo, for example.</p>
<p>Continuous Integration happens each time the build process is initiated, and tests run on a new change.  </p>
<p>Continuous Delivery happens when a newly integrated change is automatically deployed to the UAT environment and then manually deployed to the production environment from there. </p>
<p>Continuous Deployment happens when an update in the UAT environment is automatically deployed to the production environment as an official release.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/12/Activity-diagram--1-.jpeg" alt="Image" width="600" height="400" loading="lazy">
<em>Continuous Integration vs Continuous Deployment vs Continuous Delivery. Source: Author</em></p>
<p><strong>Note:</strong> If the deployment from the UAT environment to the production environment is initiated manually, then it is a Continuous Integration/Continuous Delivery setup. Otherwise, it is a Continuous Integration/Continuous Deployment setup.</p>
<p>However, we can't help but ask, what is that entity that automates the different phases of the CI/CD Pipeline? </p>
<p>There are a variety of tools we can use to automate the build, test, and deployment steps in the CI/CD Pipeline – for example, CircleCI, Travis CI, Jenkins, GitHub Actions, and so on. In this article we will be focusing on GitHub Actions.</p>
<h3 id="heading-what-are-github-actions">What are GitHub Actions?</h3>
<p>For want of a better way of making the GitHub Actions term super comprehensible, I'm going to oversimplify this. </p>
<p>In the CI/CD Pipeline, GitHub Actions is the entity that automates the boring stuff. Think of it as some plugin that comes bundled with every GitHub repository you create. </p>
<p>The plugin exists on your repo to execute whatever task you tell it to. Usually, you'd specify what tasks the plugin should execute through a YAML configuration file. Whatever command you add to the configuration file will translate to something like this in plain English: </p>
<p>"hey GitHub Actions, each time a PR is opened on X branch, automatically build and test the new change. And each time a new change is merged into or pushed to X branch, deploy that change to Y server." </p>
<p>At the core of GitHub Actions lie five concepts: jobs, workflows, events, actions, and runners. </p>
<p><strong>Jobs</strong> are the tasks you command GitHub Actions to execute through the YAML config file. A job could be something like telling GitHub Actions to build your source code, run tests, or deploy the code that has been built to some remote server.</p>
<p><strong>Workflows</strong> are essentially automated processes that contain one or more logically related jobs. For example, you could put the build and run tests jobs into the same workflow, and the deployment job into a different workflow. </p>
<p>Remember, we mentioned that you tell GitHub Actions what job(s) to execute through a configuration file, right? GitHub Actions considers each configuration file that you put in a designated folder in your repo a workflow. We will talk more about this folder in the next part. </p>
<p>So, to create a separate workflow for the deployment job and then a different workflow that combines the build and test jobs, you'd have to add two config files to your repo. But if you are merging all three jobs into a single workflow, then you'd need to add just one config file.</p>
<p><strong>Events</strong> are literally the events that trigger the execution of a job by GitHub Actions. Recall we mentioned passing jobs to be executed through a config file? In that config file you'd also have to specify when a job should be executed. </p>
<p>For example, is it on-PR to main? Is it on-push to main? is it on-merge to main? A job can only be executed by a GitHub Action when some event happens.</p>
<p>Okay, let me quickly correct myself. It's not always the case that some event has to happen before a job can be executed. You could schedule jobs too. </p>
<p>For example, in your config file, instead of specifying an event that should trigger the execution of, let's say, the build-and-test job, you could schedule it to happen at 2 am every day. In fact, you could both schedule a job and specify an event for that same job.</p>
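<p>For the curious, scheduling is done with the <code>schedule</code> event and a cron expression. A sketch (not part of our project's setup) that runs a workflow both on push and at 2 am UTC every day could look like this:</p>
<pre><code class="lang-yaml">on:
  push:
    branches: [main]
  schedule:
    # minute hour day-of-month month day-of-week (cron times are in UTC)
    - cron: '0 2 * * *'
</code></pre>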
<p><strong>Actions</strong> are reusable commands that you can use in your config file. You can write your own custom actions or use existing ones.</p>
<p>A <strong>runner</strong> is the remote computer that GitHub Actions uses to execute the jobs you tell it to. </p>
<p>For example, when the build-and-test job is triggered based on some event, GitHub Actions will pull your code to that computer and execute the job. </p>
<p>The same thing happens in the case of the deployment job. The runner triggers the deployment of the built code to some remote server you specify. In our case, that server will live on a platform called AWS.</p>
<h3 id="heading-what-is-aws">What is AWS?</h3>
<p>AWS stands for Amazon Web Services. It is a platform owned by Amazon, and this platform allows you access to a broad range of cloud computing services.</p>
<p><strong>Cloud computing</strong> – I thought you said no big words? Most times businesses and even individual developers build applications just so other people can use them. For that reason, these applications have to be available on the interwebs. </p>
<p>Making an application accessible to some target users, ideally, entails uploading that application to a special computer that runs 24/7 and is super fast. </p>
<p>Imagine if it were the case that, before you could make your applications available to other internet users, you'd have to own and set up such a computer. It is quite doable, but it comes with a lot of hurdles. </p>
<p>For example, what if you just want to test out the application? You'd go through all the stress of setting up a hardware infrastructure just for testing? Furthermore, what if you want to upload 1000 different applications, or your application starts handling 1 billion concurrent requests? Things start to get complicated.</p>
<p>Cloud computing platforms like AWS exist to save you all that stress. These platforms already have numerous of these special computers set up and kept in buildings called data centers. </p>
<p>Instead of having to set up your own hardware infrastructure from scratch, they allow you to upload your application to one of their pre-configured computers over the internet. In return, you pay them a certain amount. </p>
<p>In fact, some of these platforms have free plans that allow you to test out small applications. In addition to uploading your application's code, some of these platforms also allow you to host your database and store your media files, amongst other features.</p>
<p>In its most simplistic form, Cloud Computing is primarily about storing or executing (sometimes both) certain things on someone else's computer, usually, over a network.</p>
<p>So when I said AWS gives access to a wide range of cloud services, I was just saying it provides businesses and individuals with some special computer where they could upload their code, databases, and media files as a service.</p>
<p>Okay, now that we fully understand the different parts of our title, we will now restate it in the form of objectives. These objectives will then dictate what the remaining two parts in this article will contain.</p>
<p><strong>Objective 1:</strong> How to automatically build and run unit tests on push or on PR to the main branch with GitHub Actions.</p>
<p><strong>Objective 2:</strong> How to automatically deploy to AWS on push or on PR to the main branch with GitHub Actions.</p>
<h2 id="heading-part-2-continuous-integration-how-to-automatically-run-builds-and-tests">Part 2: Continuous Integration – How to Automatically Run Builds and Tests</h2>
<p>In this section, we will be seeing how we can configure GitHub Actions to automatically run builds and tests on push or pull request to the main branch of a repo.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li>A Django project set up locally, with at least one view defined that returns some response. </li>
<li>A testcase written for the view(s) you've defined.</li>
</ul>
<p>I have created a demo Django project which you can grab from this <a target="_blank" href="https://github.com/Nyior/django-github-actions-aws">repository</a>:</p>
<p><code>git clone https://github.com/Nyior/django-github-actions-aws</code></p>
<p>After you download the code, create a virtualenv and install the requirements via pip:</p>
<p><code>pip install -r requirements.txt</code></p>
<p><strong>Note:</strong> The project already has all the files we will be adding incrementally as we proceed, so you can download it and try to make sense of those files as you follow along. The project itself isn't interesting – it was created just for demo purposes.</p>
<p>Now that you have a Django project setup locally, let's configure GitHub Actions.</p>
<h3 id="heading-how-to-configure-github-actions">How to Configure GitHub Actions</h3>
<p>Okay, so we have our project set up. We also have a testcase written for the view that we have defined, but most importantly we've pushed our shiny change to GitHub. </p>
<p>The goal is to have GitHub trigger a build and run our tests each time we push or open a pull request on main/master. We just pushed our change to main, but GitHub Actions didn't trigger the build or run our tests. </p>
<p><strong>Why not?</strong> Because we haven't defined a workflow yet. Remember, a workflow is where we specify the jobs we want GitHub Actions to execute.</p>
<p>In fact, Nyior, how did you even know that no build was triggered and, by extension, that no workflow was defined? Every GitHub repo has an <code>Actions</code> tab. If you navigate to that tab, you'll know if a repo has a workflow defined on it or not.</p>
<p><strong>How?</strong> See the images below.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/with-workflow.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>A GitHub Repo With a Workflow Defined on it</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/no-workflow.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>A GitHub Repo With No Workflow Defined on it</em></p>
<p>The first repo in the first image has a workflow defined on it named 'Lint and Test'. The second repo in the second image has no workflow – it's why you don't see a list with the heading 'All Workflows' as is the case with the first repo.</p>
<p>Oh okay, now I get it. So how do I define a workflow on my repo?</p>
<ul>
<li>Create a folder named <code>.github</code> in the root of your project directory.</li>
<li>Create a folder named <code>workflows</code> in the <code>.github</code> directory: This is where you'll create all your YAML files. </li>
<li>Let's create our first workflow that will contain our build and test jobs. We do that by creating a file with a <code>.yml</code> extension. Let's name this file <code>build_and_test.yml</code></li>
<li>Add the content below in the <code>yaml</code> file you just created:</li>
</ul>
<pre><code class="lang-yaml">name: Build and Test

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout code
      uses: actions/checkout@v2
    - name: Set up Python Environment
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt

    - name: Run Tests
      run: |
        python manage.py test
</code></pre>
<p>Let's make sense of each line in the file above.</p>
<ul>
<li><code>name: Build and Test</code> This is the name of our workflow. When you navigate to the actions tab, each workflow you define will be identified by the name you give it here on that list.</li>
<li><code>on:</code> This is where you specify the events that should trigger the execution of our workflow. In our config file we passed two events: <code>push</code> and <code>pull_request</code>. For both, we specified the main branch as the target branch.</li>
<li><code>jobs:</code> Remember, a workflow is just a collection of jobs.</li>
<li><code>test:</code> This is the name of the job we've defined in this workflow. You could name it anything really. Notice it's the only job and the build job isn't there? Well, it's Python code so no build step is required. This is why we didn't define the build job.</li>
<li><code>runs-on</code> GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run your workflows. This is where you specify the type of runner you want to use. In our case, we are using the Ubuntu Linux runner.</li>
<li>A job is made up of a series of <code>steps</code> that are usually executed sequentially on the same runner. In our file above, each step is marked by a hyphen, and <code>name</code> gives the step's name. Each step either executes an <code>action</code> or runs a shell script: you define a step with <code>uses</code> if it executes an <code>action</code>, and with <code>run</code> if it executes a shell script.</li>
</ul>
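<p>To make the <code>uses</code> vs <code>run</code> distinction concrete, here are the two kinds of steps from the file above, side by side:</p>
<pre><code class="lang-yaml">steps:
  # A step that executes an action
  - name: Checkout code
    uses: actions/checkout@v2

  # A step that executes a shell script
  - name: Run Tests
    run: python manage.py test
</code></pre>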
<p>Now that you've defined a <code>workflow</code> by adding the config file in the designated folder, you can commit and push your change to your remote repo. </p>
<p>If you navigate to the <code>Actions</code> tab of your remote repo, you should see a workflow with the name Build and Test (the name which we've given it) listed there.</p>
<h2 id="heading-part-3-continuous-delivery-how-to-automatically-deploy-our-code-to-aws">Part 3: Continuous Delivery – How to Automatically Deploy Our Code to AWS</h2>
<p>In this section, we will see how we can have GitHub Actions automatically deploy our code to AWS on push or pull request to the main branch. AWS offers a broad range of services. For this tutorial, we will be using a compute service called Elastic Beanstalk.</p>
<h3 id="heading-compute-service-elastic-beanstalk-come-onnn">Compute Service? Elastic Beanstalk? Come onnn :(</h3>
<p>No worries, relax, you'll get it. Remember we mentioned that cloud computing is all about storing and executing certain things on someone else's computer via the internet right – <strong>certain things?</strong> </p>
<p>Yes. For example, we can store and execute source code, or we can just store media files. Amazon knows this, and as a result, their cloud infrastructure encompasses a plethora of service categories. Each service category allows us to do <em>a certain thing out of the certain things that we can do.</em> </p>
<p>For example, there is a service category that allows the upload and execution of the source code that powers our applications (<strong>Compute Service).</strong> There is the service category that allows us to persist our media files (<strong>Storage Service).</strong> Then there is the service category that allows us to manage our databases (<strong>Database Service)</strong>.</p>
<p>Each service category is made up of one or more services. Each service in a category just presents us with a different way of solving the problem that the category it belongs to addresses. </p>
<p>For example, each service in the compute category provides us with a different approach to deploying and executing our application code on the cloud – one problem, different approaches. <strong>Elastic Beanstalk</strong> is one of the services in the compute category. Others are, but not limited to, EC2 and Lambda.</p>
<p><strong>Amazon S3</strong> is one of the services in the storage category. And <strong>Amazon RDS</strong> is one of the services in the Database category.</p>
<p>Hopefully, you now understand what I mean by "we will be using a compute service called Elastic Beanstalk."  Of all the compute services, why Elastic Beanstalk? Well, because it's one of the easiest to work with.</p>
<h3 id="heading-that-being-said-lets-configure-stuff-lt3">That Being Said, Let's Configure Stuff &lt;3</h3>
<p>For brevity's sake we are going with the Continuous Delivery setup. In addition, we are going to have just one deployment environment that will serve as our UAT environment.</p>
<p>To help you get the big picture right, in summary, this is how our deployment setup is going to work: on push or pull request to main, GitHub Actions will test and upload our source code to Amazon S3. The code is then pulled from Amazon S3 to our Elastic Beanstalk environment. Picture the flow this way:</p>
<p><code>GitHub -&gt; Amazon S3 -&gt; Elastic Beanstalk</code></p>
<p>Why aren't we pushing directly to Elastic Beanstalk, you might ask?</p>
<p>The only other way we could upload code directly to an Elastic Beanstalk instance with our current setup, is if we were using the <a target="_blank" href="https://pypi.org/project/awsebcli/">AWS Elastic Beanstalk CLI</a> (EB CLI). </p>
<p>Using the EB CLI requires running some shell command that would then require that we respond with some input. </p>
<p>Now, if we are deploying from our local machine to Elastic Beanstalk, when we run the EB CLI commands, we'd be there to type in the required responses. But with our current setup, those commands would be executed on GitHub Runners. We wouldn't be there to provide the required responses. The EB CLI isn't the easiest deployment tool for our use case.</p>
<p>With the approach we've picked, we'd run a shell command that uploads our code to S3 and then another command that pulls the uploaded code into our Elastic Beanstalk instance. These commands, when run, do not require us to submit any responses. Going through Amazon S3 is the easiest way to do this.</p>
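<p>If you're curious what that S3-then-Beanstalk flow looks like as plain, non-interactive commands, here is a sketch of a workflow step that does it with the AWS CLI (the bucket, application, and environment names here are hypothetical – the action we use later does all of this for us):</p>
<pre><code class="lang-yaml">    - name: Upload to S3 and deploy to Elastic Beanstalk
      run: |
        # Upload the packaged code to an S3 bucket
        aws s3 cp deploy.zip s3://my-deploy-bucket/deploy.zip
        # Register the uploaded bundle as a new application version
        aws elasticbeanstalk create-application-version \
          --application-name my-app \
          --version-label "$GITHUB_SHA" \
          --source-bundle S3Bucket=my-deploy-bucket,S3Key=deploy.zip
        # Point the environment at the new version
        aws elasticbeanstalk update-environment \
          --environment-name my-env \
          --version-label "$GITHUB_SHA"
</code></pre>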
<p>To implement our approach and have our code deployed to Elastic Beanstalk, follow the steps below:</p>
<h4 id="heading-step-1-setup-an-aws-account">Step 1: Setup an AWS Account</h4>
<p>Create an IAM user. To keep things simple, when adding permissions, just add "Administrator Access" to the user (this has some security pitfalls, though). To accomplish this, follow the steps in modules 1 and 2 of <a target="_blank" href="https://aws.amazon.com/getting-started/guides/setup-environment/">this guide.</a></p>
<p>In the end, make sure to grab and keep your AWS secret and access keys. We will be needing them in the subsequent sections.</p>
<p>Now that we have an AWS account properly set up, it's time to set up our Elastic Beanstalk environment.</p>
<h4 id="heading-step-2-setup-your-elastic-beanstalk-environment">Step 2: Setup your Elastic Beanstalk Environment</h4>
<p>Once logged into your AWS account, take the following steps to set up your Elastic Beanstalk environment.</p>
<p>First, search for "elastic beanstalk" in the search field as shown in the image below. Then click on the Elastic Beanstalk service.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/search-for-elastic-bean.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>searching for elastic beanstalk.</em></p>
<p>Once you click on the Elastic Beanstalk service in the previous step, you'll be taken to the page shown in the image below. On that page, click on the "Create a New Environment" prompt. Make sure to select "Web server environment" in the next step.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/new-env.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>creating an environment</em></p>
<p>After selecting the "Web server environment" in the previous page you'll be taken to the page shown in the images below. </p>
<p>On that page, submit an application name, an environment name, and also select a platform. For this tutorial, we are going with the Python platform.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/name-env.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>submitting an application name and an environment name</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/platform.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>selecting a platform</em></p>
<p>Once you submit the form filled out in the previous step, your application and its associated environment will be created after a short while. You should see the names you submitted displayed in the left sidebar. </p>
<p>Grab the application name and the environment name. We will be needing them in the subsequent steps.</p>
<p>Now that we have our Elastic Beanstalk environment fully set up, it's time to configure GitHub Actions to trigger automatic deployment to AWS on push or pull request to main.</p>
<h4 id="heading-step-3-configure-your-project-for-elastic-beanstalk">Step 3: Configure your Project for Elastic Beanstalk</h4>
<p>By default, Elastic Beanstalk looks for a file named <code>application.py</code> in our project. It uses that file to run our application, but we don't have that file in our project. Do we? We need to tell Elastic Beanstalk to use the <code>wsgi.py</code> file in our project to run our application instead. To do that, take the following step:</p>
<p>Create a folder named <code>.ebextensions</code> in your project directory. In that folder create a config file. You can name it anything you want. I named mine <code>eb.config</code>. Add the content below to your config file:</p>
<pre><code>option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: django_github_actions_aws.wsgi:application
</code></pre><p>After completing the step above, your project directory should now look similar to the one in the image below:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/struct-1.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>demo project structure</em></p>
<p>One last thing you need to do in this section is to go to your <code>settings.py</code> file and update the <code>ALLOWED_HOSTS</code> setting to allow all hosts:</p>
<p><code>ALLOWED_HOSTS = ['*']</code></p>
<p>Note that using the wildcard has major security drawbacks. We are only using it here for demo purposes. </p>
<p>Now that we are done configuring our project for Elastic Beanstalk, it's time to update our workflow file.</p>
<h4 id="heading-step-4-update-your-workflow-file">Step 4: Update your Workflow File</h4>
<p>There are five important pieces of information we need to complete this step: application name, environment name, access key id, secret access key, and the server region (after login, you can grab the region from the right-most section of the navbar).</p>
<p>Because the access key ID and the secret access key are sensitive data, we'll store them as secrets in our repository and access them in our workflow file. </p>
<p>To do that, head over to the settings tab of your repo, and then click on secrets as shown in the image below. There, you can create your secrets as key-value pairs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/01/secrets_new.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>embedding secret data in your repo</em></p>
<p>Next, add the deployment job to the end of your existing workflow file:</p>
<pre><code>  deploy:
    needs: [test]
    runs-on: ubuntu-latest

    steps:
    - name: Checkout source code
      uses: actions/checkout@v2

    - name: Generate deployment package
      run: zip -r deploy.zip . -x '*.git*'

    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v20
      with:
        # Remember the secrets we embedded? This is how we access them
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

        # Replace these values with the names you submitted in one of
        # the previous sections
        application_name: django-github-actions-aws
        environment_name: django-github-actions-aws

        # The version number could be anything. You can find a dynamic way
        # of doing this.
        version_label: 12348
        region: "us-east-2"
        deployment_package: deploy.zip
</code></pre><p><code>needs</code> simply tells GitHub Actions to only start executing the <code>deploy</code> job after the <code>test</code> job has completed with a passing status.</p>
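<p>As for the version label, one common way to make it dynamic (an assumption on my part, not something this setup requires) is to reuse the commit SHA that GitHub Actions exposes in its context, so every push produces a unique version:</p>
<pre><code class="lang-yaml">        # Each deployment gets a version label tied to the commit
        version_label: ${{ github.sha }}
</code></pre>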
<p>The step <code>Deploy to EB</code> uses an existing action, <code>einaregilsson/beanstalk-deploy@v20</code>. Remember how we said <code>actions</code> are reusable commands that take care of some frequently repeated tasks for us? <code>einaregilsson/beanstalk-deploy@v20</code> is one of those actions. </p>
<p>To reinforce the above, remember that our deployment was supposed to go through the following steps: <code>GitHub -&gt; Amazon S3 -&gt; Elastic Beanstalk</code>. </p>
<p>However, throughout this tutorial, we didn't do any Amazon S3 setup. Furthermore, in our workflow file we didn't upload to an S3 bucket, nor did we pull from an S3 bucket to our Elastic Beanstalk environment. </p>
<p>Normally, we are supposed to do all that, but we didn't here – because under the hood, the <code>einaregilsson/beanstalk-deploy@v20</code> action does all the heavy lifting for us. You can also create your own <code>action</code> that takes care of some repetitive tasks and make it available to other developers through the <a target="_blank" href="https://github.com/marketplace?type=actions">GitHub Marketplace.</a></p>
<p>Now that you've updated your workflow file locally, you can then commit and push this change to your remote. Your jobs will run and your code will be deployed to the Elastic Beanstalk instance you created. And that's it. <strong>We're done &gt;&gt;&gt;</strong></p>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>Wow! This was a really long one, wasn't it? In summary I explained what the terms GitHub Actions, CI/CD Pipeline, and AWS mean. In addition, we saw how we could configure GitHub Actions to automatically deploy our code to an Elastic Beanstalk instance on AWS.</p>
<p>If you love this work and would like to stay up to date on future articles I will be putting out, let's connect on <a target="_blank" href="https://twitter.com/nyior_clement">Twitter</a>, <a target="_blank" href="https://www.linkedin.com/in/nyior/">LinkedIn</a>, or <a target="_blank" href="https://github.com/Nyior">GitHub.</a> I use those channels to share what I'm working on as soon as I publish it.</p>
<h3 id="heading-credits">Credits:</h3>
<p>Cover image: <a target="_blank" href="https://www.freepik.com/">www.freepik.com</a></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.plutora.com/blog/understanding-ci-cd-pipeline">https://www.plutora.com/blog/understanding-ci-cd-pipeline</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://docs.github.com/en/actions">https://docs.github.com/en/actions</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html">https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/einaregilsson/beanstalk-deploy">https://github.com/einaregilsson/beanstalk-deploy</a></div>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up Continuous Integration for a Monorepo Using Buildkite ]]>
                </title>
                <description>
                    <![CDATA[ By subash adhikari A monorepo is a single repository that holds all the code and multiple projects in a single Git repository.  This setup is quite nice to work with because of its flexibility and ability to manage various services and frontends in o... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-set-up-continuous-integration-for-monorepo-using-buildkite/</link>
                <guid isPermaLink="false">66d4614c3dce891ac3a96828</guid>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ monorepo ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Fri, 02 Apr 2021 20:33:51 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2021/03/cover-1.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By subash adhikari</p>
<p>A monorepo is a single Git repository that holds the code for multiple projects.</p>
<p>This setup is quite nice to work with because of its flexibility and the ability to manage various services and frontends in a single repository. It also eliminates the hassle of tracking changes across multiple repositories and updating dependencies as projects change.</p>
<p>On the other hand, monorepos also come with their own challenges, particularly around Continuous Integration. As individual sub-projects within the monorepo change, we need to identify which sub-projects changed so we can build and deploy them. </p>
<p>This post will serve as a step-by-step guide to:</p>
<ol>
<li>Configure Continuous Integration for monorepos in Buildkite.</li>
<li>Deploy Buildkite Agents to AWS EC2 instances with autoscaling.</li>
<li>Configure GitHub to trigger Buildkite CI pipelines.</li>
<li>Configure Buildkite to trigger appropriate pipelines when sub-projects within a monorepo change.</li>
<li>Automate all of the above using bash scripts.</li>
</ol>
<h3 id="heading-pre-requisites">Prerequisites</h3>
<ol>
<li><a target="_blank" href="https://aws.amazon.com/free/"><strong>AWS</strong></a> account to deploy the Buildkite agents.</li>
<li>Configure the <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html"><strong>AWS CLI</strong></a> to talk to your AWS account.</li>
<li><a target="_blank" href="https://buildkite.com/"><strong>Buildkite</strong></a> account to create continuous integration pipelines.</li>
<li><a target="_blank" href="https://github.com/"><strong>GitHub</strong></a> account to host the monorepo source code.</li>
</ol>
<p>The complete source code is available in the <a target="_blank" href="https://github.com/adikari/buildkite-monorepo"><strong>buildkite-monorepo</strong></a> repository on GitHub.</p>
<h2 id="heading-project-setup">Project Setup</h2>
<p>The Buildkite workflow consists of <a target="_blank" href="https://buildkite.com/docs/pipelines">Pipelines</a> and Steps. The top-level containers for modeling and defining workflows are called Pipelines. Steps run individual tasks or commands.</p>
<p>The following diagram lists the pipelines we are setting up, their associated triggers, and each step that the pipeline runs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-109.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h3 id="heading-pull-request-workflow">Pull Request Workflow</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-110.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The above diagram visualizes the workflow for the Pull Request pipeline. </p>
<p>Creating a new Pull Request in GitHub triggers the <code>pull-request</code> pipeline in Buildkite. This pipeline then runs <code>git diff</code> to identify which folders (projects) within the monorepo have changed. </p>
<p>If it detects changes, then it will dynamically trigger the appropriate Pull Request pipeline defined for that project. Buildkite reports the status of each pipeline back to <a target="_blank" href="https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests/about-status-checks">GitHub status check.</a></p>
<h3 id="heading-merge-workflow">Merge Workflow</h3>
<p>The Pull Request is merged once all status checks in GitHub pass. Merging the Pull Request triggers the <code>merge</code> pipeline in Buildkite.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-111.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Similar to the previous pipeline, the merge pipeline identifies the projects that have changed and triggers the corresponding <code>deploy</code> pipeline for it. The Deploy pipeline initially deploys changes to the staging environment. </p>
<p>Once the deployment to staging is complete, production deployment is manually released.</p>
<h3 id="heading-final-project-structure">Final project structure</h3>
<pre><code>├── .buildkite
│   ├── diff
│   ├── merge.yml
│   ├── pipelines
│   │   ├── deploy.json
│   │   ├── merge.json
│   │   └── pull-request.json
│   └── pull-request.yml
├── bar-service
│   ├── .buildkite
│   │   ├── deploy.yml
│   │   ├── merge.yml
│   │   └── pull-request.yml
│   └── bin
│       └── deploy
├── bin
│   ├── create-pipeline
│   ├── create-secrets-bucket
│   ├── deploy-ci-stack
│   └── stack-config
└── foo-service
    ├── .buildkite
    │   ├── deploy.yml
    │   ├── merge.yml
    │   └── pull-request.yml
    └── bin
        └── deploy
</code></pre>
<h3 id="heading-set-up-the-project">Set Up the Project</h3>
<p>Create a new Git project and push it to GitHub. Run the following commands in the CLI.</p>
<pre><code class="lang-bash">mkdir buildkite-monorepo-example
<span class="hljs-built_in">cd</span> buildkite-monorepo-example
git init
<span class="hljs-built_in">echo</span> node_modules/ &gt; .gitignore
git add .
git commit -m <span class="hljs-string">"initialize repository"</span>
git remote add origin &lt;YOUR_GITHUB_REPO_URL&gt;
git push origin master
</code></pre>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-112.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h3 id="heading-set-up-the-buildkite-infrastructure">Set up the Buildkite infrastructure</h3>
<ol>
<li>Create a bin directory with some executable scripts inside it.</li>
</ol>
<pre><code class="lang-bash">mkdir bin 
<span class="hljs-built_in">cd</span> bin
touch create-pipeline create-secrets-bucket deploy-ci-stack
chmod +x ./*
</code></pre>
<ol start="2">
<li>Copy the following contents into <code>create-secrets-bucket</code>.</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">set</span> -eou pipefail

CURRENT_DIR=$(<span class="hljs-built_in">pwd</span>)
ROOT_DIR=<span class="hljs-string">"<span class="hljs-subst">$( dirname <span class="hljs-string">"<span class="hljs-variable">${BASH_SOURCE[0]}</span>"</span> )</span>"</span>/..

BUCKET_NAME=<span class="hljs-string">"buildkite-secrets-adikari"</span>
KEY=<span class="hljs-string">"id_rsa_buildkite"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"creating bucket <span class="hljs-variable">$BUCKET_NAME</span>.."</span>
aws s3 mb s3://<span class="hljs-variable">$BUCKET_NAME</span>

<span class="hljs-comment"># Generate SSH Key</span>
ssh-keygen -t rsa -b 4096 -f <span class="hljs-variable">$KEY</span> -N <span class="hljs-string">''</span>

<span class="hljs-comment"># Copy SSH Keys to S3 bucket</span>
aws s3 cp --acl private --sse aws:kms <span class="hljs-variable">$KEY</span> <span class="hljs-string">"s3://<span class="hljs-variable">$BUCKET_NAME</span>/private_ssh_key"</span>
aws s3 cp --acl private --sse aws:kms <span class="hljs-variable">$KEY</span>.pub <span class="hljs-string">"s3://<span class="hljs-variable">$BUCKET_NAME</span>/public_key.pub"</span>


<span class="hljs-keyword">if</span> [[ <span class="hljs-string">"<span class="hljs-variable">$OSTYPE</span>"</span> == <span class="hljs-string">"darwin"</span>* ]]; <span class="hljs-keyword">then</span>
  pbcopy &lt; <span class="hljs-variable">$KEY</span>.pub
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"public key contents copied in clipboard."</span>
<span class="hljs-keyword">else</span>
  cat <span class="hljs-variable">$KEY</span>.pub
<span class="hljs-keyword">fi</span>

<span class="hljs-comment"># Move SSH Keys to ~/.ssh directory</span>
mv ./<span class="hljs-variable">$KEY</span>* ~/.ssh
chmod 600 ~/.ssh/<span class="hljs-variable">$KEY</span>
chmod 644 ~/.ssh/<span class="hljs-variable">$KEY</span>.pub

<span class="hljs-built_in">cd</span> <span class="hljs-variable">$CURRENT_DIR</span>
</code></pre>
<p>The above script creates an S3 bucket that is used to store the SSH keys. Buildkite uses this key to connect to the GitHub repo. The script also generates an SSH key pair and sets its permissions correctly.</p>
<h3 id="heading-run-the-script">Run the script</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-113.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The script copies the generated public and private keys to the <code>~/.ssh</code> folder. These keys can later be used to SSH into the EC2 instances running the Buildkite agent, for debugging.</p>
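<p>For example, once an agent instance is running, you could connect to it with the generated key. This is a hypothetical invocation: the login user depends on the AMI the stack uses (typically <code>ec2-user</code> on Amazon Linux), and you need to substitute your instance's address:</p>
<pre><code class="lang-bash">ssh -i ~/.ssh/id_rsa_buildkite ec2-user@&lt;INSTANCE_PUBLIC_IP&gt;
</code></pre>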
<p>Next, verify that the bucket exists and the keys are present in the new S3 bucket.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-114.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Navigate to <a target="_blank" href="https://github.com/settings/keys">https://github.com/settings/keys</a>, add a new SSH key, then paste in the contents of <code>id_rsa_buildkite.pub</code>.</p>
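<p>Alternatively, if you have the GitHub CLI installed and authenticated, you can upload the key from the terminal. This is a convenience sketch, not part of the original setup (the title is arbitrary):</p>
<pre><code class="lang-bash">gh ssh-key add ~/.ssh/id_rsa_buildkite.pub --title "buildkite-agent"
</code></pre>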
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-115.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h3 id="heading-deploy-aws-elastic-ci-cloudformation-stack">Deploy the AWS Elastic CI CloudFormation Stack</h3>
<p>The folks at Buildkite have created the <a target="_blank" href="https://github.com/buildkite/elastic-ci-stack-for-aws"><strong>Elastic CI Stack for AWS</strong></a><strong>,</strong> which creates a private, autoscaling Buildkite Agent cluster in AWS. Let's deploy the infrastructure to our AWS Account.</p>
<p>Create a new file <code>bin/deploy-ci-stack</code> and copy the contents of the following script in it.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">set</span> -euo pipefail

[ -z <span class="hljs-string">"<span class="hljs-variable">${BUILDKITE_AGENT_TOKEN:-}</span>"</span> ] &amp;&amp; { <span class="hljs-built_in">echo</span> <span class="hljs-string">"BUILDKITE_AGENT_TOKEN is not set."</span>; <span class="hljs-built_in">exit</span> 1;}

CURRENT_DIR=$(<span class="hljs-built_in">pwd</span>)
ROOT_DIR=<span class="hljs-string">"<span class="hljs-subst">$( dirname <span class="hljs-string">"<span class="hljs-variable">${BASH_SOURCE[0]}</span>"</span> )</span>"</span>/..
PARAMETERS=$(cat <span class="hljs-string">"<span class="hljs-variable">$ROOT_DIR</span>/bin/stack-config"</span> | envsubst)

<span class="hljs-built_in">cd</span> <span class="hljs-variable">$ROOT_DIR</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"downloading elastic ci stack template.."</span>
curl -s https://s3.amazonaws.com/buildkite-aws-stack/latest/aws-stack.yml -O

aws cloudformation deploy \
  --capabilities CAPABILITY_NAMED_IAM \
  --template-file ./aws-stack.yml \
  --stack-name <span class="hljs-string">"buildkite-elastic-ci"</span> \
  --parameter-overrides <span class="hljs-variable">$PARAMETERS</span>

rm -f aws-stack.yml

<span class="hljs-built_in">cd</span> <span class="hljs-variable">$CURRENT_DIR</span>
</code></pre>
<p>You can get the <code>BUILDKITE_AGENT_TOKEN</code> from the <strong>Agents</strong> tab in Buildkite's Console.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-116.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Next, create a new file called <code>bin/stack-config</code>. Configuration in this file overrides the CloudFormation parameters. The complete list of parameters is available in the <a target="_blank" href="https://s3.amazonaws.com/buildkite-aws-stack/latest/aws-stack.yml">CloudFormation template</a> used by Elastic CI.</p>
<p>On line 2, replace the bucket name with the bucket created earlier.</p>
<pre><code class="lang-bash">BuildkiteAgentToken=<span class="hljs-variable">$BUILDKITE_AGENT_TOKEN</span>
SecretsBucket=buildkite-secrets-adikari
InstanceType=t2.micro
MinSize=0
MaxSize=3
ScaleUpAdjustment=2
ScaleDownAdjustment=-1
</code></pre>
<p>Next, run the script in the CLI to deploy the CloudFormation stack.</p>
<pre><code class="lang-bash">./bin/deploy-ci-stack
</code></pre>
<p>The script will take some time to finish. Open up the AWS CloudFormation console to view the progress.</p>
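<p>If you prefer to check from the terminal instead, you can query the stack status with the AWS CLI (assuming the stack name used by the script above):</p>
<pre><code class="lang-bash">aws cloudformation describe-stacks \
  --stack-name buildkite-elastic-ci \
  --query "Stacks[0].StackStatus" \
  --output text
</code></pre>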
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-117.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-118.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The CloudFormation stack creates an Auto Scaling group that Buildkite uses to spin up EC2 instances. The Buildkite Agents and the builds run inside those EC2 instances.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-119.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h3 id="heading-create-build-pipelines-in-bulidkite">Create build pipelines in Buildkite</h3>
<p>At this point, we have the infrastructure required to run Buildkite builds. Next, we configure Buildkite and create some Pipelines.</p>
<p>Create an API Access Token at <a target="_blank" href="https://buildkite.com/user/api-access-tokens">https://buildkite.com/user/api-access-tokens</a> and set the scope to <code>write_builds</code>, <code>read_pipelines</code>, and <code>write_pipelines</code>. More information about agent tokens is in this <a target="_blank" href="https://buildkite.com/docs/agent/v3/tokens">document</a>.</p>
<p>Ensure the <code>BUILDKITE_API_TOKEN</code> is set on the environment. Either use <a target="_blank" href="https://www.npmjs.com/package/dotenv">dotenv</a> or export it to the environment before running the script.</p>
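<p>A quick sanity check before running the script is to hit an endpoint covered by the <code>read_pipelines</code> scope. This is only an illustrative check; substitute your own token and organization slug:</p>
<pre><code class="lang-bash">export BUILDKITE_API_TOKEN=&lt;YOUR_TOKEN&gt;

curl -s -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
  "https://api.buildkite.com/v2/organizations/&lt;YOUR_ORG_SLUG&gt;/pipelines" | jq '.[].slug'
</code></pre>
<p>If the token is valid, this prints the slugs of any existing pipelines (or nothing, for a fresh organization).</p>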
<p>Copy the contents of the following script to <code>bin/create-pipeline</code>. Pipelines can be created manually in the Buildkite Console, but it is always better to automate and create reproducible infrastructure.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">set</span> -euo pipefail

<span class="hljs-built_in">export</span> SERVICE=<span class="hljs-string">"."</span>
<span class="hljs-built_in">export</span> PIPELINE_TYPE=<span class="hljs-string">""</span>
<span class="hljs-built_in">export</span> REPOSITORY=git@github.com:adikari/buildkite-docker-example.git

CURRENT_DIR=$(<span class="hljs-built_in">pwd</span>)
ROOT_DIR=<span class="hljs-string">"<span class="hljs-subst">$( dirname <span class="hljs-string">"<span class="hljs-variable">${BASH_SOURCE[0]}</span>"</span> )</span>"</span>/..
STATUS_CHECK=<span class="hljs-literal">false</span>
BUILDKITE_ORG_SLUG=adikari <span class="hljs-comment"># update to your buildkite org slug</span>

USAGE=<span class="hljs-string">"USAGE: <span class="hljs-subst">$(basename <span class="hljs-string">"<span class="hljs-variable">$0</span>"</span>)</span> [-s|--service] service_name [-t|--type] pipeline_type
Eg: create-pipeline --type pull-request
    create-pipeline --type merge --service foo-service
    create-pipeline --type merge --status-checks
NOTE: BUILDKITE_API_TOKEN must be set in environment
ARGUMENTS:
    -t | --type           buildkite pipeline type &lt;merge|pull-request|deploy&gt; (required)
    -s | --service        service name (optional, default: deploy root pipeline)
    -r | --repository     github repository url (optional, default: buildkite-docker-example)
    -c | --status-checks      enable GitHub status checks (optional, default: false)
    -h | --help           show this help text"</span>

[ -z <span class="hljs-string">"<span class="hljs-variable">${BUILDKITE_API_TOKEN:-}</span>"</span> ] &amp;&amp; { <span class="hljs-built_in">echo</span> <span class="hljs-string">"BUILDKITE_API_TOKEN is not set."</span>; <span class="hljs-built_in">exit</span> 1;}

<span class="hljs-keyword">while</span> [ <span class="hljs-variable">$#</span> -gt 0 ]; <span class="hljs-keyword">do</span>
    <span class="hljs-keyword">if</span> [[ <span class="hljs-variable">$1</span> =~ <span class="hljs-string">"--"</span>* ]]; <span class="hljs-keyword">then</span>
        <span class="hljs-keyword">case</span> <span class="hljs-variable">$1</span> <span class="hljs-keyword">in</span>
            --<span class="hljs-built_in">help</span>|-h) <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$USAGE</span>"</span>; <span class="hljs-built_in">exit</span>; ;;
            --service|-s) SERVICE=<span class="hljs-variable">$2</span>;;
            --<span class="hljs-built_in">type</span>|-t) PIPELINE_TYPE=<span class="hljs-variable">$2</span>;;
            --repository|-r) REPOSITORY=<span class="hljs-variable">$2</span>;;
            --status-checks|-c) STATUS_CHECK=<span class="hljs-variable">${2:-true}</span>;;
        <span class="hljs-keyword">esac</span>
    <span class="hljs-keyword">fi</span>
    <span class="hljs-built_in">shift</span>
<span class="hljs-keyword">done</span>

[ -z <span class="hljs-string">"<span class="hljs-variable">$PIPELINE_TYPE</span>"</span> ] &amp;&amp; { <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$USAGE</span>"</span>; <span class="hljs-built_in">exit</span> 1; }

<span class="hljs-built_in">export</span> PIPELINE_NAME=$([ <span class="hljs-variable">$SERVICE</span> == <span class="hljs-string">"."</span> ] &amp;&amp; <span class="hljs-built_in">echo</span> <span class="hljs-string">""</span> || <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$SERVICE</span>-"</span>)<span class="hljs-variable">$PIPELINE_TYPE</span>

BUILDKITE_CONFIG_FILE=.buildkite/pipelines/<span class="hljs-variable">$PIPELINE_TYPE</span>.json
[ ! -f <span class="hljs-string">"<span class="hljs-variable">$BUILDKITE_CONFIG_FILE</span>"</span> ] &amp;&amp; { <span class="hljs-built_in">echo</span> <span class="hljs-string">"Invalid pipeline type: File not found <span class="hljs-variable">$BUILDKITE_CONFIG_FILE</span>"</span>; <span class="hljs-built_in">exit</span> 1; }

BUILDKITE_CONFIG=$(cat <span class="hljs-variable">$BUILDKITE_CONFIG_FILE</span> | envsubst)

<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$STATUS_CHECK</span> == <span class="hljs-string">"false"</span> ]; <span class="hljs-keyword">then</span>
  pipeline_settings=<span class="hljs-string">'{ "provider_settings": { "trigger_mode": "none" } }'</span>
  BUILDKITE_CONFIG=$((echo <span class="hljs-variable">$BUILDKITE_CONFIG</span>; echo <span class="hljs-variable">$pipeline_settings</span>) | jq -s add)
fi
cd <span class="hljs-variable">$ROOT_DIR</span>
echo "Creating <span class="hljs-variable">$PIPELINE_TYPE</span> pipeline.."
RESPONSE=$(curl -s -X POST "https://api.buildkite.com/v2/organizations/<span class="hljs-variable">$BUILDKITE_ORG_SLUG</span>/pipelines" \
  -H "Authorization: Bearer <span class="hljs-variable">$BUILDKITE_API_TOKEN</span>" \
  -d "<span class="hljs-variable">$BUILDKITE_CONFIG</span>"
)
[[ "<span class="hljs-variable">$RESPONSE</span>" == *errors* ]] &amp;&amp; { echo <span class="hljs-variable">$RESPONSE</span> | jq; exit <span class="hljs-number">1</span>; }
echo <span class="hljs-variable">$RESPONSE</span> | jq
WEB_URL=$(echo <span class="hljs-variable">$RESPONSE</span> | jq -r '.web_url')
WEBHOOK_URL=$(echo <span class="hljs-variable">$RESPONSE</span> | jq -r '.provider.webhook_url')
echo "Pipeline url: <span class="hljs-variable">$WEB_URL</span>"
echo "Webhook url: <span class="hljs-variable">$WEBHOOK_URL</span>"
echo "<span class="hljs-variable">$PIPELINE_NAME</span> pipeline created."
cd <span class="hljs-variable">$CURRENT_DIR</span>
unset REPOSITORY
unset PIPELINE_TYPE
unset SERVICE
unset PIPELINE_NAME
</code></pre>
<p>Make the script executable by setting the correct permission (<code>chmod +x bin/create-pipeline</code>). Run <code>./bin/create-pipeline -h</code> in the CLI for help.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-120.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The script uses the <a target="_blank" href="https://buildkite.com/docs/apis/rest-api">Buildkite REST API</a> to create the pipelines with the given configuration. The pipeline configuration is defined as a <code>json</code> document and posted to the REST API. Pipeline configurations live in the <code>.buildkite/pipelines</code> folder.</p>
<p>To define the configuration for the <code>pull-request</code> pipeline, create <code>.buildkite/pipelines/pull-request.json</code> with the following content:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"$PIPELINE_NAME"</span>,
  <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Pipeline for $PIPELINE_NAME pull requests"</span>,
  <span class="hljs-attr">"repository"</span>: <span class="hljs-string">"$REPOSITORY"</span>,
  <span class="hljs-attr">"default_branch"</span>: <span class="hljs-string">""</span>,
  <span class="hljs-attr">"steps"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"script"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">":buildkite: $PIPELINE_TYPE"</span>,
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"buildkite-agent pipeline upload $SERVICE/.buildkite/$PIPELINE_TYPE.yml"</span>
    }
  ],
  <span class="hljs-attr">"cancel_running_branch_builds"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"skip_queued_branch_builds"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"branch_configuration"</span>: <span class="hljs-string">"!master"</span>,
  <span class="hljs-attr">"provider_settings"</span>: {
    <span class="hljs-attr">"trigger_mode"</span>: <span class="hljs-string">"code"</span>,
    <span class="hljs-attr">"publish_commit_status_per_step"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"publish_blocked_as_pending"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"pull_request_branch_filter_enabled"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"pull_request_branch_filter_configuration"</span>: <span class="hljs-string">"!master"</span>,
    <span class="hljs-attr">"separate_pull_request_statuses"</span>: <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>Next, create <code>.buildkite/pipelines/merge.json</code> with the following content:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"$PIPELINE_NAME"</span>,
  <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Pipeline for $PIPELINE_NAME merge"</span>,
  <span class="hljs-attr">"repository"</span>: <span class="hljs-string">"$REPOSITORY"</span>,
  <span class="hljs-attr">"default_branch"</span>: <span class="hljs-string">"master"</span>,
  <span class="hljs-attr">"steps"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"script"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">":buildkite: $PIPELINE_TYPE"</span>,
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"buildkite-agent pipeline upload $SERVICE/.buildkite/$PIPELINE_TYPE.yml"</span>
    }
  ],
  <span class="hljs-attr">"cancel_running_branch_builds"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"skip_queued_branch_builds"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"branch_configuration"</span>: <span class="hljs-string">"master"</span>,
  <span class="hljs-attr">"provider_settings"</span>: {
    <span class="hljs-attr">"trigger_mode"</span>: <span class="hljs-string">"code"</span>,
    <span class="hljs-attr">"build_pull_requests"</span>: <span class="hljs-literal">false</span>,
    <span class="hljs-attr">"publish_blocked_as_pending"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"publish_commit_status_per_step"</span>: <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>Finally, create <code>.buildkite/pipelines/deploy.json</code> with the following content:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"$PIPELINE_NAME"</span>,
  <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Pipeline for $PIPELINE_NAME deploy"</span>,
  <span class="hljs-attr">"repository"</span>: <span class="hljs-string">"$REPOSITORY"</span>,
  <span class="hljs-attr">"default_branch"</span>: <span class="hljs-string">"master"</span>,
  <span class="hljs-attr">"steps"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"script"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">":buildkite: $PIPELINE_TYPE"</span>,
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"buildkite-agent pipeline upload $SERVICE/.buildkite/$PIPELINE_TYPE.yml"</span>
    }
  ],
  <span class="hljs-attr">"provider_settings"</span>: {
    <span class="hljs-attr">"trigger_mode"</span>: <span class="hljs-string">"none"</span>
  }
}
</code></pre>
<p>Now, run the <code>./bin/create-pipeline</code> command to create the <code>pull-request</code> and <code>merge</code> pipelines.</p>
<pre><code class="lang-bash">./bin/create-pipeline --<span class="hljs-built_in">type</span> pull-request --status-checks
./bin/create-pipeline --<span class="hljs-built_in">type</span> merge --status-checks
</code></pre>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-121.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Copy the <code>Webhook url</code> from the console output and create a webhook integration in GitHub. The webhook URL is available in the pipeline settings in the Buildkite console if needed in the future. </p>
<p>We need to configure the webhook only for the <code>pull-request</code> and <code>merge</code> pipelines. All other pipelines are triggered dynamically.</p>
<p>Navigate to the GitHub repository <code>Settings &gt; Webhooks</code> and add a webhook. Select <code>Just the push event</code>, then add the webhook. Repeat this for both pipelines.</p>
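<p>Webhook creation can also be scripted against the GitHub REST API if you want to avoid the UI. A hedged sketch using the GitHub CLI, with your repository and the webhook URL printed by <code>create-pipeline</code> substituted in:</p>
<pre><code class="lang-bash">gh api repos/&lt;OWNER&gt;/&lt;REPO&gt;/hooks \
  -f name=web \
  -f "config[url]=&lt;BUILDKITE_WEBHOOK_URL&gt;" \
  -f "config[content_type]=json" \
  -f "events[]=push"
</code></pre>
<p>Run it once per webhook URL, mirroring the two webhooks added through the UI.</p>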
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-122.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Now in the Buildkite Console, there should be two newly created pipelines. 🎉</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-123.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Next, add the GitHub integration to allow Buildkite to send status updates to GitHub. You only need to set up this integration once per account. It is available at <code>Settings &gt; Integrations &gt; GitHub</code> in the Buildkite Console.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-124.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Next, create the remaining pipelines. These pipelines are triggered dynamically by the <code>pull-request</code> and <code>merge</code> pipelines, so they do not need GitHub webhooks.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># foo service pipelines</span>
./bin/create-pipeline --<span class="hljs-built_in">type</span> pull-request --service foo-service
./bin/create-pipeline --<span class="hljs-built_in">type</span> merge --service foo-service
./bin/create-pipeline --<span class="hljs-built_in">type</span> deploy --service foo-service

<span class="hljs-comment"># bar service pipelines</span>
./bin/create-pipeline --<span class="hljs-built_in">type</span> pull-request --service bar-service
./bin/create-pipeline --<span class="hljs-built_in">type</span> merge --service bar-service
./bin/create-pipeline --<span class="hljs-built_in">type</span> deploy --service bar-service
</code></pre>
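<p>Since the pipeline types and service names follow a convention, the six commands above can equally be expressed as a loop:</p>
<pre><code class="lang-bash">for service in foo-service bar-service; do
  for type in pull-request merge deploy; do
    ./bin/create-pipeline --type "$type" --service "$service"
  done
done
</code></pre>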
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-125.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The Buildkite Console should now have all the pipelines listed. 🥳</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-126.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h3 id="heading-set-up-buildkite-steps">Set up Buildkite Steps</h3>
<p>Now that the pipelines are ready, let's configure steps to run for each pipeline.</p>
<p>Add the following script in <code>.buildkite/diff</code>. This script lists the files a commit has changed relative to its branch point with the master branch. The output of the script is used to trigger the respective pipelines dynamically.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

[ <span class="hljs-variable">$#</span> -lt 1 ] &amp;&amp; { <span class="hljs-built_in">echo</span> <span class="hljs-string">"argument is missing."</span>; <span class="hljs-built_in">exit</span> 1; }

COMMIT=<span class="hljs-variable">$1</span>

BRANCH_POINT_COMMIT=$(git merge-base master <span class="hljs-variable">$COMMIT</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"diff between <span class="hljs-variable">$COMMIT</span> and <span class="hljs-variable">$BRANCH_POINT_COMMIT</span>"</span>
git --no-pager diff --name-only <span class="hljs-variable">$COMMIT</span>..<span class="hljs-variable">$BRANCH_POINT_COMMIT</span>
</code></pre>
<p>Change the permission of the script to make it executable.</p>
<pre><code class="lang-bash">chmod +x .buildkite/diff
</code></pre>
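<p>To see what the diff script's core logic does, here is a self-contained demonstration in a throwaway repository (the service file names are hypothetical; only <code>git</code> is required). Only the file touched on the feature branch shows up in the diff against the branch point:</p>
<pre><code class="lang-bash">#!/bin/bash
set -euo pipefail

# Build a throwaway repo with two services
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name demo
mkdir -p foo-service bar-service
echo a &gt; foo-service/app.js
echo b &gt; bar-service/app.js
git add . &amp;&amp; git commit -q -m "both services"

# Change only foo-service on a feature branch
git checkout -q -b feature
echo c &gt;&gt; foo-service/app.js
git add . &amp;&amp; git commit -q -m "change foo only"

# Same logic as .buildkite/diff: compare against the branch point with master
BRANCH_POINT=$(git merge-base master HEAD)
git --no-pager diff --name-only "$BRANCH_POINT"..HEAD   # prints foo-service/app.js
</code></pre>
<p>The monorepo-diff plugin matches each printed path against its <code>watch</code> list to decide which pipelines to trigger.</p>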
<p>Create a new file <code>.buildkite/pull-request.yml</code> and add the following step configuration. We use the <a target="_blank" href="https://github.com/chronotc/monorepo-diff-buildkite-plugin">monorepo-diff</a> plugin to run the <code>diff</code> script and automatically upload and trigger the respective pipelines.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Triggering pull request pipeline"</span>
    <span class="hljs-attr">plugins:</span>
      <span class="hljs-string">chronotc/monorepo-diff#v1.1.1:</span>
        <span class="hljs-attr">diff:</span> <span class="hljs-string">".buildkite/diff ${BUILDKITE_COMMIT}"</span>
        <span class="hljs-attr">wait:</span> <span class="hljs-literal">false</span>
        <span class="hljs-attr">watch:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">"foo-service"</span>
            <span class="hljs-attr">config:</span>
              <span class="hljs-attr">trigger:</span> <span class="hljs-string">"foo-service-pull-request"</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">"bar-service"</span>
            <span class="hljs-attr">config:</span>
              <span class="hljs-attr">trigger:</span> <span class="hljs-string">"bar-service-pull-request"</span>
</code></pre>
<p>Now create the configuration for the merge pipeline by adding the following content in <code>.buildkite/merge.yml</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Triggering merge pipeline"</span>
    <span class="hljs-attr">plugins:</span>
      <span class="hljs-string">chronotc/monorepo-diff#v1.1.1:</span>
        <span class="hljs-attr">diff:</span> <span class="hljs-string">"git diff --name-only HEAD~1"</span>
        <span class="hljs-attr">wait:</span> <span class="hljs-literal">false</span>
        <span class="hljs-attr">watch:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">"foo-service"</span>
            <span class="hljs-attr">config:</span>
              <span class="hljs-attr">trigger:</span> <span class="hljs-string">"foo-service-merge"</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">"bar-service"</span>
            <span class="hljs-attr">config:</span>
              <span class="hljs-attr">trigger:</span> <span class="hljs-string">"bar-service-merge"</span>
</code></pre>
<p>At this point, we have configured the topmost level <code>pull-request</code> and <code>merge</code> pipelines. Now we need to configure individual pipelines for each service.</p>
<p>We'll configure pipelines for <code>foo-service</code> first. Create <code>foo-service/.buildkite/pull-request.yml</code> with the following content. It specifies that the <code>lint</code> and <code>test</code> commands should run whenever the <code>pull-request</code> pipeline for the foo service runs. The <code>command</code> option can also trigger other scripts.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Foo service pull request"</span>
    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"echo linting"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"echo testing"</span>
</code></pre>
<p>Next, set up a merge pipeline for the foo service by adding the following content in <code>foo-service/.buildkite/merge.yml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Run sanity checks"</span>
    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"echo linting"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"echo testing"</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Deploy to staging"</span>
    <span class="hljs-attr">trigger:</span> <span class="hljs-string">"foo-deploy"</span>
    <span class="hljs-attr">build:</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-attr">STAGE:</span> <span class="hljs-string">"staging"</span>

  <span class="hljs-bullet">-</span> <span class="hljs-string">wait</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">block:</span> <span class="hljs-string">":rocket: Release to Production"</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Deploy to production"</span>
    <span class="hljs-attr">trigger:</span> <span class="hljs-string">"foo-deploy"</span>
    <span class="hljs-attr">build:</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-attr">STAGE:</span> <span class="hljs-string">"production"</span>
</code></pre>
<p>When the <code>foo-service-merge</code> pipeline runs, here is what happens:</p>
<ol>
<li>The pipeline runs the sanity check.</li>
<li>Then <code>foo-deploy</code> pipeline is dynamically triggered. We pass the <code>STAGE</code> environment to identify which environment to run the deployment against.</li>
<li>Once the deployment to staging is complete, the pipeline is blocked and the following pipeline is not triggered automatically. The pipeline can be resumed by pressing the “Release to Production” button.</li>
<li>Unblocking the pipeline triggers <code>foo-deploy</code> pipeline again, but this time with <code>production</code> stage.</li>
</ol>
<p>Finally, add configuration for the <code>foo-deploy</code> pipeline by adding <code>foo-service/.buildkite/deploy.yml</code>. In the deploy configuration, we trigger a bash script and pass the <code>STAGE</code> variable which was received from the <code>foo-service-merge</code> pipeline.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">label:</span> <span class="hljs-string">"Deploying foo service to ${STAGE}"</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">"./foo-service/bin/deploy ${STAGE}"</span>
</code></pre>
<p>Now, create the deploy script <code>foo-service/bin/deploy</code> and add the following content:</p>
<pre><code class="lang-bash">#!/bin/bash

set -euo pipefail

STAGE=$1

echo "Deploying foo service to $STAGE"
</code></pre>
<p>Make the deploy script executable like this:</p>
<pre><code class="lang-bash">chmod +x ./foo-service/bin/deploy
</code></pre>
<p>The pipeline and steps configuration for <code>foo-service</code> are now complete. Repeat all the steps above to configure pipelines for <code>bar-service</code>.</p>
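<p>For example, the pull-request pipeline for the bar service mirrors the foo service one (the <code>echo</code> commands below are placeholders, as before). Create <code>bar-service/.buildkite/pull-request.yml</code>:</p>

```yaml
steps:
  - label: "Bar service pull request"
    command:
      - "echo linting"
      - "echo testing"
```

<p>The merge and deploy configurations follow the same pattern, with <code>bar</code> substituted for <code>foo</code> throughout.</p>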
<h3 id="heading-test-the-overall-workflow">Test the overall workflow</h3>
<p>We have configured Buildkite and GitHub and we've set up the appropriate infrastructure to run the builds. Next, test the entire workflow and see it in action.</p>
<p>To test the workflow, start by creating a new branch and modifying some file in <code>foo-service</code>. Push the changes to GitHub and create a Pull Request.</p>
<pre><code class="lang-bash">git checkout -b change-foo-service
<span class="hljs-built_in">cd</span> foo-service &amp;&amp; touch test.txt
<span class="hljs-built_in">echo</span> testing &gt;&gt; test.txt
git add .
git commit -m <span class="hljs-string">'making some change'</span>
git push origin change-foo-service
</code></pre>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-127.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Pushing changes to GitHub should trigger the <code>pull-request</code> pipeline in Buildkite, which then triggers the <code>foo-service-pull-request</code> pipeline. </p>
<p>GitHub should report the status in GitHub checks. You can enable GitHub's branch protection to require the checks to pass before merging the Pull Request.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-128.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Once all the checks have passed in GitHub, merge the Pull Request. This merge will trigger the <code>merge</code> pipeline in Buildkite.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-129.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>The changes in the foo service are detected, and the <code>foo-service-merge</code> pipeline is triggered. The pipeline is blocked after <code>foo-deploy</code> runs against the staging environment. </p>
<p>Unblock the pipeline by manually clicking the <code>Release to Production</code> button to run deployment against production.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/03/image-130.png" alt="Image" width="600" height="400" loading="lazy"></p>
<h2 id="heading-summary">Summary</h2>
<p>In this post, we set up a continuous integration pipeline for a monorepo using Buildkite, GitHub, and AWS. </p>
<p>The pipeline gets our code from the development machine to staging, then to production. The build agents and steps run in autoscaled AWS EC2 instances. </p>
<p>We also created a bunch of bash scripts to create easily reproducible versions of this setup. </p>
<p>As an improvement to the current design, consider using the <a target="_blank" href="https://github.com/buildkite-plugins/docker-compose-buildkite-plugin">buildkite-docker-compose-plugin</a> to isolate the builds in Docker containers.</p>
<p><em>Follow me on</em> <a target="_blank" href="https://twitter.com/adikari"><em>Twitter</em></a> <em>or check out my projects on</em> <a target="_blank" href="https://github.com/adikari"><em>Github</em></a><em>.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ The Best Tools for Continuous Testing – How to Keep Your Code Updates from Breaking Things ]]>
                </title>
                <description>
                    <![CDATA[ By Linda Ikechukwu These days, applications have to evolve as the needs of their target users grow and change. This is why engineering teams often adopt Agile software development principles (or any iterative variation). Agile principles involve cont... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/tools-for-continuous-testing/</link>
                <guid isPermaLink="false">66d4601a246e57ac83a2c78f</guid>
                
                    <category>
                        <![CDATA[ agile ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Software Testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 19 Jan 2021 21:57:55 +0000</pubDate>
                <media:content url="https://cdn-media-2.freecodecamp.org/w1280/6006e5440a2838549dcb4be6.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Linda Ikechukwu</p>
<p>These days, applications have to evolve as the needs of their target users grow and change. This is why engineering teams often adopt Agile software development principles (or any iterative variation).</p>
<p>Agile principles involve continuous integration and continuous delivery (CI/CD). This means that developers will frequently make code updates for new features to the existing codebase of the application. </p>
<p>So then how can you verify that a recent code addition doesn't break a part of the application? The answer is continuous testing.</p>
<h2 id="heading-what-is-continuous-testing">What is Continuous Testing?</h2>
<p>Continuous testing is a critical part of the CI/CD pipeline. It helps development teams discover if a particular code commit will break the application build and if it should be integrated or not.</p>
<p>In other words, continuous testing is the practice of integrating <a target="_blank" href="https://www.perfecto.io/blog/what-is-test-automation">automated tests</a> into a software delivery pipeline to determine the risks associated with each code release or addition. These automated tests are usually triggered during or after builds and are carried out using automation testing frameworks or tools. </p>
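<p>As a minimal sketch, a pipeline configured with GitHub Actions might trigger the automated test suite on every change (this workflow is hypothetical, and the <code>npm</code> commands are placeholders for whatever your project's build and test commands are):</p>

```yaml
# .github/workflows/ci.yml (illustrative sketch)
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci    # install dependencies
      - run: npm test  # the automated tests gate every change
```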
<p>Let me now introduce you to four recommended automation tools you can use for continuous testing.</p>
<h2 id="heading-tools-for-continuous-testing">Tools for Continuous Testing</h2>
<h3 id="heading-1-testsigma">1. TestSigma</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-101.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="https://testsigma.com/">TestSigma</a> is a cloud-based automation testing tool for continuous testing. It has a low learning curve, as automated tests can be written in plain English, and requires no coding skills. Tests can also be extended with Selenium and JS-based custom functions for more advanced use cases.</p>
<p>TestSigma can be used for web applications, native mobile apps, regression, cross-browser and data-driven testing. It also features inbuilt seamless integrations with test management, bug reporting, CI/CD, and communication tools such as GitHub, Slack, Jira, BrowserStack, Jenkins, AWS, Bamboo, Azure DevOps, Circle CI, and so on.</p>
<p>TestSigma also uses AI to reduce maintenance effort and increase productivity: it identifies affected tests and potential failures upfront, which saves execution time and cost. </p>
<p>The platform has a free tier, but to use all the features mentioned above, you’ll need to commit to a paid plan.</p>
<h3 id="heading-2-tricentis-tosca">2. Tricentis Tosca</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-102.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="https://www.tricentis.com/products/automate-continuous-testing-tosca/">Tosca</a> is another no-code continuous testing tool, which makes it easy to learn: QA engineers with zero scripting knowledge can set up automated tests using a GUI.</p>
<p>Tosca is suitable for enterprise-level applications and is versatile because it supports and integrates seamlessly with over 160 technologies/languages. With Tosca, you can run tests on the web, mobile, and desktops with Windows OS (Mac and Linux are only possible with virtualization tools).</p>
<p>Tosca automatically creates and provisions on-demand test data, reducing the time it takes to produce reliable test data for test automation. </p>
<p>The platform offers free trials for a limited amount of time and custom pricing, which the sales team decides on based on your specific needs.</p>
<h3 id="heading-3-katalon-studio">3. Katalon Studio</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-103.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="https://www.katalon.com/">Katalon</a> is another comprehensive continuous testing tool built on top of the popular open-source Selenium and Appium. It can be used for testing web, API, mobile, and desktop applications across Windows, macOS, and Linux operating systems. </p>
<p>In fact, with Katalon, you can execute tests on all OSs, browsers, and devices, as well as on cloud, on-premise, and hybrid environments.</p>
<p>Katalon also provides other useful features like recording test steps, executing test cases, providing infrastructure, analytics reporting, and CI/CD integration with the most popular CI tools (like Jenkins, Bamboo, Azure, and CircleCI).</p>
<p>Katalon Studio is easy to get started with because it offers codeless test creation for beginners. For advanced usage, experts can extend automation capabilities using the plugins in the Katalon Store. </p>
<p>It also has extensive documentation featuring a well-organized library of tutorials alongside images and videos to help you out if you ever get stuck on something. It has a robust free tier and an enterprise tier for advanced usage.</p>
<h3 id="heading-4-watir">4. Watir</h3>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/01/image-104.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p><a target="_blank" href="http://watir.com/">Watir</a> is another continuous testing automation tool powered by the Selenium framework, and it is open-source. Watir can only run tests for web applications on Windows and can only execute simple and easily maintainable tests.</p>
<p>It is not codeless: scripts are written in Ruby, or in Java, .NET, or Python using its sister projects Watij, WatiN, and Nerodia. Regardless, it's easy to get started with if you're already familiar with Ruby, because it features extensive documentation. </p>
<p>Watir can also be integrated with a couple of CI tools such as Jenkins and GitHub.</p>
<p>Even though Watir seems limited, most teams find its simplicity appealing. It is prevalent within the Ruby community and is even used by large companies like Slack and Oracle.</p>
<h2 id="heading-how-to-choose-a-continuous-testing-tool">How to choose a continuous testing tool</h2>
<p>There are other excellent continuous testing tools available aside from the four that I have mentioned above. I favour no-code testing tools because they let teams set up and maintain automated tests much faster. </p>
<p>Regardless, here are a few things to consider before choosing a continuous testing tool:</p>
<ol>
<li><strong>Application types supported:</strong> Does the tool support your intended application type (for example, mobile, web, desktop)?</li>
<li><strong>Learning curve:</strong> How easy/difficult is it to use? Will you need to learn a new scripting language? Ideally, you should go for something with a low learning curve that you and your team can get started with in the shortest amount of time.</li>
<li><strong>Costs:</strong> Is the cost of the tool a feasible addition to your budget in the long run?</li>
<li><strong>Integration capabilities:</strong> Can it integrate seamlessly with your existing CI/CD pipeline?</li>
<li><strong>Scalability and reusability:</strong> Does the tool support scalability and reusability of test cases across multiple projects?</li>
<li><strong>Documentation and Community:</strong> How concise and rich is the documentation for the tool? You’re going to run into some mental blocks in the future, and you may not be able to get through without proper documentation and community support.</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>With the right tools, continuous testing eliminates the risks associated with frequent code releases by ensuring that only quality code is delivered to the end-user.</p>
<p>As I previously mentioned, the tools I have listed above are not an exhaustive list of all the continuous testing tool options. They're just the ones I recommend, and they may or may not be the right choice for you. </p>
<p>Do some further research, check out different tools, and settle for one that will integrate seamlessly into your current setup and meet your needs.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How I Contributed to a Major Open Source Project Without Writing Any Code ]]>
                </title>
                <description>
                    <![CDATA[ By Adam Gordon Bell I recently got a pull request merged into the popular Phoenix Framework, and I did it without writing any Elixir code. I didn't write any documentation either. What I did was help improve the build process. In this post, I'd like ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/open-source-continuous-integration/</link>
                <guid isPermaLink="false">66d45d59677cb8c6c15f314a</guid>
                
                    <category>
                        <![CDATA[ community ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ open source ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Phoenix framework ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 12 Jan 2021 18:48:35 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2021/01/Screen-Shot-2021-01-12-at-9.56.22-AM.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Adam Gordon Bell</p>
<p>I recently got a pull request merged into the popular Phoenix Framework, and I did it without writing any Elixir code. I didn't write any documentation either. What I did was help improve the build process.</p>
<p>In this post, I'd like to share the improvements I made to their build process. These improvements are not Phoenix Framework-specific and they might change the way you approach continuous integration.</p>
<p>But first, some background.</p>
<h2 id="heading-what-is-the-phoenix-framework">What is the Phoenix Framework?</h2>
<p>Phoenix is a web framework with some very interesting properties. With Phoenix, you can build rich interactive web applications without writing client-side code. </p>
<p>You can do this using a feature called LiveView which sends real-time updates from the server to update the client browser's HTML.</p>
<p>We can create a page that shows the latest tweets on a topic, in real-time, quite easily.</p>
<p>Here is an example:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">TimelineLive</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">use</span> Phoenix.LiveView

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">render</span></span>(assigns) <span class="hljs-keyword">do</span>
    render(<span class="hljs-string">"timeline.html"</span>, assigns)
  <span class="hljs-keyword">end</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">mount</span></span>(_, socket) <span class="hljs-keyword">do</span>
    Twitter.subscribe(<span class="hljs-string">"elixirphoenix"</span>)
    {<span class="hljs-symbol">:ok</span>, assign(socket, <span class="hljs-symbol">:tweets</span>, [])}
  <span class="hljs-keyword">end</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">handle_info</span></span>({<span class="hljs-symbol">:new</span>, tweet}, socket) <span class="hljs-keyword">do</span>
    {<span class="hljs-symbol">:noreply</span>,
     update(socket, <span class="hljs-symbol">:tweets</span>, <span class="hljs-keyword">fn</span> tweets -&gt;
       Enum.take([tweet | tweets], <span class="hljs-number">10</span>)
     <span class="hljs-keyword">end</span>)}
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<p><img src="https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FCorecursive%2FDUy3Kzmdsn.png?alt=media&amp;token=edf6aee0-7744-435e-9e3f-0557e000214e" alt="Image" width="600" height="400" loading="lazy">
<em>Real-Time Twitter Results with No Javascript Written</em></p>
<p>The framework is written in Elixir, a programming language created by José Valim. Elixir looks a lot like Ruby but has very different semantics. It runs on the Erlang VM, powers projects like Discord, and is used at companies like Heroku.</p>
<h2 id="heading-how-to-reproduce-the-builds">How to Reproduce the Builds</h2>
<p>The Phoenix Framework uses GitHub Actions for their build pipeline. Like many great projects, they have a suite of unit tests that they need to run on every user contribution. </p>
<p>This isn't where their testing efforts stop though. They also have a suite of integration tests. Phoenix uses an ORM to talk to various databases and the integration tests ensure that no changes break the integration with any of the 3 supported databases.</p>
<p>This is a common pattern. Having a large number of unit tests that are easy to run, as well as a handful of slower but more comprehensive integration tests, is a great way to prevent bugs from being introduced into the project.</p>
<p>The Phoenix Framework takes this even further, though, as they also need to support several versions of the Elixir language and a handful of versions of Open Telecom Platform (OTP).</p>
<p>This is starting to sound complex. We have to test each change with all combinations of the following:</p>
<ul>
<li>Databases (Postgres, MySQL, MSSQL)</li>
<li>Elixir (Current and Previous Version)</li>
<li>OTP (Current and Previous Version)</li>
</ul>
<p>It's relatively easy to set this up in GitHub Actions, but how would you run these tests locally? </p>
<p>Installing all of these locally would be a lot to ask, so contributors tend to rely on GitHub Actions to test these combinations. However, if everyone has to push to GitHub to see whether the tests pass, development slows down.</p>
<p>How do we fix this?</p>
<h2 id="heading-how-to-unify-the-test-runs">How to Unify the Test Runs</h2>
<p>This is where I got involved. I work at Earthly Technologies as an open-source developer advocate. We have a pretty cool open-source build tool, and although I occasionally contribute directly to the project my job is to be the contact point between the community using the tool and the team working on it.</p>
<p>I had heard about this reproducibility problem the Phoenix team was having. I thought I could help write a build script that could be used both in GitHub Actions and for a local development workflow. So I set to work on a PR.</p>
<h3 id="heading-running-the-tests-locally">Running The Tests Locally</h3>
<p>What I ended up creating, slightly simplified, is this:</p>
<pre><code class="lang-dockerfile">setup:
   <span class="hljs-keyword">ARG</span> ELIXIR=<span class="hljs-number">1.10</span>.<span class="hljs-number">4</span>
   <span class="hljs-keyword">ARG</span> OTP=<span class="hljs-number">23.0</span>.<span class="hljs-number">3</span>
   <span class="hljs-comment"># Pull a Docker Image to Run Build Inside Of</span>
   <span class="hljs-keyword">FROM</span> hexpm/elixir:$ELIXIR-erlang-$OTP-alpine-<span class="hljs-number">3.12</span>.<span class="hljs-number">0</span>
   ...

integration-test:
    <span class="hljs-keyword">FROM</span> +setup
    <span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
    <span class="hljs-comment"># Pull In Dependencies</span>
    <span class="hljs-keyword">RUN</span><span class="bash"> mix deps.get </span>
    <span class="hljs-comment"># Start Up Service Dependencies</span>
    WITH DOCKER --compose docker-compose.yml 
        <span class="hljs-comment"># Run Tests</span>
        <span class="hljs-keyword">RUN</span><span class="bash"> mix <span class="hljs-built_in">test</span> --include database </span>
    <span class="hljs-comment"># Stop Service Dependencies</span>
    END
</code></pre>
<p>This is an Earthfile. It's made up of several targets, like <code>setup</code> and <code>integration-test</code>, and targets can depend on each other. You can use the command-line tool <code>earthly</code> to run any target, and each one runs in a Docker container. Containerization is what allows us to run the build wherever we choose.</p>
<p>This example runs the <code>integration-test</code> target inside the <code>hexpm/elixir</code> Docker container with the specified versions of Elixir and OTP installed.</p>
<p>Before running the tests with <code>mix test --include database</code>, we use Docker Compose to start up all the needed dependencies:</p>
<pre><code class="lang-dockerfile"> WITH DOCKER --compose docker-compose.yml
        <span class="hljs-keyword">RUN</span><span class="bash"> mix <span class="hljs-built_in">test</span> --include database</span>
 END
</code></pre>
<p>The Docker compose file looks like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3'</span>
<span class="hljs-attr">services:</span>
  <span class="hljs-attr">postgres:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"5432:5432"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">POSTGRES_PASSWORD:</span> <span class="hljs-string">postgres</span>
  <span class="hljs-attr">mysql:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mysql</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"3306:3306"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MYSQL_ALLOW_EMPTY_PASSWORD:</span> <span class="hljs-string">"yes"</span>
  <span class="hljs-attr">mssql:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mcr.microsoft.com/mssql/server:2019-latest</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">ACCEPT_EULA:</span> <span class="hljs-string">Y</span>
      <span class="hljs-attr">SA_PASSWORD:</span> <span class="hljs-string">some!Password</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"1433:1433"</span>
</code></pre>
<p>These are the databases we need for testing Phoenix.</p>
<p>Now we can run the integration tests at the command line like so:</p>
<pre><code>&gt;  earthly -P +integration-test
</code></pre><p>And if we want to test a different version of Elixir, we can specify the version as build arguments:</p>
<pre><code> &gt; earthly -P --build-arg ELIXIR=<span class="hljs-number">1.11</span><span class="hljs-number">.0</span> --build-arg OTP=<span class="hljs-number">23.1</span><span class="hljs-number">.1</span> +integration-test
</code></pre><p>There are other ways to accomplish this. A combination of a Makefile and Dockerfiles would have worked as well. The key is to get the build logic out of a GHA-specific format and into something that can be run anywhere.</p>
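<p>If you want to sweep every supported combination locally, a small shell loop over the build arguments will do. The sketch below only echoes the <code>earthly</code> invocations rather than running them, and the version pairs are illustrative:</p>

```shell
#!/bin/sh
# Print the earthly command for each Elixir/OTP pair (versions are examples).
for pair in "1.10.4 23.0.3" "1.11.0 23.1.1"; do
  set -- $pair   # split the pair into $1 (Elixir) and $2 (OTP)
  echo "earthly -P --build-arg ELIXIR=$1 --build-arg OTP=$2 +integration-test"
done
```

<p>Dropping the <code>echo</code> turns the dry run into a real local matrix build.</p>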
<h2 id="heading-how-to-run-it-in-github-actions">How to Run it in GitHub Actions</h2>
<p>To use this same process inside GitHub Actions, the only thing we need to do is adjust our GitHub Actions yaml to use Earthly for the build pipeline and we are all set.</p>
<pre><code class="lang-yaml">  integration-test-elixir:
    runs-on: ubuntu-latest
    env:
      FORCE_COLOR: 1

    strategy:
      fail-fast: false
      matrix:
        include:
          - elixir: 1.11.1
            otp: 21.3.8.18
          - elixir: 1.11.1
            otp: 23.1.1
    steps:
      - uses: actions/checkout@v2
      - name: Download released earth
        run: "sudo /bin/sh -c 'wget https://github.com/earthly/earthly/releases/download/v0.4.1/earthly-linux-amd64 -O /usr/local/bin/earthly &amp;&amp; chmod +x /usr/local/bin/earthly'"
      - name: Execute tests
        run: earthly -P --build-arg ELIXIR=${{ matrix.elixir }} --build-arg OTP=${{ matrix.otp }} +integration-test
</code></pre>
<p>There we go: we can now run the same build process both in CI and on our developer machines, without needing to install anything except Earthly. This makes it easier for new contributors to approach the project.</p>
<h2 id="heading-the-end-result">The End Result</h2>
<p>Eventually, with help from the Phoenix Team, I got this change approved and the Phoenix project now has an easy way to test and iterate on their build pipeline locally. And I didn't even write any Elixir code! You can find more details in the <a target="_blank" href="https://github.com/phoenixframework/phoenix/pull/4072">PR</a>.</p>
<p>Thank you for reading this article.  If you'd like to learn more about Earthly, <a target="_blank" href="http://earthly.dev/">you can find out a lot here</a>. And if you'd like my help on your open source project's build, let me know.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Update Dependencies Safely and Automatically with GitHub Actions and Renovate ]]>
                </title>
                <description>
                    <![CDATA[ By Ramón Morcillo In Software Development keeping up to date with technology updates is crucial. This is true both for developers as they learn and renew their skills, and also for the projects they work on and maintain. When you start a project, you... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/update-dependencies-automatically-with-github-actions-and-renovate/</link>
                <guid isPermaLink="false">66d460cb47a8245f78752abf</guid>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Thu, 05 Nov 2020 17:18:02 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/11/github_actions_and_renovate_logos-1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Ramón Morcillo</p>
<p>In software development, <strong>keeping up with technology updates</strong> is crucial. This is true both for developers, as they learn and renew their skills, and for the projects they work on and maintain.</p>
<p>When you start a project, you normally set it up with the latest stable versions of all libraries and tools. </p>
<p>Then time goes by, the project grows, and new features and libraries are added. But <strong>the versions of the existing libraries and packages remain the same, because the team never updates them</strong>. </p>
<p>After all, why would you update them if the project works perfectly with the current versions?</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/11/this-is-fine-1.jpg" alt="This is fine" width="600" height="400" loading="lazy"></p>
<h2 id="heading-why-you-should-keep-projects-up-to-date">Why you should keep projects up to date</h2>
<p>Here are some reasons why you should keep your dependencies updated:</p>
<ul>
<li>Solving problems from old versions.</li>
<li>Adding vulnerability fixes.</li>
<li>Increasing the overall performance.</li>
<li>Adding new features. </li>
<li>...</li>
</ul>
<p>When you keep your dependencies updated you are solving problems from older versions and improving performance with new optimizations. You are also able to use new features that other developers have added. </p>
<p>All of these improvements contribute to the <em>maintainability of the code</em>, and the overall project health.</p>
<p>We all have worked on projects where the dependencies have never (or rarely) been updated. And it's no fun.</p>
<p>So how, then, do we keep our projects up to date? </p>
<p>First of all, you can run <code>npm outdated</code> to <a target="_blank" href="https://docs.npmjs.com/cli/outdated">see the latest releases</a> of the packages you are currently using.</p>
<p>You can then run <code>npm update</code> to update them (note that it will not update across major versions). But how do you know which updates will break the project and which ones won't? </p>
<p>Then you have to think about when to update everything. How often should you check for updates – every day? Every week? Every month?</p>
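<p>To make the major/minor/patch distinction concrete, here is a toy classifier – a sketch of the semver convention only, not what <code>npm</code> or any real tool runs (prerelease tags and <code>0.x</code> rules are more subtle):</p>
<pre><code class="lang-js">// Classify an update by which semver segment changed.
// By convention, major bumps may break you; minor and patch should not.
function updateType(current, latest) {
  const [curMajor, curMinor] = current.split('.').map(Number);
  const [newMajor, newMinor] = latest.split('.').map(Number);
  if (newMajor !== curMajor) return 'major';
  if (newMinor !== curMinor) return 'minor';
  return 'patch';
}

console.log(updateType('16.8.0', '17.0.1')); // 'major' – review by hand
console.log(updateType('16.8.0', '16.9.2')); // 'minor'
console.log(updateType('16.8.0', '16.8.6')); // 'patch'
</code></pre>
<p>The rest of this tutorial automates exactly this decision: minor and patch bumps get merged for us, while major bumps wait for a human.</p>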
<h2 id="heading-what-youll-learn-in-this-tutorial">What you'll learn in this tutorial</h2>
<p>This is why I built this project: to learn about GitHub Actions and to use it as a <strong>safe way to update dependencies automatically without breaking the project</strong>.</p>
<p>In this tutorial you'll learn how to use the <a target="_blank" href="https://github.com/renovatebot/renovate">Renovate app</a> to check for dependency updates and then submit Pull Requests to update them. This lets you <em>abstract</em> yourself away from checking for updates, so you can focus on more important things. </p>
<p>The point of using <a target="_blank" href="https://github.com/features/actions">GitHub Actions</a> is to set up a workflow and trigger it with every Pull Request. It will check that the build and tests pass with the updated dependencies before adding them to the project.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><a class="post-section-overview" href="#Getting-Started">Getting Started</a></li>
<li><a class="post-section-overview" href="#Set-up-GitHub-Actions-Workflow">Set up GitHub Actions Workflow</a></li>
<li><a class="post-section-overview" href="#Add-Renovate">Add Renovate</a></li>
<li><a class="post-section-overview" href="#Conclusion">Conclusion</a></li>
<li><a class="post-section-overview" href="#Useful-Resources">Useful Resources</a></li>
</ul>
<h2 id="heading-getting-started">Getting started</h2>
<p>Although <strong>this approach can be applied to any project</strong>, we will use a <a target="_blank" href="https://reactjs.org">React</a> project made with <a target="_blank" href="https://github.com/facebook/create-react-app">Create React App</a>. This will give us a basic project with everything ready to work on. </p>
<p>By the way, if you do not have Node.js installed, <a target="_blank" href="https://nodejs.org/en/download/">here</a> is the link to download it.</p>
<p>If you want to check out the final result before you get started, <a target="_blank" href="https://github.com/reymon359/github-actions-and-renovate">here it is</a>.</p>
<p>So let's begin by running</p>
<pre><code class="lang-bash">npx create-react-app my-app
<span class="hljs-built_in">cd</span> my-app
npm start
</code></pre>
<p>If you use npm 5.1 or earlier, you can't use <code>npx</code>. Instead, install <code>create-react-app</code> globally:</p>
<pre><code class="lang-bash">npm install -g create-react-app
</code></pre>
<p>And then run:</p>
<pre><code class="lang-bash">create-react-app my-app
</code></pre>
<h2 id="heading-set-up-github-actions-workflow">Set up a GitHub Actions Workflow</h2>
<p>Now we will proceed to define a GitHub Actions Workflow in our repository to automate the process.</p>
<p><em><a target="_blank" href="https://github.com/features/actions">GitHub Actions</a> is a GitHub feature that helps you automate your software development workflows. It can handle everything from simple tasks to custom end-to-end continuous integration (CI) and continuous deployment (CD) in your repositories.</em></p>
<p>In our root folder, we will create a new folder and name it <code>.github</code>. Inside that we'll create a <code>workflows</code> folder. This is how your project should look after these steps:</p>
<pre><code>📁 my-app
├── 📁 .github
│   └── 📁 workflows
├── ...
...
</code></pre><p>This is where we will create and add our Workflows. GitHub Actions Workflows are the automated Continuous Integration processes we want to run in our project. </p>
<p>Workflows are composed of jobs, which in turn contain a set of steps. To make this clearer, let's create our own workflow and go through it step by step. </p>
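<p>Before writing ours, here is that nesting in outline – a generic skeleton for orientation, not the workflow we will build:</p>
<pre><code class="lang-yml">name: Example            # the workflow's name
on: [push]               # the event that triggers it
jobs:                    # a workflow contains one or more jobs...
  some_job:
    runs-on: ubuntu-latest
    steps:               # ...and each job runs a list of steps
      - run: echo "Hello"
</code></pre>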
<p>In the <code>.github/workflows</code> directory, add a <code>.yml</code> or <code>.yaml</code> file and name it <code>main.yml</code>. I chose that name to keep things simple, but you can give it any other name like <code>build-test.yml</code> or <code>continuous-integration-workflow.yml</code>.</p>
<pre><code class="lang-text">📁 my-app
├── 📁 .github
│   └── 📁 workflows
│       └── 📄 main.yml
├── ...
...
</code></pre>
<p><a target="_blank" href="https://gist.github.com/reymon359/514cf378456457f1798293fe0ed99f3a">Here</a> is the complete workflow, in case you want to copy it and add it directly before reading the explanation.</p>
<pre><code class="lang-yml"><span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">Test</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [<span class="hljs-string">master</span>]
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [<span class="hljs-string">master</span>]

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build_and_test:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">strategy:</span>
      <span class="hljs-attr">matrix:</span>
        <span class="hljs-attr">node-version:</span> [<span class="hljs-number">10</span>, <span class="hljs-number">12</span>]

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Use</span> <span class="hljs-string">Node.js</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v1</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">node-version:</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">project</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">the</span> <span class="hljs-string">project</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span> <span class="hljs-string">--if-present</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>The first param of our workflow will be its <strong>name</strong>.</p>
<pre><code class="lang-yml"><span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">Test</span>
</code></pre>
<p>The second param is the <strong>trigger</strong>. </p>
<p>We can choose whether the workflow is <strong>triggered by an event like a push or pull request to a specific branch</strong>, or we can even schedule a <a target="_blank" href="https://en.wikipedia.org/wiki/Cron">cron</a> to <strong>trigger it automatically at a defined interval</strong>. </p>
<p>In our project we will want to trigger it when pushing to the master branch, and when the Renovate app submits a Pull Request to update a dependency:</p>
<pre><code class="lang-yml"><span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [<span class="hljs-string">master</span>]
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [<span class="hljs-string">master</span>]
</code></pre>
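<p>As an aside, the cron trigger mentioned above would be declared like this – an illustrative schedule, not one we use in this tutorial:</p>
<pre><code class="lang-yml">on:
  schedule:
    # standard cron field order: minute hour day-of-month month day-of-week
    # this example runs every day at 04:00 UTC
    - cron: '0 4 * * *'
</code></pre>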
<p>Next, we define the <strong>jobs</strong>.</p>
<p>In this example there will only be one job, <strong>build and test</strong>, in which we also choose the virtual machine the job will run on.</p>
<pre><code class="lang-yml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build_and_test:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
</code></pre>
<p>Now comes the matrix, where we configure the combinations of versions and systems on which we want to run our Workflow. In our case, we will run it on Node.js 10 and 12.</p>
<pre><code class="lang-yml">    <span class="hljs-attr">strategy:</span>
      <span class="hljs-attr">matrix:</span>
        <span class="hljs-attr">node-version:</span> [<span class="hljs-number">10</span>, <span class="hljs-number">12</span>]
</code></pre>
<p>Finally, the Workflow's steps. First is the <a target="_blank" href="https://github.com/actions/checkout">checkout action</a>, a standard action you must include whenever your workflow needs a copy of the repository to run.</p>
<p>Then you can run other actions and processes. In our app, we will use the <strong>setup-node</strong> action with the matrix we defined before. Then we will add steps to install the project, build it, and run the tests.</p>
<pre><code class="lang-yml">    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Use</span> <span class="hljs-string">Node.js</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v1</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">node-version:</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">project</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">the</span> <span class="hljs-string">project</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span> <span class="hljs-string">--if-present</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>Now create a GitHub repository for the project, commit your local changes, and push them to it.</p>
<p>Quick tip: if you want to create it faster, go to <a target="_blank" href="https://repo.new">repo.new</a> or <a target="_blank" href="https://github.new">github.new</a>. You can use <a target="_blank" href="https://gist.new">gist.new</a> for gists too! </p>
<p>Once you push your changes the Workflow will run. Then you will be able to see how it went in the <code>Actions</code> <a target="_blank" href="https://github.com/reymon359/github-actions-and-renovate/actions">tab from the GitHub Project</a>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/11/github_actions_workflows.png" alt="GitHub Actions Workflow" width="600" height="400" loading="lazy"></p>
<h2 id="heading-add-renovate">Add Renovate</h2>
<p><a target="_blank" href="https://github.com/marketplace/renovate">Renovate</a> is a free, open-source, customizable app that helps you keep the dependencies in your software projects updated automatically through pull requests. </p>
<p>It is used by software companies like Google, Mozilla, and Uber, and you can use it on GitHub, GitLab, Bitbucket, Azure DevOps, and Gitea.</p>
<p>We will add a bot that will submit pull requests to our repository when there are updates in our project dependencies. </p>
<p>The cool thing, and the whole point of our project, is that we previously configured our workflow to run on pull requests. So when Renovate submits one, <strong>we will automatically check whether the proposed updates break the project before merging them into the master branch</strong>. </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/11/thumbs_up_kid.gif" alt="Thumbs Up Kid" width="600" height="400" loading="lazy"></p>
<p>To add Renovate to our project we have to install <a target="_blank" href="https://github.com/apps/renovate">its app</a> into the project's repository. </p>
<p>Be careful when selecting the repository you want to add Renovate to: choose the one created before. If you made a mistake or want to reconfigure it, you can do so in the <a target="_blank" href="https://github.com/settings/installations">Personal Settings' Applications tab</a> of your account. </p>
<p>After a few minutes you will have to accept and merge the onboarding Pull Request that you receive.</p>
<p>Once you have it integrated, you need to configure it by updating the <code>renovate.json</code> file in the project root. Remember to pull the changes after merging the onboarding Pull Request so that the file appears locally. </p>
<p>You can use the default configuration, where Renovate submits pull requests whenever it finds updates and waits for you to merge them:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"extends"</span>: [<span class="hljs-string">"config:base"</span>]
}
</code></pre>
<p>Or you can adapt it to the requirements of your project like <a target="_blank" href="https://github.com/renovatebot/renovate/blob/master/renovate.json">the one used by Renovate itself</a>. </p>
<p>To avoid any issues, and to learn a little more about the tool, we will use a configuration with some of its most useful features. </p>
<p>If you want to learn more about its configuration <a target="_blank" href="https://docs.renovatebot.com/">here</a> are the docs for it.</p>
<p><a target="_blank" href="https://gist.github.com/reymon359/4c4417522cd0922cfbc63ad75ca2c945">This</a> will be our <code>renovate.json</code> file. Have a look at it, and I will explain it after.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"extends"</span>: [
    <span class="hljs-string">"config:base"</span>
  ],
  <span class="hljs-attr">"packageRules"</span>: [
    {
      <span class="hljs-attr">"updateTypes"</span>: [
        <span class="hljs-string">"minor"</span>,
        <span class="hljs-string">"patch"</span>
      ],
      <span class="hljs-attr">"automerge"</span>: <span class="hljs-literal">true</span>
    }
  ],
  <span class="hljs-attr">"timezone"</span>: <span class="hljs-string">"Europe/Madrid"</span>,
  <span class="hljs-attr">"schedule"</span>: [
    <span class="hljs-string">"after 10pm every weekday"</span>,
    <span class="hljs-string">"before 5am every weekday"</span>,
    <span class="hljs-string">"every weekend"</span>
  ]
}
</code></pre>
<p>In the first part, we are telling Renovate that our configuration will be an extension of the default one.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"extends"</span>: [
    <span class="hljs-string">"config:base"</span>
  ],
</code></pre>
<p>Then we have the <code>packageRules</code>. After some months of use, I realized that checking the pull requests from time to time and accepting them if the tests passed was a major waste of time. </p>
<p>This is why <code>automerge</code> is set to true, so that Renovate automatically merges a pull request if the workflow passes. </p>
<p>To restrict Renovate's freedom a bit, we define that it can only perform <code>automerge</code> when it is a <code>minor</code> or <code>patch</code> update. </p>
<p>This way, if it is a <code>major</code> or another kind of update, we will be the ones to check whether that update should be added or not. </p>
<p><a target="_blank" href="https://docs.renovatebot.com/configuration-options/#updatetypes">Here</a> you can find more information about the types of updates available.</p>
<pre><code class="lang-json">  <span class="hljs-string">"packageRules"</span>: [
    {
      <span class="hljs-attr">"updateTypes"</span>: [
        <span class="hljs-string">"minor"</span>,
        <span class="hljs-string">"patch"</span>
      ],
      <span class="hljs-attr">"automerge"</span>: <span class="hljs-literal">true</span>
    }
  ],
</code></pre>
<p>Lastly, we have the time schedule. If you work alone, or on a team with set hours, it is nice to have updates land while you are not working, to avoid unnecessary distractions.</p>
<p>We select our timezone and add a custom <a target="_blank" href="https://docs.renovatebot.com/presets-schedule/">schedule</a> for it. You can find the valid timezone names <a target="_blank" href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones">here</a>. </p>
<pre><code class="lang-json">  <span class="hljs-string">"timezone"</span>: <span class="hljs-string">"Europe/Madrid"</span>,
  <span class="hljs-string">"schedule"</span>: [
    <span class="hljs-string">"after 10pm every weekday"</span>,
    <span class="hljs-string">"before 5am every weekday"</span>,
    <span class="hljs-string">"every weekend"</span>
  ],
</code></pre>
<p>Anyway, if you do not care about when the pull requests are submitted, or the people contributing to the code are in different timezones, you can remove this part.</p>
<p>Once we have updated the configuration we push the changes to GitHub to have the Renovate app adapted to the new configuration. </p>
<p>Now you finally have the project dependencies safely up-to-date without having to check for them. <a target="_blank" href="https://github.com/reymon359/github-actions-and-renovate">Here is the resulting project</a> after following all the steps mentioned above.</p>
<p>Remember that if you added the time schedule part you will not get the pull request merged automatically until it complies with that configuration.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>There are other ways to keep the dependencies updated in an automated way. But if you use GitHub to host your code, you should take advantage and make the most of its awesome free features. </p>
<p>If you are wondering what else you can do and automate with the GitHub apps and actions, just have a look at its <a target="_blank" href="https://github.com/marketplace">Marketplace</a>.</p>
<p>In addition, you can have a look at <a target="_blank" href="https://github.com/reymon359/up-to-date-react-template">a project of mine</a> that was the basis of this tutorial. It is a bit more complex and has more features than the one built here.</p>
<p>I hope you enjoyed this article and learned about GitHub Actions and its Apps. If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on any of the social networks from <a target="_blank" href="https://ramonmorcillo.com">my site</a> or <a>by mail</a>.</p>
<h2 id="heading-useful-resources">Useful Resources</h2>
<p>Here is a collection of links and resources which I think can be useful to improve and learn more about GitHub Actions and Apps.</p>
<ul>
<li><a target="_blank" href="https://github.com/reymon359/github-actions-and-renovate">Tutorial project</a>. - The resulting project from this tutorial.</li>
<li><a target="_blank" href="https://github.com/marketplace">GitHub Marketplace</a>. - The place to find all GitHub Actions and Apps.</li>
<li><a target="_blank" href="https://help.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow">GitHub Actions Workflow Configuration</a> - The full documentation on how to set up a workflow on Github Actions.</li>
<li><a target="_blank" href="https://github.com/marketplace/renovate">Renovate GitHub app</a> - The Renovate App main page on the GitHub Marketplace.</li>
<li><a target="_blank" href="https://gist.github.com/reymon359/514cf378456457f1798293fe0ed99f3a">GitHub Actions project Workflow</a>. - The Workflow used in this tutorial.</li>
<li><a target="_blank" href="https://gist.github.com/reymon359/4c4417522cd0922cfbc63ad75ca2c945">Renovate App's configuration file</a>. - Renovate App's custom configuration file from the tutorial.</li>
<li><a target="_blank" href="https://github.com/reymon359/up-to-date-react-template">Up to Date React Template</a>. - A Personal project that uses the approach described in this tutorial.</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Add Screenshot Testing with Cypress to Your Project ]]>
                </title>
                <description>
                    <![CDATA[ By Leonardo Faria Developers are usually concerned with the quality of their code. There are different kinds of tests that help us avoid breaking code when a new feature is added in a project. But what can we do to ensure that components don't look d... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-add-screenshot-testing-with-cypress-to-your-project/</link>
                <guid isPermaLink="false">66d85178afbaabf7a144aef7</guid>
                
                    <category>
                        <![CDATA[ Code Quality ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 04 Aug 2020 16:02:09 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/08/cypress-io-logo-social-share-8fb8a1db3cdc0b289fad927694ecb415.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Leonardo Faria</p>
<p>Developers are usually concerned with the quality of their code. There are different kinds of tests that help us avoid breaking code when a new feature is added to a project. But what can we do to ensure that components don't look different over time? </p>
<p>In this post, you will learn how to use Cypress to capture parts of pages of a website. After that, you will integrate the testing tool in CI to ensure that in the future no one will make unwanted changes to your project.</p>
<p>My motivation for creating this testing strategy came from work. At <a target="_blank" href="https://www.thinkific.com">Thinkific</a> we have an internal Design System and we added Cypress to avoid surprises when working in CSS/JS files.</p>
<p>By the end of this post we will have PRs with Cypress tests:</p>
<p><img src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-bot-comment.jpg" alt="Cypress bot" width="600" height="400" loading="lazy"></p>
<h2 id="heading-before-we-start">Before we start</h2>
<p>I created a <a target="_blank" href="https://cypress-example.vercel.app/">sample website</a> to mimic a Component Library. It is a very simple website created with TailwindCSS and hosted in Vercel. It documents 2 components: <a target="_blank" href="https://cypress-example.vercel.app/badge.html">badge</a> and <a target="_blank" href="https://cypress-example.vercel.app/button.html">button</a>.</p>
<p>You can check out the <a target="_blank" href="https://github.com/leonardofaria/cypress-example">source code</a> in GitHub. The website is static and it is inside the <code>public</code> folder. You can see the website locally by running <code>npm run serve</code> and checking in the browser <a target="_blank" href="http://localhost:8000">http://localhost:8000</a>.</p>
<p><img src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-sample-website.png" alt="Sample website" width="600" height="400" loading="lazy"></p>
<h2 id="heading-adding-cypress-and-cypress-image-snapshot">Adding Cypress and Cypress Image Snapshot</h2>
<p>Start by cloning the <a target="_blank" href="https://github.com/leonardofaria/cypress-example">example repository</a>. Next, create a new branch and install <a target="_blank" href="https://www.npmjs.com/package/cypress-image-snapshot">Cypress Image Snapshot</a>, the package responsible for capturing/comparing screenshots.</p>
<pre><code class="lang-bash">git checkout -b add-cypress
npm install -D cypress cypress-image-snapshot
</code></pre>
<p>After adding the packages, a few extra steps are needed to wire Cypress Image Snapshot into Cypress.</p>
<p>Create a <code>cypress/plugins/index.js</code> file with the following content:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> { addMatchImageSnapshotPlugin } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'cypress-image-snapshot/plugin'</span>);

<span class="hljs-built_in">module</span>.exports = <span class="hljs-function">(<span class="hljs-params">on, config</span>) =&gt;</span> {
  addMatchImageSnapshotPlugin(on, config);
};
</code></pre>
<p>Next, create a <code>cypress/support/index.js</code> file containing:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> { addMatchImageSnapshotCommand } <span class="hljs-keyword">from</span> <span class="hljs-string">'cypress-image-snapshot/command'</span>;

addMatchImageSnapshotCommand();
</code></pre>
<h2 id="heading-creating-the-screenshot-test">Creating the screenshot test</h2>
<p>Time to create the screenshot test. Here is the plan:</p>
<ol>
<li>Cypress will visit each page (badge and button) of the project.</li>
<li>Cypress will take a screenshot of each example on the page. The <a target="_blank" href="https://cypress-example.vercel.app/badge.html">Badge page</a> has 2 examples (Default and Pill), while the <a target="_blank" href="https://cypress-example.vercel.app/button.html">Button page</a> has 3 examples (Default, Pill and Outline). All these examples are inside a <code>&lt;div&gt;</code> element with a <code>cypress-wrapper</code> class. This class was added solely to identify what needs to be tested.</li>
</ol>
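<p>For reference, the markup around each example looks roughly like this – a simplified sketch, not the site's exact HTML:</p>
<pre><code class="lang-html">&lt;!-- one wrapper per documented example; Cypress screenshots each wrapper --&gt;
&lt;div class="cypress-wrapper"&gt;
  &lt;span class="badge"&gt;Default&lt;/span&gt;
&lt;/div&gt;
</code></pre>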
<p>The first step is creating the Cypress configuration file (<code>cypress.json</code>):</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"baseUrl"</span>: <span class="hljs-string">"http://localhost:8000/"</span>,
  <span class="hljs-attr">"video"</span>: <span class="hljs-literal">false</span>
}
</code></pre>
<p>The <code>baseUrl</code> is the website running locally. As I mentioned before, <code>npm run serve</code> will serve the content of the <code>public</code> folder. The second option, <code>video</code>, disables Cypress video recording, which we won't use in this project.</p>
<p>Time to create the test. In <code>cypress/integration/screenshot.spec.js</code>, add:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> routes = [<span class="hljs-string">'badge.html'</span>, <span class="hljs-string">'button.html'</span>];

describe(<span class="hljs-string">'Component screenshot'</span>, <span class="hljs-function">() =&gt;</span> {
  routes.forEach(<span class="hljs-function">(<span class="hljs-params">route</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> componentName = route.replace(<span class="hljs-string">'.html'</span>, <span class="hljs-string">''</span>);
    <span class="hljs-keyword">const</span> testName = <span class="hljs-string">`<span class="hljs-subst">${componentName}</span> should match previous screenshot`</span>;

    it(testName, <span class="hljs-function">() =&gt;</span> {
      cy.visit(route);

      cy.get(<span class="hljs-string">'.cypress-wrapper'</span>).each(<span class="hljs-function">(<span class="hljs-params">element, index</span>) =&gt;</span> {
        <span class="hljs-keyword">const</span> name = <span class="hljs-string">`<span class="hljs-subst">${componentName}</span>-<span class="hljs-subst">${index}</span>`</span>;

        cy.wrap(element).matchImageSnapshot(name);
      });
    });
  });
});
</code></pre>
<p>In the code above, I am dynamically creating tests based on the <code>routes</code> array. The test will create one image per <code>.cypress-wrapper</code> element on the page.</p>
<p>Last, inside the <code>package.json</code> let's create the command to trigger the tests:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"test"</span>: <span class="hljs-string">"cypress"</span>
}
</code></pre>
<p>From here, there are 2 options: run Cypress in headless mode with <code>npm run cypress run</code> or use the Cypress Test Runner with <code>npm run cypress open</code>.</p>
<h3 id="heading-headless-option">Headless option</h3>
<p>Using <code>npm run cypress run</code>, the output should be similar to the next image:</p>
<p><img src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-first-test.jpg" alt="Output of first test" width="600" height="400" loading="lazy"></p>
<p>The tests will pass and 5 images will be created under the <code>/snapshots/screenshot.spec.js</code> folder.</p>
<h3 id="heading-test-runner-option">Test Runner option</h3>
<p>Using <code>npm run cypress open</code>, Cypress Test Runner will be opened and you can follow the tests step by step.</p>
<p><img src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-test-runner.jpg" alt="Cypress Test Runner screenshot" width="600" height="400" loading="lazy"></p>
<p>Our first milestone is done, so let's merge this branch into master. If you want to see the work done so far, check out my <a target="_blank" href="https://github.com/leonardofaria/cypress-example/pull/1">pull request</a>. </p>
<h2 id="heading-using-cypress-inside-docker">Using Cypress inside Docker</h2>
<p>If you run the tests above alternating between headless mode and the Test Runner, you may notice that the screenshots vary. </p>
<p>Using the Test Runner on a computer with a Retina display, you may get Retina (2x) images, while headless mode doesn't give you high-resolution screenshots.</p>
<p>It is also important to note that screenshots may vary according to your operating system. Linux and Windows, for instance, show visible scrollbars, while macOS hides them, so depending on whether the captured content fits the component, you may or may not get a scrollbar in the image. If your project relies on OS default fonts, screenshots will also differ between environments.</p>
<p>To avoid these inconsistencies, the tests will run inside Docker, so the developer's machine won't affect the captured screenshots.</p>
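<p>Docker removes most of this variance, but snapshot plugins can also be configured to tolerate tiny rendering differences. As a sketch (assuming the <code>matchImageSnapshot</code> command in our spec comes from the <code>cypress-image-snapshot</code> plugin), a small failure threshold can be passed when the command is registered:</p>

```javascript
// cypress/support/commands.js — a sketch, assuming the cypress-image-snapshot
// plugin is what provides the matchImageSnapshot command used in our spec
import { addMatchImageSnapshotCommand } from 'cypress-image-snapshot/command';

addMatchImageSnapshotCommand({
  failureThreshold: 0.001,         // tolerate up to 0.1% of differing pixels
  failureThresholdType: 'percent', // interpret the threshold as a ratio
  customDiffConfig: { threshold: 0.1 }, // per-pixel color sensitivity
});
```

<p>Set too high, thresholds hide real regressions, so Docker remains the more reliable fix; treat this as a complement, not a replacement.</p>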
<p>Let's start by creating a new branch:</p>
<pre><code class="lang-bash">git checkout -b add-docker
</code></pre>
<p>Cypress offers different Docker images - you can check out the details in <a target="_blank" href="https://docs.cypress.io/examples/examples/docker.html">their documentation</a> and <a target="_blank" href="https://www.cypress.io/blog/2019/05/02/run-cypress-with-a-single-docker-command/">their blog</a>. </p>
<p>For this example, I will use the <code>cypress/included</code> image, which includes Electron and is ready to be used.</p>
<p>We need to make two changes: change the <code>baseUrl</code> in the <code>cypress.json</code> file:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"baseUrl"</span>: <span class="hljs-string">"http://host.docker.internal:8000/"</span>,
}
</code></pre>
<p>and the <code>test</code> command in the <code>package.json</code> file: </p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"test"</span>: <span class="hljs-string">"docker run -it -e CYPRESS_updateSnapshots=$CYPRESS_updateSnapshots --ipc=host -v $PWD:/e2e -w /e2e cypress/included:4.11.0"</span>
}
</code></pre>
<p>Running <code>npm run test</code> reveals a problem:</p>
<p><img src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-docker.jpg" alt="Output of test" width="600" height="400" loading="lazy"></p>
<p>The images are slightly different but why? Let's see what is inside the <code>__diff_output__</code> folder:</p>
<p><img src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-button-diff.png" alt="Button's difference" width="600" height="400" loading="lazy"></p>
<p>As I mentioned earlier: typography inconsistencies! The Button component uses the OS default font. Since the Docker container runs Linux, the rendered font isn't the same as the one installed on my macOS machine. </p>
<p>Now that we've moved to Docker, these screenshots are outdated. Time to update the snapshots:</p>
<pre><code class="lang-bash">CYPRESS_updateSnapshots=<span class="hljs-literal">true</span> npm run <span class="hljs-built_in">test</span>
</code></pre>
<p>Notice that I am prefixing the test command with the environment variable <code>CYPRESS_updateSnapshots</code>. Cypress automatically picks up any environment variable prefixed with <code>CYPRESS_</code> and exposes it, minus the prefix, through <code>Cypress.env()</code>.</p>
<p>The second milestone is done. In case you need help, check out my <a target="_blank" href="https://github.com/leonardofaria/cypress-example/pull/2">pull request</a>.</p>
<p>Let's merge this branch and move forward.</p>
<h2 id="heading-adding-ci">Adding CI</h2>
<p>Our next step is adding the tests to CI. There are many CI solutions on the market, but for this tutorial I will use Semaphore. I am not affiliated with them; I use their product at work, so it was a natural choice for me. </p>
<p>The configuration is straightforward, and it can be adapted to other solutions like CircleCI or GitHub Actions.</p>
<p>Before we create our Semaphore configuration file, let's prepare our project to run in CI.</p>
<p>The first step is installing <a target="_blank" href="https://www.npmjs.com/package/start-server-and-test">start-server-and-test</a>. As the package name says, it will start a server, wait for the URL, and then run a test command:</p>
<pre><code class="lang-bash">npm install -D start-server-and-test
</code></pre>
<p>Second, edit the <code>package.json</code> file:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"test"</span>: <span class="hljs-string">"docker run -it -e CYPRESS_baseUrl=$CYPRESS_baseUrl -e CYPRESS_updateSnapshots=$CYPRESS_updateSnapshots --ipc=host -v $PWD:/e2e -w /e2e cypress/included:4.11.0"</span>,
  <span class="hljs-attr">"test:ci"</span>: <span class="hljs-string">"start-server-and-test serve http://localhost:8000 test"</span>
}
</code></pre>
<p>In the <code>test</code> script, we are adding the <code>CYPRESS_baseUrl</code> environment variable. This will allow us to change the base URL used by Cypress dynamically. Also, we are adding the <code>test:ci</code> script, which will run the package we just installed.</p>
<p>We are ready for Semaphore. Create the <code>.semaphore/semaphore.yml</code> file with the following content:</p>
<pre><code class="lang-yml"> <span class="hljs-attr">1 version:</span> <span class="hljs-string">v1.0</span>
 <span class="hljs-attr">2 name:</span> <span class="hljs-string">Cypress</span> <span class="hljs-string">example</span>
 <span class="hljs-attr">3 agent:</span>
 <span class="hljs-attr">4   machine:</span>
 <span class="hljs-attr">5     type:</span> <span class="hljs-string">e1-standard-2</span>
 <span class="hljs-attr">6     os_image:</span> <span class="hljs-string">ubuntu1804</span>
 <span class="hljs-attr">7 blocks:</span>
 <span class="hljs-attr">8   - name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Dependencies</span>
 <span class="hljs-attr">9     dependencies:</span> []
<span class="hljs-attr">10     task:</span>
<span class="hljs-attr">11       jobs:</span>
<span class="hljs-attr">12         - name:</span> <span class="hljs-string">NPM</span>
<span class="hljs-attr">13           commands:</span>
<span class="hljs-number">14</span>             <span class="hljs-bullet">-</span> <span class="hljs-string">sem-version</span> <span class="hljs-string">node</span> <span class="hljs-number">12</span>
<span class="hljs-number">15</span>             <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
<span class="hljs-number">16</span>             <span class="hljs-bullet">-</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span>
<span class="hljs-attr">17   - name:</span> <span class="hljs-string">Tests</span>
<span class="hljs-attr">18     dependencies:</span> [<span class="hljs-string">'Build Dependencies'</span>]
<span class="hljs-attr">19     task:</span>
<span class="hljs-attr">20       prologue:</span>
<span class="hljs-attr">21         commands:</span>
<span class="hljs-number">22</span>           <span class="hljs-bullet">-</span> <span class="hljs-string">sem-version</span> <span class="hljs-string">node</span> <span class="hljs-number">12</span>
<span class="hljs-number">23</span>           <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
<span class="hljs-attr">24       jobs:</span>
<span class="hljs-attr">25         - name:</span> <span class="hljs-string">Cypress</span>
<span class="hljs-attr">26           commands:</span>
<span class="hljs-number">27</span>             <span class="hljs-bullet">-</span> <span class="hljs-string">export</span> <span class="hljs-string">CYPRESS_baseUrl="http://$(ip</span> <span class="hljs-string">route</span> <span class="hljs-string">|</span> <span class="hljs-string">grep</span> <span class="hljs-string">-E</span> <span class="hljs-string">'(default|docker0)'</span> <span class="hljs-string">|</span> <span class="hljs-string">grep</span> <span class="hljs-string">-Eo</span> <span class="hljs-string">'([0-9]+\.){3}[0-9]+'</span> <span class="hljs-string">|</span> <span class="hljs-string">tail</span> <span class="hljs-number">-1</span><span class="hljs-string">):8000"</span>
<span class="hljs-number">28</span>             <span class="hljs-bullet">-</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">test:ci</span>
</code></pre>
<p>Breaking the configuration down in detail:</p>
<ul>
<li>Lines 1-6 define which kind of instance we will use in their environment</li>
<li>Lines 8 and 17 create 2 blocks: the first block, "Build Dependencies", runs <code>npm install</code>, downloading the dependencies we need. The second block, "Tests", runs Cypress, with a few differences.</li>
<li>In line 27, we are dynamically setting the <code>CYPRESS_baseUrl</code> environment variable based on the IP Docker is using at the moment. This will replace <code>http://host.docker.internal:8000/</code>, which may not work in all environments.</li>
<li>In line 28, we finally run the test using <code>start-server-and-test</code>: once the server is ready for connections, Cypress will run the test suite.</li>
</ul>
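<p>The IP-extraction pipeline on line 27 can be tried in isolation. Below is a sketch that runs the same grep/tail chain against sample <code>ip route</code> output (the addresses are made up; real output varies by machine):</p>

```shell
# Sample `ip route` output (hypothetical addresses):
routes='default via 192.168.64.1 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1'

# Same filters as line 27 of the Semaphore config: keep the default and
# docker0 routes, pull out every IPv4 address, take the last one.
echo "$routes" \
  | grep -E '(default|docker0)' \
  | grep -Eo '([0-9]+\.){3}[0-9]+' \
  | tail -1
# → 172.17.0.1 (the host's address on the docker0 bridge)
```

<p>Inside the container, that bridge address is how Cypress reaches the server running on the host, which is why it replaces <code>host.docker.internal</code> here.</p>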
<p>Another milestone is done, time to merge our branch! You can check out the <a target="_blank" href="https://github.com/leonardofaria/cypress-example/pull/6/files">Pull request</a> that contains all the files from this section and check the <a target="_blank" href="https://leonardofaria.semaphoreci.com/workflows/061f6c9f-8f2d-4351-8a25-e5bc1568f67e">build inside Semaphore</a>.</p>
<h2 id="heading-recording-the-tests-in-cypressio">Recording the tests in cypress.io</h2>
<p>Reading the output of tests in CI is not very friendly. In this step, we will integrate our project with <a target="_blank" href="https://www.cypress.io/">cypress.io</a>. </p>
<p>The following steps are based on the <a target="_blank" href="https://docs.cypress.io/guides/dashboard/projects.html#Setup">Cypress documentation</a>.</p>
<p>Let's start by getting a project ID and a record key. In the terminal, create a new branch and run:</p>
<pre><code class="lang-bash">git checkout -b add-cypress-recording
CYPRESS_baseUrl=http://localhost:8000 ./node_modules/.bin/cypress open
</code></pre>
<p>Earlier I mentioned that we would be using Cypress inside Docker. But here we are opening Cypress locally, since this is the only way to integrate with the cypress.io dashboard. </p>
<p>Inside Cypress, let's go to the Runs tab, click on "Set up project to record", and choose a name and visibility. We will get a <code>projectId</code>, which is automatically added to the <code>cypress.json</code> file, and a private record key. Here is a video of the steps:</p>

  


<p>In Semaphore, I added the record key as an environment variable called <code>CYPRESS_recordKey</code>. Next, let's update our test script to use the variable:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"test:ci"</span>: <span class="hljs-string">"start-server-and-test 'serve' 8000 'npm run test -- run --record --key $CYPRESS_recordKey'"</span>
}
</code></pre>
<p>That is pretty much all that needs to be done. In the <a target="_blank" href="https://github.com/leonardofaria/cypress-example/pull/8">Pull request</a> we can see the cypress.io integration in the comments. There is even a deep link that takes us to their dashboard and shows all the screenshots. Check out the video below:</p>

  


<p>Time to merge our work, and that is the end of our integration.</p>
<h2 id="heading-testing-in-real-life">Testing in real life</h2>
<p>Imagine we are working on a change that affects the padding of the buttons: time to test if Cypress will capture the difference. </p>
<p>In the example website, let's double the horizontal padding from 16px to 32px. This change is quite simple since we are using Tailwind CSS: <code>px-4</code> gets replaced by <code>px-8</code>. Here is that <a target="_blank" href="https://github.com/leonardofaria/cypress-example/pull/9">Pull request</a>.</p>
<p>As we might expect, Cypress captured that the button doesn't match the screenshots. Visiting the page, we can check the screenshot of the broken test:</p>

  


<p>The diff file shows the original screenshot on the left, the current result on the right, and they are combined in the middle. We also have the option to download the image so we can see the issue better:</p>
<div class="full-width"><img alt="Button before and after" src="https://leonardofaria.net/wp-content/uploads/2020/08/cypress-io-broken-test.png" width="600" height="400" loading="lazy"></div>

<p>If this is not an issue, update the screenshots:</p>
<pre><code class="lang-bash">CYPRESS_updateSnapshots=<span class="hljs-literal">true</span> npm run <span class="hljs-built_in">test</span>
</code></pre>
<h2 id="heading-the-end">The end</h2>
<p>That's it for today. I hope you have learned how Cypress can be useful to ensure no one is adding unexpected changes to a project. </p>
<p>Also posted on <a target="_blank" href="https://bit.ly/30ncCYj">my blog</a>. If you like this content, follow me on <a target="_blank" href="https://twitter.com/leozera">Twitter</a> and <a target="_blank" href="https://github.com/leonardofaria">GitHub</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ What are Github Actions and How Can You Automate Tests and Slack Notifications? ]]>
                </title>
                <description>
                    <![CDATA[ Automation is a powerful tool. It both saves us time and can help reduce human error.  But automation can be tough and can sometimes prove to be costly. How can Github Actions help harden our code and give us more time to work on features instead of ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/what-are-github-actions-and-how-can-you-automate-tests-and-slack-notifications/</link>
                <guid isPermaLink="false">66b8e39047c23b7ae1ad0bdf</guid>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation testing  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ CI/CD ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous delivery ]]>
                    </category>
                
                    <category>
                        <![CDATA[ continuous deployment ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Git ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ GitHub Actions ]]>
                    </category>
                
                    <category>
                        <![CDATA[ slack ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Software Testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ tech  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Colby Fayock ]]>
                </dc:creator>
                <pubDate>Wed, 03 Jun 2020 14:45:00 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/05/github-actions.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Automation is a powerful tool. It both saves us time and can help reduce human error. </p>
<p>But automation can be tough and can sometimes prove to be costly. How can Github Actions help harden our code and give us more time to work on features instead of bugs?</p>
<ul>
<li><a class="post-section-overview" href="#heading-what-are-github-actions">What are Github Actions?</a></li>
<li><a class="post-section-overview" href="#heading-what-is-cicd">What is CI/CD?</a></li>
<li><a class="post-section-overview" href="#heading-what-are-we-going-to-build">What are we going to build?</a></li>
<li><a class="post-section-overview" href="#heading-part-0-setting-up-a-project">Part 0: Setting up a project</a></li>
<li><a class="post-section-overview" href="#heading-part-1-automating-tests">Part 1: Automating tests</a></li>
<li><a class="post-section-overview" href="#heading-part-2-post-new-pull-requests-to-slack">Part 2: Post new pull requests to Slack</a></li>
</ul>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/1n-jHHNSoTw" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
<h2 id="heading-what-are-github-actions">What are Github Actions?</h2>
<p><a target="_blank" href="https://github.com/features/actions">Actions</a> are a relatively new feature to <a target="_blank" href="https://github.com/">Github</a> that allow you to set up CI/CD workflows using a configuration file right in your Github repo.</p>
<p>Previously, if you wanted to set up any kind of automation with tests, builds, or deployments, you would have to look to services like <a target="_blank" href="https://circleci.com/">Circle CI</a> and <a target="_blank" href="https://travis-ci.org/">Travis</a> or write your own scripts. But with Actions, you have first-class support for powerful tooling to automate your workflow.</p>
<h2 id="heading-what-is-cicd">What is CI/CD?</h2>
<p>CI/CD stands for Continuous Integration and Continuous Deployment (or Continuous Delivery). They're both practices in software development that allow teams to build projects together quickly, efficiently, and ideally with fewer errors.</p>
<p>Continuous Integration is the idea that as different members of the team work on code on different git branches, the code is merged to a single working branch which is then built and tested with automated workflows. This helps to constantly make sure everyone's code is working properly together and is well-tested.</p>
<p>Continuous Deployment takes this a step further and extends the automation to the deployment level. Whereas the CI process automates testing and building, Continuous Deployment automates deploying the project to an environment. </p>
<p>The idea is that the code, once through any building and testing processes, is in a deployable state, so it should be able to be deployed.</p>
<h2 id="heading-what-are-we-going-to-build">What are we going to build?</h2>
<p>We're going to tackle two different workflows.</p>
<p>The first will be to simply run some automated tests that will prevent a pull request from being merged if it is failing. We won't walk through building the tests, but we'll walk through running tests that already exist.</p>
<p>In the second part, we'll set up a workflow that sends a message to Slack with a link to a pull request whenever a new one is created. This can be super helpful when you're working on open source projects with a team and need a way to keep track of requests.</p>
<h2 id="heading-part-0-setting-up-a-project">Part 0: Setting up a project</h2>
<p>For this guide, you can really work through any node-based project as long as it has tests you can run for Part 1.</p>
<p>If you'd like to follow along with a simpler example that I'll be using, I've set up a new project that you can clone with a single function that has two tests that are able to run and pass.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/function-with-test.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>A function with two tests</em></p>
<p>If you'd like to check out this code to get started, you can run:</p>
<pre><code class="lang-shell">git clone --single-branch --branch start git@github.com:colbyfayock/my-github-actions.git
</code></pre>
<p>Once you have that cloned locally and have installed the dependencies, you should be able to run the tests and see them pass!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/passing-tests.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Passing tests</em></p>
<p>It should also be noted that you'll need to add this project as a new repository on Github in order to follow along.</p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/6919b1b9beea4823fd28375f1864d233e23f2d26">Follow along with the commit!</a></p>
<h2 id="heading-part-1-automating-tests">Part 1: Automating tests</h2>
<p>Tests are an important part of any project that allow us to make sure we're not breaking existing code while we work. While they're important, they're also easy to forget about.</p>
<p>We can remove the human nature out of the equation and automate running our tests to make sure we can't proceed without fixing what we broke.</p>
<h3 id="heading-step-1-creating-a-new-action">Step 1: Creating a new action</h3>
<p>The good news is that Github actually makes it really easy to get this workflow started, as it comes as one of their pre-baked options.</p>
<p>We'll start by navigating to the <strong>Actions</strong> tab on our repository page.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-actions-dashboard.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github Actions starting page</em></p>
<p>Once there, we'll immediately see some starter workflows that Github provides for us to dive in with. Since we're using a node project, we can go ahead and click <strong>Set up this workflow</strong> under the <strong>Node.js</strong> workflow.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-new-nodejs-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a Node.js Github Action workflow</em></p>
<p>After the page loads, Github will land you on a new file editor that already has a bunch of configuration options added.</p>
<p>We're actually going to leave this "as is" for our first step. Optionally, you can change the name of the file to <code>tests.yml</code> or something you'll remember.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-create-new-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Adding a new Github Action workflow file</em></p>
<p>You can go ahead and click <strong>Start commit</strong> then either commit it directly to the <code>master</code> branch or add the change to a new branch. For this walkthrough, I'll be committing straight to <code>master</code>.</p>
<p>To see our new action run, we can again click on the <strong>Actions</strong> tab which will navigate us back to our new Actions dashboard.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-status.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Viewing Github Action workflow events</em></p>
<p>From there, you can click on <strong>Node.js CI</strong> and select the commit that you just made above and you'll land on our new action dashboard. You can then click one of the node versions in the sidebar via <strong>build (#.x)</strong>, click the <strong>Run npm test</strong> dropdown, and we'll be able to see the output of our tests being run (which if you're following along with me, should pass!).</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-logs.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Viewing logs of a Github Action workflow</em></p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/10e397966572ed9975cac40f6ab5f41c1255a947">Follow along with the commit!</a></p>
<h3 id="heading-step-2-configuring-our-new-action">Step 2: Configuring our new action</h3>
<p>So what did we just do above? We'll walk through the configuration file and what we can customize.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-file.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github Action Node.js workflow file</em></p>
<p>Starting from the top, we specify our name:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Node.js</span> <span class="hljs-string">CI</span>
</code></pre>
<p>This can be whatever you want, but whatever you pick should help you remember what the workflow does. I'm going to change it to "Tests" so I know exactly what's going on.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]
</code></pre>
<p>The <code>on</code> key is how we specify what events trigger our action. This can be a variety of things, like a time-based schedule with <a target="_blank" href="https://en.wikipedia.org/wiki/Cron">cron</a>. But here, we're saying that we want this action to run any time someone pushes commits to <code>master</code> or creates a pull request targeting the <code>master</code> branch. We're not going to make a change here.</p>
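<p>For illustration, a time-based trigger would look like this (a hypothetical variant; the rest of the workflow stays the same):</p>

```yaml
# Hypothetical schedule trigger: run the workflow every weekday at 06:00 UTC
# in addition to (or instead of) the push/pull_request events above.
on:
  schedule:
    - cron: '0 6 * * 1-5'
```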
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
</code></pre>
<p>This next bit creates a new job called <code>build</code>. Here we're saying that we want to use the latest version of Ubuntu to run our tests on. <a target="_blank" href="https://ubuntu.com/">Ubuntu</a> is common, so you'll only want to customize this if you want to run it on a specific environment.</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">strategy:</span>
      <span class="hljs-attr">matrix:</span>
        <span class="hljs-attr">node-version:</span> [<span class="hljs-number">10.</span><span class="hljs-string">x</span>, <span class="hljs-number">12.</span><span class="hljs-string">x</span>, <span class="hljs-number">14.</span><span class="hljs-string">x</span>]
</code></pre>
<p>Inside of our job we specify a <a target="_blank" href="https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategy">strategy</a> matrix. This allows us to run the same tests on a few different variations. </p>
<p>In this instance, we're running the tests on 3 different versions of <a target="_blank" href="https://nodejs.org/en/">node</a> to make sure it works on all of them. This is definitely helpful to make sure your code is flexible and future-proof, but if you're building and running your code on a specific node version, you're safe to change this to only that version.</p>
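<p>For example, if you only run Node 12 in production, the matrix could be trimmed to a single entry (a sketch; keep whichever version matches your runtime):</p>

```yaml
strategy:
  matrix:
    node-version: [12.x]
```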
<pre><code class="lang-yaml">    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Use</span> <span class="hljs-string">Node.js</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v1</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">node-version:</span> <span class="hljs-string">${{</span> <span class="hljs-string">matrix.node-version</span> <span class="hljs-string">}}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span> <span class="hljs-string">--if-present</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">test</span>
</code></pre>
<p>Finally, we specify the steps we want our job to run. Breaking this down:</p>
<ul>
<li><code>uses: actions/checkout@v2</code>: In order for us to run our code, we need to have it available. This checks out our code on our job environment so we can use it to run tests.</li>
<li><code>uses: actions/setup-node@v1</code>: Since we're using node with our project, we'll need it set up on our environment. We're using this action to do that setup  for us for each version we've specified in the matrix we configured above.</li>
<li><code>run: npm ci</code>: If you're not familiar with <code>npm ci</code>, it's similar to running <code>npm install</code>, but it installs the exact versions pinned in the <code>package-lock.json</code> file rather than resolving version ranges. So essentially, this installs our dependencies.</li>
<li><code>run: npm run build --if-present</code>: <code>npm run build</code> runs the build script in our project. The <code>--if-present</code> flag performs what it sounds like and only runs this command if the build script is present. It doesn't hurt anything to leave this in as it won't run without the script, but feel free to remove this as we're not building the project here.</li>
<li><code>run: npm test</code>: Finally, we run <code>npm test</code> to run our tests. This uses the <code>test</code> npm script set up in our <code>package.json</code> file.</li>
</ul>
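<p>For reference, the <code>build</code> and <code>test</code> scripts that these commands invoke live in the <code>scripts</code> section of <code>package.json</code>. A minimal sketch (the specific test runner and build tool here are just assumptions for illustration, not taken from the project above):</p>
<pre><code class="lang-json">{
  "scripts": {
    "build": "webpack",
    "test": "jest"
  }
}
</code></pre>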
<p>And with that, we've made a few tweaks. Once we commit those changes, our tests should run automatically and pass just like before!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-action-workflow-logs-npm-test.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Logs of passing tests in Github Action workflow</em></p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/087cd8e8592d1f2b520b6e44b70b0a242a9d2d72">Follow along with the commit!</a></p>
<h3 id="heading-step-3-testing-that-our-tests-fail-and-prevent-merges">Step 3: Testing that our tests fail and prevent merges</h3>
<p>Now that our tests are set up to automatically run, let's try to break it to see it work.</p>
<p>At this point, you can really do whatever you want to intentionally break the tests, but <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/1">here's what I did</a>:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/bad-changes-code-diff.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Code diff - https://github.com/colbyfayock/my-github-actions/pull/1</em></p>
<p>I'm intentionally returning different expected output so that my tests will fail. And they do!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-failing-checks.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Failing status checks on pull request</em></p>
<p>In my new pull request, my new branch breaks the tests, so Github tells me my checks have failed. If you noticed, though, the merge button is still green. So how can we prevent merges?</p>
<p>We can prevent pull requests from being merged by setting up a Protected Branch in our project settings.</p>
<p>First, navigate to <strong>Settings</strong>, then <strong>Branches</strong>, and click <strong>Add rule</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-add-protected-branch.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github branch protection rules</em></p>
<p>We'll then want to set the branch name pattern to <code>*</code>, which matches all branches, check the <strong>Require status checks to pass before merging</strong> option, then select all of the status checks that we'd like to require to pass before merging.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-configure-protected-branch.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a branch protection rule in Github</em></p>
<p>Finally, hit <strong>Create</strong> at the bottom of the page.</p>
<p>And once you navigate back to the pull request, you'll notice that the messaging is a bit different and states that we need our statuses to pass before we can merge.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-failing-checks-cant-merge.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Failing tests preventing merge in pull request</em></p>
<p><em>Note: as an administrator of a repository, you'll still be able to merge, so this technically only prevents non-administrators from merging. But it will still give you clear messaging if the tests fail.</em></p>
<p>And with that, we have a new Github Action that runs our tests and prevents pull requests from merging if they fail.</p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/1">Follow along with the pull request!</a></p>
<p><em>Note: we won't be merging that pull request before continuing to Part 2.</em></p>
<h2 id="heading-part-2-post-new-pull-requests-to-slack">Part 2: Post new pull requests to Slack</h2>
<p>Now that we're preventing merge requests if they're failing, we want to post a message to our <a target="_blank" href="http://slack.com/">Slack</a> workspace whenever a new pull request is opened up. This will help us keep tabs on our repos right in Slack.</p>
<p>For this part of the guide, you'll need a Slack workspace that you have permissions to create a new developer app with and the ability to create a new channel for the bot user that will be associated with that app.</p>
<h3 id="heading-step-1-setting-up-slack">Step 1: Setting up Slack</h3>
<p>There are a few things we're going to walk through as we set up Slack for our workflow:</p>
<ul>
<li>Create a new app for our workspace</li>
<li>Assign our bot permissions</li>
<li>Install our bot to our workspace</li>
<li>Invite our new bot to our channel</li>
</ul>
<p>To get started, we'll create a new app. Head over to the <a target="_blank" href="https://api.slack.com/apps">Slack API Apps dashboard</a>. If you haven't already, log in to your Slack account with the workspace you'd like to set this up with.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-create-new-app.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Creating a new Slack app</em></p>
<p>Now, click <strong>Create New App</strong> where you'll be prompted to put in a name and select a workspace you want this app to be created for. I'm going to call my app "Gitbot" as the name, but you can choose whatever makes sense for you. Then click <strong>Create App</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-add-name-new-app.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Configuring a new Slack app</em></p>
<p>Once created, navigate to the <strong>App Home</strong> link in the left sidebar. In order to use our bot, we need to assign it <a target="_blank" href="https://oauth.net/">OAuth</a> scopes that give it permission to work in our channel, so select <strong>Review Scopes to Add</strong> on that page.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-review-scopes.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Reviewing Slack app scopes</em></p>
<p>Scroll down and you'll see a <strong>Scopes</strong> section and under that a <strong>Bot Token</strong> section. Here, click <strong>Add an OAuth Scope</strong>. For our bot, we don't need a ton of permissions, so add the <code>channels:join</code> and <code>chat:write</code> scopes and we should be good to go.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-add-scopes.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Adding scopes for a Slack app Bot Token</em></p>
<p>Now that we have our scopes, let's add our bot to our workspace. Scroll up on that same page to the top and you'll see a button that says <strong>Install App to Workspace</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-install-app-to-workspace.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Installing Slack app to a workspace</em></p>
<p>Once you click this, you'll be redirected to an authorization page. Here, you can see the scopes we selected for our bot. Next, click <strong>Allow</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-allow-workspace-permissions.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Allowing permission for Slack app to be installed to workspace</em></p>
<p>At this point, our Slack bot is ready to go. At the top of the <strong>OAuth &amp; Permissions</strong> page, you'll see a <strong>Bot User OAuth Access Token</strong>. This is what we'll use when setting up our workflow, so either copy and save this token or remember this location so you know how to find it later.</p>
<p><em>Note: this token is private - don't give this out, show it in a screencast, or let anyone see it!</em></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-oauth-token.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Copying OAuth Access Token for Slack bot user</em></p>
<p>Finally, we need to invite our Slack bot to our channel. If you open up your workspace, you can either use an existing channel or create a new channel for these notifications, but you'll want to enter the command <code>/invite @[botname]</code> which will invite our bot to our channel.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-invite-bot-to-channel.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Inviting Slack bot user to channel</em></p>
<p>And once added, we're done with setting up Slack!</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-app-bot-joined-channel.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Slack bot was added to channel</em></p>
<h3 id="heading-create-a-github-action-to-notify-slack">Create a Github Action to notify Slack</h3>
<p>Our next step will be somewhat similar to when we created our first Github Action. We'll create a workflow file which we'll configure to send our notifications.</p>
<p>While we can use our code editors to do this by creating a file in the <code>.github</code> directory, I'm going to use the Github UI.</p>
<p>First, let's navigate back to our <em>Actions</em> tab in our repository. Once there, select <strong>New workflow</strong>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-new-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a new Github Action workflow</em></p>
<p>This time, we're going to set up the workflow from scratch instead of using a pre-made Action. Select <strong>set up a workflow yourself</strong> at the top.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-set-up-new-workflow.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Setting up a Github Action workflow manually</em></p>
<p>Once the new page loads, you'll be dropped into a new template where we can start working. Here's what our new workflow will look like:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Slack</span> <span class="hljs-string">Notifications</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">notifySlack:</span>

    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Notify</span> <span class="hljs-string">slack</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-attr">SLACK_BOT_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.SLACK_BOT_TOKEN</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">abinoda/slack-action@master</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">args:</span> <span class="hljs-string">'{\"channel\":\"[Channel ID]\",\"blocks\":[{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"*Pull Request:* $<span class="hljs-template-variable">{{ github.event.pull_request.title }}</span>\"}},{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"*Who?:* $<span class="hljs-template-variable">{{ github.event.pull_request.user.login }}</span>\n*Request State:* $<span class="hljs-template-variable">{{ github.event.pull_request.state }}</span>\"}},{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"&lt;$<span class="hljs-template-variable">{{ github.event.pull_request.html_url }}</span>|View Pull Request&gt;\"}}]}'</span>
</code></pre>
<p>So what's happening in the above?</p>
<ul>
<li><code>name</code>: we're setting a friendly name for our workflow</li>
<li><code>on</code>: we want our workflow to trigger whenever a pull request is created that targets our <code>master</code> branch</li>
<li><code>jobs</code>: we're creating a new job called <code>notifySlack</code></li>
<li><code>jobs.notifySlack.runs-on</code>: we want our job to run on a basic setup of the latest Ubuntu</li>
<li><code>jobs.notifySlack.steps</code>: we really only have one step here - we're using a pre-existing Github Action called <a target="_blank" href="https://github.com/marketplace/actions/post-slack-message">Slack Action</a> and we're configuring it to publish a notification to our Slack</li>
</ul>
<p>There are two points here we'll need to pay attention to, the <code>env.SLACK_BOT_TOKEN</code> and the <code>with.args</code>.</p>
<p>In order for Github to communicate with Slack, we'll need a token. This is what we're setting in <code>env.SLACK_BOT_TOKEN</code>. We generated this token in the first step. Now that we'll be using this in our workflow configuration, we'll need to <a target="_blank" href="https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#creating-encrypted-secrets-for-a-repository">add it as a Git Secret in our project</a>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-slack-token-secret.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Github secrets including SLACK_BOT_TOKEN</em></p>
<p>The  <code>with.args</code> property is what we use to configure the payload to the Slack API that includes the channel ID (<code>channel</code>) and our actual message (<code>blocks</code>).</p>
<p>The payload in the arguments is stringified and escaped. For example, when expanded it looks like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"channel"</span>: <span class="hljs-string">"[Channel ID]"</span>,
  <span class="hljs-attr">"blocks"</span>: [{
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"section"</span>,
    <span class="hljs-attr">"text"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"mrkdwn"</span>,
      <span class="hljs-attr">"text"</span>: <span class="hljs-string">"*Pull Request:* ${{ github.event.pull_request.title }}"</span>
    }
  }, {
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"section"</span>,
    <span class="hljs-attr">"text"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"mrkdwn"</span>,
      <span class="hljs-attr">"text"</span>: <span class="hljs-string">"*Who?:*n${{ github.event.pull_request.user.login }}n*State:*n${{ github.event.pull_request.state }}"</span>
    }
  }, {
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"section"</span>,
    <span class="hljs-attr">"text"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"mrkdwn"</span>,
      <span class="hljs-attr">"text"</span>: <span class="hljs-string">"&lt;${{ github.event.pull_request._links.html.href }}|View Pull Request&gt;"</span>
    }
  }]
}
</code></pre>
<p><em>Note: this is just to show what the content looks like. In the workflow file itself, we need to use the stringified and escaped argument.</em></p>
<p>Back to our configuration file, the first thing we set is our channel ID. To find our channel ID, you'll need to use the Slack web interface. Once you open Slack in your browser, you want to find your channel ID in the URL:</p>
<pre><code>https:<span class="hljs-comment">//app.slack.com/client/[workspace ID]/[channel ID]</span>
</code></pre><p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-web-channel-id.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Channel ID in Slack web app URL</em></p>
<p>With that channel ID, you can modify our workflow configuration and replace <code>[Channel ID]</code> with that ID:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">with:</span>
  <span class="hljs-attr">args:</span> <span class="hljs-string">'{\"channel\":\"C014RMKG6H2\",...</span>
</code></pre>
<p>The rest of the arguments property is how we set up our message. It includes variables from the Github event that we use to customize our message. </p>
<p>We won't go into tweaking that here, as what we already have will send a basic pull request message, but you can test out and build your own payload with Slack's <a target="_blank" href="https://app.slack.com/block-kit-builder/">Block Kit Builder</a>.</p>
<p><a target="_blank" href="https://github.com/colbyfayock/my-github-actions/commit/e228b9899ef3da218d1a100d06a72259d45ea19e">Follow along with the commit!</a></p>
<h3 id="heading-test-out-our-slack-workflow">Test out our Slack workflow</h3>
<p>Now that we have our workflow configured with our Slack app, we're finally ready to use our bot!</p>
<p>For this part, all we need to do is create a new pull request with any change we want. To test this out, I simply <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/2">created a new branch</a> where I added a sentence to the <code>README.md</code> file.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/github-test-pull-request.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Code diff - <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/2">https://github.com/colbyfayock/my-github-actions/pull/2</a></em></p>
<p>Once you <a target="_blank" href="https://github.com/colbyfayock/my-github-actions/pull/2">create that pull request</a>, similar to our tests workflow, Github will run our Slack workflow! You can see this running in the Actions tab just like before.</p>
<p>As long as you set everything up correctly, once the workflow runs, you should now have a new message in Slack from your new bot.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/05/slack-github-notification.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Slack bot automated message about new pull request</em></p>
<p><em>Note: we won't be merging that pull request in.</em></p>
<h2 id="heading-what-else-can-we-do">What else can we do?</h2>
<h3 id="heading-customize-your-slack-notifications">Customize your Slack notifications</h3>
<p>The message I put together is simple. It tells us who created the pull request and gives us a link to it.</p>
<p>To customize the formatting and messaging, you can use Slack's <a target="_blank" href="https://app.slack.com/block-kit-builder/">Block Kit Builder</a> to create your own.</p>
<p>If you'd like to include additional details like the variables I used for the pull request, you can make use of Github's available <a target="_blank" href="https://help.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#contexts">contexts</a>. This lets you pull information about the environment and the job to customize your message.</p>
<p>I couldn't seem to find any sample payloads, so here's an example of the <code>github</code> context payload you would expect in the event.</p>
<p><a target="_blank" href="https://gist.github.com/colbyfayock/1710edb9f47ceda0569844f791403e7e">Sample github context</a></p>
<h3 id="heading-more-github-actions">More Github actions</h3>
<p>With our ability to create new custom workflows, there's not a lot we can't automate. Github even has a <a target="_blank" href="https://github.com/marketplace?type=actions">marketplace</a> where you can browse existing Actions.</p>
<p>If you're feeling like taking it a step further, you can even create your own! This lets you set up scripts to configure a workflow to perform whatever tasks you need for your project.</p>
<h2 id="heading-join-in-the-conversation">Join in the conversation!</h2>
<div class="embed-wrapper">
        <blockquote class="twitter-tweet">
          <a href="https://twitter.com/colbyfayock/status/1268197100539514881"></a>
        </blockquote>
        <script defer="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></div>
<h2 id="heading-what-do-you-use-github-actions-for">What do you use Github actions for?</h2>
<p>Share with me on <a target="_blank" href="https://twitter.com/colbyfayock">Twitter</a>!</p>
<div id="colbyfayock-author-card">
  <p>
    <a href="https://twitter.com/colbyfayock">
      <img src="https://res.cloudinary.com/fay/image/upload/w_2000,h_400,c_fill,q_auto,f_auto/w_1020,c_fit,co_rgb:007079,g_north_west,x_635,y_70,l_text:Source%20Sans%20Pro_64_line_spacing_-10_bold:Colby%20Fayock/w_1020,c_fit,co_rgb:383f43,g_west,x_635,y_6,l_text:Source%20Sans%20Pro_44_line_spacing_0_normal:Follow%20me%20for%20more%20JavaScript%252c%20UX%252c%20and%20other%20interesting%20things!/w_1020,c_fit,co_rgb:007079,g_south_west,x_635,y_70,l_text:Source%20Sans%20Pro_40_line_spacing_-10_semibold:colbyfayock.com/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_68,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_145,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_222,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/w_300,c_fit,co_rgb:7c848a,g_north_west,x_1725,y_295,l_text:Source%20Sans%20Pro_40_line_spacing_-10_normal:colbyfayock/v1/social-footer-card" alt="Follow me for more Javascript, UX, and other interesting things!" width="2000" height="400" loading="lazy">
    </a>
  </p>
  <ul>
    <li>
      <a href="https://twitter.com/colbyfayock">? Follow Me On Twitter</a>
    </li>
    <li>
      <a href="https://youtube.com/colbyfayock">?️ Subscribe To My Youtube</a>
    </li>
    <li>
      <a href="https://www.colbyfayock.com/newsletter/">✉️ Sign Up For My Newsletter</a>
    </li>
  </ul>
</div>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ The Best Tools to Help You Build Your Open-Source JavaScript Project ]]>
                </title>
                <description>
                    <![CDATA[ By Tyler Hawkins I recently published a package on npm: a data structures and algorithms library implemented in JavaScript. The purpose of the project is to help others learn and understand data structures and algorithms from a JavaScript perspective... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/effective-tools-for-your-open-source-javascript-project/</link>
                <guid isPermaLink="false">66d46177d14641365a050987</guid>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Developer Tools ]]>
                    </category>
                
                    <category>
                        <![CDATA[ JavaScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ npm ]]>
                    </category>
                
                    <category>
                        <![CDATA[ open source ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 10 Mar 2020 16:53:28 +0000</pubDate>
                <media:content url="https://cdn-media-2.freecodecamp.org/w1280/5f9c9c35740569d1a4ca30ac.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Tyler Hawkins</p>
<p>I recently published a package on npm: a data structures and algorithms library implemented in JavaScript.</p>
<p>The purpose of the project is to help others learn and understand data structures and algorithms from a JavaScript perspective. </p>
<p>Rather than containing only snippets of code with accompanying explanations, the project is meant to provide an eager learner with fully working code, good test cases, and a playground full of examples.</p>
<p>If you’re interested, the project can be found on npm <a target="_blank" href="https://www.npmjs.com/package/js-data-structures-and-algorithms">here</a>.</p>
<p>But, rather than talking about the project itself, what I want to write about today are all the neat tools I learned about and used while creating the project. </p>
<p>I’ve worked on tons of side projects and demos over the last six years, but all of them are very visibly just "pet projects". They in no way have the qualities that’d make them look professional or production-ready.</p>
<p>What I set out to create was something that could be considered a respectable open-source package. To do that, I decided my project would need proper documentation, tooling, linting, continuous integration, and unit tests.</p>
<p>Below are some of the tools I used. Each one serves a unique purpose. I’ve linked to the documentation for each package so you, too, can start utilizing these tools in projects of your own.</p>
<p><strong>Note</strong>: This article assumes that you are already familiar with the process of creating a simple JavaScript package and publishing it on npm. </p>
<p>If not, the npm team has some <a target="_blank" href="https://docs.npmjs.com/creating-and-publishing-unscoped-public-packages">great documentation on getting started</a> that will walk you through the initialization of a project and the steps for publishing.</p>
<p>So let's get started.</p>
<h1 id="heading-prettier">Prettier</h1>
<p>Prettier is an opinionated code formatter that automatically formats your code for you. Rather than simply using ESLint to enforce whatever formatting standards your team has agreed on, Prettier can take care of the formatting for you. </p>
<p>No more worrying about fixing your indentation and line widths! I’m using this specifically for my JavaScript, but it can handle many different languages.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/prettier.png" alt="Image" width="600" height="400" loading="lazy">
<em>Sample JavaScript before and after running Prettier</em></p>
<p>You can check out the Prettier docs here: <a target="_blank" href="https://github.com/prettier/prettier">https://github.com/prettier/prettier</a></p>
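<p>Prettier works with zero configuration, but if you'd like to override a few defaults you can add a <code>.prettierrc</code> file to your project root. A small sketch (these particular options are just common choices, not requirements):</p>
<pre><code class="lang-json">{
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 80
}
</code></pre>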
<h1 id="heading-stylelint">stylelint</h1>
<p>stylelint lints your CSS and can automatically fix many issues for you. Similar to Prettier, this tool helps you keep your CSS clean while taking care of the heavy lifting.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/02-stylelint.png" alt="Image" width="600" height="400" loading="lazy">
<em>Sample output from running stylelint</em></p>
<p>You can check out the stylelint docs here: <a target="_blank" href="https://github.com/stylelint/stylelint">https://github.com/stylelint/stylelint</a></p>
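<p>A typical way to configure stylelint is a <code>.stylelintrc.json</code> file that extends a shared ruleset, such as the community <code>stylelint-config-standard</code> package (which you'd install separately). A minimal sketch:</p>
<pre><code class="lang-json">{
  "extends": "stylelint-config-standard"
}
</code></pre>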
<h1 id="heading-eslint">ESLint</h1>
<p>ESLint handles all my other JavaScript linting for catching syntax errors and enforcing best practices.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.10.38-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>Sample output from linting with ESLint in their playground environment</em></p>
<p>You can check out the ESLint docs here: <a target="_blank" href="https://eslint.org/">https://eslint.org/</a></p>
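<p>ESLint is configured through a file like <code>.eslintrc.json</code>. Here's a minimal sketch that enables the recommended rule set (the <code>env</code> and <code>parserOptions</code> values are just one plausible setup for a Node project):</p>
<pre><code class="lang-json">{
  "extends": "eslint:recommended",
  "env": {
    "node": true,
    "es6": true
  },
  "parserOptions": {
    "ecmaVersion": 2018
  }
}
</code></pre>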
<h1 id="heading-commitizen">Commitizen</h1>
<p>Commitizen is a CLI tool that walks you through writing your commit messages. It generates the commit message for you based on your input and ensures that the resulting commit message follows the Conventional Commits standard.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/04-comitizen.png" alt="Image" width="600" height="400" loading="lazy">
<em>Commitizen command line interface when creating a new commit</em></p>
<p>You can check out the Commitizen docs here: <a target="_blank" href="https://github.com/commitizen/cz-cli">https://github.com/commitizen/cz-cli</a></p>
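<p>To hook Commitizen up to the standard Conventional Commits prompts, you typically install the <code>cz-conventional-changelog</code> adapter and point to it from <code>package.json</code> (a separate <code>.czrc</code> file works too):</p>
<pre><code class="lang-json">{
  "config": {
    "commitizen": {
      "path": "cz-conventional-changelog"
    }
  }
}
</code></pre>
<p>With that in place, running <code>git cz</code> (or an npm script that calls <code>cz</code>) launches the interactive prompts.</p>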
<h1 id="heading-commitlint">commitlint</h1>
<p>commitlint verifies that your commit messages follow the Conventional Commits standard. As long as you use Commitizen to create your commit messages, you won’t run into any problems. </p>
<p>The real benefit of using commitlint is to catch commits that developers wrote on their own that don’t follow your formatting standards.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/05-commitlint.svg" alt="Image" width="600" height="400" loading="lazy">
<em>commitlint demo to show possible error messages</em></p>
<p>You can check out the commitlint docs here: <a target="_blank" href="https://github.com/conventional-changelog/commitlint">https://github.com/conventional-changelog/commitlint</a></p>
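<p>commitlint reads its rules from a <code>commitlint.config.js</code> file. The typical setup simply extends the shared conventional config, installed as <code>@commitlint/config-conventional</code>:</p>
<pre><code class="lang-javascript">module.exports = {
  extends: ['@commitlint/config-conventional']
};
</code></pre>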
<h1 id="heading-lint-staged">lint-staged</h1>
<p>lint-staged runs linters against code that you’re trying to commit. This is where you can validate that your code is passing the standards being enforced by Prettier, stylelint, and ESLint.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/06-lint-staged-prettier.gif" alt="Image" width="600" height="400" loading="lazy">
<em>lint-staged example that runs ESLint on checked-in code</em></p>
<p>You can check out the lint-staged docs here: <a target="_blank" href="https://github.com/okonet/lint-staged">https://github.com/okonet/lint-staged</a></p>
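<p>lint-staged is configured with glob patterns that map staged files to the commands to run on them, for example in <code>package.json</code>. The exact commands below are just one plausible combination of the tools covered above:</p>
<pre><code class="lang-json">{
  "lint-staged": {
    "*.js": ["prettier --write", "eslint --fix"],
    "*.css": ["stylelint --fix"]
  }
}
</code></pre>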
<h1 id="heading-husky">Husky</h1>
<p>Husky makes it easy to run Git hooks.</p>
<p>All the previously mentioned tools can be run through Husky on Git hooks like <code>pre-commit</code> or <code>commit-msg</code>, so this is where the magic happens.</p>
<p>For instance, I’m running lint-staged and my unit tests during the <code>pre-commit</code> hook, and I’m running commitlint during the <code>commit-msg</code> hook. That means when I’m trying to check in my code, Husky does all the validation to make sure I’m abiding by all the rules I’m enforcing in my project.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.21.17-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>Sample Husky configuration that runs on the pre-commit and commit-msg Git hooks</em></p>
<p>You can check out the Husky docs here: <a target="_blank" href="https://github.com/typicode/husky">https://github.com/typicode/husky</a></p>
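<p>With the Husky version current when this was written (v4), hooks are declared in <code>package.json</code>. A sketch of the setup described above, using commitlint's documented Husky invocation:</p>
<pre><code class="lang-json">{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged &amp;&amp; npm test",
      "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
    }
  }
}
</code></pre>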
<h1 id="heading-rollup">Rollup</h1>
<p>Rollup is a module bundler for JavaScript. It takes all of your source code and bundles it into the files you actually want to distribute as part of your package.</p>
<p>The conventional wisdom seems to be that if you’re building a web application, you should use webpack, and if you’re building a library, you should use Rollup.</p>
<p>In my case, I was building a data structures and algorithms library, so I chose to use Rollup. One benefit seems to be that the output Rollup generates is significantly smaller than what webpack outputs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.24.23-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>A very minimal Rollup config that creates an output bundle in the CommonJS format</em></p>
<p>You can check out the Rollup docs here: <a target="_blank" href="https://rollupjs.org/guide/en/">https://rollupjs.org/guide/en/</a></p>
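<p>A minimal <code>rollup.config.js</code> along the lines of the one in the screenshot might look like this (the file paths here are assumptions):</p>
<pre><code class="lang-javascript">export default {
  input: 'src/index.js',
  output: {
    file: 'dist/main.js',
    format: 'cjs'
  }
};
</code></pre>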
<h1 id="heading-standard-version">Standard Version</h1>
<p>Standard Version helps automate your versioning and changelog generation.</p>
<p>Previously, I mentioned tools like Commitizen and commitlint for formatting your commits according to the Conventional Commits standard. Why, you may ask, is that helpful?</p>
<p>The answer, at least in part, is that by using a consistent commit message format, you can use tools that are able to understand what kind of changes your commits are making.</p>
<p>For example, are you fixing bugs? Adding new features? Making breaking changes people consuming your library should be aware of? Standard Version is able to understand your commit messages and then generate a changelog for you. </p>
<p>It’s also able to intelligently bump the version of your package according to the semantic versioning standard (major, minor, patch).</p>
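<p>In Conventional Commits terms, those kinds of changes map to commit messages like these (illustrative examples, with the resulting semver bump noted after each):</p>
<pre><code>fix: handle empty input in binary search    --> patch release
feat: add priority queue implementation     --> minor release
feat!: rename the LinkedList public API     --> major release (breaking change)
</code></pre>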
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.27.32-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>Sample Standard Version pre-release script that runs before version bumps</em></p>
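<p>Wiring Standard Version into your release flow can be as simple as an npm script (a sketch; the script name is illustrative):</p>
<pre><code>{
  "scripts": {
    "release": "standard-version"
  }
}
</code></pre>
<p>Running <code>npm run release</code> then bumps the version, updates the CHANGELOG, and creates a release commit and tag based on your Conventional Commits history.</p>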
<p>You can check out the Standard Version docs here: <a target="_blank" href="https://github.com/conventional-changelog/standard-version">https://github.com/conventional-changelog/standard-version</a></p>
<h1 id="heading-travis-ci">Travis CI</h1>
<p>Travis CI is a continuous-integration (CI) tool that can be integrated with GitHub, where my code happens to be hosted.</p>
<p>CI tools are important because they allow your commits to be tested yet again before you merge them into your master branch. You could argue that using Travis CI alongside a tool like Husky duplicates functionality, but it’s important to keep in mind that even Husky can be bypassed by passing a <code>--no-verify</code> flag to your commit command.</p>
<p>Through GitHub, you can specify that your Travis CI jobs must be passing before code can be merged, so this adds one more layer of protection and verifies that only passing code makes it into your repo.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.29.33-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>Travis CI output from a passing build</em></p>
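<p>A bare-bones <code>.travis.yml</code> for a project like this might look something like the following (a sketch; your Node version and npm scripts may differ):</p>
<pre><code>language: node_js
node_js:
  - 12
script:
  - npm run lint
  - npm test
</code></pre>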
<p>You can check out the Travis CI docs here: <a target="_blank" href="https://docs.travis-ci.com/">https://docs.travis-ci.com/</a></p>
<h1 id="heading-codecov">Codecov</h1>
<p>Codecov is a code coverage tool that plugs into your CI pipeline and tracks your project’s code coverage over time.</p>
<p>I’m writing JavaScript unit tests using Jest. Part of my Travis CI job runs my test suite and ensures the tests all pass. It also pipes the code coverage output to Codecov, which can then verify whether my code coverage is slipping or staying high. Codecov can also be used in conjunction with GitHub badges, which we’ll talk about next.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.31.34-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>Codecov dashboard (look at that beautiful 100% code coverage!)</em></p>
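<p>Uploading coverage from CI is typically a one-liner added after the test run. A sketch of what that might look like in <code>.travis.yml</code>, using the <code>codecov</code> npm package (other uploaders exist; your setup may differ):</p>
<pre><code>script:
  - npm test -- --coverage
after_success:
  - npx codecov
</code></pre>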
<p>You can check out the Codecov docs here: <a target="_blank" href="https://docs.codecov.io/docs">https://docs.codecov.io/docs</a></p>
<h1 id="heading-badges">Badges</h1>
<p>Have you ever looked at a project in GitHub and seen little badges near the top of the README? Things like whether the build is passing, what the code coverage is, and what the latest version of the npm package is can all be shown using badges.</p>
<p>They’re relatively simple to add, but I think they add a really nice touch to any project. <a target="_blank" href="http://shields.io/">Shields.io</a> is a great resource for finding lots of different badges that can be added to your project, and it helps you generate the markdown to include in your README.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.33.10-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>GitHub badges for my js-data-structures-and-algorithms npm package</em></p>
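<p>The badge markdown itself is just an image wrapped in a link. An illustrative example using Shields.io (swap in your own user, repo, and package names):</p>
<pre><code>[![Build Status](https://img.shields.io/travis/user/repo.svg)](https://travis-ci.org/user/repo)
[![npm version](https://img.shields.io/npm/v/package-name.svg)](https://www.npmjs.com/package/package-name)
</code></pre>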
<p>You can check out the Shields.io docs here: <a target="_blank" href="https://shields.io/">https://shields.io/</a></p>
<h1 id="heading-documentation">Documentation</h1>
<p>A little documentation goes a long way. In my project, I’ve added a README, CHANGELOG, contributing guidelines, code of conduct, and a license.</p>
<p>These docs serve to help people know what your project is, how to use it, what changes have been made with each release, how to contribute if they want to get involved, how they’re expected to interact with other members of the community, and what the legal terms are.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.35.02-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>The CHANGELOG for my js-data-structures-and-algorithms npm package</em></p>
<p>You can check out the documentation for my project here: <a target="_blank" href="https://github.com/thawkin3/js-data-structures-and-algorithms">https://github.com/thawkin3/js-data-structures-and-algorithms</a></p>
<h1 id="heading-github-templates">GitHub Templates</h1>
<p>Did you know you can create templates in GitHub for things like bug reports, feature requests, and pull requests? Creating these templates makes it crystal clear, for example, what information someone is expected to provide when filing a bug report.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/03/Screen-Shot-2020-03-09-at-9.36.30-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>GitHub templates for bug reports and feature requests</em></p>
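<p>Issue templates live in a <code>.github/ISSUE_TEMPLATE/</code> directory in your repo; a minimal bug report template might start like this (an illustrative sketch — the headings are up to you, only the front matter is GitHub’s format):</p>
<pre><code>---
name: Bug report
about: Report something that isn't working
---

**Describe the bug**
A clear and concise description of what the bug is.

**Steps to reproduce**
1. ...
</code></pre>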
<p>You can check out the GitHub templates docs here: <a target="_blank" href="https://help.github.com/en/github/building-a-strong-community/about-issue-and-pull-request-templates">https://help.github.com/en/github/building-a-strong-community/about-issue-and-pull-request-templates</a></p>
<h1 id="heading-closing">Closing</h1>
<p>That’s it. When I first showed this project to some friends, one of them commented, “Oh my build tool soup!” And he may be right. This is a lot. But I strongly believe that adding all the tooling above is worth it. It helps automate many things and helps keep your codebase clean and in working order.</p>
<p>My biggest takeaway from building this project is that setting up all of the tooling above isn’t as daunting as it may seem. Each of these tools has good documentation and helpful guides for getting started. It really wasn’t that bad, and you should feel confident adopting some (if not all) of these tools in your project, too.</p>
<p>Happy coding!</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
