<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Abraham Dahunsi - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Abraham Dahunsi - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Sun, 17 May 2026 16:31:48 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/author/Abdahunsi/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Implement Event-Driven Data Processing with Traefik, Kafka, and Docker ]]>
                </title>
                <description>
                    <![CDATA[ In modern system design, Event-Driven Architecture (EDA) focuses on creating, detecting, using, and responding to events within a system. Events are significant occurrences that can affect a system’s hardware or software, such as user actions, state ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-implement-event-driven-data-processing/</link>
                <guid isPermaLink="false">673c7ac360ba8e6675690350</guid>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Microservices ]]>
                    </category>
                
                    <category>
                        <![CDATA[ containers ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Tue, 19 Nov 2024 11:47:15 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731772751529/58ee1304-a5d9-4be4-a709-1026de99ab3e.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In modern system design, <a target="_blank" href="https://en.wikipedia.org/wiki/Event-driven_programming">Event-Driven Architecture</a> (EDA) focuses on creating, detecting, using, and responding to events within a system. Events are significant occurrences that can affect a system’s hardware or software, such as user actions, state changes, or data updates.</p>
<p>EDA enables different parts of an application to interact in a decoupled way, allowing them to communicate through events instead of direct calls. This setup lets components work independently, respond to events asynchronously, and adjust to changing business needs without major system reconfiguration, promoting agility.</p>
<p>New and <a target="_blank" href="https://en.wikipedia.org/wiki/Event-driven_architecture">modern applications rely heavily on real-time data processing and responsiveness</a>, and EDA provides the framework that supports those requirements. By using asynchronous communication and event-driven interactions, systems can efficiently handle high volumes of transactions and maintain performance under fluctuating loads. These properties are especially valuable in environments where conditions change rapidly, such as e-commerce platforms or IoT applications.</p>
<p>Some key components of EDA include:</p>
<ul>
<li><p><strong>Event Sources</strong>: These are the producers that generate events when significant actions occur within the system. Examples include user interactions or data changes.</p>
</li>
<li><p><strong>Listeners</strong>: These are entities that subscribe to specific events and respond when those events occur. Listeners enable the system to react dynamically to changes.</p>
</li>
<li><p><strong>Handlers</strong>: These are responsible for processing the events once they are detected by listeners, executing the necessary business logic or workflows triggered by the event.</p>
</li>
</ul>
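<p>To make these three roles concrete, here is a minimal in-memory sketch in plain Java (no Kafka involved, and all class and method names are illustrative): an event source publishes events to a bus, and subscribed handlers react to them without the source knowing who they are.</p>
<pre><code class="lang-java">public class EventBusSketch {
    // A listener subscribes a handler; the handler holds the business logic.
    interface Handler { void onEvent(String event); }

    private Handler[] handlers = new Handler[0];

    // Register a handler so it is notified of future events.
    public void subscribe(Handler handler) {
        Handler[] grown = java.util.Arrays.copyOf(handlers, handlers.length + 1);
        grown[handlers.length] = handler;
        handlers = grown;
    }

    // An event source publishes an event; every subscribed handler reacts.
    // The source and the handlers stay decoupled from each other.
    public void publish(String event) {
        for (Handler handler : handlers) {
            handler.onEvent(event);
        }
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        bus.subscribe(e -> System.out.println("audit handler saw: " + e));
        bus.subscribe(e -> System.out.println("email handler saw: " + e));
        bus.publish("user.signed_up");
    }
}
</code></pre>
<p>Kafka plays the role of this bus at scale: topics replace the in-memory handler list, and producers and consumers run as separate processes that can fail, restart, and scale independently.</p>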
<p>In this article, you will learn how to implement event-driven data processing using Traefik, Kafka, and Docker.</p>
<p>Here is a <a target="_blank" href="https://github.com/Abraham12611/EventMesh">simple application hosted on GitHub</a> that you can quickly run to get an overview of what you will be building today.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<p>Here is what we'll cover:</p>
<ul>
<li><p><a class="post-section-overview" href="#heading-table-of-contents">Table of Contents</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-the-technologies">Understanding the Technologies</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-the-environment">How to Set Up the Environment</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-build-the-event-driven-system">How to Build the Event-Driven System</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-integrate-traefik-with-kafka">How to Integrate Traefik with Kafka</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-testing-the-setup">Testing the Setup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<p>Let's get started!</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you begin:</p>
<ul>
<li><p>Deploy an Ubuntu 24.04 instance with at least 4 GB of RAM and a minimum of 20 GB of free disk space to accommodate Docker images, containers, and Kafka data.</p>
</li>
<li><p>Access the instance with a non-root user with sudo privileges.</p>
</li>
<li><p>Update the package index.</p>
</li>
</ul>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<h2 id="heading-understanding-the-technologies">Understanding the Technologies</h2>
<h3 id="heading-apache-kafka">Apache Kafka</h3>
<p>Apache Kafka is a distributed event streaming platform built for high-throughput data pipelines and real-time streaming applications. It acts as the backbone for implementing EDA by efficiently managing large volumes of events. Kafka uses a publish-subscribe model where producers send events to topics, and consumers subscribe to these topics to receive the events.</p>
<p>Some of the key features of Kafka include:</p>
<ul>
<li><p><strong>High Throughput</strong>: Kafka is capable of handling millions of events per second with low latency, making it suitable for high-volume applications.</p>
</li>
<li><p><strong>Fault Tolerance</strong>: Kafka's distributed architecture ensures data durability and availability even in the face of server failures. It replicates data across multiple brokers within a cluster.</p>
</li>
<li><p><strong>Scalability</strong>: Kafka can easily scale horizontally by adding more brokers to the cluster or partitions to topics, accommodating growing data needs without significant reconfiguration.</p>
</li>
</ul>
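<p>One detail worth understanding before producing messages: Kafka guarantees ordering only within a partition, and a keyed message always lands on the same partition. The sketch below illustrates the idea with a simple hash; it is not Kafka's actual partitioner (the real default uses murmur2 hashing), so the exact partition numbers will differ:</p>
<pre><code class="lang-java">public class PartitionSketch {
    // Map a message key to a partition: same key, same partition, so
    // per-key ordering is preserved. (Illustrative only; Kafka's default
    // partitioner uses murmur2, not String.hashCode.)
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is always a valid partition index.
        return (key.hashCode() &amp; 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println("key1 lands on partition " + partitionFor("key1", 3));
        System.out.println("key1 lands on partition " + partitionFor("key1", 3)); // same again
        System.out.println("key2 lands on partition " + partitionFor("key2", 3));
    }
}
</code></pre>
<p>This is why adding partitions to an existing topic can change where keyed messages land: the modulus changes, so the key-to-partition mapping changes.</p>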
<h3 id="heading-traefik">Traefik</h3>
<p>Traefik is a modern HTTP reverse proxy and load balancer designed specifically for microservices architectures. It automatically discovers services running in your infrastructure and routes traffic accordingly. Traefik simplifies the management of microservices by providing dynamic routing capabilities based on service metadata.</p>
<p>Some of the key features of Traefik include:</p>
<ul>
<li><p>Dynamic Configuration: Traefik automatically updates its routing configuration as services are added or removed, eliminating manual intervention.</p>
</li>
<li><p>Load Balancing: It efficiently distributes incoming requests across multiple service instances, improving performance and reliability.</p>
</li>
<li><p>Integrated Dashboard: Traefik provides a user-friendly dashboard for monitoring traffic and service health in real-time.</p>
</li>
</ul>
<p>By using Kafka and Traefik in an event-driven architecture, you can build responsive systems that efficiently handle real-time data processing while maintaining high availability and scalability.</p>
<h2 id="heading-how-to-set-up-the-environment">How to Set Up the Environment</h2>
<h3 id="heading-how-to-install-docker-on-ubuntu-2404">How to Install Docker on Ubuntu 24.04</h3>
<ol>
<li>Install the required packages.</li>
</ol>
<pre><code class="lang-bash">sudo apt install ca-certificates curl gnupg lsb-release
</code></pre>
<ol start="2">
<li>Add Docker’s official GPG Key.</li>
</ol>
<pre><code class="lang-bash">curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
</code></pre>
<ol start="3">
<li>Add the Docker repository to your APT sources.</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu <span class="hljs-subst">$(lsb_release -cs)</span> stable"</span> | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<ol start="4">
<li>Update the package index again and install Docker Engine with the Docker Compose plugin.</li>
</ol>
<pre><code class="lang-bash">sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
</code></pre>
<ol start="5">
<li>Check to verify the installation.</li>
</ol>
<pre><code class="lang-bash">sudo docker run hello-world
</code></pre>
<p>Expected Output:</p>
<pre><code class="lang-bash">Unable to find image <span class="hljs-string">'hello-world:latest'</span> locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:305243c734571da2d100c8c8b3c3167a098cab6049c9a5b066b6021a60fcb966
Status: Downloaded newer image <span class="hljs-keyword">for</span> hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
</code></pre>
<h3 id="heading-how-to-configure-docker-compose">How to Configure Docker Compose</h3>
<p>Docker Compose simplifies the management of multi-container applications, allowing you to define and run services in a single file.</p>
<ol>
<li>Create a project directory</li>
</ol>
<pre><code class="lang-bash">mkdir ~/kafka-traefik-setup &amp;&amp; <span class="hljs-built_in">cd</span> ~/kafka-traefik-setup
</code></pre>
<ol start="2">
<li>Create a <code>docker-compose.yml</code> file.</li>
</ol>
<pre><code class="lang-bash">nano docker-compose.yml
</code></pre>
<ol start="3">
<li>Add the following configuration to the file to define your services.</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.8'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">kafka:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">wurstmeister/kafka:latest</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"9092:9092"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">KAFKA_ADVERTISED_LISTENERS:</span> <span class="hljs-string">INSIDE://kafka:9093,OUTSIDE://localhost:9092</span>
      <span class="hljs-attr">KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:</span> <span class="hljs-string">INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT</span>
      <span class="hljs-attr">KAFKA_LISTENERS:</span> <span class="hljs-string">INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092</span>
      <span class="hljs-attr">KAFKA_INTER_BROKER_LISTENER_NAME:</span> <span class="hljs-string">INSIDE</span>
      <span class="hljs-attr">KAFKA_ZOOKEEPER_CONNECT:</span> <span class="hljs-string">zookeeper:2181</span>

  <span class="hljs-attr">zookeeper:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">wurstmeister/zookeeper:latest</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"2181:2181"</span>

  <span class="hljs-attr">traefik:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">traefik:v2.9</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"80:80"</span>       <span class="hljs-comment"># HTTP traffic</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8080:8080"</span>   <span class="hljs-comment"># Traefik dashboard (insecure)</span>
    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--api.insecure=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--providers.docker=true"</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"/var/run/docker.sock:/var/run/docker.sock"</span>
</code></pre>
<p>Save your changes with <code>ctrl + o</code>, then exit with <code>ctrl + x</code>.</p>
<ol start="4">
<li>Start your services.</li>
</ol>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<p>Expected Output:</p>
<pre><code class="lang-bash">[+] Running 4/4
 ✔ Network kafka-traefik-setup_default        Created                  0.2s
 ✔ Container kafka-traefik-setup-zookeeper-1  Started                  1.9s
 ✔ Container kafka-traefik-setup-traefik-1    Started                  1.9s
 ✔ Container kafka-traefik-setup-kafka-1      Started                  1.9s
</code></pre>
<h2 id="heading-how-to-build-the-event-driven-system">How to Build the Event-Driven System</h2>
<h3 id="heading-how-to-create-event-producers">How to Create Event Producers</h3>
<p>To produce events in Kafka, you will need to implement a Kafka producer. Below is an example using Java.</p>
<ol>
<li>Create a file <code>SimpleProducer.java</code>. (The file name must match the public class name below for <code>javac</code> to compile it.)</li>
</ol>
<pre><code class="lang-bash">nano SimpleProducer.java
</code></pre>
<ol start="2">
<li>Add the following configuration for a Kafka Producer.</li>
</ol>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.apache.kafka.clients.producer.KafkaProducer;
<span class="hljs-keyword">import</span> org.apache.kafka.clients.producer.ProducerRecord;
<span class="hljs-keyword">import</span> org.apache.kafka.clients.producer.RecordMetadata;

<span class="hljs-keyword">import</span> java.util.Properties;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SimpleProducer</span> </span>{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{
        <span class="hljs-comment">// Set up the producer properties</span>
        Properties props = <span class="hljs-keyword">new</span> Properties();
        props.put(<span class="hljs-string">"bootstrap.servers"</span>, <span class="hljs-string">"localhost:9092"</span>);
        props.put(<span class="hljs-string">"key.serializer"</span>, <span class="hljs-string">"org.apache.kafka.common.serialization.StringSerializer"</span>);
        props.put(<span class="hljs-string">"value.serializer"</span>, <span class="hljs-string">"org.apache.kafka.common.serialization.StringSerializer"</span>);

        <span class="hljs-comment">// Create the producer</span>
        KafkaProducer&lt;String, String&gt; producer = <span class="hljs-keyword">new</span> KafkaProducer&lt;&gt;(props);

        <span class="hljs-keyword">try</span> {
            <span class="hljs-comment">// Send a message to the topic "my-topic"</span>
            ProducerRecord&lt;String, String&gt; record = <span class="hljs-keyword">new</span> ProducerRecord&lt;&gt;(<span class="hljs-string">"my-topic"</span>, <span class="hljs-string">"key1"</span>, <span class="hljs-string">"Hello, Kafka!"</span>);
            RecordMetadata metadata = producer.send(record).get(); <span class="hljs-comment">// Synchronous send</span>
            System.out.printf(<span class="hljs-string">"Sent message with key %s to partition %d with offset %d%n"</span>, 
                              record.key(), metadata.partition(), metadata.offset());
        } <span class="hljs-keyword">catch</span> (Exception e) {
            e.printStackTrace();
        } <span class="hljs-keyword">finally</span> {
            <span class="hljs-comment">// Close the producer</span>
            producer.close();
        }
    }
}
</code></pre>
<p>Save your changes with <code>ctrl + o</code>, then exit with <code>ctrl + x</code>.</p>
<p>In the above configuration, the producer sends a message with the key "key1" and the value "Hello, Kafka!" to the topic "my-topic".</p>
<h3 id="heading-how-to-set-up-kafka-topics">How to Set Up Kafka Topics</h3>
<p>Before producing or consuming messages, you need to create topics in Kafka.</p>
<ol>
<li>Use the <code>kafka-topics.sh</code> script included with your Kafka installation to create a topic.</li>
</ol>
<pre><code class="lang-bash">kafka-topics.sh --bootstrap-server localhost:9092 --create --topic &lt;TopicName&gt; --partitions &lt;NumberOfPartitions&gt; --replication-factor &lt;ReplicationFactor&gt;
</code></pre>
<p>For example, if you want to create a topic named <code>my-topic</code> with 3 partitions and a replication factor of 1, run:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> &lt;Kafka Container ID&gt; /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 3 --replication-factor 1
</code></pre>
<p>Expected Output:</p>
<pre><code class="lang-bash">Created topic my-topic.
</code></pre>
<ol start="2">
<li>Check to confirm if the Topic was created successfully.</li>
</ol>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it kafka-traefik-setup-kafka-1 /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
</code></pre>
<p>Expected Output:</p>
<pre><code class="lang-bash">my-topic
</code></pre>
<h3 id="heading-how-to-create-event-consumers">How to Create Event Consumers</h3>
<p>After you have created your producers and topics, you can create consumers to read messages from those topics.</p>
<ol>
<li>Create a file <code>SimpleConsumer.java</code>. (Again, the file name must match the public class name.)</li>
</ol>
<pre><code class="lang-bash">nano SimpleConsumer.java
</code></pre>
<ol start="2">
<li>Add the following configuration for a Kafka consumer.</li>
</ol>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.apache.kafka.clients.consumer.ConsumerConfig;
<span class="hljs-keyword">import</span> org.apache.kafka.clients.consumer.ConsumerRecords;
<span class="hljs-keyword">import</span> org.apache.kafka.clients.consumer.KafkaConsumer;
<span class="hljs-keyword">import</span> org.apache.kafka.clients.consumer.ConsumerRecord;

<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.Collections;
<span class="hljs-keyword">import</span> java.util.Properties;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SimpleConsumer</span> </span>{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{
        <span class="hljs-comment">// Set up the consumer properties</span>
        Properties props = <span class="hljs-keyword">new</span> Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, <span class="hljs-string">"localhost:9092"</span>);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, <span class="hljs-string">"my-group"</span>);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, <span class="hljs-string">"org.apache.kafka.common.serialization.StringDeserializer"</span>);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, <span class="hljs-string">"org.apache.kafka.common.serialization.StringDeserializer"</span>);

        <span class="hljs-comment">// Create the consumer</span>
        KafkaConsumer&lt;String, String&gt; consumer = <span class="hljs-keyword">new</span> KafkaConsumer&lt;&gt;(props);

        <span class="hljs-comment">// Subscribe to the topic</span>
        consumer.subscribe(Collections.singletonList(<span class="hljs-string">"my-topic"</span>));

        <span class="hljs-keyword">try</span> {
            <span class="hljs-keyword">while</span> (<span class="hljs-keyword">true</span>) {
                <span class="hljs-comment">// Poll for new records</span>
                ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofMillis(<span class="hljs-number">100</span>));
                <span class="hljs-keyword">for</span> (ConsumerRecord&lt;String, String&gt; record : records) {
                    System.out.printf(<span class="hljs-string">"Consumed message with key %s and value %s from partition %d at offset %d%n"</span>,
                                      record.key(), record.value(), record.partition(), record.offset());
                }
            }
        } <span class="hljs-keyword">finally</span> {
            <span class="hljs-comment">// Close the consumer</span>
            consumer.close();
        }
    }
}
</code></pre>
<p>Save your changes with <code>ctrl + o</code>, then exit with <code>ctrl + x</code>.</p>
<p>In the above configuration, the consumer subscribes to <code>my-topic</code> and continuously polls for new messages. When messages are received, it prints out their keys and values along with partition and offset information.</p>
<h2 id="heading-how-to-integrate-traefik-with-kafka">How to Integrate Traefik with Kafka</h2>
<h3 id="heading-configure-traefik-as-a-reverse-proxy">Configure Traefik as a Reverse Proxy</h3>
<p>Integrating Traefik as a reverse proxy for Kafka allows you to manage incoming traffic efficiently while providing features such as dynamic routing and SSL termination.</p>
<ol>
<li>Update the <code>docker-compose.yml</code> file.</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.8'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">kafka:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">wurstmeister/kafka:latest</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"9092:9092"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">KAFKA_ADVERTISED_LISTENERS:</span> <span class="hljs-string">INSIDE://kafka:9093,OUTSIDE://localhost:9092</span>
      <span class="hljs-attr">KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:</span> <span class="hljs-string">INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT</span>
      <span class="hljs-attr">KAFKA_LISTENERS:</span> <span class="hljs-string">INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092</span>
      <span class="hljs-attr">KAFKA_INTER_BROKER_LISTENER_NAME:</span> <span class="hljs-string">INSIDE</span>
      <span class="hljs-attr">KAFKA_ZOOKEEPER_CONNECT:</span> <span class="hljs-string">zookeeper:2181</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.enable=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.kafka.rule=Host(`kafka.example.com`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.kafka.loadbalancer.server.port=9092"</span>

  <span class="hljs-attr">zookeeper:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">wurstmeister/zookeeper:latest</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"2181:2181"</span>

  <span class="hljs-attr">traefik:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">traefik:v2.9</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"80:80"</span>        <span class="hljs-comment"># HTTP traffic</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8080:8080"</span>    <span class="hljs-comment"># Traefik dashboard (insecure)</span>
    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--api.insecure=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--providers.docker=true"</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"/var/run/docker.sock:/var/run/docker.sock"</span>
</code></pre>
<p>In this configuration, replace <code>kafka.example.com</code> with your actual domain name. The labels define the routing rules that Traefik will use to direct traffic to the Kafka service. Note that Kafka clients speak Kafka's own binary protocol over TCP rather than HTTP, so to proxy actual client traffic you would use Traefik's TCP routers (the <code>traefik.tcp.*</code> labels) instead of HTTP routers; the HTTP labels above are sufficient for registering the service with Traefik and exploring the dashboard.</p>
<ol start="2">
<li>Restart your services.</li>
</ol>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<ol start="3">
<li><p>Access your Traefik dashboard by visiting <code>http://localhost:8080</code> in your web browser.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731753126986/fc124c80-1da2-43eb-9385-426bf6a12756.png" alt="Traefik dashboard on http://localhost:8080" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
</ol>
<h3 id="heading-load-balancing-with-traefik">Load Balancing with Traefik</h3>
<p>Traefik provides built-in load balancing capabilities that can help distribute requests across multiple instances of your Kafka producers and consumers.</p>
<h3 id="heading-strategies-for-load-balancing-event-driven-microservices">Strategies for Load Balancing Event-Driven Microservices</h3>
<ol>
<li><strong>Round Robin</strong>:</li>
</ol>
<p>    By default, Traefik uses a round-robin strategy to distribute incoming requests evenly across all available instances of a service. This is effective for balancing load when multiple instances of Kafka producers or consumers are running.</p>
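<p>Round robin itself is easy to picture. The following standalone sketch (illustrative Java, not Traefik internals; the backend addresses are made up) cycles through a fixed set of backends the way a round-robin balancer does:</p>
<pre><code class="lang-java">import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinSketch {
    private final String[] backends;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinSketch(String[] backends) {
        this.backends = backends;
    }

    // Each call returns the next backend in turn, wrapping around at the end.
    // AtomicInteger keeps the counter safe when requests arrive concurrently.
    public String pick() {
        int index = Math.floorMod(next.getAndIncrement(), backends.length);
        return backends[index];
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(
                new String[] { "consumer-1:8080", "consumer-2:8080", "consumer-3:8080" });
        for (int i = 0; i != 6; i++) {
            System.out.println("request " + i + " routed to " + lb.pick());
        }
    }
}
</code></pre>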
<ol start="2">
<li><strong>Sticky Sessions</strong>:</li>
</ol>
<p>    If you require that requests from a specific client always go to the same instance (for example, maintaining session state), you can configure sticky sessions in Traefik using cookies or headers.</p>
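<p>As a sketch, sticky sessions for an HTTP service can be enabled with labels like the following (the cookie name <code>kafka_sticky</code> is an arbitrary choice, and this applies to HTTP routing only):</p>
<pre><code class="lang-yaml">    labels:
      - "traefik.http.services.kafka.loadbalancer.sticky.cookie=true"
      - "traefik.http.services.kafka.loadbalancer.sticky.cookie.name=kafka_sticky"
</code></pre>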
<ol start="3">
<li><strong>Health Checks</strong>:</li>
</ol>
<p>    Configure health checks in Traefik to ensure that traffic is only routed to healthy instances of your Kafka services. You can do this by adding health check parameters in the service definitions within your <code>docker-compose.yml</code> file:</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.kafka.loadbalancer.healthcheck.path=/health"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.kafka.loadbalancer.healthcheck.interval=10s"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.kafka.loadbalancer.healthcheck.timeout=3s"</span>
</code></pre>
<h2 id="heading-testing-the-setup">Testing the Setup</h2>
<h3 id="heading-verifying-event-production-and-consumption">Verifying Event Production and Consumption</h3>
<ol>
<li>Kafka provides built-in command-line tools for testing. Start a Console producer.</li>
</ol>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it kafka-traefik-setup-kafka-1 /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
</code></pre>
<p>    After running this command, you can type messages into the terminal, which will be sent to the specified Kafka topic.</p>
<ol start="2">
<li>Start another terminal session and start a console consumer.</li>
</ol>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it kafka-traefik-setup-kafka-1 /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
</code></pre>
<p>    This command will display all messages in <code>my-topic</code>, including those produced before the consumer started.</p>
<ol start="3">
<li>To see how well your consumers are keeping up with producers, you can run the following command to check the lag for a specific consumer group.</li>
</ol>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it kafka-traefik-setup-kafka-1 /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group &lt;your-consumer-group&gt;
</code></pre>
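<p>The LAG column that command reports is simple arithmetic: the partition's log-end offset (the newest produced message) minus the group's committed offset. A small sketch of that calculation (illustrative offsets only):</p>
<pre><code class="lang-java">public class LagSketch {
    // Lag per partition: how many messages the group still has to process.
    static long lag(long logEndOffset, long committedOffset) {
        return logEndOffset - committedOffset;
    }

    public static void main(String[] args) {
        // If the newest offset is 120 and the group has committed up to 95,
        // the consumer is 25 messages behind on this partition.
        System.out.println("lag = " + lag(120, 95)); // prints: lag = 25
    }
}
</code></pre>
<p>A lag that keeps growing means consumers cannot keep up with producers, which usually calls for more partitions and more consumer instances in the group.</p>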
<h3 id="heading-monitoring-and-logging">Monitoring and Logging</h3>
<ol>
<li><strong>Kafka Metrics</strong>:</li>
</ol>
<p>    Kafka exposes numerous metrics that can be monitored using JMX (Java Management Extensions). You can configure JMX to export these metrics to monitoring systems like Prometheus or Grafana. Key metrics to monitor include:</p>
<ul>
<li><p><strong>Message Throughput</strong>: The rate of messages produced and consumed.</p>
</li>
<li><p><strong>Consumer Lag</strong>: The difference between the last produced message offset and the last consumed message offset.</p>
</li>
<li><p><strong>Broker Health</strong>: Metrics related to broker performance, such as request rates and error rates.</p>
</li>
</ul>
<ol start="2">
<li><strong>Prometheus and Grafana Integration</strong>:</li>
</ol>
<p>    To visualize Kafka metrics, you can set up Prometheus to scrape metrics from your Kafka brokers. Follow these steps:</p>
<ul>
<li><p>Enable JMX Exporter on your Kafka brokers by adding it as a Java agent in your broker configuration.</p>
</li>
<li><p>Configure Prometheus by adding a scrape job in its configuration file (<code>prometheus.yml</code>) that points to your JMX Exporter endpoint.</p>
</li>
<li><p>Use Grafana to create dashboards that visualize these metrics in real-time.</p>
</li>
</ul>
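<p>A minimal <code>prometheus.yml</code> scrape job might look like the following sketch. The job name and the exporter port (<code>5556</code>) are assumptions; use whichever port you configured the JMX Exporter to listen on:</p>
<pre><code class="lang-yaml">scrape_configs:
  - job_name: "kafka-jmx"        # hypothetical job name
    static_configs:
      - targets: ["kafka:5556"]  # assumed JMX Exporter host:port
</code></pre>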
<h3 id="heading-how-to-implement-monitoring-for-traefik">How to Implement Monitoring for Traefik</h3>
<ol>
<li><strong>Traefik Metrics Endpoint.</strong></li>
</ol>
<p>    Traefik provides built-in support for exporting metrics via Prometheus. To enable this feature, add the following configuration in your Traefik service definition within <code>docker-compose.yml</code>:</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--metrics.prometheus=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--metrics.prometheus.addServicesLabels=true"</span>
</code></pre>
<ol start="2">
<li><strong>Visualizing Traefik Metrics with Grafana</strong>.</li>
</ol>
<p>    Once Prometheus is scraping Traefik metrics, you can visualize them using Grafana:</p>
<ul>
<li><p>Create a new dashboard in Grafana and add panels that display key Traefik metrics such as:</p>
<ul>
<li><p><strong>traefik_entrypoint_requests_total</strong>: Total number of requests received at each entrypoint.</p>
</li>
<li><p><strong>traefik_service_request_duration_seconds</strong>: Response times of backend services.</p>
</li>
<li><p><strong>traefik_service_requests_total</strong>: Total requests forwarded to backend services.</p>
</li>
</ul>
</li>
</ul>
<ol start="3">
<li><strong>Setting Up Alerts</strong>.</li>
</ol>
<p>    Configure alerts in Prometheus or Grafana based on specific thresholds (e.g., high consumer lag or increased error rates).</p>
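<p>    For example, a Prometheus alerting rule for consumer lag might look like the sketch below. The metric name <code>kafka_consumergroup_lag</code> is an assumption — it is exposed by tools such as Kafka Exporter, and the exact name depends on which exporter you run:</p>
<pre><code class="lang-yaml">groups:
  - name: kafka-alerts
    rules:
      - alert: HighConsumerLag
        # Assumed metric name; adjust to match your exporter
        expr: kafka_consumergroup_lag &gt; 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Consumer lag has exceeded 1000 messages for 5 minutes"
</code></pre>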
<h2 id="heading-conclusion">Conclusion</h2>
<p>    In this guide, you implemented Event-Driven Architecture (EDA) using Kafka, Traefik, and Docker on Ubuntu 24.04.</p>
<h3 id="heading-additional-resources">Additional Resources</h3>
<p>    To learn more you can visit:</p>
<ul>
<li><p>The <a target="_blank" href="https://kafka.apache.org/documentation/">Apache Kafka Official Documentation</a></p>
</li>
<li><p>The <a target="_blank" href="https://doc.traefik.io/traefik/">Traefik Official Documentation</a></p>
</li>
<li><p>The <a target="_blank" href="https://docs.docker.com/">Docker Official Documentation</a></p>
</li>
<li><p>The Vultr guide for <a target="_blank" href="https://docs.vultr.com/set-up-traefik-proxy-as-a-reverse-proxy-for-docker-containers-on-ubuntu-24-04">setting up Traefik Proxy on Ubuntu 24.04</a></p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up Argo Workflows on Kubernetes ]]>
                </title>
                <description>
                    <![CDATA[ Argo Workflows is an open source project that enables the orchestration of multiple Kubernetes jobs. Argo Workflows is implemented as a Kubernetes custom resource definition (CRD), which means it is native to the Kubernetes ecosystem and can run on a... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/set-up-argo-workflows-on-kubernetes/</link>
                <guid isPermaLink="false">66d45d6051f567b42d9f8417</guid>
                
                    <category>
                        <![CDATA[ container ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Kubernetes ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Thu, 15 Feb 2024 23:23:44 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/02/Feature-Image.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Argo Workflows is an open source project that enables the orchestration of multiple Kubernetes jobs. Argo Workflows is implemented as a Kubernetes custom resource definition (CRD), which means it is native to the Kubernetes ecosystem and can run on any Kubernetes cluster.</p>
<p>A workflow defines the series of steps by which individual jobs are executed. You can use Argo Workflows for various purposes, such as data processing, machine learning, CI/CD, and infrastructure automation.</p>
<p>In this article, you will set up Argo Workflows on a Kubernetes cluster and use the available template types to create and manage a workflow.</p>
<h2 id="heading-argo-workflows-key-concepts">Argo Workflows Key Concepts</h2>
<p>As I mentioned above, Argo Workflows is implemented as a Kubernetes custom resource definition (CRD) managed by its own controller. Argo Workflows is built around two main concepts: the <code>Workflow</code> and the <code>Template</code>.</p>
<h3 id="heading-workflow">Workflow</h3>
<p>A workflow is a central resource in Argo Workflows that defines the workflow to be executed and stores the workflow’s state.</p>
<p>A workflow consists of a specification that contains an entrypoint and a list of templates.</p>
<p>A workflow can model complex logic using directed acyclic graphs (DAGs) or steps to capture the dependencies or sequences between the templates.</p>
<h3 id="heading-template">Template</h3>
<p>A template is a core concept in Argo Workflows that defines the instructions to execute in a workflow step. A template can be one of the following types:</p>
<ul>
<li><strong>Container</strong>: Lets you define a container to run. Here is an example:</li>
</ul>
<pre><code class="lang-YAML">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">hello-world</span>
    <span class="hljs-attr">container:</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">alpine:latest</span>
      <span class="hljs-attr">command:</span> [<span class="hljs-string">sh</span>, <span class="hljs-string">-c</span>]
      <span class="hljs-attr">args:</span> [<span class="hljs-string">"echo hello world"</span>]
</code></pre>
<ul>
<li><strong>Script</strong>: A template that runs a script in a container image. This is similar to the container type, but allows you to write the script inline instead of using a separate file. Here is an example:</li>
</ul>
<pre><code class="lang-YAML">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">factorial</span>
    <span class="hljs-attr">inputs:</span>
      <span class="hljs-attr">parameters:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">num</span>
    <span class="hljs-attr">script:</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">python:alpine</span> <span class="hljs-number">3.6</span>
      <span class="hljs-attr">command:</span> [<span class="hljs-string">python</span>]
      <span class="hljs-attr">source:</span> <span class="hljs-string">|
        def factorial(n):
          if n == 0:
            return 1
          else:
            return n * factorial(n-1)
        print(factorial(int({{inputs.parameters.num}})))</span>
</code></pre>
<ul>
<li><strong>Resource</strong>: This template allows direct manipulation of cluster resources. It can be used for operations such as retrieval, creation, modification, or deletion, including GET, CREATE, APPLY, PATCH, REPLACE, or DELETE requests. Here is an example:</li>
</ul>
<pre><code class="lang-YAML">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">create-configmap</span>
    <span class="hljs-attr">resource:</span>
      <span class="hljs-attr">action:</span> <span class="hljs-string">create</span>
      <span class="hljs-attr">manifest:</span> <span class="hljs-string">|
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: my-config
        data:
          foo: bar
          hello: world</span>
</code></pre>
<ul>
<li><strong>DAG</strong>: This template defines a directed acyclic graph of other templates. In this example, the DAG has three tasks: A, B, and C. Task A runs first, then tasks B and C run in parallel. Here is an example:</li>
</ul>
<pre><code class="lang-YAML">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-dag</span>
    <span class="hljs-attr">dag:</span>
      <span class="hljs-attr">tasks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">A</span>
        <span class="hljs-attr">template:</span> <span class="hljs-string">echo</span>
        <span class="hljs-attr">arguments:</span>
          <span class="hljs-attr">parameters:</span> [{<span class="hljs-attr">name:</span> <span class="hljs-string">message</span>, <span class="hljs-attr">value:</span> <span class="hljs-string">A</span>}]
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">B</span>
        <span class="hljs-attr">dependencies:</span> [<span class="hljs-string">A</span>]
        <span class="hljs-attr">template:</span> <span class="hljs-string">echo</span>
        <span class="hljs-attr">arguments:</span>
          <span class="hljs-attr">parameters:</span> [{<span class="hljs-attr">name:</span> <span class="hljs-string">message</span>, <span class="hljs-attr">value:</span> <span class="hljs-string">B</span>}]
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">C</span>
        <span class="hljs-attr">dependencies:</span> [<span class="hljs-string">A</span>]
        <span class="hljs-attr">template:</span> <span class="hljs-string">echo</span>
        <span class="hljs-attr">arguments:</span>
          <span class="hljs-attr">parameters:</span> [{<span class="hljs-attr">name:</span> <span class="hljs-string">message</span>, <span class="hljs-attr">value:</span> <span class="hljs-string">C</span>}]
</code></pre>
<p>Now, let's get started.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow this guide, make sure you have the following:</p>
<ul>
<li><p>A Kubernetes cluster. To create a new Kubernetes cluster, follow this <a target="_blank" href="https://k21academy.com/docker-kubernetes/three-node-kubernetes-cluster/">guide</a></p>
</li>
<li><p>Basic knowledge of what Kubernetes is. <a target="_blank" href="https://kubernetes.io/docs/concepts/overview/">You can learn more about Kubernetes from their official docs</a></p>
</li>
<li><p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/">The Kubectl CLI</a></p>
</li>
</ul>
<h2 id="heading-step-1-install-argo-workflows">Step 1 - Install Argo Workflows</h2>
<ol>
<li>Using <code>kubectl</code>, create a namespace for Argo Workflows to segregate its resources from the rest of your Kubernetes cluster.</li>
</ol>
<pre><code class="lang-bash">$ kubectl create namespace argo
</code></pre>
<ol start="2">
<li>Apply the latest Argo Workflows release manifest from the <a target="_blank" href="https://github.com/argoproj/argo-workflows/releases">Argo Workflows GitHub releases page</a>.</li>
</ol>
<pre><code class="lang-bash">$ kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v&lt;VERSION&gt;/install.yaml
</code></pre>
<p>Replace <code>&lt;VERSION&gt;</code> with the release you want to install. The version used in this guide is v3.5.0.</p>
<ol start="3">
<li>Check if all the resources have been installed correctly.</li>
</ol>
<pre><code class="lang-bash">$ kubectl get all -n argo
</code></pre>
<p>Output:</p>
<pre><code class="lang-bash">NAME                                      READY   STATUS    RESTARTS   AGE
pod/workflow-controller-7f8c9f8f5-9qj2l   1/1     Running   0          2m
pod/argo-server-6f8f9c9f8f-6kx4d          1/1     Running   0          2m

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/argo-server                   ClusterIP   10.3.240.123    &lt;none&gt;        2746/TCP   2m
service/workflow-controller-metrics   ClusterIP   10.3.240.124    &lt;none&gt;        9090/TCP   2m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/workflow-controller   1/1     1            1           3m05s
deployment.apps/argo-server           1/1     1            1           3m07s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/workflow-controller-7f8c9f8f5   1         1         1       3m33s
replicaset.apps/argo-server-6f8f9c9f8f          1         1         1       2m33s
</code></pre>
<h2 id="heading-step-2-start-the-argo-ui-for-monitoring">Step 2 - Start the Argo UI for Monitoring</h2>
<p>Argo Server has a graphical user interface that you can use to manage and monitor your Kubernetes cluster Workflows.</p>
<p>In this guide, you will use the <strong>server</strong> authentication mode, which lets you access the Argo web interface without requesting a token. This is acceptable here because the interface is not publicly accessible. The alternative, <strong>client</strong> authentication mode, requires you to provide a token to access the web interface.</p>
<ol>
<li>Change the authentication mode to Server Authentication.</li>
</ol>
<pre><code class="lang-bash">$ kubectl patch deployment \
  argo-server \
  --namespace argo \
  --<span class="hljs-built_in">type</span>=<span class="hljs-string">'json'</span> \
  -p=<span class="hljs-string">'[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
  "server",
  "--auth-mode=server"
]}]'</span>
</code></pre>
<p>Output:</p>
<pre><code class="lang-bash">deployment.apps/argo-server patched
</code></pre>
<p>Note that this mode is not recommended for production environments, as it creates a significant security risk. A more secure option is the <strong>client authentication mode</strong>, which requires clients to provide their Kubernetes bearer token.</p>
<ol start="2">
<li>Configure Kubernetes Role-Based Access Control (RBAC) to grant the default service account admin-level permissions for managing resources within the <code>argo</code> namespace.</li>
</ol>
<pre><code class="lang-bash">$ kubectl create rolebinding argo-default-admin --clusterrole=admin --serviceaccount=argo:default -n argo
</code></pre>
<ol start="3">
<li>Forward the Argo server web interface port with <code>kubectl port-forward</code>.</li>
</ol>
<pre><code class="lang-bash">$ kubectl -n argo port-forward deployment/argo-server 2746:2746
</code></pre>
<p>Using a browser like Chrome, visit <code>http://localhost:2746</code>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/02/Screenshot-20240213-003321.png" alt="Screenshot-20240213-003321" width="600" height="400" loading="lazy"></p>
<h2 id="heading-create-a-new-workflow">Create a New Workflow</h2>
<p>You can use a YAML manifest to define an Argo Workflow and apply it to your Kubernetes cluster.</p>
<ol>
<li>Create a new Workflow file.</li>
</ol>
<pre><code class="lang-bash">Nano argo-workflow.yaml
</code></pre>
<ol start="2">
<li>Add the following to the file:</li>
</ol>
<pre><code class="lang-YAML"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Workflow</span>
<span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">demo-workflow</span>
<span class="hljs-attr">spec:</span>
    <span class="hljs-attr">entrypoint:</span> <span class="hljs-string">main</span>
    <span class="hljs-attr">templates:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">main</span>
    <span class="hljs-attr">container:</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">command:</span> [<span class="hljs-string">"/bin/sh"</span>]
        <span class="hljs-attr">args:</span> [<span class="hljs-string">"-c"</span>, <span class="hljs-string">"echo 'The first step of the Workflow'"</span>]
</code></pre>
<p>Here is a quick breakdown of the components of the file:</p>
<ul>
<li><p><code>entrypoint</code> specifies the entry point for the workflow, which is defined as <code>main</code>.</p>
</li>
<li><p><code>templates</code> contains a list of templates, which define the steps or tasks within the workflow.</p>
</li>
<li><p><code>name</code> is the name of the template, which is set as <code>main</code>.</p>
</li>
<li><p><code>container</code> specifies a container-based step within the workflow.</p>
</li>
<li><p><code>image</code> specifies the Docker image to use for the container, set here as <code>busybox</code>.</p>
</li>
<li><p><code>command</code> specifies the command to be executed inside the container, set as <code>["/bin/sh"]</code>.</p>
</li>
<li><p><code>args</code> specifies the arguments to be passed to the command inside the container, set as <code>["-c", "echo 'The first step of the Workflow'"]</code>. This command will run <code>echo</code> to print "The first step of the Workflow".</p>
</li>
</ul>
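<p>As an aside, the same step can also be parameterized. The sketch below is a hypothetical variant (you don't need it for this guide) that passes a <code>message</code> input parameter to the template, following the same conventions as the manifest above:</p>
<pre><code class="lang-YAML">apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: demo-workflow-params
spec:
  entrypoint: main
  arguments:
    parameters:
    - name: message
      value: "The first step of the Workflow"
  templates:
  - name: main
    inputs:
      parameters:
      - name: message
    container:
      image: busybox
      command: ["/bin/sh"]
      # The template echoes whatever message is passed in
      args: ["-c", "echo '{{inputs.parameters.message}}'"]
</code></pre>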
<ol start="3">
<li>Apply the Workflow to your cluster:</li>
</ol>
<pre><code class="lang-bash">$ kubectl -n argo create -f argo-workflow.yaml
</code></pre>
<p>Here's the output:</p>
<pre><code class="lang-bash">workflow.argoproj.io/hello-world-nb42c created
</code></pre>
<h2 id="heading-how-to-manage-argo-workflows">How to Manage Argo Workflows</h2>
<ol>
<li>To list all workflows within the <code>argo</code> namespace, do the following:</li>
</ol>
<pre><code class="lang-bash">$ kubectl -n argo get wf
</code></pre>
<p>Here's the output:</p>
<pre><code class="lang-bash">NAME                      STATUS        AGE     MESSAGE
demo-workflow             Succeeded     4m23s
</code></pre>
<ol start="2">
<li>To see the pod logs for your Workflow, do the following:</li>
</ol>
<pre><code class="lang-bash">$ kubectl -n argo logs demo-workflow
</code></pre>
<p>Here's the output:</p>
<pre><code class="lang-bash">This template is the first step of the Workflow
time=<span class="hljs-string">"2024-02-13T19:56:54.629Z"</span> level=info msg=<span class="hljs-string">"sub-process exited"</span> argo=<span class="hljs-literal">true</span> error=<span class="hljs-string">"&lt;nil&gt;"</span>
</code></pre>
<ol start="3">
<li>To delete a workflow, do this:</li>
</ol>
<pre><code class="lang-bash">$ kubectl -n argo delete wf workflow-name
</code></pre>
<ol start="4">
<li>To suspend or resume a workflow, do this:</li>
</ol>
<pre><code class="lang-bash">$ kubectl -n argo <span class="hljs-built_in">suspend</span> wf workflow-name
$ kubectl -n argo resume wf workflow-name
</code></pre>
<ol start="5">
<li>To submit a workflow using the Argo CLI, do this:</li>
</ol>
<pre><code class="lang-bash">$ argo submit -n argo workflow.yaml
</code></pre>
<p>You can learn more about Argo Workflows on their <a target="_blank" href="https://argo-workflows.readthedocs.io/en/latest/">official documentation.</a></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You have now explored Argo Workflows and successfully set it up. This powerful tool enables you to create logic using DAGs, or individual steps, helping you execute various tasks through different templates. You can also interact with your workflows and keep track of their progress by utilizing tools like Argo CLI, Argo UI, and Argo Events.</p>
<p>By using Argo Workflows, you can take advantage of Kubernetes scalability and flexibility to ensure reliable execution of tasks.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Use OpenTelemetry to Trace Node.js Applications ]]>
                </title>
                <description>
                    <![CDATA[ Observability refers to our ability to "see" and understand what's happening inside a system by looking at its external signals (like logs, metrics, and traces). Observability involves collecting and analyzing data from sources within a system to mon... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-use-opentelementry-to-trace-node-js-applications/</link>
                <guid isPermaLink="false">66d45d5ad7a4e35e38434924</guid>
                
                    <category>
                        <![CDATA[ data ]]>
                    </category>
                
                    <category>
                        <![CDATA[ node js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ performance ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Sat, 03 Feb 2024 00:21:14 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/01/feature-image-1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Observability refers to our ability to "see" and understand what's happening inside a system by looking at its external signals (like logs, metrics, and traces).</p>
<p>Observability involves collecting and analyzing data from sources within a system to monitor its performance and address problems effectively.</p>
<h2 id="heading-why-is-observability-useful">Why is Observability Useful?</h2>
<ol>
<li><p><strong>Detecting and Troubleshooting Problems:</strong> Observability plays a role in identifying and diagnosing issues within a system. When something goes wrong, having access to data helps pinpoint the cause and resolve problems more quickly.</p>
</li>
<li><p><strong>Optimizing Performance:</strong> Through monitoring metrics and performance indicators, observability helps in optimizing the performance of your system. This includes identifying bottlenecks, improving resource utilization, and ensuring operation.</p>
</li>
<li><p><strong>Planning for Future Capacity:</strong> Understanding how your system behaves over time is vital for planning capacity requirements. Observability data can reveal trends, peak usage periods, and resource needs, helping your decisions regarding scaling.</p>
</li>
<li><p><strong>Enhancing User Experience:</strong> By observing user interactions with your system through logs and metrics, you can improve the user experience. It assists in recognizing patterns, preferences, and potential areas that can be enhanced for user satisfaction.</p>
</li>
</ol>
<h2 id="heading-why-should-i-use-opentelementary">Why Should I Use OpenTelementary?</h2>
<p>Observability is essential for ensuring the reliability and availability of your Node.js applications. But manually instrumenting your code to collect and export telemetry data, such as traces, metrics, and logs, quickly becomes a burden.</p>
<p>Manual instrumentation is tedious, error-prone, and inconsistent, and it can introduce additional overhead and complexity to your application logic.</p>
<p>In this guide, you will learn how to use OpenTelemetry’s auto-instrumentation to help you achieve effortless Node.js monitoring.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you go through this guide, make sure you have the following:</p>
<ul>
<li><p>A Node.js application</p>
</li>
<li><p>A Datadog account and an API key. If you don't have one, you can <a target="_blank" href="https://us5.datadoghq.com/signup">sign up here to get one</a>.</p>
</li>
<li><p>A backend service. You can use a backend service like Zipkin or Jaeger to store and analyze trace data. For this guide, we'll be using Jaeger.</p>
</li>
<li><p>Some basic knowledge of <a target="_blank" href="https://www.freecodecamp.org/news/helpful-linux-commands-you-should-know/">Linux commands</a>. You should be familiar with using the command line and editing configuration files.</p>
</li>
</ul>
<h2 id="heading-prepare-your-application">Prepare Your Application</h2>
<p>In this guide, you will be using a Node.js application made up of two services that transfer data between themselves. You will use OpenTelemetry’s Node.js client library to send trace data to an OpenTelemetry Collector.</p>
<p>Firstly, clone the Repo Locally:</p>
<pre><code class="lang-bash">$ git <span class="hljs-built_in">clone</span> https://github.com/&lt;github-account&gt;/nodejs-example.git
</code></pre>
<p>Then install the application's dependencies:</p>
<pre><code class="lang-bash">$ npm install
</code></pre>
<p>Go to the directory of the first service using this command:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">cd</span> &lt;ServiceA&gt;
</code></pre>
<p>And start the first service.</p>
<pre><code class="lang-bash">$ node index.js
</code></pre>
<p>Then go to the directory of the second service</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">cd</span> &lt;ServiceB&gt;
</code></pre>
<p>And start the second service.</p>
<pre><code class="lang-bash">$ node index.js
</code></pre>
<p>Open Service A in your browser (in this guide it runs on port <code>5555</code>) and input some information. Then repeat the same for Service B.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/01/ServiceASshot.png" alt="ServiceASshot" width="600" height="400" loading="lazy"></p>
<h2 id="heading-how-to-set-up-opentelementary">How to Set Up OpenTelementary</h2>
<p>After starting the services, it's time to install the OpenTelementary modules you'll need for auto-instrumentation.</p>
<p>Here are what we need to install:</p>
<pre><code class="lang-bash">$ npm install --save @opentelemetry/api

$ npm install --save @opentelemetry/instrumentation

$ npm install --save @opentelemetry/tracing

$ npm install --save @opentelemetry/exporter-trace-otlp-http

$ npm install --save @opentelemetry/resources

$ npm install --save @opentelemetry/semantic-conventions

$ npm install --save @opentelemetry/auto-instrumentations-node

$ npm install --save @opentelemetry/sdk-node

$ npm install --save @opentelemetry/exporter-jaeger
</code></pre>
<p>Here's a breakdown of what each module does:</p>
<ul>
<li><p><code>@opentelemetry/api</code>: This module provides the OpenTelemetry API for Node.js.</p>
</li>
<li><p><code>@opentelemetry/instrumentation</code>: The instrumentation libraries provide automatic instrumentation for your Node.js application. They automatically capture telemetry data without requiring manual code modifications.</p>
</li>
<li><p><code>@opentelemetry/tracing</code>: This module contains the core tracing functionality for OpenTelemetry in your Node.js application. It includes the Tracer and Span interfaces, which are important for capturing and representing distributed traces within your applications.</p>
</li>
<li><p><code>@opentelemetry/exporter-trace-otlp-http</code>: This exporter module enables sending trace data to an OpenTelemetry Protocol (OTLP) compatible backend over HTTP.</p>
</li>
<li><p><code>@opentelemetry/resources</code>: This module provides a way to define and manage resources associated with traces.</p>
</li>
<li><p><code>@opentelemetry/semantic-conventions</code>: This module defines a set of semantic conventions for tracing. It establishes a common set of attribute keys and value formats to ensure consistency in how telemetry data is represented and interpreted.</p>
</li>
<li><p><code>@opentelemetry/auto-instrumentations-node</code>: This module simplifies the process of instrumenting your application by automatically applying instrumentation to supported libraries.</p>
</li>
<li><p><code>@opentelemetry/sdk-node</code>: The Software Development Kit (SDK) for Node.js provides the implementation of the OpenTelemetry API.</p>
</li>
<li><p><code>@opentelemetry/exporter-jaeger</code>: This exporter module allows exporting trace data to Jaeger. Jaeger provides a user-friendly interface for monitoring and analyzing trace data.</p>
</li>
</ul>
<h2 id="heading-configure-the-nodejs-application">Configure the Node.js Application</h2>
<p>Next, add a Node.js SDK tracer to handle the initialization and shutdown of tracing.</p>
<p>To add the tracer, create a file <code>tracer.js</code>:</p>
<pre><code class="lang-bash">$ nano tracer.js
</code></pre>
<p>Then add the following code to the file:</p>
<pre><code class="lang-javascript"><span class="hljs-meta">
"use strict"</span>;

<span class="hljs-keyword">const</span> {
    BasicTracerProvider,
    SimpleSpanProcessor,
} = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@opentelemetry/tracing"</span>);
<span class="hljs-comment">// Import the JaegerExporter</span>
<span class="hljs-keyword">const</span> { JaegerExporter } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@opentelemetry/exporter-jaeger"</span>);
<span class="hljs-keyword">const</span> { Resource } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@opentelemetry/resources"</span>);
<span class="hljs-keyword">const</span> {
    SemanticResourceAttributes,
} = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@opentelemetry/semantic-conventions"</span>);

<span class="hljs-keyword">const</span> opentelemetry = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@opentelemetry/sdk-node"</span>);
<span class="hljs-keyword">const</span> {
    getNodeAutoInstrumentations,
} = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@opentelemetry/auto-instrumentations-node"</span>);

<span class="hljs-comment">// Create a new instance of JaegerExporter with the options</span>
<span class="hljs-keyword">const</span> exporter = <span class="hljs-keyword">new</span> JaegerExporter({
    <span class="hljs-attr">serviceName</span>: <span class="hljs-string">"YOUR-SERVICE-NAME"</span>,
    <span class="hljs-attr">host</span>: <span class="hljs-string">"localhost"</span>, <span class="hljs-comment">// optional, can be set by OTEL_EXPORTER_JAEGER_AGENT_HOST</span>
    <span class="hljs-attr">port</span>: <span class="hljs-number">16686</span> <span class="hljs-comment">// optional</span>
});

<span class="hljs-keyword">const</span> provider = <span class="hljs-keyword">new</span> BasicTracerProvider({
    <span class="hljs-attr">resource</span>: <span class="hljs-keyword">new</span> Resource({
        [SemanticResourceAttributes.SERVICE_NAME]:
            <span class="hljs-string">"YOUR-SERVICE-NAME"</span>,
    }),
});
<span class="hljs-comment">// Add the JaegerExporter to the span processor</span>
provider.addSpanProcessor(<span class="hljs-keyword">new</span> SimpleSpanProcessor(exporter));

provider.register();
<span class="hljs-keyword">const</span> sdk = <span class="hljs-keyword">new</span> opentelemetry.NodeSDK({
    <span class="hljs-attr">traceExporter</span>: exporter,
    <span class="hljs-attr">instrumentations</span>: [getNodeAutoInstrumentations()],
});

sdk
    .start()
    .then(<span class="hljs-function">() =&gt;</span> {
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Tracing initialized"</span>);
    })
    .catch(<span class="hljs-function">(<span class="hljs-params">error</span>) =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Error initializing tracing"</span>, error));

process.on(<span class="hljs-string">"SIGTERM"</span>, <span class="hljs-function">() =&gt;</span> {
    sdk
        .shutdown()
        .then(<span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Tracing terminated"</span>))
        .catch(<span class="hljs-function">(<span class="hljs-params">error</span>) =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Error terminating tracing"</span>, error))
        .finally(<span class="hljs-function">() =&gt;</span> process.exit(<span class="hljs-number">0</span>));
});
</code></pre>
<p>Here is a simple breakdown of the code:</p>
<ul>
<li><p>The code starts by importing the modules <code>BasicTracerProvider</code> and <code>SimpleSpanProcessor</code> for setting up tracing from the OpenTelemetry library</p>
</li>
<li><p>It then imports the JaegerExporter module for exporting trace data to Jaeger.</p>
</li>
<li><p>The code creates a new instance of the JaegerExporter, specifying the service name, host, and port.</p>
</li>
<li><p>It then creates a <code>BasicTracerProvider</code> and adds the JaegerExporter to the span processor using <code>SimpleSpanProcessor</code>.</p>
</li>
<li><p>The provider is registered, setting it as the default provider for the application.</p>
</li>
<li><p>An OpenTelemetry SDK instance is created, configuring it with the JaegerExporter and enabling auto-instrumentations for Node.js.</p>
</li>
<li><p>The OpenTelemetry SDK is started, initializing tracing.</p>
</li>
<li><p>A handler for the SIGTERM signal is set up to shut down tracing when the application is terminated.</p>
</li>
<li><p>The code configures the trace provider with the Jaeger trace exporter, so the spans recorded by the auto-instrumentations are exported to Jaeger.</p>
</li>
</ul>
<h2 id="heading-how-to-set-up-opentelemetry-to-export-the-traces">How to Set Up OpenTelemetry to Export the Traces</h2>
<p>Next, you'll need to write the configurations to collect and export data in the OpenTelemetry Collector.</p>
<p>Create a file <code>config.yaml</code>:</p>
<pre><code class="lang-yaml">
<span class="hljs-attr">receivers:</span>
  <span class="hljs-attr">otlp:</span>
    <span class="hljs-attr">protocols:</span>
      <span class="hljs-attr">grpc:</span>
      <span class="hljs-attr">http:</span>

<span class="hljs-attr">exporters:</span>
  <span class="hljs-attr">datadog:</span>
    <span class="hljs-attr">api:</span> <span class="hljs-comment"># Replace with your Datadog API key</span>
      <span class="hljs-attr">key:</span> <span class="hljs-string">"&lt;YOUR_DATADOG_API_KEY&gt;"</span>
    <span class="hljs-comment"># Optional:</span>
    <span class="hljs-comment">#   - endpoint: https://app.datadoghq.eu  # For EU region</span>

<span class="hljs-attr">processors:</span>
  <span class="hljs-attr">batch:</span>

<span class="hljs-attr">extensions:</span>
  <span class="hljs-attr">pprof:</span>
    <span class="hljs-attr">endpoint:</span> <span class="hljs-string">:1777</span>
  <span class="hljs-attr">zpages:</span>
    <span class="hljs-attr">endpoint:</span> <span class="hljs-string">:55679</span>
  <span class="hljs-attr">health_check:</span>

<span class="hljs-attr">service:</span>
  <span class="hljs-attr">extensions:</span> [<span class="hljs-string">health_check</span>, <span class="hljs-string">pprof</span>, <span class="hljs-string">zpages</span>]
  <span class="hljs-attr">pipelines:</span>
    <span class="hljs-attr">traces:</span>
      <span class="hljs-attr">receivers:</span> [<span class="hljs-string">otlp</span>]
      <span class="hljs-attr">processors:</span> [<span class="hljs-string">batch</span>]
      <span class="hljs-attr">exporters:</span> [<span class="hljs-string">datadog</span>]
</code></pre>
<p>The configuration sets up OpenTelemetry with the OTLP (OpenTelemetry Protocol) receiver and the Datadog exporter. Here’s a breakdown of the code:</p>
<ul>
<li><p><code>receivers</code>: Specifies the components that receive the telemetry data. In this case, it includes the OTLP receiver, which supports both gRPC and HTTP protocols.</p>
</li>
<li><p><code>exporters</code>: Defines the components responsible for exporting telemetry data. Here, it configures the Datadog exporter, providing the Datadog API key. Additionally, an optional <code>endpoint</code> is provided for using Datadog's EU region.</p>
</li>
<li><p><code>processors</code>: Specifies the data processing components. In this case, the <code>batch</code> processor is used to batch and send data in larger chunks for efficiency.</p>
</li>
<li><p><code>extensions</code>: Defines additional components that extend the functionality. Here, it includes extensions for pprof (profiling data), zpages (debugging pages), and a health check extension.</p>
</li>
<li><p><code>service</code>: Configures the overall service behavior, including the extensions and pipelines. The <code>extensions</code> section lists the extensions to be used, and the <code>pipelines</code> section configures the telemetry data pipeline. Here, the traces pipeline includes the OTLP receiver, the batch processor, and the Datadog exporter.</p>
</li>
</ul>
<p>This configuration sets up the collector with the Datadog exporter so that traces are sent to Datadog's distributed tracing service. However, you can use other distributed tracing backends instead, such as New Relic, Logz.io, and Zipkin.</p>
<h2 id="heading-how-to-start-the-application">How to Start the Application</h2>
<p>After correctly setting up auto-instrumentation, start the application again to test and verify the tracing configuration.</p>
<p>Begin by starting the OpenTelemetry Collector:</p>
<pre><code class="lang-bash">./otelcontribcol_darwin_amd64 --config ./config.yaml
</code></pre>
<p>The collector will start on port 4317.</p>
<p>Next, go to the directory of the first service:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">cd</span> &lt;ServiceA&gt;
</code></pre>
<p>Then start the first service with the <code>--require './tracer.js'</code> flag to enable the application instrumentation:</p>
<pre><code class="lang-bash">$ node --require <span class="hljs-string">'./tracer.js'</span> index.js
</code></pre>
<p>Repeat this to start the second service.</p>
<p>Using a browser like Chrome, go to the endpoints of your two applications' services, add some data, and send some requests to test the tracing configuration.</p>
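<p>If you prefer to script this step, here is a minimal sketch in Node.js (the ports and paths are assumptions, so substitute your services' real endpoints):</p>
<pre><code class="lang-javascript">// send-test-requests.js -- a minimal sketch. The ports and paths below are
// assumptions; substitute the real endpoints of your two services.
const endpoints = [
  "http://localhost:3000/items",  // Service A (assumed)
  "http://localhost:4000/orders", // Service B (assumed)
];

// Repeat each endpoint a few times so every service emits several spans.
function buildRequests(urls, perEndpoint) {
  const plan = [];
  for (const url of urls) {
    for (let i = perEndpoint; i > 0; i--) {
      plan.push(url);
    }
  }
  return plan;
}

async function main() {
  for (const url of buildRequests(endpoints, 3)) {
    try {
      const res = await fetch(url); // global fetch, available in Node 18+
      console.log(url + " -> " + res.status);
    } catch (err) {
      console.log(url + " -> unreachable (" + err.message + ")");
    }
  }
}

main();
</code></pre>
<p>Run it with <code>node send-test-requests.js</code> while both services are up.</p>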
<p>Once the requests are made, these traces are picked up by the collector, which then dispatches them to the distributed tracing backend specified by the exporter configuration in the collector's configuration file.</p>
<p>It's worth noting that our tracer not only sends traces to the designated backend, but also exports them to the console at the same time.</p>
<p>This dual output gives real-time visibility into the traces being generated and sent, which helps with monitoring and debugging.</p>
<p>Now, let’s use Jaeger UI to monitor the traces:</p>
<p>Start Jaeger with the following command:</p>
<pre><code class="lang-bash">docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.32
</code></pre>
<p>Using a browser, open the Jaeger UI at http://localhost:16686/.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/01/image4-1024x508.png" alt="image4-1024x508" width="600" height="400" loading="lazy"></p>
<p>There you have it! The trace begins at the entry point of one service and follows the request through a sequence of operations.</p>
<p>As the first service does its work, it calls on the other service to fulfill the original request you initiated earlier, and that hop is captured in the same trace.</p>
<p>The trace provides a visual narrative of what happens between these services, offering insights into each step of the process.</p>
<h2 id="heading-how-can-you-use-observability-data">How Can You Use Observability Data?</h2>
<ol>
<li><p><strong>Monitoring Metrics:</strong> Keep an eye on key metrics such as response times, error rates, and resource usage. Sudden spikes or anomalies can indicate issues that require attention.</p>
</li>
<li><p><strong>Logging:</strong> Log data provides detailed information about events and actions within a system. Analyzing logs helps in understanding the sequence of activities and tracing the steps leading to an issue.</p>
</li>
<li><p><strong>Tracing:</strong> Tracing involves tracking the flow of requests or transactions across different components of a system. This helps in understanding the journey of a request and identifying any bottlenecks or delays.</p>
</li>
<li><p><strong>Alerting:</strong> Set up alerts based on specific conditions or thresholds. When certain metrics exceed predefined limits, alerts can notify you in real-time, allowing for immediate action.</p>
</li>
<li><p><strong>Visualization:</strong> Use graphical representations and dashboards to visualize complex data. This makes it easier to identify patterns, trends, and correlations in the observability data.</p>
</li>
</ol>
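<p>As a tiny illustration of the alerting idea, here is a sketch that compares observed metrics against predefined limits (the metric names and thresholds are made up for this example):</p>
<pre><code class="lang-javascript">// alert-check.js -- a sketch of threshold-based alerting; the metric
// names and limits below are hypothetical.
const limits = { responseTimeMs: 500, errorRatePct: 5 };

// Return a description of every metric that exceeds its predefined limit.
function checkThresholds(metrics, thresholds) {
  const breaches = [];
  for (const name of Object.keys(thresholds)) {
    if (metrics[name] > thresholds[name]) {
      breaches.push(name + "=" + metrics[name] + " exceeds limit " + thresholds[name]);
    }
  }
  return breaches;
}

const observed = { responseTimeMs: 820, errorRatePct: 2 }; // sample readings
for (const breach of checkThresholds(observed, limits)) {
  console.log("ALERT: " + breach);
}
</code></pre>
<p>In practice, a monitoring system such as Datadog or Prometheus Alertmanager evaluates rules like this for you and handles notification delivery.</p>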
<p>Observability, when implemented effectively, empowers teams to proactively manage and improve the performance, reliability, and user experience of their systems. It's a crucial aspect of modern software development and operations.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this guide, you learned how to auto-instrument Node.js applications with very little code by:</p>
<ul>
<li><p>Installing and configuring the OpenTelemetry Node.js SDK and the auto-instrumentation package</p>
</li>
<li><p>Enabling automatic tracing and metrics collection for your Node.js applications and their dependencies</p>
</li>
<li><p>Exporting your telemetry data to a backend, Jaeger, for visualization</p>
</li>
</ul>
<p>Using OpenTelemetry’s auto-instrumentation can help you gain valuable insights into the performance and behavior of your Node.js applications without having to manually instrument each library or framework.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Install NVIDIA CUDA Toolkit on Ubuntu ]]>
                </title>
                <description>
                    <![CDATA[ The NVIDIA Compute Unified Device Architecture (CUDA) Toolkit is a software platform that allows developers to tap into the computing power of NVIDIA processing and GPU-accelerated applications. CUDA is also a programming model and an API that enable... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-install-nvidia-cuda-toolkit-on-ubuntu/</link>
                <guid isPermaLink="false">66d45d58b3016bf139028cf1</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Ubuntu ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Mon, 29 Jan 2024 21:25:58 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/01/Feature-image.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>The NVIDIA Compute Unified Device Architecture (CUDA) Toolkit is a software platform that allows developers to tap into the computing power of NVIDIA processing and GPU-accelerated applications.</p>
<p>CUDA is also a programming model and an API that enables programmers to write code that can run on both the CPU and GPU while also handling data transfer between them.</p>
<p>By using the CUDA Toolkit, you can improve performance, scalability, and efficiency across a range of applications. These include computing, deep learning, computer vision, gaming, and more.</p>
<p>The toolkit supports programming languages like C, C++, Fortran, Python, and Java. It seamlessly integrates with frameworks and libraries such as TensorFlow, PyTorch, OpenCV, and cuDNN.</p>
<p>Also, the use of the CUDA Toolkit extends across different domains, such as healthcare, finance, robotics, the automotive industry, and entertainment. If you're looking to speed up image processing or natural language processing, enhance cryptography, or advance ray tracing techniques, the CUDA Toolkit empowers you to solve problems faster and more efficiently.</p>
<p>In terms of compatibility, the CUDA Toolkit offers support for Linux distributions, including Ubuntu, Debian, Fedora, CentOS, and OpenSUSE.</p>
<p>In this article, I will guide you through the process of installing the CUDA Toolkit on Ubuntu 22.04, which happens to be the LTS (Long Term Support) version of Ubuntu.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To install the CUDA Toolkit on Ubuntu 22.04, you need the following:</p>
<ul>
<li><p><a target="_blank" href="https://developer.nvidia.com/cuda-gpus">A supported NVIDIA GPU with a minimum compute capability of 3.0</a></p>
</li>
<li><p><a target="_blank" href="https://docs.nvidia.com/deploy/cuda-compatibility/">NVIDIA driver compatible with the CUDA Toolkit version</a></p>
</li>
</ul>
<p>In this guide I will be using a <a target="_blank" href="https://docs.paperspace.com/core/compute/machine-types">Paperspace GPU instance</a> with Ubuntu 22.04 LTS operating system.</p>
<p>Please note that you can use any other Cloud Service provider, like Google Cloud and Vultr, or even your own computer, as long as it meets the requirements listed above.</p>
<p>To get started, you'll need to create a new user, like <code>seconduser</code>, and then switch to it.</p>
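<p>For example (a sketch — the username <code>seconduser</code> matches the one used later in this guide, and granting sudo privileges is optional):</p>
<pre><code class="lang-bash"> $ sudo adduser seconduser          # create the new user
 $ sudo usermod -aG sudo seconduser # optionally grant sudo privileges
 $ su - seconduser                  # switch to the new user
</code></pre>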
<h2 id="heading-install-cuda-toolkit">Install CUDA Toolkit</h2>
<p>You can install CUDA using the release file or, alternatively, by using Conda. In this guide, we will install CUDA using the release file from the official Toolkit Archive.</p>
<h3 id="heading-step-1-download-the-cuda-release-file">Step 1: Download the CUDA release file.</h3>
<pre><code class="lang-bash"> $ wget https://developer.download.nvidia.com/compute/cuda/12.0.1/local_installers/cuda_12.0.1_525.85.12_linux.run
</code></pre>
<h3 id="heading-step-2-execute-the-release-file">Step 2: Execute the release file.</h3>
<pre><code class="lang-bash"> $ sudo sh cuda_12.0.1_525.85.12_linux.run
</code></pre>
<p>You will be prompted to accept the End User License Agreement, then press <code>Enter</code> to configure your installation.</p>
<p>Once the installation is complete, you should see an output similar to this:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/01/carbon--6-.png" alt="carbon--6-" width="600" height="400" loading="lazy"></p>
<h2 id="heading-configuration-and-verification">Configuration and Verification</h2>
<h3 id="heading-step-1-configure-the-server">Step 1: Configure the server</h3>
<p>Configure the server to work with the CUDA toolkit. Move the CUDA path to the system’s <code>PATH</code>, then add the CUDA Toolkit library path to the <code>LD_LIBRARY_PATH</code> so that the CUDA toolkit link loader will be updated with the location of shared libraries.</p>
<pre><code class="lang-bash">  $ <span class="hljs-built_in">echo</span> <span class="hljs-string">"export PATH=/usr/local/cuda-12.0/bin<span class="hljs-variable">${PATH:+:${PATH}</span>}"</span> &gt;&gt; /home/seconduser/.bashrc
</code></pre>
<pre><code class="lang-bash">  $ <span class="hljs-built_in">echo</span> <span class="hljs-string">"export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64<span class="hljs-variable">${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}</span>}"</span> &gt;&gt; /home/seconduser/.bashrc
</code></pre>
<h3 id="heading-step-2-activate-environment">Step 2: Activate Environment</h3>
<p>After configuring the server to work with the CUDA toolkit, activate the environment variable changes so that the system can find and use CUDA.</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">source</span> /home/seconduser/.bashrc
</code></pre>
<h3 id="heading-step-3-verify-installation">Step 3: Verify Installation</h3>
<pre><code class="lang-bash"> $ nvidia-smi
</code></pre>
<p>Output:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/01/carbon--7-.png" alt="carbon--7-" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-4-check-package-installation">Step 4: Check Package Installation.</h3>
<p>Verify that the package from the CUDA Toolkit is successfully installed on your server.</p>
<pre><code class="lang-bash"> $ nvcc --version
</code></pre>
<p>Output:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/01/carbon--8-.png" alt="carbon--8-" width="600" height="400" loading="lazy"></p>
<h2 id="heading-testing">Testing</h2>
<p>To test your newly installed CUDA toolkit, you will use NVIDIA's ready-made sample programs, which let you comprehensively verify the compatibility and functionality of your CUDA-enabled environment.</p>
<h3 id="heading-step-1-clone-the-test-scripts-repository">Step 1: Clone the test scripts repository</h3>
<pre><code class="lang-bash"> $ git <span class="hljs-built_in">clone</span> https://github.com/NVIDIA/cuda-samples.git
</code></pre>
<h3 id="heading-step-2-go-to-the-directory-containing-the-devicequery-sample-script">Step 2: Go to the directory containing the deviceQuery sample script.</h3>
<pre><code class="lang-bash"> $ <span class="hljs-built_in">cd</span> cuda-samples/Samples/1_Utilities/deviceQuery
</code></pre>
<h3 id="heading-step-3-compile-the-script">Step 3: Compile the script.</h3>
<pre><code class="lang-bash"> $ make
</code></pre>
<h3 id="heading-step-4-run-the-script">Step 4: Run the script.</h3>
<pre><code class="lang-bash"> $ ./deviceQuery
</code></pre>
<p>Your output should look similar to the one below if your CUDA program ran the script successfully:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/01/carbon--9-.png" alt="carbon--9-" width="600" height="400" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, you learned how to install the CUDA Toolkit on Ubuntu 22.04.</p>
<p>Some of the best practices for using CUDA on Ubuntu are:</p>
<ul>
<li><p>Keep your system and NVIDIA drivers up to date to ensure the compatibility and stability of the CUDA Toolkit.</p>
</li>
<li><p>Use NVIDIA's CUDA apt repository to install and update the CUDA Toolkit easily and quickly.</p>
</li>
<li><p>Use the nvcc compiler options and flags to optimize and debug your CUDA code.</p>
</li>
<li><p>Use the CUDA libraries and tools to enhance and simplify your CUDA development process.</p>
</li>
<li><p>Follow the CUDA coding standards and best practices to write efficient and maintainable CUDA code.</p>
</li>
</ul>
<p>Here are some resources to learn more about CUDA:</p>
<ul>
<li><p><a target="_blank" href="https://docs.nvidia.com/cuda/">CUDA Official Docs</a></p>
</li>
<li><p><a target="_blank" href="https://riptutorial.com/cuda/example/13338/compiling-and-running-the-sample-programs">CUDA Refresher Tutorial</a></p>
</li>
<li><p><a target="_blank" href="https://cuda-tutorial.readthedocs.io/en/latest/tutorials/tutorial01/">Read The Docs: Say Hello to CUDA</a></p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Secure Your Web Server with Continuous Integration Using NGINX and CircleCI ]]>
                </title>
                <description>
                    <![CDATA[ Web servers are responsible for delivering web pages and various resources to clients through the internet. They can exist either as software or hardware components. But unfortunately, they often become targets for hackers and malicious individuals s... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/secure-web-server-with-continuous-integration-using-nginx-and-circleci/</link>
                <guid isPermaLink="false">66d45d5e230dff016690579b</guid>
                
                    <category>
                        <![CDATA[ Continuous Integration ]]>
                    </category>
                
                    <category>
                        <![CDATA[ nginx ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Security ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Fri, 19 Jan 2024 16:46:59 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/01/feature-image.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Web servers are responsible for delivering web pages and various resources to clients through the internet. They can exist either as software or hardware components.</p>
<p>But unfortunately, they often become targets for hackers and malicious individuals seeking to exploit any vulnerabilities to compromise data and disrupt functionality. As a result, you'll need to prioritize the security of your web server by updating it and implementing safeguards against threats.</p>
<p>To enhance the security of your web server, one effective approach is to use <a target="_blank" href="https://www.freecodecamp.org/news/what-is-ci-cd/">Continuous Integration</a> (CI). CI is a DevOps technique that allows the automated merging of code modifications from software engineers into a single repository. This practice enhances code quality, minimizes bugs, and speeds up code delivery.</p>
<p>By using CI, you can automate the testing, building, and deployment processes for your web servers’ code and configuration. You can also ensure that your web server consistently maintains a stable state.</p>
<p>In this tutorial, I'll guide you through the process of strengthening the security of your web server by using two popular and powerful tools: <a target="_blank" href="https://www.freecodecamp.org/news/nginx/">NGINX</a> and CircleCI.</p>
<p>NGINX, which is an open source web server, provides a range of features and modules that can greatly enhance the security of your web server. These include SSL/TLS encryption, security headers, and support for HTTP/2.</p>
<p>On the other hand, CircleCI offers both cloud-based and self-hosted options for Continuous Integration (CI) and Continuous Delivery (CD), enabling seamless deployment processes.</p>
<p>By following this guide, you will learn how to:</p>
<ul>
<li><p>Configure NGINX to use SSL/TLS encryption and security headers</p>
</li>
<li><p>Create a GitHub repository and push your NGINX configuration files to it</p>
</li>
<li><p>Create a CircleCI project and link it to your GitHub repository</p>
</li>
<li><p>Create a CircleCI configuration file and define your CI pipeline</p>
</li>
<li><p>Test and deploy your web server with CircleCI</p>
</li>
</ul>
<h3 id="heading-here-is-what-well-cover">Here is what we'll cover:</h3>
<ul>
<li><a class="post-section-overview" href="#">Prerequisites</a></li>
<li><a class="post-section-overview" href="#Configure-NGINX-to-Use-SSL/TLS-Encryption">Step 1: Configure NGINX to Use SSL/TLS Encryption</a></li>
<li><a class="post-section-overview" href="#">Step 2: Configure NGINX to Include Security Headers</a></li>
<li><a class="post-section-overview" href="#">Step 3: Create a GitHub Repository and Push Your NGINX Configuration</a></li>
<li><a class="post-section-overview" href="#">Step 4: Create a CircleCI Project and Link it to Your GitHub Repository</a></li>
<li><a class="post-section-overview" href="#">Step 5: Create a CircleCI Configuration File and Define Your CI Pipeline</a></li>
<li><a class="post-section-overview" href="#">Step 6: Test and Deploy Your Web Server with CircleCI</a></li>
</ul>
<p>Let's get started!</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you start this guide, you need to ensure you have the following:</p>
<ul>
<li><p>A web server running NGINX. If you don't have one, you can follow this <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04">guide</a> to install NGINX on Ubuntu 20.04. You can also use any other operating system or cloud provider that supports NGINX.</p>
</li>
<li><p>A GitHub account. If you don't have one, you can sign up for free <a target="_blank" href="https://github.com/join">here</a>.</p>
</li>
<li><p>A CircleCI account. If you don't have one, you can sign up for free <a target="_blank" href="https://circleci.com/signup/">here</a>. You will also need to link your GitHub account to your CircleCI account.</p>
</li>
<li><p>Some basic knowledge of <a target="_blank" href="https://www.freecodecamp.org/news/learn-web-development-with-this-free-20-hour-course/">web development</a> and <a target="_blank" href="https://www.freecodecamp.org/news/helpful-linux-commands-you-should-know/">Linux commands</a>. You should be familiar with the concepts of web servers, SSL/TLS encryption, security headers, and CI. You should also be comfortable with using the command line and editing configuration files.</p>
</li>
</ul>
<p>Once you have these, you are ready to proceed with the next steps.</p>
<h2 id="heading-step-1-configure-nginx-to-use-ssltls-encryption">Step 1: Configure NGINX to Use SSL/TLS Encryption</h2>
<p>SSL/TLS encryption secures the transmission of data between your web server and clients, safeguarding information against interception or manipulation. It also plays a role in verifying the identity and reliability of your web server.</p>
<p>You need an SSL/TLS certificate to use SSL/TLS for your web server. An SSL/TLS certificate contains information about your web server, such as its domain name, owner, and public key. The validity of the certificate is verified through the unique digital signature from a Certificate Authority (CA).</p>
<p>You can either purchase an SSL/TLS certificate from a commercial CA, such as DigiCert, Symantec, or GlobalSign, or you can just get one for free from a non-profit CA, such as Let's Encrypt. You can also create your own self-signed certificate, but this is not recommended for production use, as it will not be trusted by most browsers and clients.</p>
<p>In this guide, you will use Let's Encrypt to get a free SSL/TLS certificate for your web server. To use Let's Encrypt, you need to install a software client on your web server that can communicate with the CA and perform the necessary tasks.</p>
<p>One of the most common and recommended clients for Let's Encrypt is Certbot. Certbot is a command-line tool that can automatically request, install, and renew certificates for your web server. It can also configure your web server to use the certificates and enable HTTPS.</p>
<p>To install Certbot on your web server, run the following commands:</p>
<pre><code class="lang-bash">sudo apt update
sudo apt install certbot python3-certbot-nginx
</code></pre>
<p>After installing Certbot, use it to request and install a certificate for your web server. You need to provide your domain name and your email address for the certificate.</p>
<p>To request and install a certificate for your web server, run the following command:</p>
<pre><code class="lang-bash">sudo certbot --nginx -d yourdomain.com
</code></pre>
<p>Replace <code>yourdomain.com</code> with your actual domain name.</p>
<p>Follow the prompts and answer the questions. Certbot will automatically verify your domain ownership, obtain a certificate, and install it on your web server. It will also ask you whether you want to redirect all HTTP traffic to HTTPS. Choose option 2 to enable the redirection.</p>
<p>After the process is completed, you will see a message like this:</p>
<p><img src="https://i.ibb.co/wBVfh1R/carbon-1.png" alt="certbot-success message" width="1623" height="628" loading="lazy"></p>
<p>You have now successfully configured NGINX to use SSL/TLS encryption with a certificate from Let's Encrypt. You can now access your web server using HTTPS and see the lock icon in your browser.</p>
<p><img src="https://i.ibb.co/b3pqJBB/secureverification2.png" alt="Lock Icon" width="755" height="387" loading="lazy"></p>
<p>You can also test your web server security using online tools, such as SSL Labs. You should see a grade of A or higher.</p>
<p>Note: Let's Encrypt certificates are valid for 90 days. Certbot can automatically renew them for you before they expire.</p>
<p>To enable automatic renewal, you need to create a cron job or a systemd timer that runs the following command at least once per day:</p>
<pre><code class="lang-bash">sudo certbot renew
</code></pre>
<p>You can also test the renewal process manually by running the following command:</p>
<pre><code class="lang-bash">sudo certbot renew --dry-run
</code></pre>
<p>This will perform a trial run without making any changes.</p>
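<p>For example, a crontab entry like this one (the 3 a.m. schedule is arbitrary) runs the renewal check once per day:</p>
<pre><code class="lang-bash">0 3 * * * /usr/bin/certbot renew --quiet
</code></pre>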
<p>If you encounter any errors or issues, you can check the <a target="_blank" href="https://eff-certbot.readthedocs.io/en/latest/">Certbot documentation</a> or the <a target="_blank" href="https://community.letsencrypt.org/">Let's Encrypt community forum</a> for help.</p>
<h2 id="heading-step-2-configure-nginx-to-include-security-headers">Step 2: Configure NGINX to Include Security Headers</h2>
<p>Security headers help instruct the browser to apply certain security policies or restrictions when handling your web content. They can prevent or mitigate common web attacks, such as cross-site scripting (XSS), clickjacking, content injection, and more.</p>
<p>In this step, you will be adding security headers to your NGINX configuration. These headers include X-Frame-Options, X-Content-Type-Options, X-XSS-Protection, and Content-Security-Policy.</p>
<h3 id="heading-x-frame-options">X-Frame-Options</h3>
<p>The X-Frame-Options header tells the browser whether or not to allow your web page to be displayed in a frame, iframe, embed, or object element. This can help you prevent clickjacking attacks, where an attacker overlays a hidden frame on top of your web page and tricks the user into clicking on it.</p>
<p>There are three possible values for this header:</p>
<ul>
<li><p>DENY: This value prevents your web page from being displayed in any frame.</p>
</li>
<li><p>SAMEORIGIN: This value allows your web page to be displayed in a frame only if the frame is from the same origin as your web page.</p>
</li>
<li><p>ALLOW-FROM URI: This value allows your web page to be displayed in a frame only if the frame is from the specified URI.</p>
</li>
</ul>
<p>To enable the X-Frame-Options header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> X-Frame-Options <span class="hljs-string">"SAMEORIGIN"</span>;
</code></pre>
<p>This will allow your web page to be displayed in a frame only if the frame is from the same origin as your web page. You can change the value to DENY or ALLOW-FROM URI according to your needs.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
<h3 id="heading-x-content-type-options">X-Content-Type-Options</h3>
<p>The X-Content-Type-Options header instructs the browser not to perform MIME-type sniffing, a feature that attempts to determine the content type of a resource by analyzing its content or file extension.</p>
<p>By using this header, you can safeguard against content injection attacks, where an attacker uploads a file with a misleading content type to exploit the browser’s interpretation of it.</p>
<p>There is only one possible value for this header:</p>
<ul>
<li>nosniff: This value prevents the browser from performing MIME type sniffing.</li>
</ul>
<p>To enable the X-Content-Type-Options header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> X-Content-Type-Options <span class="hljs-string">"nosniff"</span>;
</code></pre>
<p>This will prevent the browser from performing MIME type sniffing on your web resources.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
<h3 id="heading-x-xss-protection">X-XSS-Protection</h3>
<p>The X-XSS-Protection header tells the browser to enable or disable its built-in XSS filter, which can detect and block some types of XSS attacks. This can help you prevent XSS attacks, where an attacker injects malicious code into your web page that executes in the browser.</p>
<p>There are three possible values for this header:</p>
<ul>
<li><p>0: This value disables the XSS filter.</p>
</li>
<li><p>1: This value enables the XSS filter and sanitizes the page if an XSS attack is detected.</p>
</li>
<li><p>1; mode=block: This value enables the XSS filter and blocks the page if an XSS attack is detected.</p>
</li>
</ul>
<p>To enable the X-XSS-Protection header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> X-XSS-Protection <span class="hljs-string">"1; mode=block"</span>;
</code></pre>
<p>This will enable the XSS filter and block the page if an XSS attack is detected. You can change the value to 0 or 1 according to your needs.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
<h3 id="heading-content-security-policy">Content-Security-Policy</h3>
<p>The Content-Security-Policy header tells the browser to enforce a set of rules that restrict what sources and types of content can be loaded and executed on your web page. This can help you prevent XSS, content injection, and other types of attacks that rely on loading malicious or untrusted content.</p>
<p>The value of this header is a complex policy that consists of multiple directives and values. Each directive specifies a type of content and a list of sources that are allowed or disallowed for that content.</p>
<p>For example, the following policy allows only scripts and styles from the same origin, and images from the same origin or yourdomain.com:</p>
<pre><code class="lang-nginx">Content-Security-Policy: default-<span class="hljs-attribute">src</span> <span class="hljs-string">'none'</span>; script-<span class="hljs-attribute">src</span> <span class="hljs-string">'self'</span>; style-<span class="hljs-attribute">src</span> <span class="hljs-string">'self'</span>; img-<span class="hljs-attribute">src</span> <span class="hljs-string">'self'</span> yourdomain.com;
</code></pre>
<p>The Content-Security-Policy header is very powerful and flexible, but also very complicated and error-prone. You need to carefully design and test your policy to ensure that it does not break your web functionality or introduce new vulnerabilities. You can use tools like CSP Evaluator or CSP Scanner to check and improve your policy.</p>
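<p>If you are unsure whether a policy will break your pages, you can trial it first with the companion Content-Security-Policy-Report-Only header, which makes the browser report violations in its developer console without blocking anything. It uses the same add_header syntax; the policy below mirrors the example above:</p>
<pre><code class="lang-nginx">add_header Content-Security-Policy-Report-Only "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' yourdomain.com;";
</code></pre>
<p>Once the reports look clean, you can switch to the enforcing Content-Security-Policy header.</p>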
<p>To enable the Content-Security-Policy header in NGINX, add the following line to your server block in your NGINX configuration file (/etc/nginx/sites-enabled/example.conf):</p>
<pre><code class="lang-nginx"><span class="hljs-attribute">add_header</span> Content-Security-Policy <span class="hljs-string">"default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' yourdomain.com;"</span>;
</code></pre>
<p>This will enforce the policy that you described above. You can change the policy according to your needs.</p>
<p>Save the file and restart NGINX to apply the changes.</p>
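<p>Taken together, the three headers from this section can sit side by side in your server block. A minimal sketch, with yourdomain.com standing in for your own domain and configuration:</p>
<pre><code class="lang-nginx">server {
    listen 443 ssl;
    server_name yourdomain.com;

    add_header X-Content-Type-Options "nosniff";
    add_header X-XSS-Protection "1; mode=block";
    add_header Content-Security-Policy "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' yourdomain.com;";

    # ...the rest of your existing configuration...
}
</code></pre>
<p>Keep in mind that NGINX does not merge add_header directives across levels: a location block that declares its own add_header lines stops inheriting these, so repeat them there if needed. Running sudo nginx -t before restarting catches any syntax mistakes.</p>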
<h2 id="heading-step-3-create-a-github-repository-and-push-your-nginx-configuration">Step 3: Create a GitHub Repository and Push Your NGINX Configuration</h2>
<p>To create a GitHub repository and push your NGINX configuration files to it, follow these steps:</p>
<h3 id="heading-create-a-github-repository">Create a GitHub repository</h3>
<p>First, log in to your GitHub account and go to the GitHub homepage.</p>
<p>Click on the plus icon in the top right corner of the page, and select "New repository" from the dropdown menu.</p>
<p><img src="https://i.ibb.co/vkWX0BJ/dropdownmenu-edited.png" alt="Dropdown Menu" width="1486" height="383" loading="lazy"></p>
<p>On the next page, enter a name for your repository in the "Repository name" field. This should be a short and descriptive name that accurately reflects the contents of the repository. For example, you can name it "nginx-config".</p>
<p>In the "Description" field, you can enter a longer description of the repository if you want. This is optional, but it can be helpful to provide more information about the purpose of the repository.</p>
<p>For example, you can write "A repository for storing and managing my NGINX configuration files".</p>
<p>You can set the visibility to whatever you prefer. If you want others to be able to see your work, set it to "Public". Otherwise, set it to "Private".</p>
<p>Leave the "Initialize this repository with a README" option unchecked, as you want to create an empty repository.</p>
<p><img src="https://i.ibb.co/SQVJbqh/settingupnewrepo.png" alt="Setting up New Repository" width="1882" height="786" loading="lazy"></p>
<p>Click on the "Create repository" button to create the repository.</p>
<p>Your new empty repository will be created and you will be taken to the repository page.</p>
<h3 id="heading-push-your-nginx-configuration-files-to-the-github-repository">Push your NGINX configuration files to the GitHub repository</h3>
<p>On your web server, navigate to the directory where your NGINX configuration files are located. By default, this is /etc/nginx on most Linux distributions.</p>
<p>Initialize a new Git repository in this directory by running the following command:</p>
<pre><code class="lang-bash">git init
</code></pre>
<p>This will create a new .git directory in the current directory, which will be used to store all the version control information for your project.</p>
<p>Add all the configuration files that you want to include in the repository by running the following command:</p>
<pre><code class="lang-bash">git add .
</code></pre>
<p>This will add all the files in the current directory and its subdirectories to the repository. You can also specify individual files to be added by replacing the (.) with the file names, separated by spaces.</p>
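<p>One caution before running git add: the /etc/nginx directory often contains TLS private keys or other secrets that should never reach a repository, especially a public one. A minimal sketch of an exclusion list (the patterns are illustrative, so adjust them to the files actually present on your server):</p>
<pre><code class="lang-bash"># Keep private key material out of version control
# (*.key and *.pem match common certificate/key file names)
printf '*.key\n*.pem\nssl/\n' &gt; .gitignore
</code></pre>
<p>Once the .gitignore file is in place, git add . will skip any files matching these patterns.</p>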
<p>Then commit the files to the repository by running the following command:</p>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"Initial commit"</span>
</code></pre>
<p>This will create the first commit in the repository, which will include all the files that were added in the previous step. The -m flag is used to specify a commit message, which should briefly describe the changes that were made in this commit.</p>
<p>Go back to your GitHub repository page and copy the URL of your repository. You can find it under the "Code" section. It should look something like this:</p>
<p><img src="https://i.ibb.co/GThsRSV/code-Button.png" alt="github-url" width="675" height="532" loading="lazy"></p>
<p>On your web server, add the URL of your GitHub repository as a remote for your Git repository by running the following command:</p>
<pre><code class="lang-bash">git remote add origin https://github.com/username/nginx-config.git
</code></pre>
<p>Replace username with your GitHub username and nginx-config with your repository name. The origin is the name of the remote, which you can change to anything you want.</p>
<p>Push your local Git repository to the GitHub repository. The deploy pipeline you will define later only runs on the main branch, so if your local default branch is still named master, rename it first, then push:</p>
<pre><code class="lang-bash">git branch -M main
git push -u origin main
</code></pre>
<p>This will push your main branch to the origin remote, which is the GitHub repository that you created. The -u flag sets the upstream for your branch, which means that you can use git push or git pull without specifying the remote or the branch in the future.</p>
<p>Enter your GitHub username and password when prompted. If you have enabled two-factor authentication, you will need to use a personal access token instead of your password. You can generate one from your GitHub settings page.</p>
<p>You have successfully created a GitHub repository and pushed your NGINX configuration files to it. You can now view and manage your configuration files on GitHub.</p>
<h2 id="heading-step-4-create-a-circleci-project-and-link-it-to-your-github-repository">Step 4: Create a CircleCI Project and Link it to Your GitHub Repository</h2>
<p>CircleCI is a platform that offers cloud-based and self-hosted options for continuous integration and delivery. It allows you to create and run pipelines that automate and streamline your web server deployment and update process.</p>
<p>To use CircleCI, you need to create a CircleCI project and link it to your GitHub repository. This will enable CircleCI to access your code and configuration files, and trigger builds whenever you push to GitHub.</p>
<p>To create a CircleCI project and link it to your GitHub repository, follow these steps:</p>
<h3 id="heading-sign-up-for-circleci-and-connect-your-github-account">Sign up for CircleCI and connect your GitHub account</h3>
<p>Start by logging in to your CircleCI account or sign up for free <a target="_blank" href="https://circleci.com/login/">here</a>.</p>
<p>On the CircleCI dashboard, click on the "Create Project" button in the top right corner of the page.</p>
<p><img src="https://i.ibb.co/wyPsNjk/project-button.png" alt="Create project Button" width="1865" height="648" loading="lazy"></p>
<p>On the next page, select "GitHub" as your version control provider and click on the "Connect with GitHub" button.</p>
<p><img src="https://i.ibb.co/MhywfHF/choosing-github.png" alt="Choosing Github" width="1907" height="786" loading="lazy"></p>
<p>On the next page, authorize CircleCI to access your GitHub account by clicking on the "Authorize circleci" button.</p>
<p>On the next page, enter a “Project Name” and follow the remaining instructions to successfully create a CircleCI project.</p>
<p><img src="https://i.ibb.co/MkYZBTF/Project-Name.png" alt="Enter Project Name" width="1432" height="290" loading="lazy"></p>
<h3 id="heading-create-a-circleci-project-and-link-it-to-your-github-repository">Create a CircleCI project and link it to your GitHub repository</h3>
<p>Next, create a new SSH key pair in your terminal:</p>
<pre><code class="lang-bash">ssh-keygen -t ed25519 -f ~/.ssh/project_key -C email@example.com
</code></pre>
<p>Then copy the generated private key to your clipboard (pbcopy is a macOS command; on Linux you can print the key with cat ~/.ssh/project_key and copy it manually):</p>
<pre><code class="lang-bash">pbcopy &lt; ~/.ssh/project_key
</code></pre>
<p>Next, enter it in the private key field:</p>
<p><img src="https://i.ibb.co/0Qs2TNJ/Private-SSH-Key.png" alt="Enter private SSH Key" width="1043" height="161" loading="lazy"></p>
<p>You will see a list of your GitHub repositories. Find the repository that you created in the previous step and click on the "Create Project" button next to it.</p>
<p>Now you will see a list of templates for different languages and frameworks. The sample pipeline in this guide starts from the "Python" template, so select it and click on the "Use this config" button.</p>
<p>On the next page, you will see the generated CircleCI configuration file (config.yml) that defines your pipeline. You can review and edit the file if you want, or leave it as it is for now. Click on the "Start building" button to create the project and link it to your GitHub repository.</p>
<p>Your new CircleCI project will be created and linked to your GitHub repository.</p>
<p>You have now successfully created a CircleCI project and linked it to your GitHub repository. You can now configure and run your pipeline on CircleCI.</p>
<h2 id="heading-step-5-create-a-circleci-configuration-file-and-define-your-ci-pipeline">Step 5: Create a CircleCI Configuration File and Define Your CI Pipeline</h2>
<p>A CircleCI configuration file is a YAML file that defines your CI pipeline. A CI pipeline is a sequence of jobs that run whenever you push changes to your GitHub repository. Each job consists of steps that perform specific tasks, such as running commands, installing dependencies, or deploying your web server.</p>
<p>In this step, you will create a CircleCI configuration file and define your CI pipeline. You will use the Python template that you selected in the previous step as a starting point and modify it to suit your needs. I will also explain what each step in the pipeline does and how it helps to automate and secure your web server deployment.</p>
<h3 id="heading-create-a-circleci-configuration-file">Create a CircleCI configuration file</h3>
<p>On your web server, navigate to the directory where your NGINX configuration files are located. By default, this is /etc/nginx on most Linux distributions.</p>
<p>Create a new directory called .circleci in this directory by running the following command:</p>
<pre><code class="lang-bash">mkdir .circleci
</code></pre>
<p>This is where you will store your CircleCI configuration file.</p>
<p>Then create a new file called config.yml in the .circleci directory by running the following command:</p>
<pre><code class="lang-bash">touch .circleci/config.yml
</code></pre>
<p>This is your CircleCI configuration file.</p>
<p>Open the config.yml file with your preferred text editor and paste the following code:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">2.1</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/python:3.9</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            pip install -r requirements.txt
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">tests</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            pytest
</span>  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">machine:</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">ubuntu-2004:202101-01</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">add_ssh_keys:</span>
          <span class="hljs-attr">fingerprints:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"YOUR_FINGERPRINT"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">Nginx</span> <span class="hljs-string">configuration</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            scp -r nginx root@YOUR_IP:/etc
            ssh root@YOUR_IP "systemctl restart nginx"
</span>
<span class="hljs-attr">workflows:</span>
  <span class="hljs-attr">version:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">build-and-deploy:</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">deploy:</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
          <span class="hljs-attr">filters:</span>
            <span class="hljs-attr">branches:</span>
              <span class="hljs-attr">only:</span> <span class="hljs-string">main</span>
</code></pre>
<p>This is your CircleCI configuration file that defines your CI pipeline. I will explain each part of the file in the next section.</p>
<p>Finally, save and close the file.</p>
<h3 id="heading-define-your-ci-pipeline">Define your CI pipeline</h3>
<p>Let's go through each part of the config.yml file and see what it does.</p>
<ul>
<li><p>version: 2.1 indicates the version of the CircleCI platform you are using. 2.1 is the most recent version.</p>
</li>
<li><p>The jobs level contains a collection of children, representing your jobs. You choose the names for these jobs, for example build, test, or deploy.</p>
</li>
<li><p>image: cimg/python:3.9 is the Docker image for the build job. This is a CircleCI-provided image that contains Python 3.9 and other common tools.</p>
</li>
<li><p>The run directive executes a shell command or script. You can specify a name and a command for each run directive.</p>
</li>
<li><p>The command attribute holds the shell commands that you want to execute. In the first run step, you are installing the dependencies for your web application using pip.</p>
</li>
<li><p>The second run directive runs the tests for your web application using pytest.</p>
</li>
<li><p>deploy is the second child in the jobs collection. This job is responsible for deploying your NGINX configuration to your web server.</p>
</li>
<li><p>The machine key specifies that you are using a machine executor for this job. A machine executor provides a full virtual machine with root access and various tools installed.</p>
</li>
<li><p>image: ubuntu-2004:202101-01 is the machine image, a CircleCI-provided image that contains Ubuntu 20.04 and other common tools.</p>
</li>
<li><p>The steps collection is a list of run directives and other commands that you want to execute in this job.</p>
</li>
<li><p>The add_ssh_keys command adds your SSH keys to the machine. You need to provide the fingerprints of the keys that you want to use. You can generate and add SSH keys from your CircleCI settings page.</p>
</li>
<li><p>The command attribute of the deploy job uses SCP to copy your NGINX configuration files from the machine to your web server, and SSH to restart the NGINX service on your web server. You need to replace YOUR_FINGERPRINT with the fingerprint of your SSH key, and YOUR_IP with the IP address of your web server.</p>
</li>
<li><p>Under workflows, version: 2 indicates the version of the workflow syntax you are using.</p>
</li>
<li><p>The requires attribute specifies the dependencies of a job. In this case, you are saying that the deploy job requires the build job to finish successfully before running.</p>
</li>
<li><p>The filters attribute specifies the conditions for running a job. In this case, you are saying that the deploy job should only run on the main branch of your GitHub repository.</p>
</li>
</ul>
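<p>One hedged refinement to the deploy job above: validating the copied configuration on the server before restarting prevents a broken push from taking NGINX down. A sketch of the run step with that guard added (same YOUR_IP placeholder):</p>
<pre><code class="lang-yaml">- run:
    name: Deploy Nginx configuration
    command: |
      scp -r nginx root@YOUR_IP:/etc
      ssh root@YOUR_IP "nginx -t &amp;&amp; systemctl restart nginx"
</code></pre>
<p>If nginx -t fails, the restart is skipped and the deploy job fails visibly in CircleCI instead of leaving the server down.</p>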
<h3 id="heading-push-your-circleci-configuration-file-to-your-github-repository">Push your CircleCI configuration file to your GitHub repository</h3>
<p>On your web server, add, commit, and push your CircleCI configuration file to your GitHub repository by running the following commands:</p>
<pre><code class="lang-bash">git add .circleci/config.yml
git commit -m <span class="hljs-string">"Add CircleCI config file"</span>
git push origin main
</code></pre>
<p>This will trigger a new build on CircleCI and run your CI pipeline.</p>
<p>Go to your CircleCI dashboard and click on the build-and-deploy workflow.</p>
<p>You can click on each job to see the details and logs of the steps.</p>
<p>Wait for the workflow to finish.</p>
<p>You have successfully created a CircleCI configuration file and defined your CI pipeline. You can now automate and secure your web server deployment with CircleCI. You can also modify and improve your configuration file according to your needs.</p>
<h2 id="heading-step-6-test-and-deploy-your-web-server-with-circleci">Step 6: Test and Deploy Your Web Server with CircleCI</h2>
<p>Now that you have created a CircleCI project and a configuration file for your CI pipeline, you can test and deploy your web server with CircleCI. You can trigger and monitor your CI pipeline from the CircleCI web app or the command line. You can also verify that your web server is deployed and secured correctly by using online tools or by accessing your web server using HTTPS.</p>
<h3 id="heading-trigger-and-monitor-your-ci-pipeline-from-the-circleci-web-app">Trigger and monitor your CI pipeline from the CircleCI web app</h3>
<p>To trigger and monitor your CI pipeline from the CircleCI web app, follow these steps:</p>
<ul>
<li><p>Go to the CircleCI dashboard.</p>
</li>
<li><p>On the dashboard, you will see a list of your projects and pipelines. Find the project that you created in the previous step and click on it.</p>
</li>
<li><p>On the project page, you will see a list of your branches and workflows. Find the branch that you pushed your CircleCI configuration file to in the previous step and click on the build-and-deploy workflow.</p>
</li>
<li><p>On the workflow page, you will see a graphical representation of your pipeline, showing the status and duration of each job and step. You can click on each job or step to see the details and logs of the commands that were executed.</p>
</li>
<li><p>Wait for the workflow to finish. If everything goes well, you will see a green check mark next to each job and step, indicating that they were successful.</p>
</li>
</ul>
<p>You have successfully triggered and monitored your CI pipeline from the CircleCI web app. You can also trigger and monitor your CI pipeline from the command line.</p>
<h3 id="heading-trigger-and-monitor-your-ci-pipeline-from-the-command-line">Trigger and monitor your CI pipeline from the command line</h3>
<p>To trigger and monitor your CI pipeline from the command line, follow these steps:</p>
<ul>
<li><p>On your web server, navigate to the directory where your NGINX configuration files are located. By default, this is /etc/nginx on most Linux distributions.</p>
</li>
<li><p>Make some changes to your configuration files, such as adding or removing security headers, and save them.</p>
</li>
<li><p>Add, commit, and push your changes to your GitHub repository by running the following commands:</p>
</li>
</ul>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">"Update Nginx configuration"</span>
git push origin main
</code></pre>
<p>This will trigger a new build on CircleCI and run your CI pipeline.</p>
<p>To monitor your CI pipeline from the command line, you can use the CircleCI CLI, which is a tool that allows you to interact with CircleCI from your terminal. You can install the CircleCI CLI by following the instructions on the official website.</p>
<p>After installing the CircleCI CLI, you can use the <code>circleci</code> command to perform various actions, such as listing your projects, pipelines, workflows, jobs, and artifacts. You can also use the --help flag to see the available options and arguments for each command.</p>
<p>To monitor your CI pipeline from the command line, you can use the circleci pipeline command to list and describe your pipelines.</p>
<p>For example, you can run the following command to list the pipelines for your project:</p>
<pre><code class="lang-bash">circleci pipeline list --org-slug &lt;VCS&gt;/&lt;your-vcs-org-or-username&gt; --project-slug &lt;VCS&gt;/&lt;your-repo-name&gt;
</code></pre>
<p>Replace <code>&lt;VCS&gt;</code> with either gh or bb depending on your version control system. Replace <code>&lt;your-vcs-org-or-username&gt;</code> with your GitHub or Bitbucket organization or username. Replace <code>&lt;your-repo-name&gt;</code> with your repository name. You will see something like this:</p>
<p><img src="https://i.ibb.co/JKqc0cn/carbon-5.png" alt="Command Output" width="1631" height="358" loading="lazy"></p>
<p>You can use the pipeline ID or number to describe a specific pipeline and see its details, such as the status, workflows, jobs, and steps. For example, you can run the following command to describe the latest pipeline for your project:</p>
<pre><code class="lang-bash">circleci pipeline describe --org-slug &lt;VCS&gt;/&lt;your-vcs-org-or-username&gt; --project-slug &lt;VCS&gt;/&lt;your-repo-name&gt; --pipeline-number &lt;number&gt;
</code></pre>
<p>Replace <code>&lt;number&gt;</code> with the pipeline number that you want to describe.</p>
<p>Wait for the pipeline to finish. If everything goes well, you will see a success message for each job and step, indicating that they were successful.</p>
<p>You have successfully triggered and monitored your CI pipeline from the command line. You can also verify that your web server is deployed and secured correctly.</p>
<h3 id="heading-verify-that-your-web-server-is-deployed-and-secured-correctly">Verify that your web server is deployed and secured correctly</h3>
<p>To verify that your web server is deployed and secured correctly, you can use online tools or access your web server using HTTPS. Here are some examples:</p>
<p>To verify that your web server is using the latest Nginx configuration that you pushed to GitHub, you can use a tool like <a target="_blank" href="https://curl.se/">curl</a> or <a target="_blank" href="https://www.gnu.org/software/wget/">wget</a> to make a request to your web server and inspect the response headers.</p>
<p>For example, you can run the following command to see the security headers that your web server is sending:</p>
<pre><code class="lang-bash">curl -I https://www.yourdomain.com
</code></pre>
<p>Replace yourdomain.com with your actual domain name.</p>
<p>You can compare the headers with the ones that you configured in your NGINX configuration file and see if they match.</p>
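<p>You can also filter the response down to just the security headers. The sketch below runs the check against a saved copy of the headers so the matching logic is visible without a live server; in practice you would first generate headers.txt with curl -sI https://www.yourdomain.com &gt; headers.txt (the sample values below are illustrative):</p>
<pre><code class="lang-bash"># Sample response headers standing in for a real `curl -sI` capture
printf 'HTTP/2 200\nx-content-type-options: nosniff\nx-xss-protection: 1; mode=block\ncontent-security-policy: default-src none\n' &gt; headers.txt

# Print only the hardening headers, matching case-insensitively
grep -iE 'x-content-type-options|x-xss-protection|content-security-policy' headers.txt
</code></pre>
<p>Each header you configured in NGINX should appear once in the output; a missing line usually means the add_header directive was not applied or NGINX was not restarted after the change.</p>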
<p>To verify that your web server is using the SSL/TLS certificate that you installed with Certbot, you can use a tool like <a target="_blank" href="https://www.ssllabs.com/ssltest/">SSL Labs</a> or <a target="_blank" href="https://www.htbridge.com/ssl/">HTBridge</a> to scan your web server and check its SSL/TLS configuration and rating. You can check the details of your certificate, such as the issuer, validity, and chain. You can also check the grade of your SSL/TLS configuration, which should be A or higher.</p>
<p>To verify that your web server is accessible and functional using HTTPS, you can simply open your web browser and visit your web server using HTTPS. For example, you can go to https://www.yourdomain.com and see your web page.</p>
<p>You can check the lock icon in your browser, which indicates that your connection is secure. You can also click on the icon and see the details of your certificate and connection.</p>
<p><img src="https://i.ibb.co/dGwr57V/certificate-viewer.png" alt="Viewing Certificate" width="688" height="674" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, you have learned how to secure your web server using NGINX and CircleCI. NGINX and CircleCI, when used together, can provide a powerful solution for ensuring the continuous security of your web applications.</p>
<p>Stay ahead of the curve by integrating these technologies into your workflow, and empower your team to deliver secure and reliable web services.</p>
<p>Please don't forget to share this guide with your colleagues, friends, and online communities if you find it insightful.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Manage Secrets in Docker ]]>
                </title>
                <description>
                    <![CDATA[ Protecting sensitive data like API keys, passwords, and certificates in your Docker projects can be quite daunting and error-prone. And many devs neglect it when dealing with applications deployed in containers. There is no global way of storing and ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/manage-secrets-in-docker/</link>
                <guid isPermaLink="false">66d45d5cd7a4e35e38434928</guid>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Security ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abraham Dahunsi ]]>
                </dc:creator>
                <pubDate>Wed, 03 Jan 2024 16:26:00 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2023/12/feature-image.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Protecting sensitive data like API keys, passwords, and certificates in your Docker projects can be quite daunting and error-prone. And many devs neglect it when dealing with applications deployed in containers.</p>
<p>There is no global way of storing and sharing secrets in containers. Traditional methods of storing them, such as hardcoding them in source code and tucking them away in text files, leave them vulnerable to exposure and exploitation.</p>
<p>Docker secrets were introduced to address this issue. They offer a secure way to store your sensitive data, safeguarding your applications from the pitfalls of careless storage.</p>
<p>In this article, you will learn what Docker secrets are and their default structures. I will also walk you through the steps on how to manage secrets in Docker to safeguard your sensitive data and unlock the full potential of your Docker applications.</p>
<h2 id="heading-what-are-docker-secrets">What are Docker Secrets?</h2>
<p>To effectively manage secrets, you must first understand what they are. Docker secrets include a wide array of sensitive data, including:</p>
<ul>
<li><p>Credentials such as usernames and passwords for databases, servers, third-party services, and more.</p>
</li>
<li><p>API keys, which are unique keys that grant access to external APIs and services.</p>
</li>
<li><p>Digital certificates, which are used for secure communication and authentication, such as SSL/TLS certificates.</p>
</li>
<li><p>Encryption keys, which are keys used to encrypt sensitive data or files.</p>
</li>
<li><p>Tokens, such as access tokens used for authentication and authorization purposes.</p>
</li>
<li><p>Software licenses that may contain sensitive information.</p>
</li>
<li><p>Other sensitive data that could pose a security risk if exposed.</p>
</li>
</ul>
<h3 id="heading-how-secrets-are-stored">How Secrets Are Stored</h3>
<p>Docker offers several pathways to create and reference secrets within your containerized environment:</p>
<ul>
<li><p>Files: You can store secrets in plain text files, but this method is less secure and not recommended for production environments.</p>
</li>
<li><p>Environment Variables: You can set secrets as environment variables within containers, but this may still expose them in logs and process listings.</p>
</li>
<li><p>Docker Secrets: You can utilize Docker's built-in secrets management feature for basic encryption and access control.</p>
</li>
<li><p>Docker Compose: You can define secrets within your Docker Compose files for convenient management during development and testing.</p>
</li>
<li><p>Docker Swarm Secrets: You can leverage advanced secret management capabilities for clustered Docker environments, providing secure storage and granular access control.</p>
</li>
</ul>
<h2 id="heading-solutions-for-managing-key-secrets">Solutions for Managing Key Secrets</h2>
<p>Docker offers two main ways to manage your sensitive data: built-in solutions within the platform itself and external solutions for more advanced needs.</p>
<h3 id="heading-built-in-solutions">Built-in solutions</h3>
<p>For those starting out or seeking basic secret management, Docker's built-in solutions offer a convenient entry point (this guide is based on the built-in solutions):</p>
<ul>
<li><p>Docker Secrets: This lightweight method allows creating and injecting basic secrets into containers via CLI or Compose files. While easy to use, it lacks advanced features like centralized storage and granular access control. It is best suited for simple deployments with minimal secrets.</p>
</li>
<li><p>Docker Swarm Secrets: For clustered environments, Swarm secrets offer a step up in security. Secrets reside securely on Swarm managers and are distributed to nodes on demand. You gain granular access control and audit trails, making it ideal for production deployments with multiple secrets.</p>
</li>
</ul>
<h3 id="heading-external-solutions">External solutions</h3>
<p>For robust security and advanced features, external secrets management platforms stand out. Here are some popular options:</p>
<ul>
<li><p>Vault: A feature-rich, open-source platform offering encryption, access control, logging, and audit trails. It integrates seamlessly with various tools and platforms, making it a versatile choice for complex deployments.</p>
</li>
<li><p>Keywhiz: Another open-source option, Keywhiz, focuses on ease of use and cloud-native deployments. Its lightweight design makes it ideal for managing secrets in Kubernetes environments.</p>
</li>
<li><p>AWS Secrets Manager: If you're an AWS user, this native service provides secure storage, rotation, and retrieval of secrets, integrating seamlessly with your existing infrastructure.</p>
</li>
<li><p>Cloud Native Options: Most major cloud providers offer dedicated secrets management services like Azure Key Vault and GCP Secret Manager. These leverage the security and scalability of their respective platforms for secure and streamlined management.</p>
</li>
</ul>
<h3 id="heading-how-to-choose-the-right-solution">How to choose the right solution</h3>
<p>The optimal approach depends on your personal or team’s specific needs and environment. Consider these factors:</p>
<ul>
<li><p>Application complexity: Complex applications with numerous secrets likely require the advanced features of an external solution.</p>
</li>
<li><p>Deployment environment: Production environments demand robust security and access control, favoring external options.</p>
</li>
<li><p>Team expertise: Evaluate your team's familiarity with managing and integrating external tools.</p>
</li>
<li><p>Scalability needs: If you envision scaling your containerized infrastructure, choose solutions that cater to multi-node deployments.</p>
</li>
</ul>
<p>Remember, there's no one-size-fits-all approach. Prioritize secure practices for creating, storing, rotating, and deleting secrets, regardless of your chosen method.</p>
<h2 id="heading-how-to-get-started-with-managing-secrets">How to Get Started with Managing Secrets</h2>
<p>Now let's get practical: I'll show you how to create and manage Docker secrets.</p>
<h3 id="heading-how-to-create-a-docker-secret">How to create a Docker secret</h3>
<p>Start by creating a file containing your secret, like a file with your password:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">echo</span> yourpassword &gt; password.txt
</code></pre>
<p>Next, create the Docker secret object using the <code>docker secret create</code> command:</p>
<pre><code class="lang-bash">$ docker secret create your_secret ./password.txt
</code></pre>
<p>Here, <code>docker secret create</code> reads the contents of <code>./password.txt</code> and stores them securely in Docker under the name <code>your_secret</code>. Once the secret exists in Docker, the plaintext file is no longer needed, so it's good practice to delete it.</p>
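<p>Note that Docker secrets require Swarm mode, so if you haven't already, run <code>docker swarm init</code> first. Also, if you'd rather not leave the password sitting in a plaintext file at all, <code>docker secret create</code> can read the secret from standard input when you pass <code>-</code> as the file argument:</p>
<pre><code class="lang-bash">$ printf "yourpassword" | docker secret create your_secret -
</code></pre>
<p>Using <code>printf</code> rather than <code>echo</code> also avoids storing a trailing newline as part of the secret.</p>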
<h3 id="heading-how-to-initiate-a-service-using-the-secret">How to initiate a service using the secret</h3>
<p>Now to create a service that uses the secret, use this template:</p>
<pre><code class="lang-bash">$ docker service create --name &lt;service_name&gt;  --secret &lt;secret_name&gt;   &lt;image_name&gt;:&lt;tag&gt;
</code></pre>
<p>The <code>docker service create</code> command starts the creation of a new service within a Docker Swarm cluster. To customize and configure the service, several options are available:</p>
<ol>
<li><p><code>--name &lt;service_name&gt;</code>: Assigns a specific name to the service, aiding in easy identification. For example, <code>--name my-nginx-service</code> designates the service as <code>my-nginx-service</code>.</p>
</li>
<li><p><code>--secret &lt;secret_name&gt;</code>: Injects an existing Docker Swarm secret into the service, making it accessible within the containers associated with the service. For instance, <code>--secret your_secret</code> attaches the <code>your_secret</code> secret to the service.</p>
</li>
<li><p><code>&lt;image_name&gt;:&lt;tag&gt;</code>: Specifies the Docker image to be utilized for the service, including its tag. For example, <code>&lt;image_name&gt;:&lt;tag&gt;</code> can be replaced with <code>nginx:latest</code> to use the latest version of the Nginx image for the service.</p>
</li>
</ol>
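<p>Putting these options together, a complete command using the secret created earlier and the Nginx image might look like this:</p>
<pre><code class="lang-bash">$ docker service create --name my-nginx-service --secret your_secret nginx:latest
</code></pre>
<p>Inside the service's containers (on Linux), the secret is mounted as an in-memory file at <code>/run/secrets/your_secret</code>, where your application can read it at startup.</p>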
<h2 id="heading-how-to-manage-docker-secrets">How to Manage Docker Secrets</h2>
<h3 id="heading-how-to-list-docker-secrets">How to list Docker secrets</h3>
<p>You can list the Docker secrets available by using the <code>docker secret ls</code> command:</p>
<pre><code class="lang-bash">$ docker secret ls
</code></pre>
<p>This is what the output should look like:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/12/Screenshot-2023-12-23-171215-1.png" alt="Screenshot-2023-12-23-171215-1" width="600" height="400" loading="lazy"></p>
<p>This command only displays basic details, such as each secret's ID, name, and creation time. To inspect a specific secret's full metadata, use the <code>docker secret inspect</code> command:</p>
<pre><code class="lang-bash">$ docker secret inspect &lt;secret_name&gt; <span class="hljs-comment">#use 'your_secret' for this example</span>
</code></pre>
<p>When executed, it provides a comprehensive output that includes metadata about the specified secret, such as its ID, version, and labels. Additionally, it displays the secret's creation and update timestamps. This inspection command is valuable for administrators and developers seeking to understand the properties and attributes of a Docker secret, aiding in effective management and utilization within the Docker Swarm cluster.</p>
<p>However, neither command reveals a secret's actual value: Docker deliberately never exposes secret contents through the CLI, since doing so would compromise their security. To access a secret's content, attach it to a service within your Docker Swarm; the secret then becomes accessible as a file within the containers associated with that service.</p>
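<p>For example, once the secret is attached to a running service, you can confirm its contents from inside one of that service's containers (the container ID here is a placeholder):</p>
<pre><code class="lang-bash">$ docker exec &lt;container_id&gt; cat /run/secrets/your_secret
yourpassword
</code></pre>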
<h4 id="heading-how-to-remove-docker-secrets">How to remove Docker secrets</h4>
<p>To remove unused Docker secrets, use the <code>docker secret rm</code> command:</p>
<pre><code class="lang-bash">$ docker secret rm your_secret
</code></pre>
<p>Remember, this action is irreversible, so make sure to double-check your choice!</p>
<h2 id="heading-how-to-use-docker-secrets-with-docker-compose">How to Use Docker Secrets with Docker Compose</h2>
<p>If you prefer not to manage everything through individual CLI commands, don’t worry: you can also declare Docker secrets in your Docker Compose files.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.1'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">web:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginxdemos/hello</span>
    <span class="hljs-attr">secrets:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">your_file_secret</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">your_external_secret</span>
  <span class="hljs-attr">nginx:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"80:80"</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./conf/nginx.conf:/etc/nginx/nginx.conf</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">web</span>

<span class="hljs-attr">secrets:</span>
  <span class="hljs-attr">your_file_secret:</span>
    <span class="hljs-attr">file:</span>  <span class="hljs-string">./path/to/password.txt</span>
  <span class="hljs-attr">your_external_secret:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>This YAML configuration is a Docker Compose file that defines two services, <code>web</code> and <code>nginx</code>, along with their associated configurations and secrets.</p>
<p>Let me break down what this file does.</p>
<h3 id="heading-services-configuration">Services configuration</h3>
<p>The <code>web</code> service uses the image <code>nginxdemos/hello</code> and includes two secrets: <code>your_file_secret</code> and <code>your_external_secret</code>.</p>
<p>The <code>nginx</code> service uses the <code>nginx:latest</code> image and maps port 80 on the host to port 80 in the container.</p>
<p>It also mounts the local file <code>./conf/nginx.conf</code> into the container at <code>/etc/nginx/nginx.conf</code> and depends on the <code>web</code> service, which means that <code>web</code> needs to be running before <code>nginx</code> starts.</p>
<h3 id="heading-secrets-configuration">Secrets configuration</h3>
<p>There are two secrets defined.</p>
<p>The first one is <code>your_file_secret</code>, which reads the content of <code>./path/to/password.txt</code> and creates a secret from it.</p>
<p>The second one is <code>your_external_secret</code>, which specifies that this secret is external. This implies that the actual secret content is managed externally, and the configuration here is a reference to that external secret.</p>
<p>In summary, this Docker Compose file example sets up two services, <code>web</code> and <code>nginx</code>, with the <code>nginx</code> service depending on <code>web</code>. Secrets, both from files (<code>your_file_secret</code>) and external sources (<code>your_external_secret</code>), are utilized for the secure handling of sensitive information within the services. The <code>nginx</code> service also has additional configurations related to port mapping and volume mounting.</p>
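<p>One deployment detail worth noting: Compose files that declare secrets like this are meant to be deployed as a Swarm stack, and any <code>external</code> secret must already exist before you deploy. A typical sequence looks like this (the stack name is just an example):</p>
<pre><code class="lang-bash">$ printf "some-secret-value" | docker secret create your_external_secret -
$ docker stack deploy -c docker-compose.yml my_stack
</code></pre>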
<h2 id="heading-best-practices">Best Practices</h2>
<p>Below are some tips and best practices to follow when working with Docker Secrets.</p>
<h3 id="heading-how-to-choose-a-method">How to Choose a Method</h3>
<p>Selecting the right method depends on several factors:</p>
<ul>
<li><p>Application Needs: Consider the complexity of your application and its security requirements. Simple applications might thrive with built-in Docker secrets, while complex deployments might necessitate an external solution.</p>
</li>
<li><p>Infrastructure: Analyze your existing infrastructure and tools. If you already utilize specific cloud platforms or orchestration engines, their native secrets management solutions might offer seamless integration and familiarity.</p>
</li>
<li><p>Desired Control: Assess your need for advanced features like granular access control, automated rotation, and centralized management. External solutions typically offer greater control compared to built-in options.</p>
</li>
</ul>
<h3 id="heading-how-to-secure-secret-creation-and-storage">How to Secure Secret Creation and Storage</h3>
<p>The foundation of secure secrets management lies in protecting the data itself:</p>
<ul>
<li><p>Encryption: Implement encryption for secrets both at rest (stored on disk) and in transit (transmitted between systems). Utilize strong encryption algorithms and key management best practices.</p>
</li>
<li><p>Access Control: Enforce the principle of least privilege. Grant access to secrets only on a need-to-know basis. Consider employing role-based access control (RBAC) mechanisms for granular control.</p>
</li>
</ul>
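<p>In Docker Swarm, secrets are already stored encrypted in the managers' Raft log. For an extra layer of protection at rest, you can enable autolock, so the key that encrypts that log is no longer kept on disk and a restarted manager must be unlocked manually:</p>
<pre><code class="lang-bash">$ docker swarm update --autolock=true
# after a manager restarts, supply the unlock key printed by the command above:
$ docker swarm unlock
</code></pre>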
<h3 id="heading-secret-rotation-and-lifecycle-management">Secret Rotation and Lifecycle Management</h3>
<p>Limiting how long any single secret stays valid significantly reduces the damage a compromised secret can cause. Employ these measures:</p>
<ul>
<li><p>Rotation: Regularly rotate your secrets, even if they haven't been exposed. Define automated rotation schedules based on best practices and your specific security needs.</p>
</li>
<li><p>Lifecycle Management: Implement secure deletion processes for outdated or unused secrets. Avoid simply deleting files, as data recovery tools might still access them. Secure deletion methods offer a safer approach.</p>
</li>
</ul>
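<p>Because Docker secrets are immutable, rotating one in Swarm means creating a new version and swapping it on the services that use it. A sketch of that workflow, using hypothetical secret and service names:</p>
<pre><code class="lang-bash"># create the new version of the secret
$ printf "newpassword" | docker secret create db_password_v2 -

# swap the old secret for the new one on the service
$ docker service update --secret-rm db_password_v1 --secret-add db_password_v2 my_service

# once nothing references it, remove the old version
$ docker secret rm db_password_v1
</code></pre>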
<h3 id="heading-how-to-integrate-secrets-with-cicd-pipelines">How to Integrate Secrets with CI/CD Pipelines</h3>
<p>Incorporate secrets management seamlessly into your development workflow:</p>
<ul>
<li><p>Injection Techniques: Utilize environment variables, dynamic secret injection tools, or file mounts to provide secrets to containers only when needed. This avoids their storage within container images.</p>
</li>
<li><p>Automated Credential Management: Integrate your chosen secrets management solution with your CI/CD pipeline to automate credential retrieval, rotation, and injection as part of your deployment process.</p>
</li>
<li><p>Minimize Exposure: Avoid logging secrets in plain text during the build or deployment stages. Implement masking techniques or utilize tools that handle secrets securely within your CI/CD pipeline.</p>
</li>
</ul>
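<p>As one concrete illustration of the file-mount injection technique, a minimal container entrypoint can copy a mounted secret into an environment variable before starting the main process. This is only a sketch: the secret name, variable name, and the overridable <code>SECRETS_DIR</code> are assumptions, not part of any standard.</p>
<pre><code class="lang-bash">#!/bin/sh
# Entrypoint sketch: load a file-mounted Docker secret into an env var.
# SECRETS_DIR defaults to Docker's mount point but can be overridden for testing.
SECRETS_DIR="${SECRETS_DIR:-/run/secrets}"

load_secret() {
  # $1 = secret file name, $2 = target environment variable
  if [ -f "$SECRETS_DIR/$1" ]; then
    export "$2=$(cat "$SECRETS_DIR/$1")"
  fi
}

load_secret db_password DB_PASSWORD

# hand off to the container's main command
exec "$@"
</code></pre>
<p>This keeps the secret out of the image and out of the service's configured environment, since the variable only exists inside the running process.</p>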
<p>Using these best practices and customizing them to your specific needs, you can build a robust and secure foundation for managing your Docker secrets.</p>
<p>Remember, the solution you choose is just one piece of the puzzle. Constant vigilance, adherence to best practices, and regular security audits are crucial for maintaining a resilient defense against potential threats.</p>
<h2 id="heading-additional-resources">Additional Resources</h2>
<p>Below are some additional resources to use if you want to learn more about Docker Secrets and other security practices:</p>
<ul>
<li><p><a target="_blank" href="https://docs.docker.com/">Docker documentation</a>: Official guides for Docker secrets and secure practices.</p>
</li>
<li><p>Secret management platforms guides: Each solution provides extensive documentation and tutorials.</p>
<ul>
<li><p><a target="_blank" href="https://docs.docker.com/engine/swarm/secrets/">Official Docker Docs guide on managing Docker secrets (read for more in-depth knowledge)</a></p>
</li>
<li><p><a target="_blank" href="https://developer.hashicorp.com/vault/doc">A guide to using an external solution: Vault</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html">A guide to AWS Secrets manager</a></p>
</li>
</ul>
</li>
<li><p>Community Forums and Blogs: Actively engage with the security and Docker communities for further learning and support.</p>
<ul>
<li><p><a target="_blank" href="https://stackoverflow.com/">Stackoverflow</a></p>
</li>
<li><p><a target="_blank" href="https://www.cncf.io/blog/">The official blog page of the Cloud Native Computing Foundation (CNCF)</a></p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this guide, you've learned what Docker secrets are, how they're stored, the different methods for storing them, and how to manage them in your own projects.</p>
<p>Remember, secure secrets are the cornerstones of secure applications. Build your fortress wisely, and your data will remain safe and sound.</p>
<p>Please feel free to ask if you have any specific questions on any of these aspects!</p>
<p>Also, if you liked this guide and found it insightful, please make sure to share it with your colleagues and online communities.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
