<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Linux - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Linux - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Thu, 07 May 2026 09:27:11 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/linux/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ Why Chrome OS Is the Operating System the AI Era Was Built For ]]>
                </title>
                <description>
                    <![CDATA[ Chrome OS runs on a read-only filesystem. You can't install executables on the host. There's no traditional desktop environment. Everything that interacts with the underlying system does so through a  ]]>
                </description>
                <link>https://www.freecodecamp.org/news/why-chrome-os-is-the-ai-os/</link>
                <guid isPermaLink="false">69e2765cfd22b8ad62611ba8</guid>
                
                    <category>
                        <![CDATA[ Chrome OS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Cloud Computing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AWS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ cybersecurity ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Christopher Galliart ]]>
                </dc:creator>
                <pubDate>Fri, 17 Apr 2026 18:05:16 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/c4116a06-9e42-4da5-a152-0fe1433e0857.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Chrome OS runs on a read-only filesystem. You can't install executables on the host. There's no traditional desktop environment. Everything that interacts with the underlying system does so through a sandboxed browser, a containerized Linux terminal, or a cloud connection.</p>
<p>For years, that list of constraints was the reason people dismissed it. But in 2026, it's the reason Chrome OS might be the most correctly designed operating system for what's coming.</p>
<p>The security architecture treats the endpoint as untrusted by default. The containerized Linux environment gives developers a full headless stack without compromising the host. And an upcoming OS-level rewrite, Aluminium, puts Google's on-device AI models directly into the kernel.</p>
<p>This article covers security architecture, the container-based developer environment, cloud-streamed creative tools via AWS NICE DCV, cloud gaming, and what Aluminium OS means for on-device AI.</p>
<h3 id="heading-heres-what-well-cover">Here's what we'll cover:</h3>
<ol>
<li><p><a href="#heading-security-first-architecture-in-an-era-of-ai-powered-threats">Security-First Architecture in an Era of AI-Powered Threats</a></p>
</li>
<li><p><a href="#heading-a-headless-linux-stack-thats-more-flexible-than-it-looks">A Headless Linux Stack That's More Flexible Than It Looks</a></p>
</li>
<li><p><a href="#heading-aws-nice-dcv-changes-the-creative-tools-conversation">AWS NICE DCV Changes the Creative Tools Conversation</a></p>
</li>
<li><p><a href="#heading-cloud-gaming-works">Cloud Gaming Works</a></p>
</li>
<li><p><a href="#heading-aluminium-os-on-device-models-on-googles-own-architecture">Aluminium OS: On-Device Models on Google's Own Architecture</a></p>
</li>
<li><p><a href="#heading-where-this-lands">Where This Lands</a></p>
</li>
</ol>
<h2 id="heading-security-first-architecture-in-an-era-of-ai-powered-threats">Security-First Architecture in an Era of AI-Powered Threats</h2>
<p>Threat actors are getting better tools. Models like Mythos are lowering the barrier for generating convincing phishing campaigns, crafting polymorphic malware, and automating social engineering at scale.</p>
<p>Traditional operating systems present exactly the attack surface these tools target: writable system files, user-installable executables, patches that sit uninstalled for weeks because someone clicked "remind me later."</p>
<p>Chrome OS sidesteps most of this by design. The root filesystem is read-only and cryptographically verified on every boot through a process called Verified Boot.</p>
<p>If anything has modified the OS files since the last verified state, whether that's malware, a compromised package, or a rogue AI agent that decided to start deleting system files, the device detects it at startup and either self-corrects or refuses to boot.</p>
<p>Persistence across reboots isn't just difficult. It's architecturally impossible through software alone.</p>
<p>Updates happen silently. While you're working, the system downloads the next OS version to an inactive partition. On your next reboot, it pivots to the updated version. No prompts, no deferred patches, no exposure window.</p>
<p>Major updates ship every four to six weeks. Security patches land every two to three weeks. The gap between vulnerability discovery and remediation is measured in days.</p>
<p>Chrome OS consistently doesn't appear in the top 50 products by CVE count in the NIST vulnerability database. Windows and the Linux kernel sit near the top every year. When AI is actively being weaponized to find and exploit vulnerabilities faster than humans can patch them, a read-only, verified, automatically updated endpoint is a different category of security posture.</p>
<p>The tradeoff is trust. Chrome OS's security model means trusting Google as the root authority for your entire computing stack: updates, certificate trust, telemetry. Organizations with strict data sovereignty requirements should weigh that dependency carefully.</p>
<h2 id="heading-a-headless-linux-stack-thats-more-flexible-than-it-looks">A Headless Linux Stack That's More Flexible Than It Looks</h2>
<p>Chrome OS is a text-based operating system. There's no native GUI layer. Stop and sit with that for a second, because it's the thing that makes people dismiss Chrome OS and also the thing that makes it work.</p>
<p>The entire graphical interface you interact with IS the Chrome browser. The Ash shell, Chrome's window manager, is the desktop. You don't install applications onto it the way you install .exe files on Windows or drag .app bundles into a macOS Applications folder. If it isn't running in a browser tab, an Android VM, or a Linux container, it doesn't run. That restriction is what keeps the host locked down, and it's what makes everything else possible.</p>
<p>Under the hood, Chrome OS runs a minimal virtual machine called Termina through crosvm, Google's Rust-based VM monitor.</p>
<p>Inside Termina, LXD manages Linux containers. The default container, penguin, is a Debian environment with a special trick: it bridges GUI-based Linux applications directly into the Chrome OS desktop through a Wayland proxy called Sommelier. Install VS Code, GIMP, or LibreOffice in penguin and they show up in your Chrome OS app launcher, running in windows alongside your browser tabs. For a lot of developers, penguin alone covers the daily workflow.</p>
<p>But Termina gives you more than penguin. Through the LXD layer you can spin up independent containers that are fully isolated operating systems: Arch, Alpine, Ubuntu, whatever you need.</p>
<p>These aren't attached to the GUI bridge. They run headless, natively, with their own systemd, their own package managers, their own persistent state. Need a clean Ubuntu environment to test a deployment script without touching your main setup? <code>lxc launch</code> and you're there. Need to blow it away? <code>lxc delete</code> and it's gone. No orphaned files on the host, no cross-contamination between environments.</p>
<p>The key distinction from Docker is that LXD runs system containers (full OS emulation) rather than application containers. You get background services, persistent daemons, the works. You can also run Docker inside any of these LXD containers if you need application-level containerization on top of that.</p>
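<p>As a sketch of that workflow (container and image names here are illustrative, and the <code>lxc</code> client is reached from the Termina shell, e.g. via <code>crosh</code> and <code>vmc start termina</code>):</p>
<pre><code class="language-bash"># Launch a throwaway Ubuntu system container ("deploy-test" is an arbitrary name)
lxc launch ubuntu:22.04 deploy-test

# Get a shell inside it and run whatever you need to test
lxc exec deploy-test -- bash

# When you're done, destroy it -- nothing on the host is touched
lxc delete --force deploy-test
</code></pre>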
<p>Snapshot your entire environment with <code>lxc snapshot</code> before a risky dependency install and roll back instantly if something breaks. That kind of safety net is broader than version control alone: it captures your full OS configuration, not just code.</p>
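<p>The snapshot workflow is equally terse. A minimal sketch, using the default <code>penguin</code> container and an arbitrary snapshot name:</p>
<pre><code class="language-bash"># Checkpoint the container before a risky change
lxc snapshot penguin pre-upgrade

# ...install the risky dependency, break things...

# Roll the whole container back to the checkpoint
lxc restore penguin pre-upgrade

# Drop the snapshot once you no longer need it
lxc delete penguin/pre-upgrade
</code></pre>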
<p>Pair this with browser-native tools like GitHub Codespaces, Google Colab, AWS CloudShell, or vscode.dev, and the terminal handles your local tooling while the browser handles everything else.</p>
<p>AI coding assistants like Claude and Gemini already operate natively in the browser. The distance between "cloud IDE" and "local IDE" keeps shrinking.</p>
<p>There are friction points: no custom kernel modules inside Crostini. Nested KVM requires Intel Gen 10+ processors. VPN routing into the Linux container from the Chrome OS host can be a headache, with WireGuard requiring userspace workarounds inside the container.</p>
<p>But none of these break the core architecture for cloud-native work. They're just worth knowing about before you commit.</p>
<h2 id="heading-aws-nice-dcv-changes-the-creative-tools-conversation">AWS NICE DCV Changes the Creative Tools Conversation</h2>
<p>One of the longest-standing arguments against Chrome OS has been the absence of professional creative software. There's no Premiere, no DaVinci Resolve, no Blender, no Ableton. For years, this was a dead-end conversation.</p>
<p>AWS NICE DCV (Desktop Cloud Visualization) reopens it. DCV is a high-performance remote display protocol that streams GPU-accelerated desktop sessions from EC2 instances to any device, including a Chromebook running the browser-based DCV client. It supports OpenGL, Vulkan, and DirectX rendering, with adaptive encoding that adjusts to network conditions. On AWS, the DCV license is free. You pay only for the EC2 compute time.</p>
<p>Netflix engineers use DCV to stream content creation applications to remote artists. Volkswagen runs 3D CAD simulations across their engineering division through it. A VFX studio called RVX used it to deliver visual effects for HBO's The Last of Us, streaming Nuke, Maya, Houdini, and Blender to artists distributed across Europe from servers in Iceland. Their team said it was the best remote experience they'd ever worked with.</p>
<p>So: a Chromebook connected to a g5.xlarge EC2 instance (one A10G GPU) can run Blender, DaVinci Resolve, or any other GPU-accelerated creative application with full hardware acceleration. The rendering happens in the data center. DCV streams the pixels. The creative professional gets a responsive, high-fidelity workspace on a $400 machine that couldn't locally render a single frame.</p>
<p>The constraints are connectivity and cost. You need sustained bandwidth (25+ Mbps for 1080p work, more for 4K multi-monitor setups) and leaving a GPU instance running around the clock adds up. But for studios and professionals who already budget for high-end workstations, the math often pencils out, especially when you factor in zero local hardware maintenance and the ability to scale GPU power on demand.</p>
<h2 id="heading-cloud-gaming-works">Cloud Gaming Works</h2>
<p>GeForce NOW survived where Stadia failed because it made a better business decision: bring your own games. Connect your existing Steam, Epic, or Ubisoft library and stream from NVIDIA's server-side hardware. The Ultimate tier now runs on RTX 5080-class infrastructure. 4K at 120fps with ray tracing, on a fanless Chromebook.</p>
<p>Chrome OS has a structural advantage as a cloud gaming client. GeForce NOW runs natively in the Chromium browser via WebRTC, and users consistently report less micro-stuttering and tighter input handling than the standalone Windows desktop app. Under good network conditions, measured total latency runs 13 to 14ms, with sub-3ms ping documented near datacenter proximity. That's below human perceptual threshold for most game types.</p>
<p>Anti-cheat systems like Easy Anti-Cheat and Riot Vanguard are a non-issue in this model. They run on the server where the game executes, not on your local endpoint. On-device gaming isn't viable on Chrome OS and likely never will be. The architecture isn't designed for it, and even projects attempting to bridge local GPUs hit bottlenecks in the container layers. Cloud gaming is the path, and it works.</p>
<p>The limiting factors are network-dependent. Latency spikes above 500ms on bad connections make fast-twitch games unplayable, and NVIDIA's 100-hour monthly cap on the Ultimate tier has drawn criticism. But cloud gaming on Chrome OS has crossed the line from novelty to daily-driver viable for most use cases.</p>
<h2 id="heading-aluminium-os-on-device-models-on-googles-own-architecture">Aluminium OS: On-Device Models on Google's Own Architecture</h2>
<p>The most consequential near-term development for Chrome OS is Project Aluminium, a ground-up rewrite that replaces the current Chrome OS foundation with a native Android kernel. Not another bolted-on compatibility layer: a new operating system built on Android 16, designed to run Android applications natively with direct hardware acceleration instead of routing them through the resource-heavy ARCVM virtual machine that currently eats CPU cycles on even basic app launches.</p>
<p>The AI story is the real story. Aluminium is being built with Gemini models integrated directly into the OS: the file system, the application launcher, the window manager.</p>
<p>Google serving their own proprietary models on their own devices, using an architecture optimized specifically to run them, is a level of vertical integration that no other OS vendor has in the pipeline. Apple has the silicon advantage for local inference. Google has the model-to-OS integration advantage. Those are competing theses about where AI compute should live, and both are worth taking seriously.</p>
<p>The rollout timeline from court documents and leaked roadmaps puts a trusted tester program on select hardware in late 2026, premium tablets by early 2027, and general consumer availability in 2028. Chrome OS Classic gets maintained through existing support obligations until 2033 or 2034.</p>
<p>The launch won't be perfect. Google's track record on platform transitions gives the community earned skepticism. But the ability to iterate a natively AI-integrated OS on hardware they control is the kind of capability that compounds over time.</p>
<h2 id="heading-where-this-lands">Where This Lands</h2>
<p>Two years ago, calling Chrome OS a serious platform for development or creative work would have been a stretch. Today you can run a full Debian environment with systemd daemons, snapshot your workspace, stream Blender from a GPU-backed data center, play AAA games at 4K on hardware you don't own, and do all of it from a verified, read-only endpoint that patches itself while you sleep.</p>
<p>The remaining gaps are real. But they're concentrated in workflows that are themselves moving to the cloud. Chrome OS was designed around assumptions about computing that used to be premature. They're not premature anymore.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Run Rust on Jupyter Notebooks ]]>
                </title>
                <description>
                    <![CDATA[ If you've ever wanted to combine the power of Rust with the interactive goodness of Jupyter notebooks, you're in the right place. Maybe you're tired of compiling every single time you want to test a s ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-run-rust-on-jupyter-notebooks/</link>
                <guid isPermaLink="false">699879483dc17c4862f498f9</guid>
                
                    <category>
                        <![CDATA[ Rust ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Jupyter Notebook  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ WSL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Tutorial ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Daniel Iwugo ]]>
                </dc:creator>
                <pubDate>Fri, 20 Feb 2026 15:10:00 +0000</pubDate>
                <media:content url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/5e1e335a7a1d3fcc59028c64/6e411f5d-65a1-407d-a4f0-0beceb1e784b.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>If you've ever wanted to combine the power of Rust with the interactive goodness of Jupyter notebooks, you're in the right place. Maybe you're tired of compiling every single time you want to test a snippet, learn Rust in a more interactive way, or just have a crazy idea pop into your head like I do.</p>
<p>Most people think Jupyter is just for Python and data science stuff, but apparently you can run Rust in one, too.</p>
<p>In this tutorial, we’ll be taking a look at:</p>
<ol>
<li><p><a href="#heading-what-is-evcxr">What is EvCxR?</a></p>
</li>
<li><p><a href="#heading-how-to-install-the-rust-jupyter-kernel">How to Install the Rust Jupyter kernel</a></p>
</li>
<li><p><a href="#heading-step-4-write-your-first-rust-code">How to run your first Rust code in a notebook</a></p>
</li>
<li><p><a href="#heading-handy-tips-and-tricks">Handy Tips and Tricks</a></p>
</li>
<li><p><a href="#heading-common-issues-and-solutions">Common Issues and Solutions</a></p>
</li>
<li><p><a href="#heading-when-not-to-use-jupyter-for-rust">When NOT to Use Jupyter for Rust</a></p>
</li>
</ol>
<p><strong>Friendly Disclaimer</strong>: This tutorial assumes you know the basics of both Rust and Jupyter. If you break something, that's on you, mate 🙂.</p>
<p>So without further ado, let's jump in.</p>
<h2 id="heading-what-is-evcxr"><strong>What is EvCxR?</strong></h2>
<p>EvCxR (pronounced "Evaluator" to my fellow linguists’ horror) is a Rust REPL and Jupyter kernel. It's basically the magic that lets you run Rust code interactively in Jupyter notebooks instead of the traditional compile-run-debug cycle.</p>
<p>The name stands for "Evaluation Context for Rust", and it’s an open source project actively maintained on GitHub. Here are a few things that make this terribly named tool absolutely brilliant:</p>
<ol>
<li><p><strong>Interactive development:</strong> It lets you test Rust snippets without creating a whole project 🧪</p>
</li>
<li><p><strong>Prototyping:</strong> You can quickly try out ideas before committing to a full implementation 💡</p>
</li>
<li><p><strong>Data visualisation:</strong> And yes, you can even plot charts with Rust (more on that later) 📊</p>
</li>
</ol>
<h2 id="heading-how-to-install-the-rust-jupyter-kernel">How to Install the Rust Jupyter kernel</h2>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<p>Before we dive into the installation, make sure you have these sorted:</p>
<ol>
<li><p><strong>A Linux System:</strong> Or at least, Windows Subsystem for Linux (There’s a little note below for Windows users.)</p>
</li>
<li><p><strong>The Rust toolchain:</strong> You can get it from <a href="https://rustup.rs/">rustup.rs</a> if you haven't already</p>
</li>
<li><p><strong>Jupyter:</strong> Install via pip – <code>pip install jupyter</code></p>
</li>
<li><p><strong>Patience:</strong> This might take a minute or two ⏱️</p>
</li>
</ol>
<p>Once you’ve got all that, we can get rusty (pun intended).</p>
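<p>A quick way to confirm the prerequisites are in place (the version numbers you see will vary):</p>
<pre><code class="language-bash">rustc --version    # Rust toolchain from rustup
cargo --version    # Cargo ships with it
jupyter --version  # Jupyter installed via pip
</code></pre>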
<p><strong>Note:</strong> If you’re using Windows, you’ll need to do a little extra to get started. Here’s the quick rundown:</p>
<ol>
<li><p>Go to <a href="https://visualstudio.microsoft.com/visual-cpp-build-tools/">https://visualstudio.microsoft.com/visual-cpp-build-tools/</a></p>
</li>
<li><p>Download and run the installer</p>
</li>
<li><p>Select <strong>"Desktop development with C++"</strong></p>
</li>
<li><p>Install it (it's large, ~5GB)</p>
</li>
</ol>
<h3 id="heading-step-1-install-evcxr"><strong>Step 1: Install EvCxR</strong></h3>
<p>Open your terminal and run this command:</p>
<pre><code class="language-bash">cargo install evcxr_jupyter
</code></pre>
<p>Now go grab a cup of joe ☕. This will take a few minutes as Cargo downloads and compiles everything. And don't panic if it seems stuck. Rust compilation is thorough but not particularly fast.</p>
<p>If you get any errors about missing system libraries, you might need to install some dependencies. On Ubuntu/Debian, try:</p>
<pre><code class="language-bash">sudo apt install jupyter-notebook jupyter-core python3-ipykernel
sudo apt install cmake
</code></pre>
<p>On macOS with Homebrew:</p>
<pre><code class="language-bash">brew install cmake jupyter
</code></pre>
<h3 id="heading-step-2-install-the-jupyter-kernel"><strong>Step 2: Install the Jupyter Kernel</strong></h3>
<p>Once the installation finishes, you’ll need to register the EvCxR kernel with Jupyter:</p>
<pre><code class="language-bash">evcxr_jupyter --install
</code></pre>
<p>You should see output that looks something like this at the end:</p>
<pre><code class="language-plaintext">Installation complete
</code></pre>
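<p>You can confirm the kernel actually registered with <code>jupyter kernelspec list</code> (the exact install path will differ on your system):</p>
<pre><code class="language-bash">jupyter kernelspec list
# Look for an entry named "rust" pointing at a .../jupyter/kernels/rust directory
</code></pre>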
<h3 id="heading-step-3-launch-jupyter-and-create-a-rust-notebook"><strong>Step 3: Launch Jupyter and Create a Rust Notebook</strong></h3>
<p>Let’s test out our baby. Fire up Jupyter:</p>
<pre><code class="language-bash">jupyter notebook
</code></pre>
<p>Your browser should open automatically (if it doesn't, copy the URL from the terminal).</p>
<p>In the Jupyter interface:</p>
<ol>
<li><p>Click <strong>New</strong> in the top right</p>
</li>
<li><p>Select <strong>Rust</strong> from the dropdown (or "evcxr" depending on your version)</p>
</li>
<li><p>A new notebook opens</p>
</li>
</ol>
<p>Welcome to interactive Rust! 🦀</p>
<h3 id="heading-step-4-write-your-first-rust-code"><strong>Step 4: Write Your First Rust Code</strong></h3>
<p>Let's start with a classic:</p>
<pre><code class="language-rust">println!("Hello my fellow Rustaceans! 🦀");
</code></pre>
<p>Hit <code>Shift + Enter</code> to run the cell. You should see the output appear below the cell. Simple as that.</p>
<p>Note that notebooks execute code at the top level, so you don’t have to wrap it in a <code>main()</code> function. If you still want one, you’ll have to call it explicitly:</p>
<pre><code class="language-rust">fn main(){
    println!("Hello my fellow Rustaceans! 🦀");
}
// Calling the function
main()
</code></pre>
<p>Now let's try something more interesting:</p>
<pre><code class="language-rust">fn fibonacci(n: u32) -&gt; u32 {
    match n {
        0 =&gt; 0,
        1 =&gt; 1,
        _ =&gt; fibonacci(n - 1) + fibonacci(n - 2)
    }
}

for i in 0..10 {
    println!("fibonacci({}) = {}", i, fibonacci(i));
}
</code></pre>
<p>Run it and watch the Fibonacci sequence appear.</p>
<pre><code class="language-plaintext">fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(3) = 2
fibonacci(4) = 3
fibonacci(5) = 5
fibonacci(6) = 8
fibonacci(7) = 13
fibonacci(8) = 21
fibonacci(9) = 34
</code></pre>
<h2 id="heading-handy-tips-and-tricks"><strong>Handy Tips and Tricks</strong></h2>
<p>Functions aren’t the only things that behave differently when using Rust in notebooks. Here are a few other things you might want to keep in mind:</p>
<h3 id="heading-variables-persist-between-cells">Variables Persist Between Cells</h3>
<p>Unlike traditional Rust compilation, variables you define in one cell stick around for the next cells:</p>
<pre><code class="language-rust">let mut counter = 0;
</code></pre>
<p>Then in the next cell:</p>
<pre><code class="language-rust">counter += 1;
println!("Counter: {}", counter);
</code></pre>
<p>The output would be:</p>
<pre><code class="language-plaintext">Counter: 1
</code></pre>
<p>This is great for building up complex examples step by step.</p>
<h3 id="heading-you-can-use-external-crates">You Can Use External Crates</h3>
<p>Add dependencies with the <code>:dep</code> command in one cell:</p>
<pre><code class="language-rust">:dep serde = { version = "1.0", features = ["derive"] }
:dep serde_json = "1.0"
</code></pre>
<p>Then use them normally in the next:</p>
<pre><code class="language-rust">use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
struct Person {
    name: String,
    age: u32,
}

let person = Person {
    name: "Amina".to_string(),
    age: 24,
};

let json = serde_json::to_string(&amp;person).unwrap();
println!("{}", json);
</code></pre>
<p>Output:</p>
<pre><code class="language-plaintext">{"name":"Amina","age":24}
</code></pre>
<p>Pretty neat, huh?</p>
<h3 id="heading-visualisation-support">Visualisation Support</h3>
<p>You can even create graphs. To get started, install the <code>plotters</code> crate:</p>
<pre><code class="language-rust">:dep plotters = { version = "0.3", default-features = false, features = ["evcxr", "all_series", "bitmap_backend", "bitmap_encoder"] }
</code></pre>
<p>Then create a simple sine graph:</p>
<pre><code class="language-rust">use plotters::prelude::*;

let root = SVGBackend::new("sine_wave.svg", (640, 480)).into_drawing_area();
root.fill(&amp;WHITE).unwrap();

let mut chart = ChartBuilder::on(&amp;root)
    .caption("Sine Wave", ("Arial", 20))
    .margin(5)
    .x_label_area_size(30)
    .y_label_area_size(30)
    .build_cartesian_2d(-3.14..3.14, -1.2..1.2)
    .unwrap();

chart.configure_mesh().draw().unwrap();

chart.draw_series(LineSeries::new(
    (-314..314).map(|x| {
        let x = x as f64 / 100.0;
        (x, x.sin())
    }),
    &amp;RED,
)).unwrap();

root.present().unwrap();
println!("Plot saved to sine_wave.svg");
</code></pre>
<p>Output:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771272251271/c07b1c22-4ea1-408c-984a-4179a47058d9.png" alt="Sine wave graph showing output of the code" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p><strong>A word on plotting:</strong> You can actually display plots directly inline in your notebook. But if you're using WSL with VS Code (like I do), inline plotting may not work properly due to rendering issues in the notebook interface. That’s why I saved the plot as an SVG file that I can easily view from my editor.</p>
<h3 id="heading-checking-types">Checking Types</h3>
<p>Not sure what type something is? Use <code>:vars</code>. This shows all variables and their types:</p>
<pre><code class="language-rust">let x = vec![1, 2, 3];
</code></pre>
<pre><code class="language-rust">:vars
</code></pre>
<p>Output:</p>
<pre><code class="language-plaintext">Variable	    Type
       x	Vec&lt;i32&gt;
</code></pre>
<h2 id="heading-common-issues-and-solutions">Common Issues and Solutions</h2>
<h3 id="heading-compilation-errors-everywhere">Compilation Errors Everywhere</h3>
<p>If you're getting weird compilation errors, remember:</p>
<ul>
<li><p>Each cell is compiled separately</p>
</li>
<li><p>You might need to reimport things in each cell</p>
</li>
</ul>
<h3 id="heading-slow-execution">Slow Execution</h3>
<p>The first time you run code in a session, it's slow due to the compilation overhead. Subsequent runs are faster. If it's really slow, you might want to:</p>
<ul>
<li><p>Use release mode: <code>:opt 2</code></p>
</li>
<li><p>Reduce dependency features to only what you need</p>
</li>
<li><p>Consider if Jupyter is the right tool for your use case</p>
</li>
</ul>
<h3 id="heading-dependencies-not-loading">Dependencies Not Loading</h3>
<p>If a crate won't load:</p>
<ul>
<li><p>Make sure the version exists on <a href="http://crates.io">crates.io</a></p>
</li>
<li><p>Check your internet connection (it needs to download)</p>
</li>
<li><p>Try specifying features explicitly</p>
</li>
<li><p>Clear the EvCxR cache if things get really wonky: <code>rm -rf ~/.evcxr</code></p>
</li>
</ul>
<h2 id="heading-when-not-to-use-jupyter-for-rust"><strong>When NOT to Use Jupyter for Rust</strong></h2>
<p>Jupyter notebooks are great for learning and experimenting, but they're not always the best choice for:</p>
<ul>
<li><p><strong>Production code:</strong> Use proper projects with cargo</p>
</li>
<li><p><strong>Performance-critical code:</strong> The overhead isn't worth it</p>
</li>
<li><p><strong>Large applications:</strong> Notebooks get very messy, very fast</p>
</li>
<li><p><strong>Team collaboration:</strong> Version control with notebooks is quite the nightmare</p>
</li>
</ul>
<p>Stick to notebooks for prototyping and quick experiments. For anything serious, fire up your favourite editor and create a proper Rust project.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Let's summarise what you've learned:</p>
<ol>
<li><p>How to install the EvCxR Jupyter kernel</p>
</li>
<li><p>How to create and run Rust notebooks</p>
</li>
<li><p>How to use external crates in notebooks</p>
</li>
<li><p>Tips and tricks for interactive Rust development</p>
</li>
</ol>
<p>Jupyter notebooks make Rust more accessible for learning and experimentation. Give it a go next time you want to try out a quick Rust snippet without the ceremony of creating a full project. And with that, we've come to the end of this tutorial.</p>
<p>Cheers.</p>
<h2 id="heading-resources">Resources</h2>
<ol>
<li><p><a href="https://github.com/evcxr/evcxr">EvCxR GitHub Repository</a></p>
</li>
<li><p><a href="https://doc.rust-lang.org/book/">Rust Book</a></p>
</li>
<li><p><a href="https://jupyter.org/documentation">Jupyter Documentation</a></p>
</li>
</ol>
<h2 id="heading-acknowledgements">Acknowledgements</h2>
<p>Thanks to <a href="https://www.linkedin.com/in/a-n-u-o/">Anuoluwapo Victor</a>, <a href="https://www.linkedin.com/in/chinaza-nwukwa-22a256230/">Chinaza Nwukwa</a>, <a href="https://www.linkedin.com/in/mercy-holumidey-88a542232/">Holumidey Mercy</a>, <a href="https://www.linkedin.com/in/favour-ojo-906883199/">Favour Ojo</a>, <a href="https://www.linkedin.com/in/georgina-awani-254974233/">Georgina Awani</a>, and my family for the inspiration, support and knowledge used to put this post together.</p>
<p>And thanks to the EvCxR project maintainers for making this possible, the Rust community for being awesome, and to anyone reading this for wanting to learn. You inspire me daily.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Use the NixOS Linux Distro – A Tutorial for Developers ]]>
                </title>
                <description>
                    <![CDATA[ NixOS is a Linux distribution based on the Nix package manager and the Nix language. Its first stable release was in 2013, and it uses a declarative, reproducible system configuration that allows atomic upgrades and rollbacks. The Nix language is a ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-use-the-nixos-linux-distro-a-tutorial-for-developers/</link>
                <guid isPermaLink="false">6967bce8f1306e271c8038cf</guid>
                
                    <category>
                        <![CDATA[ NixOS ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Nix ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Rajdeep Singh ]]>
                </dc:creator>
                <pubDate>Wed, 14 Jan 2026 15:57:28 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768330530946/99ecef9a-4654-4281-9443-2039455c121e.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>NixOS is a Linux distribution based on the Nix package manager and the Nix language. Its first stable release was in 2013, and it uses a declarative, reproducible system configuration that allows atomic upgrades and rollbacks.</p>
<p>The Nix language is a specialized, purely functional programming language. It’s used by the Nix package manager to build packages and the NixOS operating system for declarative system configuration and software packaging. </p>
<p>Unlike traditional Linux distributions, NixOS utilizes the Nix programming language to describe the entire system, including packages, services, users, networking, and even the bootloader – all of which are defined through a declarative configuration. This approach enables NixOS to generate complete system profiles, allowing for reproducible deployments, atomic upgrades, and easier system rollbacks.</p>
<p>In simpler terms, in NixOS, you can configure your programs, services, and users, and install new system-wide packages or applications directly within the <code>configuration.nix</code> file – which you can then share directly with others.</p>
<p>Also, if anything goes wrong with your current NixOS generation during the system build time, you can roll back to a previous NixOS generation (after switching to a new generation – more on this below).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768044273038/62862f39-0706-4611-8a91-0b3ab2839a02.png" alt="NixOS - what it is, and what it is not" class="image--center mx-auto" width="3000" height="1500" loading="lazy"></p>
<p>In this tutorial, I’ll explain in detail what NixOS is, how it works, its benefits, and how to set it up on your machine or laptop in a beginner-friendly way.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents:</strong></h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-a-declarative-configuration">What is a Declarative Configuration?</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-why-is-the-declarative-approach-declarative-configuration-used-in-nixos">Why is the Declarative Approach (Declarative Configuration) used in NixOS?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-are-reproducible-systems">What Are Reproducible Systems?</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-does-nixos-work">How Does NixOS Work?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-are-the-benefits-of-using-nixos">What Are the Benefits of Using NixOS?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-when-would-nixos-not-be-the-best-choice">When Would NixOS Not Be the Best Choice?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-nixos-and-the-nix-package-manager-on-your-laptop">How to Set Up NixOS and the Nix Package Manager on Your Laptop</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-install-a-package-in-nixos">How to Install a Package in NixOS</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-faq">FAQ</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To work with NixOS and the Nix package manager, there are no strict prerequisites – though a couple of years of experience with Ubuntu, Debian, or another distribution will make things easier. Some basic knowledge of the Nix language is a plus.</p>
<h2 id="heading-what-is-a-declarative-configuration">What is a Declarative Configuration?</h2>
<p>In the declarative approach, we use a single file (such as a YAML, JSON, or Nix file) to describe the configuration of hardware and software components – packages, networking, users, the boot loader, services (such as systemd units), and more.</p>
<p>NixOS uses the <code>configuration.nix</code> file for its declarative configuration. By default, the <code>configuration.nix</code> file looks like this:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># /etc/nixos/configuration.nix</span>

{ config, pkgs, ... }:
{

  imports = [
      ./hardware-configuration.nix
   ];

  boot.loader.systemd-boot.enable = <span class="hljs-literal">true</span>;
  boot.loader.efi.canTouchEfiVariables = <span class="hljs-literal">true</span>;

  networking.hostName = <span class="hljs-string">"nixos"</span>; <span class="hljs-comment"># Define your hostname.</span>

  <span class="hljs-comment"># Enable networking</span>
  networking.networkmanager.enable = <span class="hljs-literal">true</span>;

  <span class="hljs-comment"># Set your time zone.</span>
  time.timeZone = <span class="hljs-string">"Asia/Kolkata"</span>;

  <span class="hljs-comment"># Select internationalisation properties.</span>
  i18n.defaultLocale = <span class="hljs-string">"en_IN"</span>;

  i18n.extraLocaleSettings = {
    LC_ADDRESS = <span class="hljs-string">"en_IN"</span>;
    LC_IDENTIFICATION = <span class="hljs-string">"en_IN"</span>;
    LC_MEASUREMENT = <span class="hljs-string">"en_IN"</span>;
    LC_MONETARY = <span class="hljs-string">"en_IN"</span>;
    LC_NAME = <span class="hljs-string">"en_IN"</span>;
    LC_NUMERIC = <span class="hljs-string">"en_IN"</span>;
    LC_PAPER = <span class="hljs-string">"en_IN"</span>;
    LC_TELEPHONE = <span class="hljs-string">"en_IN"</span>;
    LC_TIME = <span class="hljs-string">"en_IN"</span>;
  };

  <span class="hljs-comment"># Enable the X11 windowing system.</span>
  services.xserver.enable = <span class="hljs-literal">true</span>;

  <span class="hljs-comment"># Enable the GNOME Desktop Environment.</span>
  services.xserver.displayManager.gdm.enable = <span class="hljs-literal">true</span>;
  services.xserver.desktopManager.gnome.enable = <span class="hljs-literal">true</span>;

  <span class="hljs-comment"># Remove preinstalled or unused GNOME packages</span>
  environment.gnome.excludePackages = with pkgs; [ gnome-tour gnome.gnome-music nixos-render-docs ];
  services.xserver.excludePackages = with pkgs; [ xterm ];

  <span class="hljs-comment"># Configure keymap in X11</span>
  services.xserver.xkb = {
    layout = <span class="hljs-string">"us"</span>;
    variant = <span class="hljs-string">""</span>;
  };

  <span class="hljs-comment"># Enable sound with pipewire.</span>
  sound.enable = <span class="hljs-literal">true</span>;
  hardware.pulseaudio.enable = <span class="hljs-literal">false</span>;
  security.rtkit.enable = <span class="hljs-literal">true</span>;
  services.pipewire = {
    <span class="hljs-built_in">enable</span> = <span class="hljs-literal">true</span>;
    alsa.enable = <span class="hljs-literal">true</span>;
    alsa.support32Bit = <span class="hljs-literal">true</span>;
    pulse.enable = <span class="hljs-literal">true</span>;
  };

  <span class="hljs-comment"># Define a user account. Don't forget to set a password with ‘passwd’.</span>
  users.users.officialrajdeepsingh = {
    isNormalUser = <span class="hljs-literal">true</span>;
    description = <span class="hljs-string">"officialrajdeepsingh"</span>;
    extraGroups = [ <span class="hljs-string">"networkmanager"</span> <span class="hljs-string">"wheel"</span> <span class="hljs-string">"docker"</span> ];
    packages = with pkgs; [
      google-chrome
    ];
  };

  <span class="hljs-comment"># Enable automatic login for the user.</span>
  services.xserver.displayManager.autoLogin.enable = <span class="hljs-literal">true</span>;
  services.xserver.displayManager.autoLogin.user = <span class="hljs-string">"officialrajdeepsingh"</span>;

  <span class="hljs-comment"># Workaround for GNOME autologin: https://github.com/NixOS/nixpkgs/issues/103746#issuecomment-945091229</span>
  systemd.services.<span class="hljs-string">"getty@tty1"</span>.<span class="hljs-built_in">enable</span> = <span class="hljs-literal">false</span>;
  systemd.services.<span class="hljs-string">"autovt@tty1"</span>.<span class="hljs-built_in">enable</span> = <span class="hljs-literal">false</span>;


  services.openssh = {
      <span class="hljs-built_in">enable</span> = <span class="hljs-literal">true</span>;
      settings = {
        PasswordAuthentication = <span class="hljs-literal">true</span>;
      };
  };
  system.stateVersion = <span class="hljs-string">"23.05"</span>; <span class="hljs-comment"># Did you read the comment?</span>
}
</code></pre>
<p>You can edit the <code>configuration.nix</code> file to enable NGINX and Git through NixOS options. Options let you turn features on or off and configure how they should work:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># /etc/nixos/configuration.nix</span>

services.nginx.enable = <span class="hljs-literal">true</span>;
programs.git.enable = <span class="hljs-literal">true</span>;

....
</code></pre>
<p>Every time you modify the <code>configuration.nix</code> file to apply changes to NixOS, you’ll need to build your NixOS using the following command:</p>
<pre><code class="lang-bash">sudo nixos-rebuild switch
</code></pre>
<p>The <code>nixos-rebuild</code> command generates a new NixOS generation based on your configuration file. This <strong>generation</strong> is a complete, immutable snapshot of your system's configuration (packages, services, settings) that’s created every time you run that <code>nixos-rebuild</code> command.</p>
<p>The <code>switch</code> flag builds the new generation, activates it immediately, and makes it the default boot entry in NixOS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768129947654/a21a336e-277e-4566-8b69-d4574ef86c27.png" alt="NixOS generation list" class="image--center mx-auto" width="992" height="718" loading="lazy"></p>
<p>After a system update, if the new generation isn’t desirable, you can roll back or switch to a previous generation using the <code>nixos-rebuild switch --rollback</code> command. For example, if I’m currently on generation 22, which is the default boot, and I don't like this generation, I can use the <code>rollback</code> flag to switch back to generation 21 in NixOS.</p>
<p>The NixOS rollback feature helps you test new functions and features to see if they work properly when you install new applications or programs on NixOS.</p>
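<p>In practice, you can inspect your generations and roll back from the command line like this (these are standard NixOS commands – run them on the NixOS machine itself):</p>
<pre><code class="lang-bash"># List all system generations (the current one is marked)
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# Build and activate the previous generation, and make it the default boot entry
sudo nixos-rebuild switch --rollback
</code></pre>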
<h3 id="heading-why-is-the-declarative-approach-declarative-configuration-used-in-nixos">Why is the <strong>Declarative Approach (Declarative Configuration) used in NixOS?</strong></h3>
<p>The Declarative Approach is particularly useful because it allows you to share your NixOS configuration file with other developers using GitHub and GitLab. By using the <code>configuration.nix</code> file, other developers can build the same system or machine (making it reproducible).</p>
<p>This approach also helps you manage configurations because all your settings are in one place, making it easy to adjust them at any time.</p>
<h3 id="heading-what-are-reproducible-systems">What Are Reproducible Systems?</h3>
<p>This concept of reproducibility is important. In NixOS, we can achieve reproducibility using the <code>configuration.nix</code> file. For example, when two developers use the same NixOS configuration file on their machines, they can achieve highly reproducible and consistent system setups.</p>
<p>This is one feature that makes NixOS so useful for developers, teams, CI/CD, servers, and DevOps. With NixOS, you can avoid the common issue of "it doesn't work on my machine."</p>
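<p>One practical way to get this benefit is to keep your configuration under version control and share it through GitHub or GitLab. Here’s a minimal sketch (the commit message and any remote you push to are up to you):</p>
<pre><code class="lang-bash">cd /etc/nixos
sudo git init
sudo git add configuration.nix hardware-configuration.nix
sudo git commit -m "My NixOS configuration"
# Note: hardware-configuration.nix is machine-specific and is usually regenerated per machine.
# Add a remote and push, so another machine can clone the config and rebuild the same system.
</code></pre>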
<h2 id="heading-how-does-nixos-work">How Does NixOS Work?</h2>
<p>NixOS works differently compared to traditional Linux distributions. In traditional Linux distros, such as Ubuntu and Debian, you can use the apt command to install a new application or program in your distro, like this:</p>
<pre><code class="lang-bash">sudo apt install git <span class="hljs-comment"># Install git package</span>

sudo apt install nodejs <span class="hljs-comment"># Install node.js package</span>

sudo apt install npm  <span class="hljs-comment"># Install NPM package</span>
</code></pre>
<p>But as we discussed above, NixOS uses a declarative configuration approach that’s immutable, reproducible, and portable.</p>
<p>This means that you can’t install packages or programs like Git, Chrome, Firefox, Node, Deno, and Bun using the <code>apt</code>, <code>dpkg</code>, or <code>pacman</code> commands – NixOS uses its own package manager, Nix.</p>
<p>Instead, you edit the <code>configuration.nix</code> file, declare the packages and programs you need – such as Node.js, NGINX, or Git – and rebuild NixOS using the <code>nixos-rebuild</code> command:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># /etc/nixos/configuration.nix</span>

services.nginx.enable = <span class="hljs-literal">true</span>;
programs.git.enable = <span class="hljs-literal">true</span>;

environment.systemPackages = [
  pkgs.nodejs_24
];
</code></pre>
<p>Again, this creates a new generation every time you modify the NixOS configuration and run the <code>nixos-rebuild</code> command in your system.</p>
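<p>If you just want to try a package without touching your system configuration, Nix also supports ad-hoc shells. This gives you a temporary environment where the package is available, and installs nothing system-wide:</p>
<pre><code class="lang-bash"># Open a temporary shell with Node.js available
nix-shell -p nodejs_24

# Inside that shell, the package works as usual
node --version

# Exit the shell, and the package is gone from your PATH again
exit
</code></pre>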
<p>To understand in more detail how NixOS works, check out this <a target="_blank" href="https://nixos.org/guides/how-nix-works/">more in-depth tutorial</a>.</p>
<h2 id="heading-what-are-the-benefits-of-using-nixos">What Are the Benefits of Using NixOS?</h2>
<p>There are many benefits to using NixOS over a traditional distro, some of which I’ve already mentioned. Let’s summarize them here:</p>
<ol>
<li><p><strong>Declarative configuration</strong>: All settings for your system, including programs, applications, and services, are written in a single configuration file rather than being installed manually.</p>
</li>
<li><p><strong>Instant rollbacks</strong>: There's no need to worry about breaking your system. If something goes wrong during an update, you can easily revert to the previous version.</p>
</li>
<li><p><strong>Safe updates</strong>: Updates either complete successfully or don’t apply at all, ensuring your system never ends up in a half-broken state.</p>
</li>
<li><p><strong>Reproducible systems</strong>: With the same configuration, you can recreate the same system every time, eliminating issues like "it doesn’t work on my machine."</p>
</li>
<li><p><strong>No dependency conflicts</strong>: Multiple versions of applications can coexist without issues, allowing different programs such as Node.js and Python to operate together seamlessly.</p>
</li>
<li><p><strong>Extensive package ecosystem</strong>: The Nix Packages collection comprises thousands of up-to-date packages maintained by the NixOS community.</p>
</li>
<li><p><strong>Setting up a new machine</strong>: Copy your configuration file, rebuild the system, and complete the setup in just a few minutes, whether for a laptop or a server.</p>
</li>
<li><p><strong>Immutable system</strong>: The design of an immutable system keeps core system paths unchanged. This prevents accidental alterations and enhances reliability, as core components become read-only and cannot be modified after the initial build.</p>
</li>
</ol>
<h2 id="heading-when-would-nixos-not-be-the-best-choice">When Would NixOS Not Be the Best Choice?</h2>
<p>There are various situations where NixOS may not be the best choice for you. Here are some of the main issues:</p>
<ol>
<li><p>NixOS has a steep learning curve, particularly for beginner and intermediate developers.</p>
</li>
<li><p>It doesn’t offer a simple one-click installation solution for applications and programs on your machine or laptop.</p>
</li>
<li><p>You can’t install system-wide applications or programs without editing the NixOS configuration files and rebuilding the system.</p>
</li>
<li><p>NixOS lacks a larger community and readily available tutorials compared to Ubuntu, and its documentation is not very beginner-friendly – so you may need to rely on your own resources.</p>
</li>
</ol>
<h2 id="heading-how-to-set-up-nixos-and-the-nix-package-manager-on-your-laptop">How to Set Up NixOS and the Nix Package Manager on Your Laptop</h2>
<p>Now we’re ready to dive in and set up NixOS. But what you need to install depends on your operating system:</p>
<ul>
<li><p>On macOS or Windows, you’ll install Nix, the package manager. This lets you use Nix to install and manage software on your existing operating system. You <strong>don’t</strong> install NixOS itself on macOS or Windows.</p>
</li>
<li><p>On Linux, installing NixOS means installing a new OS (it’s like installing Ubuntu or Debian).</p>
</li>
</ul>
<p>Because NixOS is a full operating system, you’ll need to install it on a fresh machine or partition, which will typically erase existing data during installation unless you set up dual-booting.</p>
<p>And remember, while NixOS is a Linux distribution, it works differently (than Ubuntu or Debian, for example) because the entire system is configured declaratively using Nix.</p>
<h3 id="heading-install-nix-package-manager">Install Nix Package Manager</h3>
<p>The following commands install the Nix package manager (and with it, the Nix language tooling) on macOS and on Windows via WSL:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Linux / WSL: multi-user installation (recommended)</span>
sh &lt;(curl --proto <span class="hljs-string">'=https'</span> --tlsv1.2 -L https://nixos.org/nix/install) --daemon

<span class="hljs-comment"># Linux / WSL: single-user installation</span>
sh &lt;(curl --proto <span class="hljs-string">'=https'</span> --tlsv1.2 -L https://nixos.org/nix/install) --no-daemon

<span class="hljs-comment"># macOS:</span>
sh &lt;(curl --proto <span class="hljs-string">'=https'</span> --tlsv1.2 -L https://nixos.org/nix/install)
</code></pre>
<p>If you’re a newcomer, I recommend reading the <a target="_blank" href="https://nixos.org/download/#nix-install-macos">official download documentation</a> before running these commands.</p>
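<p>Once the installer finishes, open a new terminal session and verify that Nix is available:</p>
<pre><code class="lang-bash">nix --version
# Prints something like: nix (Nix) 2.18.1 (your version number will differ)
</code></pre>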
<h3 id="heading-install-nixos">Install NixOS</h3>
<p>You can install NixOS like any other Linux distribution, using a bootable USB installer. Before proceeding, make sure you meet the following requirements:</p>
<ul>
<li><p>A USB drive (8 GB or more)</p>
</li>
<li><p>A second computer (to create the USB)</p>
</li>
<li><p>A backup of your data (installation can erase the disk)</p>
</li>
<li><p>An internet connection (Wi-Fi or Ethernet)</p>
</li>
</ul>
<p>There are multiple steps for installing NixOS on your machine or laptop. You can check out the following tutorial, which describes in detail how you can install NixOS on your machine very easily.</p>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/N39_cg8QyT4" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
<p> </p>
<h2 id="heading-how-to-install-a-package-in-nixos">How to Install a Package in NixOS</h2>
<p>NixOS has a large package registry, with around 120,000 packages available. Every package you install from the stable channel is built reproducibly and reviewed by the Nix community, so packages are generally reliable.</p>
<p>Installing a new package on NixOS using the Nix Package Manager is quite simple. First, visit the <a target="_blank" href="https://search.nixos.org/packages">NixOS Packages Search</a> site.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767107353823/ae26031b-5395-4e36-87ed-d7b4cdd2be9f.png" alt="Search package on NixOS packages website" class="image--center mx-auto" width="1920" height="961" loading="lazy"></p>
<p>Then just search for the package that you’re looking for. In our case, we’ll search for Neovim – and then just type the package name in the search input and hit enter.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768142068118/34333980-65ae-4231-a7a2-9919e3135bcd.png" alt="Search for neovim on the packages website" class="image--center mx-auto" width="1920" height="961" loading="lazy"></p>
<p>Copy the resulting code it shows, and paste it into your <code>configuration.nix</code> file:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># /etc/nixos/configuration.nix</span>

environment.systemPackages = [
  pkgs.neovim <span class="hljs-comment"># add inside file.</span>
];
</code></pre>
<p>Then rebuild your NixOS using the <code>sudo nixos-rebuild switch</code> command. Remember that you’ll need to do this whenever you make changes to the <code>configuration.nix</code> file.</p>
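<p>As a side note, the Nix package manager also supports imperative, per-user installs with <code>nix-env</code>. This can be handy for quick experiments, but it bypasses your declarative configuration, so the package won’t be recorded in <code>configuration.nix</code>:</p>
<pre><code class="lang-bash"># Install Neovim for the current user only (the channel attribute name may differ on your system)
nix-env -iA nixos.neovim

# List and remove imperatively installed packages
nix-env -q
nix-env -e neovim
</code></pre>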
<h3 id="heading-demo-how-to-install-neovim-in-nixos">Demo (How to Install Neovim in NixOS)</h3>
<p><a target="_blank" href="https://www.youtube.com/watch?v=wFP9CbaeMe0"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768146169478/d941876f-abf0-49f6-b1cc-182c608c094e.gif" alt="Screenshot of a text editor displaying a configuration file with commented instructions and settings for installing software packages like Firefox on a system. The interface includes options such as Exit, Save, and Execute at the bottom. Copyright by Low Orbit Flux" class="image--center mx-auto" width="640" height="360" loading="lazy"></a></p>
<h2 id="heading-faq">FAQ</h2>
<h3 id="heading-is-nixos-only-for-advanced-users">Is NixOS only for advanced users?</h3>
<p>No, NixOS can be a great fit for anyone. Due to its steeper learning curve, beginners may struggle at first – but once you understand it, I bet you’ll love it.</p>
<h3 id="heading-why-is-the-nix-language-so-weird">Why is the Nix language so weird?</h3>
<p>Nix is purely functional and is designed for reproducible builds. It’s not a general-purpose language and is rather a configuration language. You’ll only need 10–15% of the language for daily use.</p>
<h3 id="heading-how-is-nixos-different-from-ubuntu-arch">How is NixOS different from Ubuntu / Arch?</h3>
<p>Let’s compare some important NixOS features with other distros:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Ubuntu / Arch</td><td>NixOS</td></tr>
</thead>
<tbody>
<tr>
<td>Install apps</td><td>Manual</td><td>Declarative</td></tr>
<tr>
<td>Rollbacks</td><td>No</td><td>Yes</td></tr>
<tr>
<td>System config</td><td>Spread everywhere</td><td>One file</td></tr>
<tr>
<td>Reproducibility</td><td>Hard</td><td>Built-in</td></tr>
<tr>
<td>Learning curve</td><td>Low</td><td>High</td></tr>
</tbody>
</table>
</div><h3 id="heading-can-i-use-nixos-for-development">Can I use NixOS for development?</h3>
<p>Yes, NixOS is an excellent distro for:</p>
<ul>
<li><p>Frontend development (Node, Bun, Deno)</p>
</li>
<li><p>Backend development (Go, Rust, Python)</p>
</li>
<li><p>Consistent development environments across machines</p>
</li>
<li><p>CI/CD reproducibility</p>
</li>
</ul>
<h3 id="heading-is-nixos-good-for-daily-use">Is NixOS good for daily use?</h3>
<p>Yes – NixOS has an initial learning phase that can be difficult, but once you get past it, you can use NixOS on your work laptops, servers, home PCs, and development machines.</p>
<h3 id="heading-should-i-learn-nixos-as-a-beginner-developer">Should I learn NixOS as a beginner developer?</h3>
<p>To be honest, if you want quick results, NixOS may not be for you. But if you're aiming for long-term mastery, you will definitely like NixOS.</p>
<h3 id="heading-does-nixos-work-on-macos-and-windows">Does NixOS work on macOS and Windows?</h3>
<p>Yes, you can use Nix and the Nix Package Manager on macOS and Windows, and they work well. You can install and package software using the Nix configuration file as mentioned in this tutorial.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I think that NixOS is the best Linux distro – but it’s not super beginner-friendly. Still, I prefer it because it’s a powerful and reliable distro that’s designed for users who value control, reproducibility, and safety.</p>
<p>Managing the entire system through declarative configuration enables consistent setups, safe upgrades, and easy rollbacks. This makes it especially suitable for developers, DevOps engineers, and infrastructure-focused teams.</p>
<p>Just keep in mind that NixOS is not for everyone. Its learning curve and configuration-driven workflow can be steep for beginners, casual users, or those who want quick, click-and-install convenience. So check it out and decide if it’s right for you and your team.</p>
<p>To learn more, check out these beginner tutorials on NixOS from <a target="_blank" href="https://medium.com/thenixos">the NixOS</a> publication on Medium:</p>
<ul>
<li><p><a target="_blank" href="https://medium.com/thenixos/what-is-declarative-configuration-in-nixos-understanding-declarative-vs-imperative-approaches-d24d4d144df6">What is Declarative Configuration in NixOS? Understanding Declarative vs Imperative Approaches</a></p>
</li>
<li><p><a target="_blank" href="https://medium.com/thenixos/understand-the-difference-between-home-manager-vs-nix-flake-in-nixos-0511dc8c1a93"><strong>Understand the difference between Home Manager vs Nix Flake in NixOS?</strong></a></p>
</li>
<li><p><a target="_blank" href="https://medium.com/thenixos/why-did-i-choose-nixos-and-what-are-the-advantages-and-disadvantages-of-using-nixos-afaaf95f7d8e">Why did I choose NixOS, and what are the advantages and disadvantages of using NixOS?</a></p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up GitHub CLI on WSL2 ]]>
                </title>
                <description>
                    <![CDATA[ Recently, I set up WSL2 and Ubuntu on my Windows 11 to work on some open-source projects. Since I also maintain these projects, I installed GitHub CLI to ease my workflow. I successfully installed the GitHub CLI, but failed to authenticate it. The er... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/github-cli-wsl2-guide/</link>
                <guid isPermaLink="false">689e444cbfe79386885372b0</guid>
                
                    <category>
                        <![CDATA[ GitHub ]]>
                    </category>
                
                    <category>
                        <![CDATA[ WSL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Ayu Adiati ]]>
                </dc:creator>
                <pubDate>Thu, 14 Aug 2025 20:17:16 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755202477019/fbc68131-107a-40ae-9dae-c14224d0866a.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Recently, I set up WSL2 and Ubuntu on my Windows 11 to work on some open-source projects. Since I also maintain these projects, I installed <a target="_blank" href="https://cli.github.com/">GitHub CLI</a> to ease my workflow. I successfully installed the GitHub CLI, but failed to authenticate it.</p>
<p>The error message <code>failed to authenticate via web browser: Too many requests have been made in the same timeframe. (slow_down)</code> appeared on my terminal, while on the web browser, it indicated that the authentication was successful.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754718774837/0d1de969-a1e3-4f0a-a3ce-e3c4661ce0d0.png" alt="A message says &quot;Congratulations, you're all set,&quot; marking GitHub CLI authentication is successful " class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>I googled and found some workarounds that I tried, but only one worked like a charm!</p>
<p>After finally solving the tricky authentication issue for GitHub CLI on WSL2, I've put together this guide. It's a complete walkthrough for a solution that works, covering everything from a smooth installation to ongoing management.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-install-github-cli-on-wsl2">How to Install GitHub CLI on WSL2</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-authenticate-github-cli-on-wsl2-with-your-github-account">How to Authenticate GitHub CLI on WSL2 with Your GitHub Account</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-upgrade-github-cli-on-wsl2">How to Upgrade GitHub CLI on WSL2</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-uninstall-github-cli-on-wsl2">How to Uninstall GitHub CLI on WSL2</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-revoke-github-cli-access-on-github">How to Revoke GitHub CLI Access on GitHub</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-final-words">Final Words</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before getting started, ensure that you have these installed on your Windows machine:</p>
<ul>
<li><p>WSL2</p>
</li>
<li><p>A Linux distro</p>
</li>
<li><p>Windows PowerShell</p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/windows/terminal/install">Windows Terminal</a> (optional)</p>
</li>
</ul>
<p>To follow the instructions in this article, you can use the Windows PowerShell terminal as an administrator.</p>
<p>Alternatively, if you have Windows Terminal installed, you can use the Linux terminal by clicking the ‘down arrow’ icon at the top and selecting the distro.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754677301223/7e846117-3fd1-42a2-ab3e-029e94672aca.png" alt="Dropdown menu at Windows Terminal" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
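<p>From PowerShell, you can also list your installed distros and jump straight into one (the distro name below is an example – use whatever <code>wsl --list</code> shows on your machine):</p>
<pre><code class="lang-bash"># List installed distros and their WSL versions
wsl --list --verbose

# Launch a specific distro, for example Ubuntu
wsl -d Ubuntu
</code></pre>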
<h2 id="heading-how-to-install-github-cli-on-wsl2">How to Install GitHub CLI on WSL2</h2>
<p>You can use the installation process described here if you use an apt-based distro such as Ubuntu, Debian, or Raspberry Pi OS. For other distros, follow the installation process in the <a target="_blank" href="https://github.com/cli/cli/blob/trunk/docs/install_linux.md">GitHub CLI official docs</a>.</p>
<p>To install GitHub CLI in WSL2:</p>
<ol>
<li><p>Run this command:</p>
<pre><code class="lang-bash"> (<span class="hljs-built_in">type</span> -p wget &gt;/dev/null || (sudo apt update &amp;&amp; sudo apt install wget -y)) \
     &amp;&amp; sudo mkdir -p -m 755 /etc/apt/keyrings \
     &amp;&amp; out=$(mktemp) &amp;&amp; wget -nv -O<span class="hljs-variable">$out</span> https://cli.github.com/packages/githubcli-archive-keyring.gpg \
     &amp;&amp; cat <span class="hljs-variable">$out</span> | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg &gt; /dev/null \
     &amp;&amp; sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
     &amp;&amp; sudo mkdir -p -m 755 /etc/apt/sources.list.d \
     &amp;&amp; <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main"</span> | sudo tee /etc/apt/sources.list.d/github-cli.list &gt; /dev/null \
     &amp;&amp; sudo apt update \
     &amp;&amp; sudo apt install gh -y
</code></pre>
</li>
<li><p>Type your Linux password when you get prompted.</p>
</li>
<li><p>Ensure that GitHub CLI is installed by running the <code>gh --version</code> command. If the installation was successful, you should see something like this in your terminal:</p>
<pre><code class="lang-bash"> gh version 2.76.2 (2025-07-30)
 https://github.com/cli/cli/releases/tag/v2.76.2
</code></pre>
</li>
</ol>
<h2 id="heading-how-to-authenticate-github-cli-on-wsl2-with-your-github-account">How to Authenticate GitHub CLI on WSL2 with Your GitHub Account</h2>
<p>Before you can use GitHub CLI, you must first authenticate it. You will get an <code>HTTP 401: Bad credentials (https://api.github.com/graphql)</code> error message if you run any GitHub CLI command without authenticating.</p>
<p>To authenticate GitHub CLI with your GitHub account:</p>
<ol>
<li><p>Run the <code>gh auth login</code> command in your terminal.</p>
</li>
<li><p>You will receive several prompts, and you need to choose the methods you prefer. Here’s what I selected in each prompt:</p>
<pre><code class="lang-plaintext"> ? Where do you use GitHub? GitHub.com
 ? What is your preferred protocol for Git operations on this host? HTTPS
 ? How would you like to authenticate GitHub CLI? Login with a web browser
</code></pre>
<p> After answering all prompts, you should get the message to copy a one-time code as shown below. You <strong>don’t need to copy the code</strong> at this point.</p>
<pre><code class="lang-bash"> ! First copy your one-time code: XXXX-XXXX
</code></pre>
</li>
<li><p>Press ‘Enter’. It automatically opens the "Device Activation" page on your browser.</p>
</li>
<li><p>Click the green ‘Continue’ button.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754848322666/2a4af9ab-c197-4ec9-802f-ad9b4f24375c.png" alt="GitHub Device Activation page on a browser" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p> GitHub should ask you to enter the code displayed on your terminal, as shown in the screenshot below. But here’s the trick! <strong>Don’t paste any code, and don’t close the browser</strong>. Let’s first get back to your terminal.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754722491767/d84534da-522f-4e82-84c2-a1bfc75940ef.png" alt="GitHub Device Activation page on a browser" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p> Now you might get this error message on your terminal:</p>
<pre><code class="lang-bash"> grep: /proc/sys/fs/binfmt_misc/WSLInterop: No such file or directory
 WSL Interopability is disabled. Please <span class="hljs-built_in">enable</span> it before using WSL.
 grep: /proc/sys/fs/binfmt_misc/WSLInterop: No such file or directory
 [error] WSL Interoperability is disabled. Please <span class="hljs-built_in">enable</span> it before using WSL.
</code></pre>
</li>
<li><p>Press <code>Ctrl + C</code> to stop the process if it's still running, or let it stop by itself. Once it's stopped, you should see this message:</p>
<pre><code class="lang-bash"> failed to authenticate via web browser: Too many requests have been made <span class="hljs-keyword">in</span> the same timeframe. (slow_down)
</code></pre>
</li>
<li><p>Run the <code>gh auth login</code> command again and repeat the process to select the methods of your choice. This time, when it asks you to press ‘Enter’, <strong>don’t press it</strong>.</p>
</li>
<li><p>Copy the latest code and return to the "Device Activation" page that you left open in your browser.</p>
</li>
<li><p>Paste the code that you copied and click the green ‘Continue’ button.</p>
</li>
<li><p>Click the green ‘Authorize github’ button after GitHub redirects you to the “Authorize GitHub CLI” page. You should now see the message “Congratulations, you're all set!”</p>
</li>
<li><p>Get back to your terminal and press ‘Enter’. Doing so triggers these actions:</p>
<ul>
<li><p>It automatically opens a new “Device Activation” page in your browser. You can safely ignore this.</p>
</li>
<li><p>In the terminal, you first see the error message as in step 4. Don’t do anything and wait for a little bit. Then, you get:</p>
<pre><code class="lang-bash">  ✓ Authentication complete.
  - gh config <span class="hljs-built_in">set</span> -h github.com git_protocol https
  ✓ Configured git protocol
  ! Authentication credentials saved <span class="hljs-keyword">in</span> plain text
  ✓ Logged <span class="hljs-keyword">in</span> as YOUR-GITHUB-USERNAME
  ! You were already logged <span class="hljs-keyword">in</span> to this account
</code></pre>
</li>
</ul>
</li>
</ol>
<p>And GitHub CLI is now successfully authenticated!</p>
<blockquote>
<p>Credit goes to <a target="_blank" href="https://github.com/cli/cli/discussions/6884#discussioncomment-10176332">username “ikeyan” on GitHub for their GitHub CLI authentication solution</a>!</p>
</blockquote>
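<p>You can confirm the login state at any time with the <code>gh auth status</code> command, which reports the account and protocol in use:</p>
<pre><code class="lang-bash"># Show which GitHub account this machine is authenticated as
gh auth status
</code></pre>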
<h2 id="heading-how-to-upgrade-github-cli-on-wsl2">How to Upgrade GitHub CLI on WSL2</h2>
<p>It’s always a good practice to regularly check for package and dependency updates, and upgrade to the newest version when it’s available — this includes GitHub CLI. To check for updates and upgrade the version of GitHub CLI:</p>
<ol>
<li><p>Run the <code>sudo apt update</code> command in your terminal. This command fetches the list of available updates.</p>
</li>
<li><p>Type your Linux password when you get prompted.</p>
</li>
<li><p>If an update for GitHub CLI is available, run <code>sudo apt install gh</code>. This command upgrades the already-installed package to the newest version.</p>
</li>
<li><p>Type your Linux password when you get prompted.</p>
</li>
</ol>
<p>Your GitHub CLI is now up to date.</p>
<h2 id="heading-how-to-uninstall-github-cli-on-wsl2">How to Uninstall GitHub CLI on WSL2</h2>
<p>If one day you feel like you don’t need to use GitHub CLI anymore, you can uninstall it by following these steps:</p>
<ol>
<li><p>Run the <code>sudo apt remove gh</code> command in your terminal.</p>
</li>
<li><p>Type your Linux password when you get prompted.</p>
</li>
<li><p>Press ‘Y’ to continue the uninstall process.</p>
</li>
</ol>
<p>GitHub CLI is now uninstalled from your WSL environment.</p>
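<p>Optionally, you can also remove the apt repository entry and signing key that the install command added earlier, so apt stops checking the GitHub CLI repository for updates. The paths below are the ones created during installation:</p>
<pre><code class="lang-bash"># Remove the repository entry and signing key added by the install step
sudo rm -f /etc/apt/sources.list.d/github-cli.list
sudo rm -f /etc/apt/keyrings/githubcli-archive-keyring.gpg

# Refresh apt's package lists afterwards
sudo apt update
</code></pre>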
<h2 id="heading-how-to-revoke-github-cli-access-on-github">How to Revoke GitHub CLI Access on GitHub</h2>
<p>After uninstalling the GitHub CLI, you might think your account access is gone, but it's not. The authentication you granted is still active. If you don't plan on using the CLI again, it's a good practice to revoke this access.</p>
<p>Here's how to do it directly from your GitHub account:</p>
<ol>
<li><p>On your GitHub account, click your profile picture on the top right and click ‘Settings’.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754725091482/8fb8a0fd-8dbd-4342-9fe8-309a13d72c39.png" alt="Settings option on dropdown menu at GitHub" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
<li><p>On the left sidebar, find ‘Integrations’ and click ‘Applications’.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754815240842/ca49d207-6ee2-476f-a53d-bde53b2d57dd.png" alt="Applications tab in the Integrations settings on GitHub" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
<li><p>Click the ‘Authorized OAuth Apps’ tab on top.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754815346304/a360f7dc-7024-44c3-8e19-15d94b35ce8e.png" alt="Authorized OAuth Apps tab on GitHub" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
<li><p>Find GitHub CLI and click the ‘three dots’ icon next to it.</p>
</li>
<li><p>Click ‘Revoke’.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754725454783/dd544380-482a-4385-97c1-4ebc35026658.png" alt="Revoke option on GitHub to revoke an authorized OAuth app" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
</li>
<li><p>Confirm it by clicking the ‘I understand, revoke access’ button.</p>
</li>
</ol>
<p>Now, GitHub CLI doesn’t have access to your GitHub account.</p>
<hr>
<h2 id="heading-final-words">Final Words</h2>
<p>🖼️ Credit cover image: <a target="_blank" href="http://undraw.co">undraw.co</a></p>
<p>Thank you for reading! Lastly, you can find me on <a target="_blank" href="https://twitter.com/@AdiatiAyu">X</a> and <a target="_blank" href="https://www.linkedin.com/in/adiatiayu/">LinkedIn</a>. Let's connect! 😊</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Schedule Tasks in Red Hat Enterprise Linux ]]>
                </title>
                <description>
                    <![CDATA[ Red Hat Enterprise Linux (RHEL) is a leading enterprise-grade Linux distribution widely regarded as the gold standard for mission-critical server environments. It provides robust, secure, and scalable solutions for organizations ranging from small bu... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-schedule-tasks-in-red-hat-enterprise-linux/</link>
                <guid isPermaLink="false">685c9989f2073d62fe9b82f5</guid>
                
                    <category>
                        <![CDATA[ RHEL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ rhcsa ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Hang Hu ]]>
                </dc:creator>
                <pubDate>Thu, 26 Jun 2025 00:51:21 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750869114329/79072c41-988a-41f2-9e2f-25618d78fefc.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Red Hat Enterprise Linux (RHEL) is a leading enterprise-grade Linux distribution widely regarded as the gold standard for mission-critical server environments. It provides robust, secure, and scalable solutions for organizations ranging from small businesses to Fortune 500 companies, powering everything from web servers and databases to cloud infrastructure and containerized applications.</p>
<p>You can use RHEL's task scheduling capabilities in scenarios like automating system maintenance (for example, log rotation or backup operations), managing routine administrative tasks (like user account cleanup or security updates), or orchestrating complex workflows in enterprise environments. These scheduling tools are essential for maintaining system health and ensuring that critical operations run without manual intervention.</p>
<p>For system administrators, think of task scheduling as the backbone of automated system management, enabling you to set up processes that run reliably in the background while you focus on more strategic initiatives. Its power lies in its flexibility and reliability, making it an indispensable skill for anyone managing Linux systems in production environments.</p>
<p>In this tutorial, you’ll learn how to schedule tasks in Red Hat Enterprise Linux using various built-in tools and techniques. This content is part of <strong>Schedule Future Tasks</strong>, which is Chapter 2 of the <a target="_blank" href="https://labex.io/courses/red-hat-system-administration-rh134-labs">Red Hat System Administration (RH134) course</a>. RH134 is a fundamental course for the Red Hat Certified System Administrator (RHCSA) certification, one of the most respected credentials in the Linux administration field.</p>
<p>This hands-on tutorial provides practical experience with the scheduling concepts covered in the RH134 curriculum, giving you the skills needed to automate tasks effectively in enterprise RHEL environments.</p>
<h3 id="heading-heres-what-well-cover">Here's what we'll cover:</h3>
<ul>
<li><p><a class="post-section-overview" href="#heading-how-to-schedule-a-one-time-job-with-at">How to Schedule a One-time Job with 'at'</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-manage-at-jobs">How to Manage 'at' jobs</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-schedule-recurring-user-jobs-with-crontab">How to Schedule Recurring User Jobs with 'crontab'</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-manage-user-crontab-entries">How to Manage User 'crontab' Entries</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-schedule-recurring-system-jobs-with-cron-directories">How to Schedule Recurring System Jobs with cron Directories</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-configure-systemd-timers-for-recurring-tasks">How to Configure systemd Timers for Recurring Tasks</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-manage-temporary-files-with-systemd-tmpfiles">How to Manage Temporary Files with systemd-tmpfiles</a></p>
</li>
</ul>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>This tutorial is designed to be beginner-friendly! You just need basic familiarity with using the Linux command line. If you can navigate directories and run simple commands, you're ready to start.</p>
<p>For those looking to deepen their RHEL knowledge, the <a target="_blank" href="https://labex.io/skilltrees/rhel">RHEL Skill Tree</a> offers comprehensive hands-on labs including <a target="_blank" href="https://labex.io/courses/red-hat-system-administration-rh124-labs">RH124</a>, <a target="_blank" href="https://labex.io/courses/red-hat-system-administration-rh134-labs">RH134</a>, <a target="_blank" href="https://labex.io/courses/red-hat-enterprise-linux-automation-with-ansible-rh294">RH294</a>, and other courses for RHCSA and RHCE certifications.</p>
<p>Don't worry if you're new to Red Hat Enterprise Linux – I'll explain everything step by step, and these concepts work on most Linux distributions too.</p>
<h2 id="heading-how-to-schedule-a-one-time-job-with-at"><strong>How to Schedule a One-time Job with 'at'</strong></h2>
<p>First, let’s learn how to schedule a job to run once at a future time using the <code>at</code> command. The <code>at</code> command is useful for executing commands that don’t need to be run repeatedly. We will schedule a simple job, inspect its details, and then remove it.</p>
<p>In this tutorial, we will work directly on the local system to learn task scheduling. You’ll execute all commands in your current terminal environment.</p>
<p>Let's schedule a job to print the current date and time into a file named <code>~/myjob.txt</code> in your home directory. We'll schedule it to run 3 minutes from now:</p>
<pre><code class="lang-bash">at now + 3 minutes &lt;&lt; EOF
date &gt; ~/myjob.txt
EOF
</code></pre>
<p>The <code>warning: commands will be executed using /bin/sh</code> message is normal. The <code>job N at ...</code> output indicates the job number and the scheduled execution time. Make a note of the job number, as you will need it later.</p>
<p>Next, let's schedule another job interactively. This method is useful for entering multiple commands or more complex scripts. We will schedule a job to append "Hello from at job!" to <code>~/at_output.txt</code> 5 minutes from now:</p>
<pre><code class="lang-bash">at now + 5 minutes
</code></pre>
<p>After typing the command and pressing Enter, you will see an <code>at&gt;</code> prompt. Type your command and then press <code>Ctrl+d</code> to finish:</p>
<pre><code class="lang-bash">at &gt; <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello from at job!"</span> &gt;&gt; ~/at_output.txt
at &gt; Ctrl+d
</code></pre>
<p>To view the jobs currently in the <code>at</code> queue, use the <code>atq</code> command. This command lists all pending <code>at</code> jobs for the current user.</p>
<pre><code class="lang-bash">atq
</code></pre>
<p>The output will show the job number, the scheduled time, the queue, and the user who scheduled it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750726190789/d2dd54c0-80a0-4bb2-8561-3114bb279387.png" alt="Output of atq command showing scheduled jobs" class="image--center mx-auto" width="672" height="333" loading="lazy"></p>
<p>You can inspect the commands that a specific <code>at</code> job will run using the <code>at -c</code> command followed by the job number. Replace <code>N</code> with one of the job numbers you noted earlier.</p>
<pre><code class="lang-bash">at -c N
</code></pre>
<p>This command will display the shell script that <code>at</code> will execute for that job. You should see the <code>date &gt; ~/myjob.txt</code> or <code>echo "Hello from at job!" &gt;&gt; ~/at_output.txt</code> command within the output.</p>
<p>Finally, to remove a scheduled <code>at</code> job, use the <code>atrm</code> command followed by the job number. Let's remove the first job we scheduled. Replace <code>N</code> with the job number of your first job.</p>
<pre><code class="lang-bash">atrm N
</code></pre>
<p>After removing the job, you can use <code>atq</code> again to verify that it is no longer in the queue.</p>
<pre><code class="lang-bash">atq
</code></pre>
<p>You should now only see the second job (if it hasn't executed yet) or an empty queue if both jobs have been removed or executed.</p>
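<p>By the way, <code>now + 3 minutes</code> is only one of the time specifications <code>at</code> understands. A few other commonly supported forms (see <code>man at</code> for the full grammar on your system):</p>
<pre><code class="lang-plaintext">at 16:00            # today at 4 PM (or tomorrow, if 4 PM has already passed)
at noon tomorrow    # 12:00 the next day
at now + 1 hour     # one hour from now
at 09:00 2025-12-01 # a specific date and time
</code></pre>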
<p>This completes the first step of scheduling one-time jobs with the <code>at</code> command.</p>
<h2 id="heading-how-to-manage-at-jobs"><strong>How to Manage 'at' jobs</strong></h2>
<p>Now, let’s delve deeper into managing <code>at</code> jobs, including scheduling jobs with different queues and verifying their execution. Understanding <code>at</code> queues can be useful for prioritizing tasks or separating different types of one-time jobs.</p>
<p>We will continue working on the local system to explore more advanced <code>at</code> job management features.</p>
<p>The <code>at</code> command allows you to specify a queue using the <code>-q</code> option. Queues are single letters from <code>a</code> to <code>z</code>. Queue <code>a</code> is the default, and jobs in queues with higher letters run with increasing niceness, which means lower priority: queue <code>a</code> has the highest priority, and queue <code>z</code> has the lowest. Queue <code>b</code> is reserved for batch jobs.</p>
<p>Let's schedule a job in queue <code>g</code> (a lower priority queue) to run in 2 minutes. This job will create a file named <code>~/queue_g_job.txt</code> with a timestamp:</p>
<pre><code class="lang-bash">at -q g now + 2 minutes &lt;&lt; EOF
date &gt; ~/queue_g_job.txt
EOF
</code></pre>
<p>You will see output similar to <code>job N at ...</code>. Note down this job number.</p>
<p>Next, let's schedule another job, this time in queue <code>b</code> (batch queue), which is typically used for jobs that can run when system load is low. This job will append "Batch job executed!" to <code>~/batch_job.txt</code>. We'll schedule it to run 4 minutes from now:</p>
<pre><code class="lang-bash">at -q b now + 4 minutes &lt;&lt; EOF
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Batch job executed!"</span> &gt;&gt; ~/batch_job.txt
EOF
</code></pre>
<p>Again, note down the job number.</p>
<p>To see all pending jobs, including those in different queues, use <code>atq</code>.</p>
<pre><code class="lang-bash">atq
</code></pre>
<p>You should now see both jobs listed, with their respective queue letters (<code>g</code> and <code>b</code>).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750726218380/bcc9d551-0530-48d1-bf7f-46073c6f77a6.png" alt="Output of atq command showing scheduled jobs" class="image--center mx-auto" width="589" height="325" loading="lazy"></p>
<p>Now wait at least 5 minutes so that both scheduled jobs have time to execute. Then check that the files created by your <code>at</code> jobs exist and contain the expected content.</p>
<p>Check <code>~/queue_g_job.txt</code>:</p>
<pre><code class="lang-bash">cat ~/queue_g_job.txt
</code></pre>
<p>You should see a date and time string.</p>
<p>Check <code>~/batch_job.txt</code>:</p>
<pre><code class="lang-bash">cat ~/batch_job.txt
</code></pre>
<p>You should see "Batch job executed!".</p>
<p>If the files are not present or empty, it might mean the jobs haven't executed yet, or there was an issue with the command. You can re-check <code>atq</code> to see if they are still pending.</p>
<h2 id="heading-how-to-schedule-recurring-user-jobs-with-crontab"><strong>How to Schedule Recurring User Jobs with 'crontab'</strong></h2>
<p>Next, you’ll learn how to schedule recurring tasks for a specific user using <code>crontab</code>. Unlike <code>at</code> jobs, which run once, <code>cron</code> jobs run repeatedly at specified intervals. This is ideal for routine maintenance, data backups, or generating reports.</p>
<p>We will continue working on the local system to learn about user crontab management.</p>
<p>The <code>crontab</code> command allows users to create, edit, and view their own <code>cron</code> jobs. Each user has their own <code>crontab</code> file.</p>
<p>To edit your <code>crontab</code> file, use the <code>crontab -e</code> command. This will open your <code>crontab</code> file in the default text editor (usually <code>vim</code>).</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p><strong>Vim editor instructions:</strong></p>
<ul>
<li><p>Press <code>i</code> to enter insert mode (you'll see <code>-- INSERT --</code> at the bottom)</p>
</li>
<li><p>Use arrow keys to navigate</p>
</li>
<li><p>To save and exit: Press <code>Esc</code> to exit insert mode, then type <code>:wq</code> and press <code>Enter</code></p>
</li>
<li><p>To exit without saving: Press <code>Esc</code>, then type <code>:q!</code> and press <code>Enter</code></p>
</li>
</ul>
<p>Inside the editor, you will add a new line to define your <code>cron</code> job. A <code>cron</code> entry has five time-and-date fields, followed by the command to be executed. The fields are:</p>
<ul>
<li><p><strong>Minute (0-59)</strong></p>
</li>
<li><p><strong>Hour (0-23)</strong></p>
</li>
<li><p><strong>Day of Month (1-31)</strong></p>
</li>
<li><p><strong>Month (1-12)</strong></p>
</li>
<li><p><strong>Day of Week (0-7, where 0 or 7 is Sunday)</strong></p>
</li>
</ul>
<p>You can use <code>*</code> as a wildcard to mean "every" for a field, or <code>/</code> to specify step values (for example, <code>*/5</code> for every 5 minutes).</p>
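<p>To build intuition for step values, here's a small Bash loop (just an illustration, not something <code>cron</code> itself runs) that prints the minutes a <code>*/5</code> in the minute field would match:</p>
<pre><code class="lang-bash"># List every minute of the hour that a */5 minute field matches:
# cron fires whenever the minute is divisible by the step value 5.
matches=""
for m in $(seq 0 59); do
  if [ $((m % 5)) -eq 0 ]; then
    matches="$matches $m"
  fi
done
echo "Minutes matched by */5:$matches"
</code></pre>
<p>That's 12 firings per hour: minutes 0, 5, 10, and so on up to 55.</p>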
<p>Let's schedule a job that appends the current date and time to a file named <code>~/my_cron_log.txt</code> every minute. This will allow us to quickly observe the <code>cron</code> job in action.</p>
<p>Follow these steps in Vim:</p>
<ol>
<li><p>Press <code>i</code> to enter insert mode</p>
</li>
<li><p>Add the following line to the <code>crontab</code> file:</p>
</li>
</ol>
<pre><code class="lang-bash">* * * * * /usr/bin/date &gt;&gt; ~/my_cron_log.txt
</code></pre>
<ol start="3">
<li><p>Press <code>Esc</code> to exit insert mode</p>
</li>
<li><p>Type <code>:wq</code> and press <code>Enter</code> to save and exit</p>
</li>
</ol>
<p>You should see a message indicating that a new <code>crontab</code> has been installed:</p>
<pre><code class="lang-plaintext">crontab: installing new crontab
</code></pre>
<p>To verify that your <code>cron</code> job has been successfully added, you can list your <code>crontab</code> entries using the <code>crontab -l</code> command:</p>
<pre><code class="lang-bash">crontab -l
</code></pre>
<p>You should see the line you just added:</p>
<pre><code class="lang-plaintext">* * * * * /usr/bin/date &gt;&gt; ~/my_cron_log.txt
</code></pre>
<p>Now, wait for a minute or two to allow the <code>cron</code> job to execute at least once. You can check the current time to see when the next minute mark will occur:</p>
<pre><code class="lang-bash">date
</code></pre>
<p>After waiting for at least two minutes to allow the cron job to execute a couple of times, check the content of the <code>~/my_cron_log.txt</code> file.</p>
<pre><code class="lang-bash">cat ~/my_cron_log.txt
</code></pre>
<p>You should see one or more lines, each containing a date and time, indicating that your <code>cron</code> job has executed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750726409656/bfd85cf0-316a-4c1d-89c2-0d60c30cd33f.png" alt="Cron job output in log file" class="image--center mx-auto" width="607" height="281" loading="lazy"></p>
<pre><code class="lang-plaintext">Mon Apr 8 10:30:01 AM EDT 2025
Mon Apr 8 10:31:01 AM EDT 2025
</code></pre>
<h2 id="heading-how-to-manage-user-crontab-entries"><strong>How to Manage User 'crontab' Entries</strong></h2>
<p>Now you will learn more advanced techniques for managing user <code>crontab</code> entries, including editing existing jobs, adding multiple jobs, and understanding special <code>cron</code> strings. Effective <code>crontab</code> management is crucial for automating routine tasks.</p>
<p>We will continue working on the local system to explore advanced crontab management techniques.</p>
<p>Let's start by adding a new <code>cron</code> job. This job will append "Hello from cron!" to <code>~/cron_messages.txt</code> every two minutes.</p>
<p>Open your <code>crontab</code> for editing:</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>In Vim:</p>
<ol>
<li><p>Press <code>i</code> to enter insert mode</p>
</li>
<li><p>Add the following line to the <code>crontab</code> file:</p>
</li>
</ol>
<pre><code class="lang-bash">*/2 * * * * <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello from cron!"</span> &gt;&gt; ~/cron_messages.txt
</code></pre>
<ol start="3">
<li><p>Press <code>Esc</code> to exit insert mode</p>
</li>
<li><p>Type <code>:wq</code> and press <code>Enter</code> to save and exit</p>
</li>
</ol>
<p>Verify that the entry is added:</p>
<pre><code class="lang-bash">crontab -l
</code></pre>
<p>You should see the newly added line.</p>
<p>Now, let's add another <code>cron</code> job that runs daily at 08:00 AM. This job will record the disk usage of your home directory to <code>~/disk_usage.log</code>.</p>
<p>Open your <code>crontab</code> for editing again:</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>In Vim:</p>
<ol>
<li><p>Press <code>i</code> to enter insert mode</p>
</li>
<li><p>Add the following line below the previous one:</p>
</li>
</ol>
<pre><code class="lang-bash">0 8 * * * du -sh ~ &gt;&gt; ~/disk_usage.log
</code></pre>
<ol start="3">
<li><p>Press <code>Esc</code> to exit insert mode</p>
</li>
<li><p>Type <code>:wq</code> and press <code>Enter</code> to save and exit</p>
</li>
</ol>
<p>Verify that both entries are present:</p>
<pre><code class="lang-bash">crontab -l
</code></pre>
<p>You should now see both <code>cron</code> jobs listed.</p>
<p><code>cron</code> also supports special strings that can simplify common schedules. These include <code>@reboot</code>, <code>@yearly</code>, <code>@annually</code>, <code>@monthly</code>, <code>@weekly</code>, <code>@daily</code>, <code>@midnight</code>, and <code>@hourly</code>. For example, <code>@hourly</code> is equivalent to <code>0 * * * *</code>.</p>
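<p>Here's how those special strings map onto the standard five-field syntax:</p>
<pre><code class="lang-plaintext">@reboot              run once, at startup (no field equivalent)
@yearly, @annually   0 0 1 1 *
@monthly             0 0 1 * *
@weekly              0 0 * * 0
@daily, @midnight    0 0 * * *
@hourly              0 * * * *
</code></pre>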
<p>Let's add a job that runs hourly and records the system uptime to <code>~/uptime_log.txt</code>.</p>
<p>Open your <code>crontab</code> for editing:</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>In Vim:</p>
<ol>
<li><p>Press <code>i</code> to enter insert mode</p>
</li>
<li><p>Add the following line:</p>
</li>
</ol>
<pre><code class="lang-bash">@hourly uptime &gt;&gt; ~/uptime_log.txt
</code></pre>
<ol start="3">
<li><p>Press <code>Esc</code> to exit insert mode</p>
</li>
<li><p>Type <code>:wq</code> and press <code>Enter</code> to save and exit</p>
</li>
</ol>
<p>Verify all three entries:</p>
<pre><code class="lang-bash">crontab -l
</code></pre>
<p>You should now see all three <code>cron</code> jobs.</p>
<p>To demonstrate the effect of these jobs, we will wait for a short period. Since the jobs are scheduled at different intervals, we won't see all of them execute immediately, but we can verify the setup.</p>
<p>Wait for at least 3 minutes to allow the <code>*/2</code> job to run at least once.</p>
<p>Check the <code>~/cron_messages.txt</code> file:</p>
<pre><code class="lang-bash">cat ~/cron_messages.txt
</code></pre>
<p>You should see at least one "Hello from cron!" message.</p>
<pre><code class="lang-plaintext">Hello from cron!
</code></pre>
<p>The <code>~/disk_usage.log</code> and <code>~/uptime_log.txt</code> files might not be created yet, depending on the current time, as they are scheduled for daily and hourly execution, respectively. The important part is that their entries are correctly configured in your <code>crontab</code>.</p>
<h2 id="heading-how-to-schedule-recurring-system-jobs-with-cron-directories"><strong>How to Schedule Recurring System Jobs with</strong> <code>cron</code> <strong>Directories</strong></h2>
<p>In this step, you will learn how to schedule recurring system-wide tasks using <code>cron</code> directories. Unlike user <code>crontab</code> entries, which are specific to a user, system <code>cron</code> jobs are managed by the root user and affect the entire system. These are typically used for system maintenance, log rotation, and other administrative tasks.</p>
<p>We will continue working on the local system to explore system-wide cron job configuration.</p>
<p>System-wide <code>cron</code> jobs are defined in <code>/etc/crontab</code> or by placing scripts in specific directories:</p>
<ul>
<li><p><code>/etc/cron.hourly/</code>: Scripts in this directory run once an hour.</p>
</li>
<li><p><code>/etc/cron.daily/</code>: Scripts in this directory run once a day.</p>
</li>
<li><p><code>/etc/cron.weekly/</code>: Scripts in this directory run once a week.</p>
</li>
<li><p><code>/etc/cron.monthly/</code>: Scripts in this directory run once a month.</p>
</li>
</ul>
<p>These directories are processed by the <code>run-parts</code> utility, which executes all executable files within them.</p>
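<p>Conceptually, <code>run-parts</code> just walks a directory and runs each file that has the executable bit set. The sketch below emulates that core idea in plain Bash on a throwaway directory (the filename <code>demo</code> is made up for the illustration; the real <code>run-parts</code> has more options and filename rules):</p>
<pre><code class="lang-bash"># Emulate the core of run-parts: execute every executable file in a directory.
dir=$(mktemp -d)

# One executable script and one plain file that should be skipped.
printf '#!/bin/sh\necho hello-from-script\n' | cp /dev/stdin "$dir/demo"
chmod +x "$dir/demo"
touch "$dir/notes.txt"

result=""
for f in "$dir"/*; do
  if [ -x "$f" ]; then
    result=$("$f")   # only $dir/demo runs; notes.txt is ignored
  fi
done
echo "$result"

rm -rf "$dir"
</code></pre>
<p>This is also why the executable bit on your scripts matters: a file without it is silently skipped.</p>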
<p>To manage system <code>cron</code> jobs, you need root privileges. Since the labex user has sudo access, we can use <code>sudo</code> for the required commands.</p>
<p>Let's create a simple script that logs a message to the system log. We will place this script in <code>/etc/cron.hourly/</code> to make it run hourly.</p>
<p>First, create the script file <code>/etc/cron.hourly/my_hourly_script</code>:</p>
<pre><code class="lang-bash">sudo nano /etc/cron.hourly/my_hourly_script
</code></pre>
<p>Add the following content to the file:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
logger <span class="hljs-string">"Hourly cron job executed at <span class="hljs-subst">$(date)</span>"</span>
</code></pre>
<p>Save and exit the editor (<code>Ctrl+o</code>, <code>Enter</code>, <code>Ctrl+x</code> in <code>nano</code>).</p>
<p>Next, you need to make the script executable. Without execute permissions, <code>run-parts</code> will ignore it.</p>
<pre><code class="lang-bash">sudo chmod +x /etc/cron.hourly/my_hourly_script
</code></pre>
<p>Now, let's verify that the script is executable:</p>
<pre><code class="lang-bash">ls -l /etc/cron.hourly/my_hourly_script
</code></pre>
<p>You should see <code>x</code> in the permissions, for example: <code>-rwxr-xr-x</code>.</p>
<p>Since <code>cron.hourly</code> jobs run once an hour, we can't wait for a full hour to verify its execution in this tutorial. But we can manually trigger the <code>run-parts</code> command for the hourly directory to simulate its execution.</p>
<pre><code class="lang-bash">sudo run-parts /etc/cron.hourly/
</code></pre>
<p>This command will execute all executable scripts in <code>/etc/cron.hourly/</code>. The script we created uses the <code>logger</code> command to write messages to the system log.</p>
<p>In a real RHEL system, you would be able to check the system logs using <code>journalctl</code> or <code>/var/log/messages</code> to verify that the script executed successfully.</p>
<p>This completes the system cron job management step. The script will remain in place and would execute hourly in a real system environment.</p>
<h2 id="heading-how-to-configure-systemd-timers-for-recurring-tasks"><strong>How to Configure</strong> <code>systemd</code> <strong>Timers for Recurring Tasks</strong></h2>
<p>Next, you will learn about <code>systemd</code> timers, which are a modern alternative to <code>cron</code> for scheduling tasks on Linux systems. <code>systemd</code> timers offer more flexibility and better integration with the <code>systemd</code> ecosystem.</p>
<p><code>systemd</code> timers work in conjunction with <code>systemd</code> service units. A timer unit (<code>.timer</code> file) defines when a task should run, and a service unit (<code>.service</code> file) defines what task should be executed.</p>
<p>We will continue working on the local system to explore systemd timer configuration.</p>
<p>You will need root privileges to create <code>systemd</code> unit files in system directories. Since the labex user has sudo access, we can use <code>sudo</code> for the required commands.</p>
<p>Let's create a simple service that logs a message to a file. We will place this service unit file in <code>/etc/systemd/system/</code>, which is where custom service units are typically stored.</p>
<p>Create the service unit file <code>/etc/systemd/system/my-custom-task.service</code>:</p>
<pre><code class="lang-bash">sudo nano /etc/systemd/system/my-custom-task.service
</code></pre>
<p>Add the following content to the file:</p>
<pre><code class="lang-ini"><span class="hljs-section">[Unit]</span>
<span class="hljs-attr">Description</span>=My Custom Scheduled Task

<span class="hljs-section">[Service]</span>
<span class="hljs-attr">Type</span>=<span class="hljs-literal">on</span>eshot
<span class="hljs-attr">ExecStart</span>=/bin/bash -c <span class="hljs-string">'echo "My custom task executed at $(date)" &gt;&gt; /var/log/my-custom-task.log'</span>
</code></pre>
<p>Save and exit the editor (<code>Ctrl+o</code>, <code>Enter</code>, <code>Ctrl+x</code> in <code>nano</code>).</p>
<p>Next, create the timer unit file <code>/etc/systemd/system/my-custom-task.timer</code>. This timer will activate our service every 5 minutes.</p>
<pre><code class="lang-bash">sudo nano /etc/systemd/system/my-custom-task.timer
</code></pre>
<p>Add the following content to the file:</p>
<pre><code class="lang-ini"><span class="hljs-section">[Unit]</span>
<span class="hljs-attr">Description</span>=Run My Custom Scheduled Task every <span class="hljs-number">5</span> minutes

<span class="hljs-section">[Timer]</span>
<span class="hljs-attr">OnCalendar</span>=*:<span class="hljs-number">0</span>/<span class="hljs-number">5</span>
<span class="hljs-attr">Persistent</span>=<span class="hljs-literal">true</span>

<span class="hljs-section">[Install]</span>
<span class="hljs-attr">WantedBy</span>=timers.target
</code></pre>
<p>Save and exit the editor.</p>
<p><strong>Explanation of</strong> <code>OnCalendar</code>:</p>
<ul>
<li><p><code>*:0/5</code> means "every 5 minutes".</p>
<ul>
<li><p><code>*</code> is the hour field and matches any hour. The date part is omitted entirely, so it defaults to every day.</p>
</li>
<li><p><code>0/5</code> is the minute field, meaning starting at minute 0, every 5 minutes (0, 5, 10, ..., 55).</p>
</li>
</ul>
</li>
</ul>
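<p>To see exactly which minutes a <code>start/step</code> field matches, you can expand it by hand. The helper below is hypothetical, written only to illustrate the semantics (on a real system, <code>systemd-analyze calendar '*:0/5'</code> will show the upcoming trigger times):</p>

```bash
#!/bin/sh
# Hypothetical helper: expand a "start/step" minute field, such as the
# "0/5" in OnCalendar=*:0/5, into the concrete minutes it matches.
expand_minutes() {
  start=${1%%/*}    # text before the slash
  step=${1##*/}     # text after the slash
  m=$start
  while [ "$m" -lt 60 ]; do
    printf '%s ' "$m"
    m=$((m + step))
  done
  printf '\n'
}

expand_minutes "0/5"    # prints: 0 5 10 15 20 25 30 35 40 45 50 55
```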
<p>In a typical <code>systemd</code> environment, you would now run <code>systemctl daemon-reload</code> to make <code>systemd</code> aware of the new unit files, and then <code>systemctl enable --now my-custom-task.timer</code> to start the timer.</p>
<p>Let's verify the existence of the created files:</p>
<pre><code class="lang-bash">ls -l /etc/systemd/system/my-custom-task.service
ls -l /etc/systemd/system/my-custom-task.timer
</code></pre>
<p>You should see output indicating that both files exist.</p>
<p>To simulate the execution of the service, you can manually run the command defined in <code>ExecStart</code>:</p>
<pre><code class="lang-bash">sudo /bin/bash -c <span class="hljs-string">'echo "My custom task executed at $(date)" &gt;&gt; /var/log/my-custom-task.log'</span>
</code></pre>
<p>Now, check the log file to see the output:</p>
<pre><code class="lang-bash">sudo cat /var/<span class="hljs-built_in">log</span>/my-custom-task.log
</code></pre>
<p>You should see the message you just logged:</p>
<pre><code class="lang-plaintext">My custom task executed at Tue Jun 10 06:54:40 UTC 2025
</code></pre>
<p>This completes the systemd timer configuration step. The service and timer unit files will remain in place for reference.</p>
<h2 id="heading-how-to-manage-temporary-files-with-systemd-tmpfiles"><strong>How to Manage Temporary Files with</strong> <code>systemd-tmpfiles</code></h2>
<p>Now you’ll learn how to manage temporary files and directories using <code>systemd-tmpfiles</code>. This utility is part of <code>systemd</code> and is responsible for creating, deleting, and cleaning up volatile and temporary files and directories. It's commonly used to manage <code>/tmp</code>, <code>/var/tmp</code>, and other temporary storage locations, ensuring that old files are removed periodically.</p>
<p>We will continue working on the local system to explore systemd-tmpfiles configuration.</p>
<p>You will need root privileges to configure <code>systemd-tmpfiles</code>. Since the labex user has sudo access, we can use <code>sudo</code> for the required commands.</p>
<p><code>systemd-tmpfiles</code> reads configuration files from <code>/etc/tmpfiles.d/</code> and <code>/usr/lib/tmpfiles.d/</code>. These files define rules for creating, deleting, and managing files and directories.</p>
<p>Let's create a custom configuration file to manage a new temporary directory. We will create a directory <code>/run/my_temp_dir</code> and configure <code>systemd-tmpfiles</code> to clean files older than 1 minute from it.</p>
<p>Create the configuration file <code>/etc/tmpfiles.d/my_temp_dir.conf</code>:</p>
<pre><code class="lang-bash">sudo nano /etc/tmpfiles.d/my_temp_dir.conf
</code></pre>
<p>Add the following content to the file:</p>
<pre><code class="lang-bash">d /run/my_temp_dir 0755 labex labex 1m
</code></pre>
<p><strong>Explanation of the line:</strong></p>
<ul>
<li><p><code>d</code>: Specifies that this entry defines a directory.</p>
</li>
<li><p><code>/run/my_temp_dir</code>: The path to the directory.</p>
</li>
<li><p><code>0755</code>: The permissions for the directory.</p>
</li>
<li><p><code>labex labex</code>: The owner and group for the directory.</p>
</li>
<li><p><code>1m</code>: The age after which files in this directory should be deleted (1 minute).</p>
</li>
</ul>
<p>Save and exit the editor (<code>Ctrl+o</code>, <code>Enter</code>, <code>Ctrl+x</code> in <code>nano</code>).</p>
<p>Now, let's tell <code>systemd-tmpfiles</code> to apply this configuration. The <code>--create</code> option will create the directory if it doesn't exist.</p>
<pre><code class="lang-bash">sudo systemd-tmpfiles --create /etc/tmpfiles.d/my_temp_dir.conf
</code></pre>
<p>Verify that the directory has been created with the correct permissions and ownership:</p>
<pre><code class="lang-bash">ls -ld /run/my_temp_dir
</code></pre>
<p>You should see output similar to:</p>
<pre><code class="lang-plaintext">drwxr-xr-x 2 labex labex 6 Jun 10 06:55 /run/my_temp_dir
</code></pre>
<p>Next, let's create a test file inside this new temporary directory:</p>
<pre><code class="lang-bash">sudo touch /run/my_temp_dir/test_file.txt
</code></pre>
<p>Verify the file exists:</p>
<pre><code class="lang-bash">ls -l /run/my_temp_dir/test_file.txt
</code></pre>
<p>Now, we need to wait for more than 1 minute for the file to become "old" according to our configuration. Wait for at least 70 seconds (1 minute and 10 seconds).</p>
<p>After waiting for more than 1 minute, we will manually run <code>systemd-tmpfiles</code> with the <code>--clean</code> option to trigger the cleanup process based on our configuration.</p>
<pre><code class="lang-bash">sudo systemd-tmpfiles --clean /etc/tmpfiles.d/my_temp_dir.conf
</code></pre>
<p>Finally, check if the <code>test_file.txt</code> has been removed:</p>
<pre><code class="lang-bash">ls -l /run/my_temp_dir/test_file.txt
</code></pre>
<p>You should get a "No such file or directory" error, indicating that <code>systemd-tmpfiles</code> successfully cleaned up the old file.</p>
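<p>Conceptually, this age-based sweep resembles a <code>find</code> command that deletes files older than the configured age. The snippet below is a rough illustration of the idea only, not what <code>systemd-tmpfiles</code> actually runs (its aging logic also considers access and status-change times), and it assumes GNU <code>touch</code> and <code>find</code>:</p>

```bash
#!/bin/sh
# Rough illustration of an age-based cleanup: delete regular files whose
# modification time is more than 1 minute in the past.
demo=$(mktemp -d)
touch -d '5 minutes ago' "$demo/old_file"   # backdate so it counts as "old"
touch "$demo/new_file"                      # fresh file, should survive

find "$demo" -type f -mmin +1 -delete       # remove files older than 1 minute

ls "$demo"    # prints: new_file
rm -rf "$demo"
```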
<p>This completes the systemd-tmpfiles configuration step. The configuration file and temporary directory will remain in place for reference.</p>
<h2 id="heading-summary"><strong>Summary</strong></h2>
<p>In this tutorial, you learned how to schedule and manage one-time tasks using the <code>at</code> command, including scheduling jobs interactively and non-interactively, viewing the <code>at</code> queue with <code>atq</code>, and deleting pending jobs with <code>atrm</code>. You also learned how to schedule recurring user-specific tasks using <code>crontab</code>, including how to edit, list, and remove cron jobs, and you learned the cron syntax for specifying execution times.</p>
<p>We also demonstrated how to schedule system-wide recurring tasks by placing scripts in standard cron directories (<code>/etc/cron.hourly</code>, <code>/etc/cron.daily</code>, etc.) and how to create custom cron jobs in <code>/etc/cron.d</code>.</p>
<p>Finally, you explored advanced task scheduling with <code>systemd</code> timers, learning to create and enable service and timer units for recurring tasks, and how to manage temporary files and directories using <code>systemd-tmpfiles</code> for automated cleanup.</p>
<p>This comprehensive tutorial provided practical experience in managing diverse task scheduling needs on RHEL systems, from simple one-off commands to complex recurring system processes.</p>
<p>To practice the operations from this tutorial, try the interactive hands-on lab: <a target="_blank" href="https://labex.io/labs/rhel-schedule-tasks-in-red-hat-enterprise-linux-588897?course=red-hat-system-administration-rh134-labs">Schedule Tasks in Red Hat Enterprise Linux</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Configure Network Interfaces in Linux ]]>
                </title>
                <description>
                    <![CDATA[ Networking is an essential part of any Linux system. Proper networking allows communication between devices and the internet. Understanding the network interface is vital when setting up servers, solving connectivity issues, and managing device traff... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/configure-network-interfaces-in-linux/</link>
                <guid isPermaLink="false">6850922657a503eb47ff3b2b</guid>
                
                    <category>
                        <![CDATA[ networking ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Eti Ijeoma ]]>
                </dc:creator>
                <pubDate>Mon, 16 Jun 2025 21:52:38 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750110739161/ebf2347c-ac63-4fab-ad2f-5d9229e77eaa.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Networking is an essential part of any Linux system. Proper networking allows communication between devices and the internet. Understanding the network interface is vital when setting up servers, solving connectivity issues, and managing device traffic flow.</p>
<p>A common problem in networking is losing connectivity after modifying network settings, which can lock you out of the system. This usually happens because of a misconfigured IP address, incorrect gateway or DNS settings, or a poor understanding of network interface configuration.</p>
<p>In this article, we’ll guide you through understanding network interface configuration: setting up and managing network interfaces on Linux, checking available interfaces, configuring static and dynamic IP addresses, and best practices to follow when setting up network interfaces. By the end of this article, you’ll have a solid foundation in network interfaces.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-what-are-network-interfaces">What are Network Interfaces?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-types-of-network-interfaces-in-linux">Types of Network Interfaces in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-why-network-interfaces-matter">Why Network Interfaces Matter</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-list-network-interfaces-in-linux">How to List Network Interfaces in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-configure-network-interfaces-in-linux">How to Configure Network Interfaces in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-a-network-bridge-in-linux">How to Set Up a Network Bridge in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-best-practices-for-configuring-network-interfaces-in-linux">Best Practices for Configuring Network Interfaces in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-what-are-network-interfaces">What are Network Interfaces?</h2>
<p>A network interface is a connection point within the Linux system that allows communication with other devices on the network. It is how the Linux kernel links the software side of the network with the hardware side. Linux systems provide many network interfaces that facilitate communication between the system and external networks.</p>
<p>Linux network interfaces are essential for troubleshooting, configuration, management, and optimization of networking tasks. Understanding what they are and how they work allows you to optimize your server networking and security.</p>
<h2 id="heading-types-of-network-interfaces-in-linux">Types of Network Interfaces in Linux</h2>
<p>Network interfaces can be classified into two main categories: physical and virtual network interfaces.</p>
<h3 id="heading-physical-network-interfaces">Physical Network Interfaces</h3>
<p>Physical network interfaces are the hardware components that connect the system to a physical network, such as Ethernet or Wi-Fi. These adapters, commonly called Network Interface Cards (NICs), are identified by device names such as eth0 and wlan0. They include the following:</p>
<ol>
<li><p><strong>Ethernet Interface (eth0, eth1, and so on)</strong></p>
<p> The Ethernet interface is used for wired connections via an Ethernet card and supports high-speed networking, which makes it common in data centers and on servers. </p>
</li>
<li><p><strong>Wi-Fi interface (wlan0, wlan1, and so on)</strong></p>
<p> This represents a wireless network adapter and enables wireless connectivity to Wi-Fi networks.</p>
</li>
</ol>
<h3 id="heading-virtual-network-interfaces">Virtual Network Interfaces</h3>
<p>Virtual network interfaces are software-based interfaces managed by the Linux operating system. They integrate network virtualization technologies like Docker or KVM. There are several virtual network interfaces, and the most common ones include:</p>
<ul>
<li><p><strong>Loopback interface</strong>: This is a special interface that allows a system to communicate internally. It is permanently assigned the IP address 127.0.0.1, referred to as the <a target="_blank" href="http://localhost">localhost</a>.</p>
</li>
<li><p><strong>Bridge Interface</strong>: This connects multiple network interfaces so they behave as a single network segment, which is useful in virtualization environments (for example, Linux KVM or Docker networking).</p>
</li>
<li><p><strong>Tunnel Interface</strong>: This is used for VPNs and network tunnels, carrying encrypted traffic between endpoints.</p>
</li>
</ul>
<h2 id="heading-why-network-interfaces-matter">Why Network Interfaces Matter</h2>
<p>Network interfaces are an essential component of a Linux system. They enable communication between devices and the internet, and properly configuring them provides the following benefits:</p>
<p><strong>Seamless connectivity</strong>: Network interfaces allow devices to communicate over local networks and the internet, enabling proper data exchange between servers and networks.</p>
<p><strong>Proper network management</strong>: Administrators can configure network interfaces by creating, managing, and assigning static or dynamic IPs and optimizing traffic flow.</p>
<p><strong>Improved security</strong>: Administrators can configure network interfaces with firewalls and VPNs to secure data and prevent unauthorized access.</p>
<p><strong>Support for virtualization and containerization</strong>: Virtual network interfaces enable proper communication between virtual machines, Docker containers, and physical servers. This makes them essential for creating and managing DevOps environments.</p>
<h2 id="heading-how-to-list-network-interfaces-in-linux">How to List Network Interfaces in Linux</h2>
<p>You can check the available network interfaces within the Linux environment using the following commands.</p>
<ol>
<li><p><strong>Using the</strong> <code>ip</code> <strong>command:</strong></p>
<p> To list all network interfaces and their status, you can use the <code>ip link show</code> command. It displays details about the network interfaces, like the name, status, and MAC address.</p>
</li>
<li><p><strong>Using the</strong> <code>ifconfig</code> <strong>command</strong></p>
<p> To list all network interfaces, use this command: <code>ifconfig -a</code>. The command also displays details about the network interfaces and their current state.</p>
</li>
<li><p><strong>Using</strong> <a target="_blank" href="https://networkmanager.dev/docs/api/latest/nmcli.html"><code>nmcli</code></a> <strong>for NetworkManager-controlled systems</strong></p>
<p> To check the status of all network interfaces managed by NetworkManager, run:</p>
<p> <code>nmcli device status</code>.</p>
</li>
<li><p><strong>Using the</strong> <code>/sys/class/net/</code> <strong>directory</strong></p>
<p> To list all network interfaces, run <code>ls /sys/class/net/</code>. This command is useful for scripting and automation because it provides a reliable way to check available interfaces programmatically.</p>
</li>
</ol>
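<p>As a quick sketch of the <code>/sys/class/net/</code> approach, the following script lists each interface together with its link state, read from the <code>operstate</code> file that every interface directory exposes:</p>

```bash
#!/bin/sh
# List every network interface alongside its current link state by
# reading /sys/class/net/<interface>/operstate.
for iface in /sys/class/net/*; do
  name=$(basename "$iface")
  state=$(cat "$iface/operstate" 2>/dev/null || echo unknown)
  printf '%-10s %s\n' "$name" "$state"
done
```

<p>On a typical machine this prints one line per interface, for example <code>eth0 up</code>; the loopback interface often reports <code>unknown</code>, which is normal.</p>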
<h2 id="heading-how-to-configure-network-interfaces-in-linux">How to Configure Network Interfaces in Linux</h2>
<p>Network interface configuration is essential for managing Linux servers and workstations. Understanding this configuration will help ensure smooth connectivity within your systems. This section will give you the correct information on configuring network interfaces.</p>
<h3 id="heading-assign-a-static-ip-address">Assign a Static IP Address</h3>
<p>A static IP address ensures the device maintains the same IP after each reboot. This is particularly useful for servers and devices that need consistent addressing. To assign a static IP address, the NetworkManager Command Line Interface (<strong>nmcli</strong>) provides a command-line utility to configure the network interface as shown below.</p>
<pre><code class="lang-bash">nmcli connection modify eth0 ipv4.addresses 192.168.1.100/24   <span class="hljs-comment"># set a static IPv4 address and subnet mask</span>

nmcli connection modify eth0 ipv4.gateway 192.168.1.1          <span class="hljs-comment"># define the default gateway</span>

nmcli connection modify eth0 ipv4.dns <span class="hljs-string">"8.8.8.8 8.8.4.4"</span>        <span class="hljs-comment"># configure primary and secondary DNS servers</span>

nmcli connection modify eth0 ipv4.method manual                <span class="hljs-comment"># switch the interface from DHCP to manual mode</span>

nmcli connection up eth0                                       <span class="hljs-comment"># re-activate the connection so the new settings take effect</span>
</code></pre>
<p>These commands set a fixed IP, gateway, and DNS on eth0, switch the interface to manual mode, and restart it so the new settings take effect. The settings persist across reboots because they are stored by <code>NetworkManager</code>.</p>
<h3 id="heading-assign-a-temporary-ip-address">Assign a Temporary IP Address</h3>
<p>The <code>ip</code> command lets you configure interfaces dynamically (not persistent across reboots):</p>
<pre><code class="lang-bash">ip addr add 192.168.1.100/24 dev eth0     <span class="hljs-comment"># assign 192.168.1.100/24 to interface eth0 (temporary)</span>

ip route add default via 192.168.1.1      <span class="hljs-comment"># set the default gateway to 192.168.1.1</span>
</code></pre>
<p>These two commands give eth0 the IP <code>192.168.1.100/24</code> and point all outbound traffic to the gateway <code>192.168.1.1</code>. The settings last only until the next reboot or interface reset.</p>
<h3 id="heading-assign-an-ip-address-with-ifconfig-deprecated">Assign an IP Address with ifconfig (deprecated)</h3>
<p>Older systems still ship with <code>ifconfig</code> and <code>route</code>. These commands are also temporary.</p>
<pre><code class="lang-bash">ifconfig eth0 192.168.1.100 netmask 255.255.255.0 up  <span class="hljs-comment"># assign 192.168.1.100/24 to eth0 and bring it up</span>

route add default gw 192.168.1.1 eth0                <span class="hljs-comment"># set the default gateway to 192.168.1.1 via eth0</span>
</code></pre>
<blockquote>
<p><strong>Note:</strong> Prefer <code>ip</code> or <code>nmcli</code> on modern systems.</p>
</blockquote>
<h3 id="heading-enable-dhcp-with-nmcli">Enable DHCP with nmcli</h3>
<p>With DHCP, the network hands out an IP address to the interface automatically.</p>
<pre><code class="lang-bash">nmcli connection modify eth0 ipv4.method auto   <span class="hljs-comment"># switch eth0 to use DHCP for automatic addressing</span>

nmcli connection up eth0                        <span class="hljs-comment"># restart the connection so the new DHCP setting takes effect</span>
</code></pre>
<p>To renew or request a lease directly:</p>
<pre><code class="lang-bash">dhclient eth0   <span class="hljs-comment"># manually request or renew an IP address via DHCP on interface eth0</span>
</code></pre>
<p>These commands set eth0 to use DHCP, restart the link so the change takes effect, and (optionally) trigger an instant lease renewal.</p>
<h3 id="heading-assign-multiple-ip-addresses-to-one-interface">Assign Multiple IP Addresses to One Interface</h3>
<p>A network interface can have multiple addresses assigned to it, which is useful for hosting multiple services on a single interface.</p>
<p><strong>Using IP command (Temporary Assignment)</strong></p>
<pre><code class="lang-bash">ip addr add 192.168.1.101/24 dev eth0   <span class="hljs-comment"># add an extra IPv4 address to eth0 (temporary)</span>

ip addr add 2001:db8::1/64 dev eth0     <span class="hljs-comment"># add an IPv6 address to eth0 (temporary)</span>
</code></pre>
<p>These two commands attach an extra IPv4 and an IPv6 address to eth0 until the interface resets or the system reboots.</p>
<p><strong>Persistent Configuration (Netplan)</strong></p>
<p>Edit the <code>/etc/netplan/01-netcfg.yaml</code> file:</p>
<pre><code class="lang-yaml">network:

  version: 2

  renderer: networkd

  ethernets:

    eth0:

      addresses:

        - 192.168.1.100/24

        - 192.168.1.101/24

        - 2001:db8::1/64
</code></pre>
<p>After editing the file, run <code>sudo netplan apply</code> to make the additional addresses stick across reboots.</p>
<h2 id="heading-how-to-set-up-a-network-bridge-in-linux">How to Set Up a Network Bridge in Linux</h2>
<p>A network bridge allows multiple interfaces to act as a single network segment, which is useful in virtualization (KVM, Docker).</p>
<p><strong>Using</strong> <code>brctl</code> <strong>(bridge-utils package)</strong></p>
<pre><code class="lang-bash">brctl addbr br0                       <span class="hljs-comment"># create a new bridge interface named br0</span>

brctl addif br0 eth0                  <span class="hljs-comment"># add physical interface eth0 to the bridge</span>

ip addr add 192.168.1.100/24 dev br0  <span class="hljs-comment"># assign an IP address to the bridge, not to eth0</span>

ip link <span class="hljs-built_in">set</span> br0 up                    <span class="hljs-comment"># bring the bridge interface online</span>
</code></pre>
<p>These commands create bridge br0, attach eth0 to it, give the bridge its own IP, and bring it online.</p>
<p><strong>Using nmcli (for NetworkManager-managed systems)</strong></p>
<pre><code class="lang-bash">nmcli connection add <span class="hljs-built_in">type</span> bridge ifname br0                       <span class="hljs-comment"># create a new bridge named br0</span>

nmcli connection modify br0 bridge.stp no                         <span class="hljs-comment"># turn off Spanning Tree Protocol</span>

nmcli connection add <span class="hljs-built_in">type</span> bridge-slave ifname eth0 master br0     <span class="hljs-comment"># attach physical interface eth0 to br0</span>

nmcli connection up br0                                           <span class="hljs-comment"># bring the bridge online so settings take effect</span>
</code></pre>
<p>This sequence builds the same bridge through NetworkManager, disables <a target="_blank" href="https://en.wikipedia.org/wiki/Spanning_Tree_Protocol">STP</a> for faster convergence, links eth0 as a slave, and activates the bridge so guests can reach the network.</p>
<h2 id="heading-best-practices-for-configuring-network-interfaces-in-linux">Best Practices for Configuring Network Interfaces in Linux</h2>
<h3 id="heading-make-your-configurations-persistent"><strong>Make Your Configurations Persistent</strong></h3>
<p>One of the mistakes network engineers make in Linux networking is making changes that do not persist after rebooting. While specific commands can modify the network settings temporarily, they do not save these changes permanently.</p>
<p>To ensure that network settings survive server reboots, store them in your distribution's persistent configuration, such as <code>/etc/network/interfaces</code> (ifupdown), Netplan YAML files, or NetworkManager connection profiles. Once all changes are persistent, there will be no unexpected disruptions when a system restarts.</p>
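<p>As a sketch, a persistent static address on a Debian-style system using ifupdown might look like the following in <code>/etc/network/interfaces</code> (the interface name and addresses are examples, and <code>dns-nameservers</code> requires the resolvconf package on some systems):</p>

```plaintext
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
```

<p>On distributions that use Netplan or NetworkManager instead, the equivalent settings go in a Netplan YAML file or an <code>nmcli</code> connection profile, as shown earlier in this article.</p>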
<h3 id="heading-assign-static-ips-for-servers"><strong>Assign Static IPs for Servers</strong></h3>
<p>Static IP addresses are the best for servers and critical infrastructure. Unlike DHCP addresses, which can change over time, static IP addresses are more stable and reliable. For services like web hosting and database management, static IPs play a key role, as IP addresses do not need to change.</p>
<h3 id="heading-secure-your-network-interfaces"><strong>Secure Your Network Interfaces</strong></h3>
<p>Network interfaces are the entry points into a system, so if they are misconfigured, they could pose a considerable security risk. To reduce attacks, administrators should turn off all unused network interfaces by modifying the configuration file to prevent automatic activation. Additionally, you should use firewall tools to control the traffic that tries to reach the system.</p>
<h3 id="heading-monitor-your-network-interfaces"><strong>Monitor Your Network Interfaces</strong></h3>
<p>As a system administrator, monitoring network interfaces helps prevent downtime and ensure network reliability. You can check the status of your network interfaces by running commands like <code>ip link show</code> or <code>ifconfig -a</code>. You can also monitor them in real time using tools like <code>netstat</code>. Monitoring your systems ensures that network issues are detected early, improving network stability.</p>
<h3 id="heading-constantly-update-network-packages"><strong>Constantly Update Network Packages</strong></h3>
<p>Keep your network management tools and drivers up to date: updates deliver security patches and performance improvements, while outdated network packages can introduce security vulnerabilities. Relevant packages include <code>network-manager</code>, <code>bridge-utils</code>, and <code>iproute2</code>.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Setting up network interfaces in Linux is a fundamental skill every system administrator should have. Whether configuring static IP addresses or enabling DHCP, understanding these concepts will ensure that your systems are stable and have proper connectivity. Implementing best practices like monitoring traffic and securing the network interface gives you the best results. As you continue working with Linux, you can experiment with different configurations to deepen your understanding of network interfaces.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Get Information About Your Linux System Through the Command Line ]]>
                </title>
                <description>
                    <![CDATA[ Whether you’ve just gained access to a new Linux system, ethically hacked into one as part of a security test, or you’re just curious to know more about your current machine, this article will guide you through the process. You’ll learn how you can g... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/get-linux-system-info-through-cli/</link>
                <guid isPermaLink="false">68495738cb7b75f7a33a73c4</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ handbook ]]>
                    </category>
                
                    <category>
                        <![CDATA[ cli ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Zaira Hira ]]>
                </dc:creator>
                <pubDate>Wed, 11 Jun 2025 10:15:20 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749636399891/4b457f71-2d18-463a-b98a-e19ff5a6b769.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Whether you’ve just gained access to a new Linux system, ethically hacked into one as part of a security test, or you’re just curious to know more about your current machine, this article will guide you through the process.</p>
<p>You’ll learn how you can get information related to your OS (operating system), kernel, CPU, memory, processes, disks, networks, and installed software. You’ll explore the commands and their outputs in detail.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-why-its-important-to-understand-your-linux-system">Why It's Important to Understand Your Linux System</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-os-amp-kernel-information-in-linux">How to Get Your OS &amp; Kernel Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-cpu-information-in-linux">How to Get Your CPU Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-memory-information-in-linux">How to Get Your Memory Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-disk-amp-filesystem-information-in-linux">How to Get Your Disk &amp; Filesystem Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-hardware-information-in-linux">How to Get Your Hardware Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-network-interfaces-amp-status-information-in-linux">How to Get Your Network Interfaces &amp; Status Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-software-amp-services-information-in-linux">How to Get Your Software &amp; Services Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-logs-amp-dmesg-in-formation-in-linux">How to Get Your Logs &amp; Dmesg Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-get-your-securityuser-audit-information-in-linux">How to Get Your Security/User Audit Information in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-visually-appealing-commands">Visually Appealing Commands</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-why-its-important-to-understand-your-linux-system">Why It's Important to Understand Your Linux System</h2>
<h3 id="heading-system-administration">System Administration</h3>
<p>System administrators need a solid understanding of the system so they can:</p>
<ul>
<li><p>Manage users, groups, and permissions effectively.</p>
</li>
<li><p>Configure services like web servers, databases, and so on.</p>
</li>
<li><p>Automate repetitive tasks with scripts and cron jobs.</p>
</li>
</ul>
<h3 id="heading-troubleshooting">Troubleshooting</h3>
<p>When a system is in a problematic state, a solid understanding of its specification and configuration helps you:</p>
<ul>
<li><p>Identify and resolve system errors quickly.</p>
</li>
<li><p>Analyze system logs and monitor performance.</p>
</li>
<li><p>Diagnose network and hardware issues.</p>
</li>
</ul>
<h3 id="heading-security-auditing">Security Auditing</h3>
<p>If you are in a security-related role, knowing your system in depth helps you:</p>
<ul>
<li><p>Monitor logs for unauthorized access.</p>
</li>
<li><p>Configure firewalls and security policies.</p>
</li>
<li><p>Detect and remove malicious processes or software.</p>
</li>
</ul>
<h3 id="heading-performance-optimization">Performance Optimization</h3>
<p>If you know how to gather information about system resources, you can measure them and project future usage. You can also:</p>
<ul>
<li><p>Tune system parameters for better efficiency.</p>
</li>
<li><p>Monitor resource usage (CPU, memory, disk, I/O).</p>
</li>
<li><p>Eliminate bottlenecks and optimize workloads.</p>
</li>
</ul>
<h3 id="heading-proactive-maintenance">Proactive Maintenance</h3>
<p>It's good practice to prevent issues before they occur. Once you know your system well, you can:</p>
<ul>
<li><p>Schedule regular updates and backups.</p>
</li>
<li><p>Ensure system reliability and uptime.</p>
</li>
</ul>
<p>Understanding your Linux system gives you greater control, enhances system stability, and improves your overall effectiveness as a system administrator or power user.</p>
<p>In the next section, we’ll discuss some essential commands for gathering system information.</p>
<h2 id="heading-how-to-get-your-os-amp-kernel-information-in-linux">How to Get Your OS &amp; Kernel Information in Linux</h2>
<h3 id="heading-uname-a-command"><code>uname -a</code> Command</h3>
<p><code>uname -a</code> provides full kernel information:</p>
<pre><code class="lang-bash">uname -a
Linux ip-172-31-90-178 6.8.0-1024-aws <span class="hljs-comment">#26-Ubuntu SMP Tue Feb 18 17:22:37 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux</span>
</code></pre>
<p>Here is what each part means in the above command:</p>
<ul>
<li><p><code>Linux</code>: The kernel name.</p>
</li>
<li><p><code>ip-172-31-90-178</code>: The network hostname of the system.</p>
</li>
<li><p><code>6.8.0-1024-aws</code>: The kernel version and AWS-specific build.</p>
</li>
<li><p><code>#26-Ubuntu</code>: The kernel build number.</p>
</li>
<li><p><code>SMP</code>: Symmetric Multi-Processing, indicating that the kernel is compiled for multiple processors.</p>
</li>
<li><p><code>Tue Feb 18 17:22:37 UTC 2025</code>: The date and time when the kernel was compiled.</p>
</li>
<li><p><code>x86_64 x86_64 x86_64</code>: The machine hardware name (architecture), processor type, and platform type, all indicating 64-bit x86 architecture.</p>
</li>
<li><p><code>GNU/Linux</code>: The operating system name.</p>
</li>
</ul>
<p>Based on this output, I’m running on an AWS EC2 instance with a 64-bit Ubuntu Linux distribution using a kernel that was specifically built for AWS infrastructure.</p>
<h3 id="heading-uname-r-and-uname-s-commands"><code>uname -r</code> and <code>uname -s</code> Commands</h3>
<p>The <code>uname -r</code> and <code>uname -s</code> commands specify the kernel version and OS type information:</p>
<pre><code class="lang-bash">uname -r
6.11.0-25-generic

uname -s
Linux
</code></pre>
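<p>In scripts, <code>uname -r</code> is useful wherever a path is keyed by the kernel version. Here is a small sketch (note that <code>/lib/modules</code> only exists on systems with loadable kernel modules, so it's often absent in containers):</p>
<pre><code class="lang-bash"># Print the running kernel version and look for its modules directory
kver=$(uname -r)
echo "Running kernel: $kver"
if [ -d "/lib/modules/$kver" ]; then
    echo "Modules directory: /lib/modules/$kver"
else
    echo "No modules directory found (common in containers)"
fi
</code></pre>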
<h3 id="heading-cat-etcos-release-command"><code>cat /etc/os-release</code> Command</h3>
<p>The <code>cat /etc/os-release</code> command provides distribution information:</p>
<pre><code class="lang-bash">cat /etc/os-release
PRETTY_NAME=<span class="hljs-string">"Ubuntu 24.04.2 LTS"</span>
NAME=<span class="hljs-string">"Ubuntu"</span>
VERSION_ID=<span class="hljs-string">"24.04"</span>
VERSION=<span class="hljs-string">"24.04.2 LTS (Noble Numbat)"</span>
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL=<span class="hljs-string">"https://www.ubuntu.com/"</span>
SUPPORT_URL=<span class="hljs-string">"https://help.ubuntu.com/"</span>
BUG_REPORT_URL=<span class="hljs-string">"https://bugs.launchpad.net/ubuntu/"</span>
PRIVACY_POLICY_URL=<span class="hljs-string">"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"</span>
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
</code></pre>
<p>Here is what each part means in the above command:</p>
<ul>
<li><p><code>PRETTY_NAME="Ubuntu 24.04.2 LTS"</code>: The user-friendly name of the distribution including version and LTS (Long Term Support) designation.</p>
</li>
<li><p><code>NAME="Ubuntu"</code>: The name of the Linux distribution.</p>
</li>
<li><p><code>VERSION_ID="24.04"</code>: The version number of the Ubuntu release (Year/Month format).</p>
</li>
<li><p><code>VERSION="24.04.2 LTS (Noble Numbat)"</code>: The complete version information including:</p>
<p>  • <code>24.04</code>: Major version (released April 2024)</p>
<p>  • <code>.2</code>: Point release number</p>
<p>  • <code>LTS</code>: Long Term Support</p>
<p>  • <code>Noble Numbat</code>: The release codename</p>
</li>
<li><p><code>VERSION_CODENAME=noble</code>: The codename for this Ubuntu release ("Noble").</p>
</li>
<li><p><code>ID=ubuntu</code>: The machine-readable name of the operating system.</p>
</li>
<li><p><code>ID_LIKE=debian</code>: Indicates that Ubuntu is based on Debian Linux.</p>
</li>
<li><p><code>HOME_URL</code>, <code>SUPPORT_URL</code>, <code>BUG_REPORT_URL</code>, <code>PRIVACY_POLICY_URL</code> : Various official URLs for Ubuntu resources.</p>
</li>
<li><p><code>UBUNTU_CODENAME=noble</code>: Reiterates the codename of this Ubuntu release.</p>
</li>
<li><p><code>LOGO=ubuntu-logo</code>: Specifies the logo identifier for the distribution.</p>
</li>
</ul>
<p>This output shows that I’m running Ubuntu 24.04.2 LTS (codenamed "Noble Numbat"), which is a Long Term Support release of Ubuntu. Being an LTS version means it will receive security updates and support for an extended period (typically 5 years for Ubuntu LTS releases).</p>
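<p>Because <code>/etc/os-release</code> is a shell-compatible <code>key=value</code> file, scripts can source it directly instead of parsing the output of <code>cat</code>. A minimal sketch:</p>
<pre><code class="lang-bash"># Source the file in a subshell so its variables don't leak into your session
if [ -r /etc/os-release ]; then
    ( . /etc/os-release &amp;&amp; printf '%s %s (%s)\n' "$NAME" "$VERSION_ID" "$VERSION_CODENAME" )
else
    echo "/etc/os-release not found"
fi
</code></pre>
<p>On the Ubuntu system shown above, this would print <code>Ubuntu 24.04 (noble)</code>.</p>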
<h3 id="heading-hostnamectl-command"><code>hostnamectl</code> Command</h3>
<p><code>hostnamectl</code> shows the hostname, OS, and kernel info:</p>
<pre><code class="lang-bash">hostnamectl
 Static hostname: ip-172-31-90-178
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: ec272830b6dca2da0d11e41b292cfc99
         Boot ID: dd12f48ff01b44a796991d99ce1bcfde
  Virtualization: xen
Operating System: Ubuntu 24.04.2 LTS              
          Kernel: Linux 6.8.0-1024-aws
    Architecture: x86-64
 Hardware Vendor: Xen
  Hardware Model: HVM domU
Firmware Version: 4.11.amazon
   Firmware Date: Thu 2006-08-24
    Firmware Age: 18y 9month 1w 2d
</code></pre>
<p>In the above command, here is what each part means:</p>
<ul>
<li><p><code>Static hostname: "ip-172-31-90-178"</code>: This is the permanent hostname of the system, stored in <code>/etc/hostname</code>.</p>
</li>
<li><p><code>Icon name: "computer-vm"</code>: A symbolic icon identifier for the system, used by some desktop environments.</p>
</li>
<li><p><code>Chassis: "vm"</code>: Indicates this is running in a virtual machine environment.</p>
</li>
<li><p><code>Machine ID: "ec272830b6dca2da0d11e41b292cfc99"</code>: A unique identifier for this system, stored in <code>/etc/machine-id</code>.</p>
</li>
<li><p><code>Boot ID: "dd12f48ff01b44a796991d99ce1bcfde"</code>: A unique identifier that changes with each system boot.</p>
</li>
<li><p><code>Virtualization: "xen"</code>: Shows that this system is running on Xen virtualization (common for AWS instances).</p>
</li>
<li><p><code>Operating System: "Ubuntu 24.04.2 LTS"</code>: The current OS distribution and version.</p>
</li>
<li><p><code>Kernel: "Linux 6.8.0-1024-aws"</code>: The current Linux kernel version, specifically an AWS-optimized kernel.</p>
</li>
<li><p><code>Architecture: "x86-64"</code>: The CPU architecture of the system.</p>
</li>
<li><p><code>Hardware Vendor: "Xen" Hardware Model: "HVM domU"</code>: Indicates this is a Xen HVM (Hardware Virtual Machine) domain user instance.</p>
</li>
<li><p>Firmware Details:</p>
<ul>
<li><p><code>Version: 4.11.amazon</code>: This is the version of the firmware/BIOS specifically customized for AWS environments.</p>
</li>
<li><p><code>Date: Thu 2006-08-24</code>: This is the release date of the firmware. The date might seem old (2006) but this is normal for AWS instances.</p>
</li>
<li><p><code>Age: 18y 9month 1w</code> : This shows how old the firmware is relative to the current date calculated from the firmware date (2006) to now (2025). While the firmware seems old, it is still maintained and secure.</p>
</li>
</ul>
</li>
</ul>
<p>This overall output shows that I’m running Ubuntu 24.04.2 LTS on an AWS EC2 instance using Xen virtualization. The system is using an AWS-optimized kernel and is configured as a HVM (Hardware Virtual Machine) instance.</p>
<h2 id="heading-how-to-get-your-cpu-information-in-linux">How to Get Your CPU Information in Linux</h2>
<h3 id="heading-lscpu-command"><code>lscpu</code> Command</h3>
<p><code>lscpu</code> shows CPU architecture, cores, threads, and virtualization information:</p>
<pre><code class="lang-bash">lscpu
Architecture:             x86_64
  CPU op-mode(s):         32-bit, 64-bit
  Address sizes:          46 bits physical, 48 bits virtual
  Byte Order:             Little Endian
CPU(s):                   1
  On-line CPU(s) list:    0
Vendor ID:                GenuineIntel
  Model name:             Intel(R) Xeon(R) CPU E5-2686 v4 @ 2
                          .30GHz
    CPU family:           6
    Model:                79
    Thread(s) per core:   1
    Core(s) per socket:   1
    Socket(s):            1
    Stepping:             1
    BogoMIPS:             4599.99
    Flags:                fpu vme de pse tsc msr pae mce cx8 
                          apic sep mtrr pge mca cmov pat pse3
                          6 clflush mmx fxsr sse sse2 ht sysc
                          all nx rdtscp lm constant_tsc rep_g
                          ood nopl xtopology cpuid tsc_known_
                          freq pni pclmulqdq ssse3 fma cx16 p
                          cid sse4_1 sse4_2 x2apic movbe popc
                          nt tsc_deadline_timer aes xsave avx
                           f16c rdrand hypervisor lahf_lm abm
                           pti fsgsbase bmi1 avx2 smep bmi2 e
                          rms invpcid xsaveopt
Virtualization features:  
  Hypervisor vendor:      Xen
  Virtualization <span class="hljs-built_in">type</span>:    full
Caches (sum of all):      
  L1d:                    32 KiB (1 instance)
  L1i:                    32 KiB (1 instance)
  L2:                     256 KiB (1 instance)
  L3:                     45 MiB (1 instance)
NUMA:                     
  NUMA node(s):           1
  NUMA node0 CPU(s):      0
Vulnerabilities:          
  Gather data sampling:   Not affected
  Itlb multihit:          KVM: Mitigation: VMX unsupported
  L1tf:                   Mitigation; PTE Inversion
  Mds:                    Vulnerable: Clear CPU buffers attem
                          pted, no microcode; SMT Host state 
                          unknown
  Meltdown:               Mitigation; PTI
  Mmio stale data:        Vulnerable: Clear CPU buffers attem
                          pted, no microcode; SMT Host state 
                          unknown
  Reg file data sampling: Not affected
  Retbleed:               Not affected
  Spec rstack overflow:   Not affected
  Spec store bypass:      Vulnerable
  Spectre v1:             Mitigation; usercopy/swapgs barrier
                          s and __user pointer sanitization
  Spectre v2:             Mitigation; Retpolines; STIBP disab
                          led; RSB filling; PBRSB-eIBRS Not a
                          ffected; BHI Retpoline
  Srbds:                  Not affected
  Tsx async abort:        Not affected
</code></pre>
<p>Here is a brief explanation of the output above:</p>
<p>1. Basic CPU Info</p>
<ul>
<li><p>Architecture: <code>x86_64</code> (64-bit)</p>
</li>
<li><p>CPU Model: Intel Xeon E5-2686 v4 (2.3 GHz)</p>
</li>
<li><p>Cores/Threads: 1 core, 1 thread (no Hyper-Threading)</p>
</li>
<li><p>Physical CPU (Socket): 1</p>
</li>
</ul>
<p>2. Performance &amp; Features</p>
<ul>
<li><p>Cache Sizes:</p>
<ul>
<li><p>L1: 32 KiB (data) + 32 KiB (instructions)</p>
</li>
<li><p>L2: 256 KiB</p>
</li>
<li><p>L3: 45 MiB (large, typical for Xeon)</p>
</li>
</ul>
</li>
<li><p>Flags: Supports AVX, AES, SSE4.1/4.2 (useful for encryption/vector ops).</p>
</li>
</ul>
<p>3. Virtualization</p>
<ul>
<li><p>Hypervisor: Running on Xen (full virtualization).</p>
</li>
<li><p>Virtualization Support: Yes (Intel VT-x).</p>
</li>
</ul>
<p>4. Security (Vulnerabilities)</p>
<ul>
<li><p>Meltdown/Spectre: Mostly mitigated (PTI, Retpolines).</p>
</li>
<li><p>MDS/MMIO: Vulnerable (no microcode fixes).</p>
</li>
<li><p>Spec Store Bypass: Vulnerable (no mitigation).</p>
</li>
</ul>
<p>5. NUMA (Memory)</p>
<ul>
<li>Single NUMA node (no multi-processor complexity).</li>
</ul>
<p>The output shows that my machine is a single-core Intel Xeon (in a virtualized/cloud environment) with a large L3 cache, but it has some unpatched CPU vulnerabilities.</p>
<h3 id="heading-cat-proccpuinfo-command"><code>cat /proc/cpuinfo</code> Command</h3>
<p><code>cat /proc/cpuinfo</code> provides more in-depth details about the CPU:</p>
<pre><code class="lang-bash">cat /proc/cpuinfo 
processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 79
model name    : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping    : 1
microcode    : 0xd000404
cpu MHz        : 2299.998
cache size    : 46080 KB
physical id    : 0
siblings    : 1
core id        : 0
cpu cores    : 1
apicid        : 0
initial apicid    : 0
fpu        : yes
fpu_exception    : yes
cpuid level    : 13
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_stale_data bhi
bogomips    : 4599.99
clflush size    : 64
cache_alignment    : 64
address sizes    : 46 bits physical, 48 bits virtual
power management:
</code></pre>
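<p>Because <code>/proc/cpuinfo</code> is plain text, it's easy to script against. As a sketch, this one-liner pulls out just the model name (the <code>model name</code> field appears on x86 systems; some architectures label it differently):</p>
<pre><code class="lang-bash"># Print the CPU model name from the first processor entry
awk -F': ' '/^model name/ {print $2; exit}' /proc/cpuinfo
</code></pre>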
<h3 id="heading-nproc-command"><code>nproc</code> Command</h3>
<p><code>nproc</code> shows the core count:</p>
<pre><code class="lang-bash">nproc
1
</code></pre>
<p>The above command output shows there is one available processor.</p>
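<p>A common use of <code>nproc</code> is sizing parallel work to the machine, for example choosing how many build jobs to run:</p>
<pre><code class="lang-bash"># Use one job per available processor
jobs=$(nproc)
echo "Building with $jobs parallel jobs"
# make -j"$jobs"   # typical usage in a build script
</code></pre>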
<h2 id="heading-how-to-get-your-memory-information-in-linux">How to Get Your Memory Information in Linux</h2>
<h3 id="heading-free-h-command"><code>free -h</code> Command</h3>
<p>You can use the <code>free -h</code> command to know the total/used/free RAM:</p>
<pre><code class="lang-bash">free -h
               total        used        free      shared  buff/cache   available
Mem:           957Mi       406Mi       218Mi       920Ki       522Mi       551Mi
Swap:             0B          0B          0B
</code></pre>
<p>Here is a breakdown of the output shared above:</p>
<ul>
<li><p><code>total</code>: The total amount of physical memory (RAM) or swap space available on the system.</p>
</li>
<li><p><code>used</code>: The amount of memory currently being used by applications and the system. Calculated as: <code>total - free - buffers - cache</code>.</p>
</li>
<li><p><code>free</code>: The amount of memory that is completely unused.</p>
</li>
<li><p><code>shared</code>: Memory that may be simultaneously accessed by multiple programs.</p>
</li>
<li><p><code>buff/cache</code>: Combines two types of memory:</p>
<ul>
<li><p>Buffers: Memory used for block device I/O buffering.</p>
</li>
<li><p>Cache: Memory used for the file system page cache. This memory can be reclaimed when applications need it.</p>
</li>
</ul>
</li>
<li><p><code>available</code>: The <code>free</code> memory plus memory that can be reclaimed from <code>buff/cache</code>. This is the most important column for determining whether you have enough memory.</p>
</li>
</ul>
<h3 id="heading-vmstat-command"><strong>vmstat</strong> Command</h3>
<p><code>vmstat</code> (Virtual Memory Statistics) is a tool for monitoring system performance. It provides information about memory usage, CPU activity, processes, disk I/O, and swap usage.</p>
<p>You can also use <code>vmstat</code> to extract live information. Here is how you can do that:</p>
<pre><code class="lang-bash">vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
 r  b   swpd   free   buff  cache   si   so    bi    bo   <span class="hljs-keyword">in</span>   cs us sy id wa st gu
 1  0      0 238264  46120 489056    0    0     3     8   23    0  0  0 82  0 18  0
 0  0      0 238264  46120 489060    0    0     0     0  240  120  0  1 98  0  1  0
 0  0      0 238264  46120 489060    0    0     0     0  239  124  0  0 98  0  2  0
 0  0      0 238264  46120 489060    0    0     0     0  199  101  0  0 95  0  5  0
 0  0      0 238264  46120 489060    0    0     0     0   36   25  0  0 78  0 22  0
</code></pre>
<p>Here is what the above command is doing:</p>
<ol>
<li><p>Captures 5 snapshots of system performance.</p>
</li>
<li><p>Each snapshot is taken 1 second apart, giving near real-time insights.</p>
</li>
<li><p>Displays key metrics about:</p>
<ul>
<li><p>Memory usage (free, buffered, cached).</p>
</li>
<li><p>CPU activity (user, system, idle, waiting).</p>
</li>
<li><p>Processes (running, blocked).</p>
</li>
<li><p>Disk I/O (blocks read/written).</p>
</li>
<li><p>Swap usage (if swapping is happening).</p>
</li>
</ul>
</li>
</ol>
<p>Note that you can adjust the interval and the number of snapshots as needed.</p>
<p>Here’s a detailed breakdown of the output above:</p>
<ul>
<li><p><code>Procs</code>:</p>
<ul>
<li><p><code>r</code>: Number of processes waiting for run time.</p>
</li>
<li><p><code>b</code>: Number of processes in uninterruptible sleep</p>
</li>
</ul>
</li>
<li><p><code>Memory</code> (in KB):</p>
<ul>
<li><p><code>swpd</code>: Amount of virtual memory used</p>
</li>
<li><p><code>free</code>: Amount of idle memory</p>
</li>
<li><p><code>buff</code>: Memory used as buffers</p>
</li>
<li><p><code>cache</code>: Memory used as cache</p>
</li>
</ul>
</li>
<li><p><code>Swap</code>:</p>
<ul>
<li><p><code>si</code>: Memory swapped in from disk (KB/s)</p>
</li>
<li><p><code>so</code>: Memory swapped out to disk (KB/s)</p>
</li>
</ul>
</li>
<li><p><code>IO</code>:</p>
<ul>
<li><p><code>bi</code>: Blocks received from a block device (blocks/s)</p>
</li>
<li><p><code>bo</code>: Blocks sent to a block device (blocks/s)</p>
</li>
</ul>
</li>
<li><p><code>System</code>:</p>
<ul>
<li><p><code>in</code>: Number of interrupts per second</p>
</li>
<li><p><code>cs</code>: Number of context switches per second</p>
</li>
</ul>
</li>
<li><p><code>CPU</code> (percentages):</p>
<ol>
<li><p><code>us</code>: Time spent running user code</p>
</li>
<li><p><code>sy</code>: Time spent running system code</p>
</li>
<li><p><code>id</code>: Time spent idle</p>
</li>
<li><p><code>wa</code>: Time spent waiting for IO</p>
</li>
<li><p><code>st</code>: Time stolen from a virtual machine</p>
</li>
<li><p><code>gu</code>: Time running guest code (virtual CPU)</p>
</li>
</ol>
</li>
</ul>
<p>From the output, you can see that my system:</p>
<ul>
<li><p>Has very low CPU usage (high idle percentage)</p>
</li>
<li><p>Has no swap being used (<code>swpd = 0</code>)</p>
</li>
<li><p>Has roughly <code>232MB</code> of free memory (<code>free</code> = 238264 KB)</p>
</li>
<li><p>Shows minimal IO activity</p>
</li>
<li><p>Is running in a virtualized environment (notice that the <code>st</code> (stolen) time column has non-zero values)</p>
</li>
</ul>
<p>The first line shows averages since the last reboot, while subsequent lines show the real-time statistics for each second.</p>
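<p>Because <code>vmstat</code> output is column-oriented, you can script quick health checks against it. As a sketch, this takes two samples and flags active swapping by checking the <code>si</code>/<code>so</code> columns of the second (real-time) sample:</p>
<pre><code class="lang-bash"># Report whether any swapping happened during the second sample
vmstat 1 2 | awk 'END { if ($7 + $8 &gt; 0) print "Swapping detected"; else print "No swap activity" }'
</code></pre>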
<h3 id="heading-cat-procmeminfo-command"><code>cat /proc/meminfo</code> Command</h3>
<p><code>cat /proc/meminfo</code> shows detailed memory stats:</p>
<pre><code class="lang-bash">cat /proc/meminfo
MemTotal:         980384 kB
MemFree:          245100 kB
MemAvailable:     585896 kB
Buffers:           46184 kB
Cached:           393672 kB
SwapCached:            0 kB
Active:           141404 kB
Inactive:         356376 kB
Active(anon):      47672 kB
Inactive(anon):    29300 kB
Active(file):      93732 kB
Inactive(file):   327076 kB
Unevictable:       36528 kB
Mlocked:           27152 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Zswap:                 0 kB
Zswapped:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         94488 kB
Mapped:            97936 kB
Shmem:               920 kB
KReclaimable:      95396 kB
Slab:             148672 kB
SReclaimable:      95396 kB
SUnreclaim:        53276 kB
KernelStack:        2444 kB
PageTables:         3224 kB
SecPageTables:         0 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      490192 kB
Committed_AS:     508912 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        9988 kB
VmallocChunk:          0 kB
Percpu:            14848 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
Unaccepted:            0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:       71680 kB
DirectMap2M:      976896 kB
</code></pre>
<p>Here is a detailed breakdown of the output shared above:</p>
<ul>
<li><p>Total Memory and Available Memory:</p>
<ul>
<li><p><code>MemTotal</code>: Total physical RAM available.</p>
</li>
<li><p><code>MemFree</code>: Completely unused memory.</p>
</li>
<li><p><code>MemAvailable</code>: Memory available for new applications.</p>
</li>
</ul>
</li>
<li><p>Memory Caches and Buffers:</p>
<ul>
<li><p><code>Buffers</code>: Memory used for block device I/O buffering.</p>
</li>
<li><p><code>Cached</code>: Memory used for file system cache.</p>
</li>
<li><p><code>SwapCached</code>: Memory pages stored in both RAM and swap.</p>
</li>
</ul>
</li>
<li><p>Active vs Inactive Memory:</p>
<ul>
<li><p><code>Active</code>: Recently used memory.</p>
</li>
<li><p><code>Inactive</code>: Less recently used memory.</p>
</li>
<li><p><code>Active(anon)</code>: Recently used anonymous memory.</p>
</li>
<li><p><code>Active(file)</code>: Recently used file-backed memory.</p>
</li>
</ul>
</li>
<li><p>Swap Information:</p>
<ul>
<li><p><code>SwapTotal</code>: Swap space configured.</p>
</li>
<li><p><code>SwapFree</code>: Swap space available.</p>
</li>
<li><p><code>Zswap</code>: Compressed swap in RAM.</p>
</li>
</ul>
</li>
<li><p>Other Important Metrics:</p>
<ul>
<li><p><code>Dirty</code>: Memory waiting to be written to disk.</p>
</li>
<li><p><code>Mapped</code>: Files mapped into memory.</p>
</li>
<li><p><code>Slab</code>: Kernel data structures cache.</p>
</li>
<li><p><code>CommitLimit</code>: Total memory available for allocation.</p>
</li>
<li><p><code>Committed_AS</code>: Total memory currently allocated.</p>
</li>
</ul>
</li>
</ul>
<p>Healthy memory usage is indicated by a good amount of available memory, active caching, and no memory pressure (no swap needed).</p>
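<p>You can compute derived figures from <code>/proc/meminfo</code> with a little <code>awk</code>. For example, the percentage of total RAM that is currently available (the <code>MemAvailable</code> field exists on kernels 3.14 and later):</p>
<pre><code class="lang-bash"># Percentage of total RAM available for new applications
awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {printf "%.0f%% of RAM available\n", a/t*100}' /proc/meminfo
</code></pre>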
<h2 id="heading-how-to-get-your-disk-amp-filesystem-information-in-linux">How to Get Your Disk &amp; Filesystem Information in Linux</h2>
<h3 id="heading-tree-d-l-1-command"><code>tree -d -L 1</code> Command</h3>
<p><code>tree -d -L 1</code> shows the directory structure starting from the folder it is executed in. To see the complete filesystem layout, run it from the root <code>/</code> directory:</p>
<pre><code class="lang-bash">tree -d -L 1
.
├── bin -&gt; usr/bin
├── bin.usr-is-merged
├── boot
├── dev
├── etc
├── home
├── lib -&gt; usr/lib
├── lib.usr-is-merged
├── lib64 -&gt; usr/lib64
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin -&gt; usr/sbin
├── sbin.usr-is-merged
├── snap
├── srv
├── sys
├── tmp
├── usr
└── var

25 directories
</code></pre>
<p>The <code>tree -d -L 1</code> command shows a directory tree using the following options:</p>
<ul>
<li><p><code>-d</code>: Shows only directories (ignores files)</p>
</li>
<li><p><code>-L 1</code>: Limits the depth of the tree to one level (only shows the immediate subdirectories)</p>
</li>
</ul>
<h3 id="heading-df-h-command"><code>df -h</code> Command</h3>
<p><code>df -h</code> shows mounted filesystems and their usage in human-readable units:</p>
<pre><code class="lang-bash">df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  2.6G   26G   9% /
tmpfs           479M     0  479M   0% /dev/shm
tmpfs           192M  908K  191M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/xvda16     881M  144M  676M  18% /boot
/dev/xvda15     105M  6.1M   99M   6% /boot/efi
tmpfs            96M   12K   96M   1% /run/user/1000
</code></pre>
<p>The above output from the <code>df -h</code> command shows the following disk space usage information:</p>
<ul>
<li><p><code>Filesystem</code>: The name of the mounted filesystem/device.</p>
</li>
<li><p><code>Size</code>: Total size of the filesystem.</p>
</li>
<li><p><code>Used</code>: Amount of space used.</p>
</li>
<li><p><code>Avail</code>: Amount of space available.</p>
</li>
<li><p><code>Use%</code>: Percentage of space used.</p>
</li>
<li><p><code>Mounted on</code>: The mount point where the filesystem is attached.</p>
</li>
</ul>
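<p>The <code>df</code> output also lends itself to scripted checks. As a small sketch, this prints any filesystem that is more than 80% full (the <code>$5+0</code> trick makes <code>awk</code> treat the <code>Use%</code> column as a number):</p>
<pre><code class="lang-bash"># Warn about filesystems above 80% usage
df -h | awk 'NR &gt; 1 &amp;&amp; $5+0 &gt; 80 {print $6 " is " $5 " full"}'
</code></pre>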
<h3 id="heading-lsblk-command"><code>lsblk</code> Command</h3>
<p><code>lsblk</code> stands for ‘list block devices’ and shows information about all available block devices like hard drives, SSDs, and so on.</p>
<pre><code class="lang-bash">lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0      7:0    0 26.3M  1 loop /snap/amazon-ssm-agent/9881
loop1      7:1    0 73.9M  1 loop /snap/core22/1748
loop2      7:2    0 44.4M  1 loop /snap/snapd/23545
loop3      7:3    0 50.9M  1 loop /snap/snapd/24505
loop4      7:4    0 73.9M  1 loop /snap/core22/1963
loop5      7:5    0 27.2M  1 loop /snap/amazon-ssm-agent/11320
xvda     202:0    0   30G  0 disk 
├─xvda1  202:1    0   29G  0 part /
├─xvda14 202:14   0    4M  0 part 
├─xvda15 202:15   0  106M  0 part /boot/efi
└─xvda16 259:0    0  913M  0 part /boot
</code></pre>
<p>The output above shows the following details:</p>
<ul>
<li><p><code>NAME</code>: Device name.</p>
</li>
<li><p><code>MAJ:MIN</code>: Major and minor device numbers.</p>
</li>
<li><p><code>RM</code>: Removable flag (1 for removable, 0 for fixed).</p>
</li>
<li><p><code>SIZE</code>: Device size.</p>
</li>
<li><p><code>RO</code>: Read-only flag (1 for read-only, 0 for read-write).</p>
</li>
<li><p><code>TYPE</code>: Device type (disk, part for partition, loop for loop device).</p>
</li>
<li><p><code>MOUNTPOINTS</code>: Where the device is mounted.</p>
</li>
</ul>
<h3 id="heading-fdisk-l-command"><code>fdisk -l</code> Command</h3>
<p><code>fdisk -l</code> shows all disk devices and their partitions on your system:</p>
<pre><code class="lang-bash">fdisk -l
Disk /dev/xvda: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel <span class="hljs-built_in">type</span>: gpt
Disk identifier: E3478E01-32E3-4FC2-8E79-1BCCDE89C2D7

Device        Start      End  Sectors  Size Type
/dev/xvda1  2099200 62914526 60815327   29G Linux filesystem
/dev/xvda14    2048    10239     8192    4M BIOS boot
/dev/xvda15   10240   227327   217088  106M EFI System
/dev/xvda16  227328  2097152  1869825  913M Linux extended boot
</code></pre>
<p>The above output shows the partition information for the main system disk (<code>/dev/xvda</code>), which is 30 GiB in size and has four partitions:</p>
<ul>
<li><p><code>/dev/xvda1</code>: <code>29G</code> Linux filesystem (main system partition).</p>
</li>
<li><p><code>/dev/xvda14</code>: <code>4M</code> BIOS boot partition.</p>
</li>
<li><p><code>/dev/xvda15</code>: <code>106M</code> EFI System partition (for UEFI boot).</p>
</li>
<li><p><code>/dev/xvda16</code>: <code>913M</code> Linux extended boot partition.</p>
</li>
</ul>
<h3 id="heading-mount-command"><code>mount</code> Command</h3>
<p><code>mount</code> shows all currently mounted filesystems in the format: <code>device/source "on" mount_point "type" filesystem_type (mount_options)</code>, displaying where and how each filesystem is attached to your system's directory tree.</p>
<p>Here is an example line from the output of <code>mount</code>:</p>
<pre><code class="lang-bash">/dev/xvda1 on / <span class="hljs-built_in">type</span> ext4 (rw,relatime,discard,errors=remount-ro,commit=30)
</code></pre>
<p>Some common mount options you’ll see are:</p>
<ul>
<li><p><code>rw</code>: Read-write access.</p>
</li>
<li><p><code>ro</code>: Read-only access.</p>
</li>
<li><p><code>nosuid</code>: Disable SUID/SGID bits.</p>
</li>
<li><p><code>nodev</code>: Prevent device file interpretation.</p>
</li>
<li><p><code>noexec</code>: Prevent execution of binaries.</p>
</li>
<li><p><code>relatime</code>: Update access times relatively.</p>
</li>
</ul>
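<p><code>findmnt</code>, from the same util-linux package as <code>mount</code>, presents the same information as a tree, which is often easier to read. A couple of sketches:</p>
<pre><code class="lang-bash"># All mounted filesystems, as a tree
findmnt

# Only the filesystem that a given path lives on
findmnt /

# Restrict the listing to one filesystem type
findmnt -t ext4
</code></pre>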
<h3 id="heading-du-sh-command"><code>du -sh *</code> Command</h3>
<p><code>du -sh *</code> provides a summary of the disk usage for each file and directory in the current directory (good for finding disk hogs):</p>
<pre><code class="lang-bash">du -sh *
4.0K    file1.txt
8.0K    file2.txt
12K     directory1
20K     directory2
</code></pre>
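<p>To surface the biggest entries first, pipe <code>du</code> into <code>sort -rh</code> (reverse human-numeric sort):</p>
<pre><code class="lang-bash"># The five largest files/directories in the current directory
du -sh * | sort -rh | head -n 5
</code></pre>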
<h2 id="heading-how-to-get-your-hardware-information-in-linux">How to Get Your Hardware Information in Linux</h2>
<h3 id="heading-lshw-command"><code>lshw</code> Command</h3>
<p>The <code>lshw</code> command provides detailed information about the computer's hardware configuration. It can report:</p>
<ul>
<li><p>Memory configuration.</p>
</li>
<li><p>Firmware version.</p>
</li>
<li><p>Mainboard configuration.</p>
</li>
<li><p>CPU version and speed.</p>
</li>
<li><p>Cache configuration.</p>
</li>
<li><p>Bus speed and more.</p>
</li>
</ul>
<p>It's particularly useful for system administrators and users who need to gather detailed hardware information. The command can output information in various formats including HTML, XML, JSON, or plain text.</p>
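<p>For example, the summary and JSON modes look like this (<code>memory</code> is one of the standard <code>lshw</code> hardware classes; run it with <code>sudo</code> for complete results):</p>
<pre><code class="lang-bash"># Compact one-line-per-device summary
sudo lshw -short

# Only the memory information, as JSON
sudo lshw -json -class memory
</code></pre>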
<p>Here is a portion of the output from <code>lshw</code>:</p>
<pre><code class="lang-bash">*-pci
          description: Host bridge
          product: 440FX - 82441FX PMC [Natoma]
          vendor: Intel Corporation
          physical id: 100
          bus info: pci@0000:00:00.0
          version: 02
          width: 32 bits
          clock: 33MHz
        *-isa
             description: ISA bridge
             product: 82371SB PIIX3 ISA [Natoma/Triton II]
             vendor: Intel Corporation
             physical id: 1
             bus info: pci@0000:00:01.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: isa bus_master
             configuration: latency=0
</code></pre>
<h3 id="heading-lspci-command"><code>lspci</code> Command</h3>
<p><code>lspci</code> displays information about all PCI (Peripheral Component Interconnect) buses and devices connected to your system.</p>
<pre><code class="lang-bash">lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
</code></pre>
<p>From the output, we can see that:</p>
<ul>
<li><p>Each line starts with a <code>bus:device.function</code> address (like "<code>00:00.0</code>")</p>
</li>
<li><p>Following the address is the device class and the specific hardware details:</p>
<ul>
<li><p>A Host bridge (<code>Intel 440FX</code>), which manages communications between the CPU and other components.</p>
</li>
<li><p>An ISA bridge (<code>Intel PIIX3</code>), for legacy device support.</p>
</li>
<li><p>An IDE interface for storage devices.</p>
</li>
<li><p>An ACPI bridge for power management.</p>
</li>
<li><p>A VGA graphics controller (Cirrus Logic).</p>
</li>
<li><p>A Xen Platform Device (this suggests you're running in a Xen virtualized environment).</p>
</li>
</ul>
</li>
</ul>
<p>The command is particularly useful for:</p>
<ul>
<li><p>Troubleshooting hardware issues</p>
</li>
<li><p>Verifying hardware detection</p>
</li>
<li><p>Finding hardware details for driver installation</p>
</li>
<li><p>Checking system configuration</p>
</li>
</ul>
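<p>Two variants worth knowing (<code>00:02.0</code> is the VGA controller's address from the output above; use an address from your own listing):</p>
<pre><code class="lang-bash"># Verbose details for a single device, selected by its bus address
lspci -v -s 00:02.0

# Show which kernel driver is bound to each device
lspci -k
</code></pre>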
<h2 id="heading-how-to-get-your-network-interfaces-amp-status-information-in-linux">How to Get Your Network Interfaces &amp; Status Information in Linux</h2>
<h3 id="heading-ip-a-command"><code>ip a</code> Command</h3>
<p><code>ip a</code> displays information about all network interfaces on your system:</p>
<pre><code class="lang-bash">ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt;
- This is the loopback interface (localhost)
- MTU (Maximum Transmission Unit) is 65536 bytes
- IP address: 127.0.0.1/8 (IPv4)
- IPv6 address: ::1/128

2: enX0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt;
- This is your main network interface
- MTU is 9001 bytes
- MAC address (link/ether): 12:16:a6:d3:b3:61
- IPv4 address: 172.31.90.178/20
- IPv6 address: fe80::1016:a6ff:fed3:b361/64 (Link-local)
</code></pre>
<p>Here are the key elements in the output:</p>
<ul>
<li><p>Interface state (UP/DOWN).</p>
</li>
<li><p>MAC address (link/ether).</p>
</li>
<li><p>IPv4 and IPv6 addresses.</p>
</li>
<li><p>Network scope (host, global, link).</p>
</li>
<li><p>Address validity lifetime (valid_lft).</p>
</li>
<li><p>Broadcast address (brd).</p>
</li>
</ul>
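<p>When the full output is too noisy, <code>ip</code> has a brief mode that prints one line per interface:</p>
<pre><code class="lang-bash"># One line per interface: name, state, and addresses
ip -brief a

# The same idea for link-layer details (MAC addresses)
ip -brief link
</code></pre>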
<h3 id="heading-ip-r-command"><code>ip r</code> Command</h3>
<p><code>ip r</code> shows the system’s routing table:</p>
<pre><code class="lang-bash">ip r
default via 172.31.80.1 dev enX0 proto dhcp src 172.31.90.178 metric 100 
172.31.0.2 via 172.31.80.1 dev enX0 proto dhcp src 172.31.90.178 metric 100 
172.31.80.0/20 dev enX0 proto kernel scope link src 172.31.90.178 metric 100 
172.31.80.1 dev enX0 proto dhcp scope link src 172.31.90.178 metric 100
</code></pre>
<p>The above <code>ip r</code> output shows my system's routing table with the following routes:</p>
<ul>
<li><p>Default Route (Gateway):</p>
<ul>
<li><p>Default via <code>172.31.80.1</code>: All traffic not matching other rules goes through this gateway.</p>
</li>
<li><p>Using interface <code>enX0</code>.</p>
</li>
<li><p>Configured via DHCP.</p>
</li>
<li><p>Source IP: <code>172.31.90.178</code>.</p>
</li>
</ul>
</li>
<li><p>Local Network:</p>
<ul>
<li><p><code>172.31.80.0/20</code>: Local subnet (covers IPs from <code>172.31.80.0</code> to <code>172.31.95.255</code>)</p>
</li>
<li><p>Directly connected to <code>enX0</code> interface</p>
</li>
<li><p>Kernel-managed route (proto kernel)</p>
</li>
<li><p>For packets originating from <code>172.31.90.178</code></p>
</li>
</ul>
</li>
<li><p>DHCP Route:</p>
<ul>
<li><p>Direct route to DHCP server (<code>172.31.80.1</code>)</p>
</li>
<li><p>Via interface <code>enX0</code></p>
</li>
</ul>
</li>
</ul>
<p>All routes have a metric of 100, which determines route priority (lower values are preferred).</p>
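<p>To check which of these routes a given destination would actually use, you can ask the kernel directly (<code>8.8.8.8</code> is just an arbitrary example destination):</p>
<pre><code class="lang-bash"># Show the route, source address, and interface chosen for this destination
ip route get 8.8.8.8
</code></pre>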
<h3 id="heading-netstat-tuln-command"><code>netstat -tuln</code> Command</h3>
<p><code>netstat -tuln</code> shows active listening TCP and UDP ports (<code>-t</code> TCP, <code>-u</code> UDP, <code>-l</code> listening sockets only, <code>-n</code> numeric addresses):</p>
<pre><code class="lang-bash">netstat -tuln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 127.0.0.54:53           0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN     
tcp6       0      0 :::80                   :::*                    LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
udp        0      0 127.0.0.54:53           0.0.0.0:*                          
udp        0      0 127.0.0.53:53           0.0.0.0:*                          
udp        0      0 172.31.90.178:68        0.0.0.0:*                          
udp        0      0 127.0.0.1:323           0.0.0.0:*                          
udp6       0      0 ::1:323                 :::*
</code></pre>
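<p><code>netstat</code> comes from the older net-tools package. On modern systems the equivalent (and usually pre-installed) command is <code>ss</code>, which accepts the same flags:</p>
<pre><code class="lang-bash"># The same report with the modern replacement for netstat
ss -tuln

# Add -p to see which process owns each socket (root needed for full detail)
sudo ss -tulnp
</code></pre>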
<h2 id="heading-how-to-get-your-software-amp-services-information-in-linux">How to Get Your Software &amp; Services Information in Linux</h2>
<h3 id="heading-installed-packages">Installed packages</h3>
<p>You can check installed packages with <code>dpkg -l</code> or <code>apt list --installed</code> (Debian/Ubuntu). Here is a snippet from the output:</p>
<pre><code class="lang-bash">vim-common/noble-updates,noble-security,now 2:9.1.0016-1ubuntu7.8 all [installed,automatic]
vim-runtime/noble-updates,noble-security,now 2:9.1.0016-1ubuntu7.8 all [installed,automatic]
vim-tiny/noble-updates,noble-security,now 2:9.1.0016-1ubuntu7.8 amd64 [installed,automatic]
vim/noble-updates,noble-security,now 2:9.1.0016-1ubuntu7.8 amd64 [installed,automatic]
</code></pre>
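<p>To check whether one particular package is installed, you can filter the listing (<code>vim</code> is just an example package name):</p>
<pre><code class="lang-bash"># Debian/Ubuntu: status of a single package
dpkg -l vim

# Or search the full installed list
apt list --installed 2&gt;/dev/null | grep -i vim
</code></pre>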
<h3 id="heading-service-status">Service status</h3>
<p><code>systemctl list-units --type=service</code> lists the services on the system. You can also use <code>systemctl status &lt;service&gt;</code>, replacing <code>&lt;service&gt;</code> with the service you want to inspect.</p>
<p>Here’s the output for <code>cron.service</code>:</p>
<pre><code class="lang-bash">systemctl status cron.service
● cron.service - Regular background program processing daemon
     Loaded: loaded (/usr/lib/systemd/system/cron.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-05-14 19:46:58 UTC; 2 weeks 5 days ago
       Docs: man:cron(8)
   Main PID: 625 (cron)
      Tasks: 1 (<span class="hljs-built_in">limit</span>: 1129)
     Memory: 1.7M (peak: 4.7M)
        CPU: 20.890s
     CGroup: /system.slice/cron.service
             └─625 /usr/sbin/cron -f -P

Jun 03 09:25:01 ip-172-31-90-178 CRON[121748]: pam_unix(cron:session): session closed <span class="hljs-keyword">for</span> user root
Jun 03 09:35:01 ip-172-31-90-178 CRON[121817]: pam_unix(cron:session): session opened <span class="hljs-keyword">for</span> user root(uid=0) by root(uid=0)
Jun 03 09:35:01 ip-172-31-90-178 CRON[121818]: (root) CMD (<span class="hljs-built_in">command</span> -v debian-sa1 &gt; /dev/null &amp;&amp; debian-sa1 1 1)
Jun 03 09:35:01 ip-172-31-90-178 CRON[121817]: pam_unix(cron:session): session closed <span class="hljs-keyword">for</span> user root
Jun 03 09:45:01 ip-172-31-90-178 CRON[122050]: pam_unix(cron:session): session opened <span class="hljs-keyword">for</span> user root(uid=0) by root(uid=0)
Jun 03 09:45:01 ip-172-31-90-178 CRON[122051]: (root) CMD (<span class="hljs-built_in">command</span> -v debian-sa1 &gt; /dev/null &amp;&amp; debian-sa1 1 1)
Jun 03 09:45:01 ip-172-31-90-178 CRON[122050]: pam_unix(cron:session): session closed <span class="hljs-keyword">for</span> user root
Jun 03 09:55:01 ip-172-31-90-178 CRON[122318]: pam_unix(cron:session): session opened <span class="hljs-keyword">for</span> user root(uid=0) by root(uid=0)
Jun 03 09:55:01 ip-172-31-90-178 CRON[122319]: (root) CMD (<span class="hljs-built_in">command</span> -v debian-sa1 &gt; /dev/null &amp;&amp; debian-sa1 1 1)
Jun 03 09:55:01 ip-172-31-90-178 CRON[122318]: pam_unix(cron:session): session closed <span class="hljs-keyword">for</span> user root
</code></pre>
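<p>A few more <code>systemctl</code> queries that are handy in practice:</p>
<pre><code class="lang-bash"># Only the services that are currently running
systemctl list-units --type=service --state=running

# Quick yes/no check for a single service
systemctl is-active cron.service

# Services that failed to start
systemctl --failed
</code></pre>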
<h3 id="heading-processes">Processes</h3>
<p><code>ps aux</code> shows all processes with their respective status:</p>
<pre><code class="lang-bash">ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  1.4  22556 13952 ?        Ss   May14   0:35 /usr/lib/systemd/systemd --system --deserialize=63
root           2  0.0  0.0      0     0 ?        S    May14   0:00 [kthreadd]
root           3  0.0  0.0      0     0 ?        S    May14   0:00 [pool_workqueue_release]
root           4  0.0  0.0      0     0 ?        I&lt;   May14   0:00 [kworker/R-rcu_g]
root           5  0.0  0.0      0     0 ?        I&lt;   May14   0:00 [kworker/R-rcu_p]
root           6  0.0  0.0      0     0 ?        I&lt;   May14   0:00 [kworker/R-slub_]
.
.
.
</code></pre>
<p>Here's an explanation of each column in the <code>ps aux</code> output:</p>
<ul>
<li><p><code>USER</code>: The owner of the process</p>
</li>
<li><p><code>PID</code>: Process ID number</p>
</li>
<li><p><code>%CPU</code>: CPU usage percentage</p>
</li>
<li><p><code>%MEM</code>: Memory usage percentage</p>
</li>
<li><p><code>VSZ</code>: Virtual Memory Size in kilobytes (total program size)</p>
</li>
<li><p><code>RSS</code>: Resident Set Size in kilobytes (actual memory used)</p>
</li>
<li><p><code>TTY</code>: Terminal associated with the process ('?' means no terminal)</p>
</li>
<li><p><code>STAT</code>: Process state code:</p>
<ul>
<li><p><code>S</code>: Sleeping</p>
</li>
<li><p><code>R</code>: Running</p>
</li>
<li><p><code>I</code>: Idle</p>
</li>
<li><p><code>Z</code>: Zombie</p>
</li>
<li><p><code>T</code>: Stopped</p>
</li>
<li><p><code>s</code>: Session leader</p>
</li>
<li><p><code>&lt;</code>: High priority</p>
</li>
<li><p><code>N</code>: Low priority</p>
</li>
</ul>
</li>
<li><p><code>START</code>: Time when the process started</p>
</li>
<li><p><code>TIME</code>: Cumulative CPU time used</p>
</li>
<li><p><code>COMMAND</code>: The command with all its arguments</p>
</li>
</ul>
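<p><code>ps</code> can also sort for you, which makes it easy to find the heaviest processes (<code>--sort</code> takes standard <code>ps</code> format specifiers such as <code>%mem</code> and <code>%cpu</code>):</p>
<pre><code class="lang-bash"># Top five processes by memory usage (the extra line is the header)
ps aux --sort=-%mem | head -n 6

# Top five by CPU usage
ps aux --sort=-%cpu | head -n 6
</code></pre>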
<h3 id="heading-top-and-htop-commands"><code>top</code> and <code>htop</code> Commands</h3>
<p><code>top</code> and <code>htop</code> provide a live, continuously updated view of system performance and running processes. Here's what <code>top</code> displays:</p>
<ul>
<li><p>System Overview:</p>
<ul>
<li><p>System uptime and number of logged-in users.</p>
</li>
<li><p>Load average values for the last 1, 5, and 15 minutes.</p>
</li>
<li><p>Total number of processes and their states (running, sleeping, stopped, zombie)</p>
</li>
</ul>
</li>
<li><p>Resource Usage:</p>
<ul>
<li><p>CPU usage breakdown (user, system, idle, etc.).</p>
</li>
<li><p>Memory usage (total, free, used, cached).</p>
</li>
<li><p>Swap space usage.</p>
</li>
</ul>
</li>
<li><p>Process List: a sorted list of running processes (by default sorted by CPU usage). For each process, it displays:</p>
<ul>
<li><p>Process ID (PID).</p>
</li>
<li><p>User who owns the process.</p>
</li>
<li><p>CPU and memory usage.</p>
</li>
<li><p>Process priority and nice value.</p>
</li>
<li><p>Memory usage details (virtual, resident, shared).</p>
</li>
<li><p>Process status.</p>
</li>
<li><p>Running time.</p>
</li>
<li><p>Command name.</p>
</li>
</ul>
</li>
</ul>
<pre><code class="lang-bash">    top - 10:04:25 up 19 days, 14:17,  1 user,  load average: 0.00, 0.00, 0.00
    Tasks: 104 total,   1 running, 103 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 88.0 id,  0.0 wa,  0.0 hi,  0.0 si, 12.0 st 
    MiB Mem :    957.4 total,    247.3 free,    366.1 used,    533.7 buff/cache     
    MiB Swap:      0.0 total,      0.0 free,      0.0 used.    591.3 avail Mem 

        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                              
          1 root      20   0   22556  13952   9728 S   0.0   1.4   0:35.08 systemd                                              
          2 root      20   0       0      0      0 S   0.0   0.0   0:00.16 kthreadd                                             
          3 root      20   0       0      0      0 S   0.0   0.0   0:00.00 pool_workqueue_release                               
          4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/R-rcu_g                                      
          5 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/R-rcu_p                                      
          6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/R-slub_                                      
          7 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/R-netns                                      
         10 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H-events_highpri                          
         12 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/R-mm_pe                                      
         13 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_rude_kthread                               
         14 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_trace_kthread
</code></pre>
<p>The <code>top</code> command updates this information regularly (by default every 3 seconds) and is commonly used for:</p>
<ul>
<li><p>Monitoring system performance</p>
</li>
<li><p>Identifying resource-intensive processes</p>
</li>
<li><p>Troubleshooting system slowdowns</p>
</li>
<li><p>Getting a quick overview of system health</p>
</li>
</ul>
<p>You can also interact with <code>top</code> while it's running using various keyboard commands (like <code>k</code> to kill a process or <code>1</code> to toggle the per-core CPU display).</p>
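<p><code>top</code> also has a batch mode (<code>-b</code>) that prints a snapshot and exits, which is useful in scripts:</p>
<pre><code class="lang-bash"># Print one snapshot of the header and the top processes, then exit
top -b -n 1 | head -n 12
</code></pre>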
<h2 id="heading-how-to-get-your-logs-amp-dmesg-in-formation-in-linux">How to Get Your Logs &amp; Dmesg Information in Linux</h2>
<p>Based on the system configuration, a number of logs are generated. These can be audit logs, system logs, cron logs, and so on. They all carry useful information. Here are some commands that you can use to view logs:</p>
<ul>
<li><p><code>dmesg | less</code>: Kernel ring buffer (hardware issues, boot messages)</p>
</li>
<li><p><code>journalctl -xe</code>: Recent critical logs (systemd systems)</p>
</li>
<li><p><code>/var/log/syslog</code> or <code>/var/log/messages</code>: General system logs</p>
</li>
</ul>
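<p>For example, <code>journalctl</code> can narrow the logs to one unit and a time window (<code>cron.service</code> here is just an example unit name):</p>
<pre><code class="lang-bash"># Logs for one service, limited to the last hour
journalctl -u cron.service --since "1 hour ago"

# Kernel messages with human-readable timestamps
sudo dmesg -T | tail -n 20
</code></pre>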
<h2 id="heading-how-to-get-your-securityuser-audit-information-in-linux">How to Get Your Security/User Audit Information in Linux</h2>
<p><code>whoami</code> shows the current user’s username.</p>
<pre><code class="lang-bash">whoami
ubuntu
</code></pre>
<p><code>id</code> shows detailed information about a user's identity on the system.</p>
<pre><code class="lang-bash">id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),24(cdrom),27(sudo),30(dip),105(lxd)
</code></pre>
<p>Let's break down the output:</p>
<ul>
<li><p>User ID (uid): <code>uid=1000(ubuntu)</code> means the user ID is 1000, with username "ubuntu"</p>
</li>
<li><p>Primary Group ID (gid): <code>gid=1000(ubuntu)</code> means the primary group ID is 1000, named "ubuntu"</p>
</li>
<li><p>Supplementary Groups (groups): The user belongs to the following groups:</p>
<ul>
<li><p><code>ubuntu (1000)</code>: Your primary group.</p>
</li>
<li><p><code>adm (4)</code>: For system monitoring tasks.</p>
</li>
<li><p><code>cdrom (24)</code>: For accessing CD-ROM devices.</p>
</li>
<li><p><code>sudo (27)</code>: Allows you to execute commands with superuser privileges.</p>
</li>
<li><p><code>dip (30)</code>: For managing dial-up connections.</p>
</li>
<li><p><code>lxd (105)</code>: For managing LXD containers.</p>
</li>
</ul>
</li>
</ul>
<p>The <code>id</code> command is useful for checking user and group IDs, verifying group memberships, troubleshooting permission issues, and confirming sudo access.</p>
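<p>A couple of <code>id</code> variants that are convenient in scripts:</p>
<pre><code class="lang-bash"># Just the group names for the current user
id -nG

# Numeric UID only
id -u
</code></pre>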
<p><code>who</code> displays information about users currently logged into the system:</p>
<pre><code class="lang-bash">who
ubuntu   pts/0        2025-06-03 08:45 (39.43.159.5)
</code></pre>
<p>The output breakdown is shown below:</p>
<ul>
<li><p>Username: "<code>ubuntu</code>"</p>
</li>
<li><p>Terminal: "<code>pts/0</code>" (pseudo-terminal)</p>
</li>
<li><p>Login time: <code>2025-06-03 08:45</code></p>
</li>
<li><p>Remote host: "<code>(39.43.159.5)</code>" - the IP address from where the connection was made</p>
</li>
</ul>
<p><code>w</code> shows who is logged in and what they are doing:</p>
<pre><code class="lang-bash">w
 10:21:46 up 19 days, 14:35,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU  WHAT
ubuntu   pts/0    39.43.159.5      08:45   44:56   0.00s  0.02s sshd: ubuntu [priv]
</code></pre>
<p>Here is the result breakdown:</p>
<p>First line:</p>
<ul>
<li><p><code>10:21:46</code>: Current system time</p>
</li>
<li><p><code>up 19 days, 14:35</code>: System uptime (how long the system has been running)</p>
</li>
<li><p><code>1 user</code>: Number of users currently logged in</p>
</li>
<li><p><code>load average: 0.00, 0.00, 0.00</code>: System load averages for the past 1, 5, and 15 minutes</p>
<ul>
<li><p>Numbers below 1.0 indicate low system load</p>
</li>
<li><p>Higher numbers indicate more system load/stress</p>
</li>
</ul>
</li>
</ul>
<p>Second line shows the column headers for the user information below:</p>
<ul>
<li><p><code>USER</code>: Username.</p>
</li>
<li><p><code>TTY</code>: Terminal device being used.</p>
</li>
<li><p><code>FROM</code>: Remote host from where the user is connected.</p>
</li>
<li><p><code>LOGIN@</code>: Time when the user logged in.</p>
</li>
<li><p><code>IDLE</code>: Time since the user's last activity.</p>
</li>
<li><p><code>JCPU</code>: CPU time used by all processes attached to the tty.</p>
</li>
<li><p><code>PCPU</code>: CPU time used by the current process.</p>
</li>
<li><p><code>WHAT</code>: Current process/command being run.</p>
</li>
</ul>
<p><code>last</code> shows a history of user logins and system reboots:</p>
<pre><code class="lang-bash">last
ubuntu   pts/1        39.43.159.5      Tue Jun  3 10:15 - 10:17  (00:02)
ubuntu   pts/0        39.43.159.5      Tue Jun  3 08:45   still logged <span class="hljs-keyword">in</span>
ubuntu   pts/0        39.43.159.5      Tue Jun  3 05:23 - 08:29  (03:06)
ubuntu   pts/0        39.43.159.5      Sun Jun  1 06:32 - 12:24  (05:52)
ubuntu   pts/0        39.43.159.5      Thu May 22 05:39 - 05:58  (00:18)
ubuntu   pts/0        139.135.32.93    Wed May 21 14:45 - 14:47  (00:01)
ubuntu   pts/0        139.135.32.93    Wed May 21 11:58 - 13:49  (01:51)
ubuntu   pts/0        39.43.159.5      Wed May 21 05:05 - 05:12  (00:06)
ubuntu   pts/0        39.43.159.5      Tue May 20 18:41 - 21:45  (03:04)
ubuntu   pts/0        39.43.159.5      Thu May 15 06:12 - 06:12  (00:00)
ubuntu   pts/0        39.43.159.5      Thu May 15 06:05 - 06:12  (00:07)
ubuntu   pts/0        18.206.107.27    Wed May 14 20:06 - 20:08  (00:01)
ubuntu   pts/0        182.185.185.39   Wed May 14 19:48 - 19:50  (00:01)
reboot   system boot  6.8.0-1024-aws   Wed May 14 19:46   still running

wtmp begins Wed May 14 19:46:47 2025
</code></pre>
<p>Each line shows:</p>
<ul>
<li><p>Username (in this case, all logins are from 'ubuntu' user).</p>
</li>
<li><p>Terminal device (<code>pts/0</code> indicates a pseudo-terminal, typically used for SSH connections).</p>
</li>
<li><p>Remote host IP address (where the connection came from).</p>
</li>
<li><p>Login time and date.</p>
</li>
<li><p>Logout time or status.</p>
</li>
<li><p>Session duration in parentheses.</p>
</li>
</ul>
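<p>Two useful variants (<code>lastb</code> needs root because it reads <code>/var/log/btmp</code>):</p>
<pre><code class="lang-bash"># Only the five most recent entries
last -n 5

# Failed login attempts
sudo lastb
</code></pre>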
<p><code>sudo -l</code> shows what the current user can do with sudo.</p>
<pre><code class="lang-bash">sudo -l
Matching Defaults entries <span class="hljs-keyword">for</span> ubuntu on ip-172-31-90-178:
    env_reset, mail_badpass, secure_path=/usr/<span class="hljs-built_in">local</span>/sbin\:/usr/<span class="hljs-built_in">local</span>/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin,
    use_pty

User ubuntu may run the following commands on ip-172-31-90-178:
    (ALL : ALL) ALL
    (ALL) NOPASSWD: ALL
</code></pre>
<p>This output indicates that the 'ubuntu' user has:</p>
<ul>
<li><p>Full sudo access (can execute any command)</p>
</li>
<li><p>No password requirement for sudo commands</p>
</li>
<li><p>Complete administrative privileges on the system</p>
</li>
</ul>
<h2 id="heading-visually-appealing-commands">Visually Appealing Commands</h2>
<p>In this section you’ll learn about two commands that display the information we have seen before in a presentable and aesthetic form.</p>
<p><code>neofetch</code> displays system info along with the distribution logo:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748945743174/9cef1af7-fce8-4657-ad26-7d75b5755dd1.png" alt="Terminal output of the neofetch command displaying Ubuntu system information, including OS, kernel, uptime, CPU, GPU, memory, and a colorful ASCII logo" class="image--center mx-auto" width="1026" height="749" loading="lazy"></p>
<p><code>btop</code> displays dynamic stats with different modes:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748945510465/8c8c200c-bb1a-4123-8db7-c30bb6a1c9bf.gif" alt="A realtime snapshot of the btop system monitor showing real-time CPU, memory, disk, and network usage in a terminal. Colorful graphs display performance metrics for processes, temperatures, and uptime" class="image--center mx-auto" width="1920" height="956" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Thank you for reading the article until the end. If you found it helpful, consider sharing it with others.</p>
<p><strong>Stay Connected and Continue Your Learning Journey!</strong></p>
<p>I read every message, come say hi 👋</p>
<ol>
<li><p><strong>Connect with me on</strong>:</p>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/zaira-hira/">LinkedIn</a>: I share content related to Linux, cybersecurity, and DevOps. Leave a recommendation on LinkedIn and endorse me on relevant skills.</p>
</li>
<li><p><a target="_blank" href="https://discord.gg/9zfbjEDs">Discord</a> community: Hang around with other devs or share your accomplishments.</p>
</li>
<li><p><a target="_blank" href="https://twitter.com/hira_zaira">X</a>: I share pre-launch updates and some behind the scenes.</p>
</li>
</ul>
</li>
<li><p><strong>Get access to exclusive content</strong>: For one-on-one help and exclusive content go <a target="_blank" href="https://buymeacoffee.com/zairah/extras">here</a>.</p>
</li>
</ol>
<p>My <a target="_blank" href="https://www.freecodecamp.org/news/author/zaira/">articles</a> are part of my mission to increase accessibility to quality content for everyone. Each piece takes a lot of time and effort to write. This article will be free, forever. If you've enjoyed my work and want to keep me motivated, consider <a target="_blank" href="https://buymeacoffee.com/zairah">buying me a coffee</a>.</p>
<p>Thank you once again and happy learning!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Use Wireshark Filters to Analyze Your Network Traffic ]]>
                </title>
                <description>
                    <![CDATA[ Wireshark is an open-source tool widely regarded as the gold standard for network packet analysis. It allows you to capture live network traffic or inspect pre-recorded capture files, breaking down the data into individual packets for detailed examin... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/use-wireshark-filters-to-analyze-network-traffic/</link>
                <guid isPermaLink="false">67ee83d004f007db33e0f920</guid>
                
                    <category>
                        <![CDATA[ #cybersecurity ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Wireshark ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Hang Hu ]]>
                </dc:creator>
                <pubDate>Thu, 03 Apr 2025 12:49:20 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743684532493/cc26aa99-fc7a-4b47-ab16-60dac77561fd.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Wireshark is an open-source tool widely regarded as the gold standard for network packet analysis. It allows you to capture live network traffic or inspect pre-recorded capture files, breaking down the data into individual packets for detailed examination.</p>
<p>You can use Wireshark in scenarios like troubleshooting network performance issues (for example, slow connections or dropped packets), investigating suspicious activity (like detecting malware or unauthorized access), or learning how protocols like HTTP, TCP, or DNS function in real-world environments.</p>
<p>For beginners, think of it as a window into the invisible world of network communication, revealing what’s happening behind the scenes when you browse the web, send an email, or stream a video. Its power lies in its ability to provide granular insights, making it an indispensable tool for network administrators, cybersecurity enthusiasts, and anyone curious about how networks operate.</p>
<p>In this tutorial, you will learn how to use Wireshark display filters to analyze network traffic and spot potential security threats. Wireshark is a powerful network protocol analyzer that can capture and dissect network packets, which is crucial for cybersecurity professionals.</p>
<h3 id="heading-heres-what-well-cover">Here’s what we’ll cover:</h3>
<ul>
<li><p><a class="post-section-overview" href="#heading-how-to-start-wireshark-and-analyze-network-traffic">How to Start Wireshark and Analyze Network Traffic</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-work-with-network-capture-files">How to Work with Network Capture Files</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-the-wireshark-interface">Understanding the Wireshark Interface</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-and-applying-basic-display-filters">Understanding and Applying Basic Display Filters</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-advanced-filtering-techniques">Advanced Filtering Techniques</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-analyzing-security-related-traffic">Analyzing Security-Related Traffic</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-analyzing-sample-traffic-and-generating-new-traffic">Analyzing Sample Traffic and Generating New Traffic</a></p>
</li>
</ul>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before we start, you'll need to know <strong>basic Linux syntax</strong>. You can learn it through this <a target="_blank" href="https://labex.io/skilltrees/linux">Linux Skill Tree</a>.</p>
<p>Don't worry if you're new to <a target="_blank" href="https://labex.io/skilltrees/wireshark"><strong>Wireshark</strong></a> – I’ll explain everything as we go.</p>
<h2 id="heading-how-to-start-wireshark-and-analyze-network-traffic"><strong>How to Start Wireshark and Analyze Network Traffic</strong></h2>
<p>In this step, we're going to start using Wireshark. First, you'll learn how to launch it. Then, you'll either capture network traffic or use a provided sample file for analysis. Understanding the Wireshark interface is crucial, as it helps you view and analyze packet data.</p>
<h3 id="heading-installing-wireshark-on-ubuntu-2204">Installing Wireshark on Ubuntu 22.04</h3>
<p>Before you can start using Wireshark, you need to install it. Open a terminal window and run the following commands:</p>
<pre><code class="lang-bash">sudo apt update
sudo apt install wireshark -y
</code></pre>
<h3 id="heading-launching-wireshark"><strong>Launching Wireshark</strong></h3>
<p>To start Wireshark, you need to open a terminal window. You can do this by clicking on the terminal icon in the taskbar or by pressing <code>Ctrl+Alt+T</code>. Once the terminal is open, you'll use a command to start Wireshark. In the terminal, type the following command and press Enter:</p>
<pre><code class="lang-bash">wireshark
</code></pre>
<p>This command tells your system to start the Wireshark application. After a few seconds, Wireshark will open. You should see a window similar to the one shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385586635/78f76c20-c8d0-48d2-bdb7-17ff3f5fc261.png" alt="Wireshark Main Interface Example" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-how-to-work-with-network-capture-files"><strong>How to Work with Network Capture Files</strong></h2>
<p>For this part of the tutorial, you have two options:</p>
<h3 id="heading-option-1-use-the-provided-sample-file"><strong>Option 1: Use the Provided Sample File</strong></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Download a sample packet capture file with mixed traffic</span>
wget -q https://s3.amazonaws.com/tcpreplay-pcap-files/smallFlows.pcap -O /home/labex/project/sample.pcapng

<span class="hljs-comment"># Make sure the user has access to the file</span>
chmod 644 /home/labex/project/sample.pcapng
</code></pre>
<p>These commands download a sample capture file to <code>/home/labex/project/sample.pcapng</code>. The file contains a variety of network traffic that you can analyze.</p>
<p>To open this file:</p>
<ol>
<li><p>In Wireshark, go to File &gt; Open</p>
</li>
<li><p>Navigate to <code>/home/labex/project/sample.pcapng</code></p>
</li>
<li><p>Click "Open"</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385612148/dbeb3d39-db15-4363-a499-e8b527b43d84.png" alt="Wireshark Open File Screenshot" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>The file will load in Wireshark, showing various packets that have been captured previously.</p>
<h3 id="heading-option-2-capture-your-own-traffic"><strong>Option 2: Capture Your Own Traffic</strong></h3>
<p>If you prefer to capture your own traffic:</p>
<ol>
<li><p>In the Wireshark main window, look for the list of available network interfaces.</p>
</li>
<li><p>Find the <code>eth1</code> interface. In this lab environment, <code>eth1</code> is the main network interface we'll use for capturing packets.</p>
</li>
<li><p>Double-click on <code>eth1</code>. This action immediately starts the packet capture process.</p>
</li>
<li><p>Generate some network traffic by opening a new terminal and running:</p>
<pre><code class="lang-bash">curl www.google.com
</code></pre>
</li>
<li><p>Once you've captured enough packets (aim for at least 20-30 packets), click the red square "Stop" button in the Wireshark toolbar.</p>
</li>
</ol>
<h2 id="heading-understanding-the-wireshark-interface"><strong>Understanding the Wireshark Interface</strong></h2>
<p>The Wireshark interface is divided into three main panels, each with a specific purpose:</p>
<ol>
<li><p><strong>Packet List (top panel)</strong>: This panel shows all the packets that have been captured in the order they were received. It gives you a quick overview of the captured traffic.</p>
</li>
<li><p><strong>Packet Details (middle panel)</strong>: When you select a packet in the top panel, this middle panel shows the details of that packet in a hierarchical format. It breaks down the packet's structure, showing information like the source and destination IP addresses, protocol types, and more.</p>
</li>
<li><p><strong>Packet Bytes (bottom panel)</strong>: This panel displays the raw bytes of the selected packet in hexadecimal format. It's useful for in-depth analysis, especially when you need to look at the exact data being transmitted.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743386808927/2225da6c-a652-4886-bd7d-f3c94586c688.jpeg" alt="Wireshark Interface" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>To see how these panels work together, click on different packets in the top panel. You'll see the corresponding details and raw bytes update in the middle and bottom panels.</p>
<h2 id="heading-understanding-and-applying-basic-display-filters"><strong>Understanding and Applying Basic Display Filters</strong></h2>
<p>In this step, we're going to explore display filters in Wireshark. Display filters are essential tools when it comes to analyzing network traffic. They help you focus on specific types of packets instead of having to sift through all the captured data.</p>
<p>By the end of this section, you'll know what display filters are, why they're useful, and how to apply basic ones to isolate specific types of network traffic.</p>
<h3 id="heading-what-are-display-filters"><strong>What Are Display Filters?</strong></h3>
<p>When you're analyzing network traffic, looking at every single captured packet can be overwhelming. You usually want to focus on specific types of packets. That's where Wireshark display filters come in. They allow you to show only the packets that meet certain criteria. This makes the analysis process much more efficient because you're not wasting time on irrelevant data.</p>
<p>Display filters in Wireshark use a special syntax. This syntax enables you to filter packets based on various attributes such as protocols, IP addresses, ports, and even the content of the packets. Understanding this syntax is key to effectively using display filters.</p>
<h3 id="heading-filter-toolbar"><strong>Filter Toolbar</strong></h3>
<p>Take a look at the top of the Wireshark window. You'll notice a text field. It might be labeled "Apply a display filter..." or simply show "Expression...". This is the place where you'll enter your display filters. Once you enter a filter and press Enter, Wireshark will use that filter to show only the relevant packets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385642595/018e9680-1c29-4168-9d60-975464722447.png" alt="Wireshark Filter Toolbar Location" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-basic-protocol-filters"><strong>Basic Protocol Filters</strong></h3>
<p>Let's start with a simple example. Suppose you want to view only HTTP traffic. HTTP is the protocol used for web browsing. To do this, you'll enter a filter in the filter toolbar. Type the following filter and then press Enter:</p>
<pre><code class="lang-plaintext">http
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385678124/1dff7e49-13c5-439e-aeca-82b461b8727b.png" alt="Wireshark HTTP Filter Output" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>After you apply this filter, Wireshark will only display HTTP packets. All other packets will be temporarily hidden. You'll notice that the filter bar turns green when you apply a valid filter. This is a visual indication that your filter is working correctly.</p>
<p>The output should now show only packets related to HTTP traffic. This typically includes web requests (when you ask a website for information) and responses (when the website sends you the information). If you don't see any HTTP traffic in the sample file, you can try different protocols that might be present, such as TCP, UDP, or DNS:</p>
<pre><code class="lang-plaintext">tcp
</code></pre>
<p>Or try generating more HTTP traffic by running the <code>curl</code> command in a terminal:</p>
<pre><code class="lang-bash">curl www.google.com
</code></pre>
<h3 id="heading-ip-address-filters"><strong>IP Address Filters</strong></h3>
<p>Next, let's filter traffic based on IP addresses. An IP address is like a unique identifier for a device on a network. First, look at your packet list. You'll see columns labeled "Source" and "Destination". These columns show the IP addresses of the devices sending and receiving the packets.</p>
<p>Once you've identified an IP address that appears frequently in your capture (for example, <code>192.168.3.131</code>), you can use it to create a filter. Type the following filter in the filter toolbar to see only packets from that source:</p>
<pre><code class="lang-plaintext">ip.src == 192.168.3.131
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385707141/8719b584-9498-4ecb-bf67-d354906626e0.png" alt="Wireshark IP Address Filter Example" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You can replace <code>192.168.3.131</code> with an IP address that you actually see in your capture. After applying this filter, only packets with that source IP address will be shown.</p>
<p>If you want to see all the packets again, you can clear the current filter. Just click the "Clear" button (X) on the right side of the filter bar.</p>
<h3 id="heading-port-filters"><strong>Port Filters</strong></h3>
<p>Many network services operate on specific ports. A port is like a door on a device that allows specific types of network traffic to enter or leave. For example, HTTP typically uses port 80.</p>
<p>To filter packets by port number, you can use the following filter:</p>
<pre><code class="lang-plaintext">tcp.port == 80
</code></pre>
<p>This filter will show both incoming and outgoing packets that use TCP port 80. You might also try other common ports like 443 (HTTPS) or 53 (DNS) depending on what's available in your capture.</p>
<h3 id="heading-combining-filters"><strong>Combining Filters</strong></h3>
<p>You can make your filters more powerful by combining them using logical operators like <code>and</code> and <code>or</code>. For example, if you want to show only HTTP traffic that uses port 80, you can use the following filter:</p>
<pre><code class="lang-plaintext">http and tcp.port == 80
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385736030/8230d965-e822-4341-afa5-35239fbb6975.png" alt="Example of combined filter in Wireshark" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Try applying different combinations of filters and observe how the displayed packets change. Remember, before trying a new filter, you can either clear the previous one by clicking the "Clear" button or modify the existing filter directly in the filter bar to build upon it.</p>
<h2 id="heading-advanced-filtering-techniques"><strong>Advanced Filtering Techniques</strong></h2>
<p>In this part, we'll explore how to create more sophisticated filters for detailed network traffic analysis. As a beginner, you might wonder why we need advanced filtering. Well, in real-world scenarios, network capture files can be extremely large, filled with all kinds of traffic. Advanced filtering techniques are like a powerful magnifying glass for security professionals. They help us quickly pick out the suspicious or important traffic from the sea of data in these large capture files.</p>
<h3 id="heading-complex-filters-with-multiple-conditions"><strong>Complex Filters with Multiple Conditions</strong></h3>
<p>Wireshark gives you the ability to build complex filters by combining multiple conditions. This is very useful when you want to be more precise in your traffic analysis. Let's start by creating a filter to find HTTP GET requests.</p>
<pre><code class="lang-plaintext">http.request.method == "GET"
</code></pre>
<p>This filter is designed to display only HTTP packets that contain GET requests. When you apply this filter, you'll see packets that are requests sent to web servers. The reason we use this filter is that GET requests are a common type of HTTP request used to retrieve data from a server. By isolating these requests, we can focus on the data retrieval activities in the network.</p>
<p>If your sample file doesn't contain HTTP GET requests, try this alternative filter to find TCP SYN packets which indicate connection attempts:</p>
<pre><code class="lang-plaintext">tcp.flags.syn == 1
</code></pre>
<p>Now, let's make our filter more specific. We'll add a port condition:</p>
<pre><code class="lang-plaintext">tcp.port == 80 and http.request.method == "GET"
</code></pre>
<p>This new filter shows only HTTP GET requests that occur on the standard HTTP port (80). The standard HTTP port is widely used for unencrypted web traffic. By adding this port condition, we're narrowing down our search to only those GET requests that are using the typical HTTP communication channel.</p>
<h3 id="heading-filtering-based-on-packet-size"><strong>Filtering Based on Packet Size</strong></h3>
<p>Network attacks often involve packets with unusual sizes. Attackers might use large or small packets to hide malicious data or to disrupt the normal functioning of the network. To filter based on packet size, we use a specific syntax:</p>
<pre><code class="lang-plaintext">tcp.len &gt;= 100 and tcp.len &lt;= 500
</code></pre>
<p>This filter displays TCP packets with a payload length between 100 and 500 bytes. You can adjust these values according to your needs. For example, if you suspect that an attack involves larger packets, you can increase the upper limit. By filtering based on packet size, we can identify abnormal traffic patterns that might indicate an attack.</p>
<h3 id="heading-filtering-based-on-specific-content"><strong>Filtering Based on Specific Content</strong></h3>
<p>You can also filter traffic based on specific content within packets. This is very useful when you're looking for traffic related to a particular website or service. For example, let's find HTTP traffic related to a specific website.</p>
<pre><code class="lang-plaintext">http.host contains "google"
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385773497/36b7bc2e-b7e9-4b68-9dc7-82e429c5ea01.png" alt="Wireshark HTTP Host Filter" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This filter shows only HTTP traffic where the host header contains "google". You can replace "google" with any domain you're interested in analyzing. The host header in an HTTP request tells the server which website the client is trying to access. By filtering based on the host header, we can focus on the traffic related to a specific domain.</p>
<p>If your sample file doesn't have HTTP traffic with host headers, try this more general content filter:</p>
<pre><code class="lang-plaintext">frame contains "http"
</code></pre>
<h3 id="heading-using-the-contains-operator-for-text-searching"><strong>Using the "contains" Operator for Text Searching</strong></h3>
<p>The <code>contains</code> operator is a handy tool for searching for specific text strings in packets. It allows us to look for certain keywords within the packet data.</p>
<pre><code class="lang-plaintext">frame contains "password"
</code></pre>
<p>This filter shows packets containing the word "password" anywhere in the packet data. This can be very helpful for detecting possible security issues. For example, if passwords are being sent in clear text (which is a big security risk), this filter can help us spot those packets.</p>
<p>Or try this filter:</p>
<pre><code class="lang-plaintext">frame contains "login"
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385793822/024b4482-0b5d-4b20-985c-6263bd7f48d6.png" alt="Wireshark Password Filter Example" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-negating-filters"><strong>Negating Filters</strong></h3>
<p>Sometimes, you might want to see all the traffic except for certain types. That's where the <code>not</code> operator comes in.</p>
<pre><code class="lang-plaintext">not arp
</code></pre>
<p>This filter hides all ARP packets. ARP (Address Resolution Protocol) is used to map IP addresses to MAC addresses in a local network. Sometimes, ARP traffic can be very common and might clutter your analysis. By using the <code>not</code> operator, you can exclude this type of traffic and focus on other more relevant packets.</p>
<h3 id="heading-saving-and-applying-filter-bookmarks"><strong>Saving and Applying Filter Bookmarks</strong></h3>
<p>If you find yourself using certain filters frequently, you don't have to type them in every time. You can save them as bookmarks. Here's how:</p>
<ol>
<li><p>Enter a filter in the filter bar. This is where you type in the filter expressions we've been learning about.</p>
</li>
<li><p>Click the "+" button on the right side of the filter bar. This button is used to save the current filter as a bookmark.</p>
</li>
<li><p>Give your filter a name and click "OK". Naming the filter makes it easy to identify later.</p>
</li>
</ol>
<p>Once you've saved your filter, you can apply it by clicking on its name in the filter dropdown menu. This saves you time and effort, especially when you're doing repeated analysis.</p>
<h3 id="heading-exporting-filtered-packets"><strong>Exporting Filtered Packets</strong></h3>
<p>After you've filtered your traffic to show only the packets of interest, you might want to save just these packets to a new file. This is useful for sharing specific findings with colleagues or for further analysis. Here's how you do it:</p>
<ol>
<li><p>Apply your desired filter. Make sure you've set up the filter to show only the packets you want to save.</p>
</li>
<li><p>Click on File &gt; Export Specified Packets. This option allows you to export a specific set of packets.</p>
</li>
<li><p>Make sure "Displayed" is selected in the Packet Range section. This ensures that only the packets that are currently visible (that is, the ones that match your filter) are exported.</p>
</li>
<li><p>Choose a filename and location. This is where you decide where to save the new capture file and what to name it.</p>
</li>
<li><p>Click "Save". This creates a new capture file containing only the packets that matched your filter.</p>
</li>
</ol>
<h2 id="heading-analyzing-security-related-traffic"><strong>Analyzing Security-Related Traffic</strong></h2>
<p>In this step, we're going to focus on using Wireshark filters for security analysis. Security analysis is crucial in the world of cybersecurity as it helps us spot potentially malicious activities in network traffic. By the end of this section, you'll be able to identify various types of security threats using specific Wireshark filters.</p>
<h3 id="heading-identifying-port-scanning-activities"><strong>Identifying Port Scanning Activities</strong></h3>
<p>Port scanning is a common technique used by attackers to gather information about a target system. Attackers use it to find open ports on a network, which they can then exploit.</p>
<p>To detect potential port scanning, we look for a large number of connection attempts from a single source to multiple ports.</p>
<p>Let's use a specific filter to identify such activities. Try this filter in Wireshark:</p>
<pre><code class="lang-plaintext">tcp.flags.syn == 1 and tcp.flags.ack == 0
</code></pre>
<p>This filter shows SYN packets without the ACK flag. In a TCP connection, the SYN packet is the first one sent to initiate a connection, and the ACK packet is used to acknowledge the connection. When we see a lot of SYN packets without ACK from one source to different destination ports, it's a strong indication of port scanning.</p>
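<p>Outside the GUI, the same heuristic can be sketched with standard shell tools. The sample data below is invented for illustration; in practice you would first export the source address and destination port of each matching SYN packet from the filtered capture. A source probing many distinct ports stands out immediately:</p>
<pre><code class="lang-bash"># Hypothetical exported pairs: source IP and destination port
# of SYN-only packets (exported from the filtered capture)
printf '10.0.0.5 22\n10.0.0.5 23\n10.0.0.5 80\n10.0.0.5 443\n192.168.1.9 80\n' |
  sort -u |
  awk '{ count[$1]++ } END { for (src in count) print src, count[src] }' |
  sort
</code></pre>
<p>Here <code>10.0.0.5</code> touches four different ports while <code>192.168.1.9</code> touches only one, which is exactly the pattern the display filter helps you spot visually.</p>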
<h3 id="heading-detecting-suspicious-dns-traffic"><strong>Detecting Suspicious DNS Traffic</strong></h3>
<p>DNS tunneling and other DNS-based attacks are becoming more common. These attacks use the DNS protocol to hide malicious activities, such as data exfiltration or command and control communication. To detect such attacks, we need to look for unusual DNS traffic.</p>
<p>Use this filter to examine DNS queries:</p>
<pre><code class="lang-plaintext">dns
</code></pre>
<p>Once you apply this filter, look for unusually long domain names or a high volume of DNS requests to the same domain. These could be signs of data exfiltration or command and control communication.</p>
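<p>The "unusually long name" check itself is easy to demonstrate with shell tools. The domain list below is made up for illustration; in a real investigation you would export the query names from the filtered DNS packets first. A single DNS label of 25 or more characters is rare in ordinary browsing and worth a closer look:</p>
<pre><code class="lang-bash"># Flag queries containing a suspiciously long label
# (25 or more characters between dots); sample names are invented
printf 'www.google.com\nmail.example.org\naGVsbG8gd29ybGQgZXhmaWwwMQ.badhost.example\n' |
  grep -E '[^.]{25,}'
</code></pre>
<p>Only the third, encoded-looking name is printed, which is the kind of query DNS tunneling tools tend to generate.</p>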
<h3 id="heading-identifying-password-brute-force-attempts"><strong>Identifying Password Brute Force Attempts</strong></h3>
<p>Password brute force attacks are a common way for attackers to gain unauthorized access to services like SSH or FTP. In a brute force attack, the attacker tries multiple password combinations until they find the correct one.</p>
<p>To detect potential brute force password attempts, we can filter for failed login attempts. Use this filter:</p>
<pre><code class="lang-plaintext">ftp contains "530" or ssh contains "Failed"
</code></pre>
<p>This filter shows FTP and SSH packets that contain common failure response messages. Note that SSH traffic is encrypted, so the SSH half of this filter will rarely match in practice; FTP, which sends its "530 Login incorrect" response in clear text, is the more reliable signal. If you see multiple failures from the same source, it may indicate a brute force attempt.</p>
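<p>To quantify "multiple failures from the same source", you can count them once the matching packets are exported. The lines below are hypothetical (client IP followed by FTP response code); several 530 responses to one client in a short window is a classic brute-force signature:</p>
<pre><code class="lang-bash"># Count FTP "530" (login failed) responses per client IP
# (sample data is invented for illustration)
printf '10.0.0.5 530\n10.0.0.5 530\n10.0.0.5 530\n192.168.1.9 230\n' |
  awk '$2 == 530 { fails[$1]++ } END { for (ip in fails) print ip, fails[ip] }'
</code></pre>
<p>The successful login (code 230) from <code>192.168.1.9</code> is ignored, while the three failures from <code>10.0.0.5</code> are tallied.</p>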
<h3 id="heading-analyzing-http-error-responses"><strong>Analyzing HTTP Error Responses</strong></h3>
<p>Web application attacks often generate HTTP error responses. Attackers may try to exploit vulnerabilities in web applications, and these attempts can result in error responses from the server.</p>
<p>Filter for these error responses with:</p>
<pre><code class="lang-plaintext">http.response.code &gt;= 400
</code></pre>
<p>This filter shows HTTP response packets with status codes of 400 or higher. All these status codes represent error responses. By examining these packets, we can identify attempted web exploits.</p>
<h3 id="heading-finding-clear-text-credentials"><strong>Finding Clear-Text Credentials</strong></h3>
<p>Transmitting credentials in clear text is a major security risk. If an attacker intercepts these credentials, they can gain unauthorized access to the system.</p>
<p>To detect clear-text credentials, use this filter:</p>
<pre><code class="lang-plaintext">http contains "user" or http contains "pass" or http contains "login"
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385832534/4c38654f-8ff0-4bc3-8bc6-0dd1161dc0f1.png" alt="Wireshark Clear-Text Cred Filter" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This filter helps us find HTTP traffic that might contain login information. Carefully examine the packets that match this filter to identify potential security risks.</p>
<h2 id="heading-analyzing-sample-traffic-and-generating-new-traffic"><strong>Analyzing Sample Traffic and Generating New Traffic</strong></h2>
<p>Now that you've learned various security-focused filters, it's time to put your knowledge into practice. You can either analyze the provided sample file or generate and analyze new traffic.</p>
<h3 id="heading-analyzing-the-sample-file"><strong>Analyzing the Sample File</strong></h3>
<p>If you're using the provided sample file (<code>/home/labex/project/sample.pcapng</code>), try applying some of the security filters we've discussed to identify any interesting patterns:</p>
<pre><code class="lang-plaintext">tcp.flags.syn == 1 and tcp.flags.ack == 0
</code></pre>
<p>Look for patterns that might indicate scanning, suspicious connections, or other security concerns.</p>
<h3 id="heading-generating-and-analyzing-new-traffic"><strong>Generating and Analyzing New Traffic</strong></h3>
<p>Alternatively, open a new terminal window. In this window, we'll generate some HTTP traffic with multiple requests. Run the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> {1..5}; <span class="hljs-keyword">do</span>
  curl -I www.google.com
  sleep 1
<span class="hljs-keyword">done</span>
</code></pre>
<p>These commands send five HTTP HEAD requests to <code>www.google.com</code> with a one-second interval between each request.</p>
<p>Next, go to Wireshark and apply this filter to find all HTTP requests:</p>
<pre><code class="lang-plaintext">http.request
</code></pre>
<p>This filter will show all the HTTP requests in the captured traffic.</p>
<p>Look through these packets to identify patterns of normal HTTP traffic. Notice the headers, the frequency of requests, and other details.</p>
<p>Finally, try to create a filter that can distinguish normal HTTP browsing from automated scanning tools. For example:</p>
<pre><code class="lang-plaintext">http.request and !(http.user_agent contains "Mozilla")
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743385858895/3feb916b-39ac-4ced-ab76-8597186cbbf0.png" alt="Wireshark HTTP User Agent Filter" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This filter shows HTTP requests that don't have browser user agents. Since most normal web browsing is done using browsers with Mozilla in the user agent, requests without it might indicate automated tools rather than normal browsing.</p>
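<p>You can apply the same negation idea to a list of user-agent strings extracted from a capture. The sample values below are hypothetical; command-line tools and scripted clients announce themselves plainly, so inverting a match on "Mozilla" surfaces them:</p>
<pre><code class="lang-bash"># Keep only user agents that do NOT contain "Mozilla"
# (sample values are invented for illustration)
printf 'Mozilla/5.0 (X11; Linux x86_64)\ncurl/7.81.0\npython-requests/2.28.1\n' |
  grep -v Mozilla
</code></pre>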
<p>By practicing these security-focused filtering techniques, you'll develop the skills needed to quickly identify suspicious traffic in real-world network captures.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this tutorial, you have learned how to use Wireshark display filters for network traffic analysis and potential security threat identification.</p>
<p>You began by either working with a provided sample capture file or capturing live network traffic and familiarizing yourself with the Wireshark interface. Then, you mastered basic display filters to isolate specific traffic types according to protocols, IP addresses, and ports. You also advanced your skills with complex filtering techniques, combining multiple conditions and searching for specific content. Finally, you applied these skills in security analysis scenarios to detect suspicious activities such as port scanning, credential exposure, and potential attacks.</p>
<p>These Wireshark filtering skills are crucial for efficient network troubleshooting and security analysis. By quickly isolating relevant packets from large captures, you can greatly reduce the time required to identify and respond to network issues and security incidents.</p>
<p>As you keep practicing with Wireshark, you will gain an intuitive understanding of network protocols and traffic patterns, enhancing your overall cybersecurity capabilities.</p>
<blockquote>
<p>To practice the operations from this tutorial, try the interactive hands-on lab: <a target="_blank" href="https://labex.io/labs/wireshark-analyze-network-traffic-with-wireshark-display-filters-415944?course=quick-start-with-wireshark">Analyze Network Traffic with Wireshark Display Filters</a></p>
</blockquote>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Learn User Management in RHEL: A Comprehensive Guide ]]>
                </title>
                <description>
                    <![CDATA[ Imagine you're throwing a house party. You wouldn’t hand out keys to every guest, right? Some friends can roam freely, some should probably stick to the living room, and a few—well, let’s just say they need supervision. Managing users in RHEL is kind... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/learn-user-management-in-rhel-a-comprehensive-guide/</link>
                <guid isPermaLink="false">67b5da0a6db178277c2bebc9</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ RHEL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ user management ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Tanishka Makode ]]>
                </dc:creator>
                <pubDate>Wed, 19 Feb 2025 13:18:02 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739971027992/d19c4616-4c2e-4cc4-ac45-384e6520d1a8.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Imagine you're throwing a house party. You wouldn’t hand out keys to every guest, right? Some friends can roam freely, some should probably stick to the living room, and a few—well, let’s just say they need supervision.</p>
<p>Managing users in RHEL is kind of like that. You decide who gets in, what they can do, and how much control they have. Without proper management, things can get messy fast—like that friend who somehow DJs when no one asks.</p>
<p>So, let’s dive into user management and ensure your Linux system stays organized, secure, and drama-free! 🚀</p>
<h2 id="heading-table-of-contents">Table Of Contents</h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-what-is-a-user-in-linux">What is a User in Linux?</a></p>
<ul>
<li><a class="post-section-overview" href="#heading-understanding-sudo-in-user-management">Understanding sudo in User Management</a></li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-user-management-commands-in-linux">User Management Commands in Linux</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-how-to-add-a-user">How to Add a User</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-check-if-a-user-is-created">How to Check if a User is Created</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-assign-a-password">How to Assign a Password</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-switch-users">How to Switch Users</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-groups-in-linux">Understanding Groups in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-modify-users">How to Modify Users</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-final-words">Final Words</a></p>
</li>
</ol>
<h2 id="heading-what-is-a-user-in-linux"><strong>What is a User in Linux?</strong></h2>
<p>A user in Linux is an account that allows someone (or a process) to interact with the system. Since Linux is a multi-user operating system, multiple users can exist on the same system, each with their own settings, files, and permissions. Users can have different levels of permissions, which determine what they can access or modify on the system.</p>
<p>Linux categorizes users into three main types based on their roles and privileges:</p>
<ol>
<li><p>Privileged Users: These users have unrestricted access to the entire system. They have the highest level of permissions and can perform any operation on the system. They can install/remove software, modify system files, create/manage users, and even delete everything. These users are also called root users.</p>
</li>
<li><p>System Users: The system creates these users to run background processes or services. They can’t log in like a normal user. Their sole purpose is to manage system operations such as databases, web servers, and scheduled tasks.</p>
</li>
<li><p>Normal Users: These are the everyday users created by administrators or during system installation. Each has its own home directory for personal files and settings. They can’t modify system files, but they can execute tasks within their permission scope.</p>
</li>
</ol>
<h3 id="heading-understanding-sudo-in-user-management">Understanding <code>sudo</code> in User Management</h3>
<p>The <code>sudo</code> (Superuser Do) command allows a regular user to execute administrative tasks with elevated privileges. Since user management tasks—such as adding, modifying, or deleting users—require root access, normal users must use <code>sudo</code> before these commands.</p>
<p>Note that the following commands are executed as the root user. If you are using a normal user account, you must prefix them with <code>sudo</code> to perform user management tasks.</p>
<p>Now let’s see how we manage users on RHEL.</p>
<h2 id="heading-user-management-commands-in-linux">User Management Commands in Linux</h2>
<h3 id="heading-how-to-add-a-user">How to add a user</h3>
<p>To create a new user account, use the following command:</p>
<p>Syntax:</p>
<pre><code class="lang-bash">useradd [user_name]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">useradd tanishka <span class="hljs-comment"># As the root user</span>
sudo useradd tanishka <span class="hljs-comment"># As a normal user with sudo privileges</span>
</code></pre>
<p>Once you create a user, you can verify its existence in the <code>/etc/passwd</code> file. This file stores essential user account information (but <strong>not passwords</strong>, despite the name).</p>
<h4 id="heading-how-to-check-if-a-user-is-created">How to check if a user is created</h4>
<p>To confirm the user entry in <code>/etc/passwd</code>, use one of the following methods:</p>
<ol>
<li>View the file using <code>cat</code> or <code>grep</code></li>
</ol>
<pre><code class="lang-bash">cat /etc/passwd <span class="hljs-comment"># Displays entire file content</span>
grep tanishka /etc/passwd <span class="hljs-comment"># Displays only the entry for the tanishka user</span>
</code></pre>
<ol start="2">
<li>Use the <code>id</code> command:</li>
</ol>
<p>The <code>id</code> command is used to display a user’s <strong>UID (User ID), GID (Group ID), and the groups they belong to</strong>. It helps in verifying user information and checking permissions.</p>
<pre><code class="lang-bash">id tanishka
<span class="hljs-comment"># Displays tanishka's UID, GID, and groups,</span>
<span class="hljs-comment"># confirming the user has been created</span>
</code></pre>
<p>Let’s understand what’s going on in the <code>/etc/passwd</code> fields. Each line in <code>/etc/passwd</code> represents a user account and contains seven fields separated by colons (<code>:</code>):</p>
<pre><code class="lang-bash">username:x:UID:GID:comment:home_directory:shell
</code></pre>
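<p>To make this layout concrete, here’s a hypothetical <code>/etc/passwd</code> entry (the values are illustrative, not from a real system) split into its fields with <code>awk</code>, where <code>-F:</code> tells awk to split each line on colons:</p>
<pre><code class="lang-bash"># Hypothetical /etc/passwd entry (values are illustrative)
line='tanishka:x:1001:1001:Tanishka Makode:/home/tanishka:/bin/bash'
# $1=username, $3=UID, $4=GID, $6=home directory, $7=login shell
echo "$line" | awk -F: '{print "user=" $1, "uid=" $3, "gid=" $4, "home=" $6, "shell=" $7}'
</code></pre>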
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Field</strong></td><td><strong>Description</strong></td></tr>
</thead>
<tbody>
<tr>
<td>username</td><td>Name of the user (for example, john, admin).</td></tr>
<tr>
<td>x</td><td>Placeholder for the password (actual password is stored in /etc/shadow).</td></tr>
<tr>
<td>UID</td><td>User ID (for example, 1001 for a normal user, 0 for root).</td></tr>
<tr>
<td>GID</td><td>Group ID (primary group of the user).</td></tr>
<tr>
<td>comment</td><td>Optional user description (for example, full name or other info).</td></tr>
<tr>
<td>home_directory</td><td>User’s home directory (for example /home/john).</td></tr>
<tr>
<td>shell</td><td>The default shell assigned to the user (for example, /bin/bash, /bin/sh, /usr/sbin/nologin).</td></tr>
</tbody>
</table>
</div><h3 id="heading-how-to-assign-a-password">How to Assign a Password</h3>
<p>Once an account is created, it’s essential to assign it a password. Otherwise, you won’t be able to log in to that account through a GUI login screen. To give a password to a user account, use this command:</p>
<p>Syntax:</p>
<pre><code class="lang-bash">passwd [user_name]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">passwd tanishka
</code></pre>
<p>You will be prompted to enter the password. Enter it and you’re all set! Even though user information is stored in the <code>/etc/passwd</code> file, the actual password data is stored in the <code>/etc/shadow</code> file (weird, I know…).</p>
<p>To see the contents of the <code>/etc/shadow</code> file, use this command:</p>
<pre><code class="lang-bash">cat /etc/shadow
</code></pre>
<p>Each line in <code>/etc/shadow</code> represents a user account password and contains nine fields separated by colons (<code>:</code>):</p>
<pre><code class="lang-bash">username:password:lastchg:min:max:warn:inactive:expire:reserved
</code></pre>
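<p>For illustration, here’s a hypothetical entry (a <code>!</code> in the password field means the password is locked; real entries hold a password hash) with a few of its fields picked out by <code>awk</code>:</p>
<pre><code class="lang-bash"># Hypothetical /etc/shadow entry (values are illustrative)
line='tanishka:!:19800:0:99999:7:::'
# $3=lastchg (days since epoch), $4=min, $5=max, $6=warn
echo "$line" | awk -F: '{print "lastchg=" $3, "min=" $4, "max=" $5, "warn=" $6}'
</code></pre>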
<div class="hn-table">
<table>
<thead>
<tr>
<td>Field</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td>username</td><td>User’s login name</td></tr>
<tr>
<td>password</td><td>Encrypted password or password status (for example, locked)</td></tr>
<tr>
<td>lastchg</td><td>Last password change (days since Jan 1, 1970)</td></tr>
<tr>
<td>min</td><td>Minimum days between password changes</td></tr>
<tr>
<td>max</td><td>Maximum days before password change is required</td></tr>
<tr>
<td>warn</td><td>Warning period before password expiration</td></tr>
<tr>
<td>inactive</td><td>Inactive period after password expiration</td></tr>
<tr>
<td>expire</td><td>Account expiration date (days since Jan 1, 1970)</td></tr>
<tr>
<td>reserved</td><td>Reserved for future use</td></tr>
</tbody>
</table>
</div><p>To change password aging information, you use the <code>chage</code> (short for change age) command like this:</p>
<p>Syntax:</p>
<pre><code class="lang-bash">chage [OPTIONS] [user_name]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">chage -l tanishka <span class="hljs-comment"># Lists the current password aging information</span>
chage -m 10 tanishka <span class="hljs-comment"># Sets the minimum number of days between password changes</span>
chage -M 10 tanishka <span class="hljs-comment"># Sets the maximum number of days before the password must be changed</span>
chage -W 7 tanishka <span class="hljs-comment"># Sets how many days before expiration the user is warned to change the password</span>
chage -I 10 tanishka <span class="hljs-comment"># Sets how many days after expiration the account is disabled if the password isn't changed</span>
chage -E 2025-12-31 tanishka <span class="hljs-comment"># Sets the date on which the user account will expire</span>
chage -d 2024-12-25 tanishka <span class="hljs-comment"># Sets the last password change date</span>
</code></pre>
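<p>Note that the day-based fields in <code>/etc/shadow</code> (<code>lastchg</code>, <code>expire</code>) are stored as a count of days since Jan 1, 1970. Assuming GNU <code>date</code> is available, you can compute today’s value in that format yourself:</p>
<pre><code class="lang-bash"># Seconds since the epoch, divided by 86400 seconds per day
echo $(( $(date +%s) / 86400 ))
</code></pre>
<p>Comparing this number against a user’s <code>lastchg</code> field tells you how long ago their password was changed.</p>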
<p>Now that you have learned to create users and assign passwords, you need to know how to switch between users. Let’s see that now.</p>
<h3 id="heading-how-to-switch-users">How to Switch Users</h3>
<p>The <code>su</code> (Substitute User) command allows you to <strong>switch from one user to another</strong> without logging out of the current session.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">su - [user_name]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">su - tanishka <span class="hljs-comment"># Switches to the tanishka user</span>
</code></pre>
<ul>
<li><p><code>su</code> stands for "substitute user" (or "switch user").</p>
</li>
<li><p>The <code>-</code> (hyphen) loads the target user's full environment, including their shell, path, and profile settings (similar to logging in as that user).</p>
</li>
<li><p>If no username is provided, it switches to the root user by default.</p>
</li>
</ul>
<p>To return to the previous user, simply type <code>exit</code>.</p>
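<p>To confirm which user your shell is currently acting as, run <code>whoami</code>:</p>
<pre><code class="lang-bash">whoami  # Prints the effective username of the current shell
</code></pre>
<p>After <code>su - tanishka</code>, <code>whoami</code> prints <code>tanishka</code>; after <code>exit</code>, it prints your original username again.</p>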
<h3 id="heading-understanding-groups-in-linux">Understanding Groups in Linux</h3>
<p>Just like a party where guests can belong to different social circles, Linux groups allow users to be part of different permission levels. Groups help manage file access, system privileges, and administrative controls efficiently.</p>
<p>Linux has two types of groups:</p>
<p><strong>1. Primary Group:</strong></p>
<ul>
<li><p>Every user has one primary group.</p>
</li>
<li><p>When a user creates a new file, it belongs to their primary group.</p>
</li>
<li><p>It is usually named the same as the username.</p>
</li>
</ul>
<p><strong>2. Secondary Groups:</strong></p>
<ul>
<li><p>A user can belong to multiple secondary groups.</p>
</li>
<li><p>These groups provide additional permissions beyond the primary group.</p>
</li>
<li><p>Users can be assigned to various secondary groups to access shared resources.</p>
</li>
</ul>
<p>To check a user’s group membership:</p>
<pre><code class="lang-bash">id [user_name]
</code></pre>
<p>This displays the user’s UID, primary group (GID), and any secondary groups they belong to.</p>
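<p>The <code>id</code> command also accepts flags to print individual pieces of this information, which is handy in scripts:</p>
<pre><code class="lang-bash">id -u   # Numeric UID of the current user
id -un  # Username
id -g   # Numeric primary GID
id -gn  # Primary group name
id -Gn  # Names of all groups the user belongs to
</code></pre>
<p>Append a username (for example, <code>id -Gn tanishka</code>) to query another account instead of the current one.</p>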
<p>To add a new group:</p>
<pre><code class="lang-bash">groupadd [group_name]
</code></pre>
<h3 id="heading-how-to-modify-a-user">How to Modify a User</h3>
<p>Sometimes, you might need to update user details, such as changing usernames, user IDs, group memberships, home directories, or login shells. You use the <code>usermod</code> command to modify existing user accounts while preserving their files and configurations.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod [OPTIONS] [user_name]
</code></pre>
<p>Let’s break down the different options available for modifying user accounts.</p>
<ol>
<li><strong>Change the username</strong></li>
</ol>
<p>If you want to rename an existing user, use the <code>-l</code> option:</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod -l new_username old_username
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">usermod -l tanishkamakode tanishka
</code></pre>
<p>This renames <code>tanishka</code> to <code>tanishkamakode</code>. Just keep in mind that the home directory remains the same (<code>/home/tanishka</code>), so you might need to rename it manually.</p>
<p>To rename the home directory as well, use:</p>
<pre><code class="lang-bash">mv /home/tanishka /home/tanishkamakode
</code></pre>
<ol start="2">
<li><strong>Change the user id:</strong></li>
</ol>
<p>Each user has a unique User ID (UID). If you need to change it, use <code>-u</code>.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod -u new_UID user_name
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">usermod -u 2001 tanishka
</code></pre>
<p>This changes <code>tanishka</code>'s UID to <code>2001</code>. Before you do this, <strong>make sure that no other user already has that UID.</strong></p>
<p>If the user owns files under the old UID, you should update them after changing the UID.</p>
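<p>Here’s a sketch of how you might locate such files. It uses a temporary directory and the current UID as stand-ins so it’s safe to try; in real use you would search the user’s home directory for the <em>old</em> UID and re-own the files as root:</p>
<pre><code class="lang-bash">demo=$(mktemp -d)              # Stand-in for the user's home directory
touch "$demo/notes.txt"
old_uid=$(id -u)               # Stand-in for the user's previous UID
find "$demo" -user "$old_uid"  # Lists files still owned by that numeric UID
# As root, you could then re-own them, for example:
#   find /home/tanishka -user OLD_UID -exec chown tanishka:tanishka {} +
rm -rf "$demo"
</code></pre>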
<ol start="3">
<li><strong>Change the primary group</strong></li>
</ol>
<p>Every user belongs to a primary group. To change it, use <code>-g</code>.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod -g new_group user_name
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">usermod -g developers tanishka
</code></pre>
<p>This changes <code>tanishka</code>'s primary group to <code>developers</code>. Just keep in mind that <code>usermod -g developers tanishka</code> <strong>removes</strong> the user from all secondary groups. To avoid that, just make sure you check and re-add secondary groups as needed.</p>
<p>Also, the group must exist beforehand. To create a group, run this command:</p>
<p>Syntax:</p>
<pre><code class="lang-bash">groupadd [group_name]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">groupadd developers
</code></pre>
<p>Now, to check tanishka’s group, do the following:</p>
<pre><code class="lang-bash">id tanishka
</code></pre>
<ol start="4">
<li><strong>Add to a secondary group</strong></li>
</ol>
<p>A user can belong to multiple secondary groups. Use <code>-G</code> to assign them.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod -G group1,group2 user_name
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">usermod -G linux,docker tanishka
</code></pre>
<p>This adds <code>tanishka</code> to the <code>linux</code> and <code>docker</code> groups. Just keep in mind that <code>-G</code> <strong>replaces</strong> any existing secondary groups that the user might already belong to. To add groups without removing the current ones, use <code>-aG</code> (append to groups) like this:</p>
<pre><code class="lang-bash">usermod -aG linux,docker tanishka
</code></pre>
<ol start="5">
<li><strong>Change the home directory:</strong></li>
</ol>
<p>You can change a user’s default home directory using <code>-d</code>.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod -d /new/home_directory user_name
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">usermod -d /home/tani tanishka
</code></pre>
<p>This sets <code>tanishka</code>'s home directory to <code>/home/tani</code>, but <strong>it does not move existing files</strong>. To move them, add the <code>-m</code> option:</p>
<pre><code class="lang-bash">usermod -d /home/tani -m tanishka
</code></pre>
<p>After moving the home directory, make sure the files are still owned by the user, for example with <code>chown -R tanishka:tanishka /home/tani</code>.</p>
<ol start="6">
<li><strong>Change the login shell:</strong></li>
</ol>
<p>The default shell for a user can be changed using <code>-s</code>.</p>
<p>Syntax:</p>
<pre><code class="lang-bash">usermod -s /new/shell user_name
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">usermod -s /bin/zsh tanishka
</code></pre>
<p>This changes <code>tanishka</code>'s default shell to <code>zsh</code>. Common shells include:</p>
<ul>
<li><p><code>/bin/bash</code> (default)</p>
</li>
<li><p><code>/bin/sh</code></p>
</li>
<li><p><code>/bin/zsh</code></p>
</li>
<li><p><code>/usr/sbin/nologin</code> (to disable login)</p>
</li>
</ul>
<p>With <code>usermod</code>, you can fine-tune user settings to match system requirements. Always check changes using:</p>
<pre><code class="lang-bash">id tanishka
grep tanishka /etc/passwd
</code></pre>
<h2 id="heading-final-words">Final Words</h2>
<p>In this article, we explored the fundamentals of user management in RHEL, a crucial aspect of system administration. We started with creating and managing users, then moved on to handling groups.</p>
<p>If you're new to Linux and want to build a strong foundation, check out my first tutorial on <a target="_blank" href="https://www.freecodecamp.org/news/guide-to-rhel-linux-basics/">Basic Linux Commands</a>, where I cover essential commands every beginner should know. You can also read my second tutorial on <a target="_blank" href="https://www.freecodecamp.org/news/how-to-use-the-vim-text-editor-intro-for-devs/">Vim</a> to learn how to navigate and edit text efficiently in this powerful editor. These articles will complement what you’ve learned about user management here.</p>
<p>Keep practicing these commands, and soon they’ll become second nature to you. Mastery comes with repetition, so continue experimenting and applying these fundamentals in real-world scenarios.</p>
<p>Stay tuned for more articles. Get ready to take your RHEL skills to the next level.</p>
<p><a target="_blank" href="https://linktr.ee/tanishkamakode">Let’s connect!</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Use the Vim Text Editor – An Introduction for Developers ]]>
                </title>
                <description>
                    <![CDATA[ Imagine a carpenter without tools, a writer without a pen, or a chef without a knife—this is like trying to imagine a developer or sysadmin without a reliable text editor. For devs, text editors are the ultimate multitools, shaping how we create, man... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-use-the-vim-text-editor-intro-for-devs/</link>
                <guid isPermaLink="false">67a24bb37e501febb084c852</guid>
                
                    <category>
                        <![CDATA[ vim ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Text Editors ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Tanishka Makode ]]>
                </dc:creator>
                <pubDate>Tue, 04 Feb 2025 17:17:39 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738684583892/739ec0fa-e8a2-4f08-a265-7fa5034c932d.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Imagine a carpenter without tools, a writer without a pen, or a chef without a knife—this is like trying to imagine a developer or sysadmin without a reliable text editor.</p>
<p>For devs, text editors are the ultimate multitools, shaping how we create, manage, and transform raw data into meaningful output.</p>
<p>While modern editors like VS Code and Sublime Text have gained popularity for their sleek interfaces, there’s something timeless about the simplicity and power of classic tools.</p>
<p>Loved by some and feared by others, Vim is a text editor that has stood the test of time. Born from its predecessor Vi, Vim (Vi Improved) offers unparalleled speed, versatility, and control.</p>
<p>In this tutorial, you’ll learn what makes Vim so special. We’ll explore its commands, text filtering, and string manipulation capabilities to help you harness its true power.</p>
<h2 id="heading-what-well-cover">What we’ll cover:</h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-text-editors-in-linux">Text Editors in Linux</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-open-the-vim-editor">How to Open the Vim Editor</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-modes-in-vim">Modes in Vim</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-basic-vim-commands">Basic Vim Commands</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-cut-copy-paste-and-delete-commands">Cut, Copy, Paste, and Delete Commands</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-search-and-replace-commands">Search and Replace Commands</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-read-files-using-more-and-less">How to Read Files using more and less</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-text-filters">Text Filters</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-text-summarization-tools-wc">Text Summarization Tool: wc</a></p>
</li>
</ol>
<h2 id="heading-text-editors-in-linux">Text Editors in Linux</h2>
<p>Linux provides a variety of text editors, each designed for different types of users – from beginners to advanced developers.</p>
<p>Editors like <strong>Nano</strong> are great for newcomers who need a simple, user-friendly experience in the terminal. Nano displays helpful commands at the bottom of the screen, making it easy to navigate without a steep learning curve.</p>
<p><strong>Gedit</strong>, the default editor for the GNOME desktop environment, offers a clean, graphical interface ideal for basic text editing. On the other hand, <strong>Kate</strong> caters to KDE desktop users and provides a more feature-rich experience, with multiple windows, syntax highlighting, and an integrated terminal.</p>
<p><strong>Emacs</strong> is a versatile and highly customizable editor that can be turned into an entire development environment, ideal for power users who want more than just a text editor.</p>
<p><strong>VS Code</strong> and <strong>Atom</strong> are modern, graphical editors that offer a rich set of features, including extensions, debugging tools, and Git integration, making them favorites among developers.</p>
<h3 id="heading-why-do-many-devs-prefer-vim">Why Do Many Devs Prefer Vim?</h3>
<p>Despite the wide range of text editors available in Linux, Vim stands out as the preferred choice for many users, especially those who need a lightweight, fast, and highly efficient editing environment.</p>
<p>Vim, an improved version of the classic Vi editor, is available on nearly every Linux distribution and can be used in both graphical and terminal-based environments. Its popularity stems from its exceptional speed and efficiency.</p>
<p>Vim is entirely keyboard-driven, allowing you to perform complex editing tasks quickly without the need for a mouse. This makes it incredibly useful for remote work, where you may have to rely on minimal system resources.</p>
<p>The power of Vim lies in its <strong>modal editing</strong> system, which separates the text input and command modes, which you’ll learn soon. This lets you execute precise actions with a few keystrokes. Whether you're navigating a file, searching for a string, or performing complex text manipulations, Vim enables you to do it all without taking your hands off the keyboard.</p>
<p>Because Vim (or Vi) is pre-installed on most Linux systems, it’s often the go-to option for developers and system administrators, who rely on its ubiquity and powerful features. In short, Vim’s combination of speed, versatility, and efficiency makes it the editor of choice for many Linux users looking to boost their productivity.</p>
<h2 id="heading-how-to-open-the-vim-editor">How to Open the Vim Editor</h2>
<p>Opening a file in Vim is straightforward and efficient. To start editing any file, simply use the following command in your terminal:</p>
<pre><code class="lang-bash">vim [filename]
</code></pre>
<p>Here, replace <code>[filename]</code> with the name of the file you want to open. If the file doesn't exist, Vim will create a new file with that name. Once executed, Vim will open the file and allow you to start editing right away.</p>
<p>Example:</p>
<pre><code class="lang-bash">vim data.txt <span class="hljs-comment"># A file that doesn't exist yet, so Vim creates a new file named data.txt</span>
</code></pre>
<p>When you execute the command <code>vim data.txt</code>, Vim opens a new file named <code>data.txt</code> because no file with that name exists in the current directory. Vim indicates this with the message at the bottom of the editor, which reads: <code>data.txt [New]</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736790675984/98007e6c-cc90-42be-b853-f239f4f2e819.png" alt="Creating a new file using vim command" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If the file you want to edit already exists, you can open it in Vim by using the same command. You’ll be able to see its contents and make edits as needed. If you don’t see the <code>[New]</code> message at the bottom (as shown for new files), it confirms the file already exists.</p>
<h2 id="heading-modes-in-vim">Modes in Vim</h2>
<p>Vim has several modes, but the most commonly used ones are:</p>
<ul>
<li><p><strong>Normal Mode (Command Mode)</strong> – Used for navigation and executing commands.</p>
</li>
<li><p><strong>Insert Mode</strong> – Used for typing and editing text.</p>
</li>
</ul>
<p>When you open a file in Vim, it starts in Command Mode by default. This mode allows you to navigate, execute commands, and perform various operations without directly modifying the text. To edit the text in the file, you need to switch to Insert Mode.</p>
<h3 id="heading-what-is-command-mode"><strong>What is Command Mode?</strong></h3>
<p>Command Mode is the default mode in Vim. In this mode, you can navigate through the file using the arrow keys; cut, copy, paste, or delete content; and execute commands like saving or quitting.</p>
<p>To switch to Command Mode from any other mode, press the <code>Esc</code> key.</p>
<p>Example: If you are in Insert Mode and need to return to Command Mode to save or navigate, press <code>Esc</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736791773096/298ef4f6-6fba-493d-b05c-0564290862a1.png" alt="Vim in command mode" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>In the above image, <code>"hello.txt" 1L, 1B</code> indicates that the file <code>hello.txt</code> is open and Vim is in Command Mode, the default mode when you open Vim. <code>1L</code> represents 1 line in the file (the file is empty, so there’s just one blank line), and <code>1B</code> represents 1 byte (that line’s newline character).</p>
<h3 id="heading-what-is-insert-mode"><strong>What is Insert Mode?</strong></h3>
<p>Insert Mode allows you to edit or type text in the file, similar to a traditional text editor. You can insert new lines, modify existing text, and make changes directly.</p>
<p>Press <code>i</code> while in Command Mode. This switches to Insert Mode and places the cursor at the current position, allowing you to start typing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736791793215/93d1ff39-9015-4209-8903-e29d1ccfb7c1.png" alt="Editor switched to INSERT mode" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>In the above image, “—INSERT—” indicates that the editor has been switched to Insert Mode, allowing you to type and edit text directly in the file.</p>
<p>Quick glance: when you open a file, always check the bottom of the terminal to determine your current mode. If the bottom line displays file-related information, you are in Command Mode; if it explicitly says <code>-- INSERT --</code>, you are in Insert Mode. To go from Command Mode to Insert Mode, press <code>i</code>. To go from Insert Mode back to Command Mode, press <code>Esc</code>.</p>
<h2 id="heading-basic-vim-commands">Basic Vim Commands</h2>
<p>Below are some essential commands to help you manage files efficiently in Vim.</p>
<p><strong>Note:</strong> Before using these commands, ensure you're in command mode by pressing <code>Esc</code>.</p>
<h3 id="heading-1-save-changes"><strong>1. Save Changes</strong></h3>
<p>To save the changes made to a file, use the following command:</p>
<pre><code class="lang-bash">:w
</code></pre>
<p>This writes (saves) the current file without exiting Vim.</p>
<h3 id="heading-2-save-changes-and-quit"><strong>2. Save Changes and Quit</strong></h3>
<p>If you're done editing and want to save changes and exit Vim simultaneously, use:</p>
<pre><code class="lang-bash">:wq
</code></pre>
<p>This command writes the changes and then quits the editor.</p>
<h3 id="heading-3-quit-without-saving-changes"><strong>3. Quit Without Saving Changes</strong></h3>
<p>If you wish to exit without saving any changes, you can use:</p>
<pre><code class="lang-bash">:q
</code></pre>
<p>This command will close the file if no changes have been made since the last save.</p>
<h3 id="heading-4-force-quit-without-saving-changes"><strong>4. Force Quit Without Saving Changes</strong></h3>
<p>In case you've made changes to the file but want to exit without saving them, you can force quit with:</p>
<pre><code class="lang-bash">:q!
</code></pre>
<p>The <code>!</code> overrides any unsaved changes and closes the file immediately.</p>
<h2 id="heading-cut-copy-paste-and-delete-commands-plus-others"><strong>Cut, Copy, Paste, and Delete Commands (Plus Others)</strong></h2>
<h3 id="heading-how-to-position-the-cursor-for-text-manipulation"><strong>How to Position the Cursor for Text Manipulation</strong></h3>
<p>Before using any of the commands listed below (copy, cut, paste, and delete), it's important to understand where to place the cursor.</p>
<ul>
<li><p><strong>Copy (Yank), Cut, Delete:</strong> For most operations, the cursor needs to be placed <strong>at the starting point of the text</strong> you want to act upon. This means if you're copying or cutting a word, place the cursor at the <strong>beginning</strong> of the word. If you're working with a line, the cursor should be anywhere on that line. For paragraph-based operations, position the cursor anywhere within the paragraph.</p>
</li>
<li><p><strong>Paste:</strong> The text will be pasted at the cursor's <strong>current position</strong>. So, ensure your cursor is placed where you want the copied or cut content to appear.</p>
</li>
</ul>
<p>For example, let’s say I have a file.txt with the following content:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># file.txt</span>
Hey readers,
In this blog, we're learning Vim. This file is for demonstration purposes,
where we'll explore various editing commands like cut, copy, paste, and delete. Let's dive in!

Vim is a powerful text editor that comes pre-installed on most Unix-based systems.
Mastering Vim can significantly boost your efficiency as a developer.

To start with, let's learn some basic navigation and text manipulation commands.
Stay tuned as we break down each command with examples!
</code></pre>
<p>All the commands mentioned below will use this file as reference to explain examples.</p>
<h3 id="heading-1-copy-yank"><strong>1. Copy (Yank)</strong></h3>
<p>In Vim, copying is called "yanking." Use the following commands to copy text. The cursor's position is important to ensure the correct text is copied.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Description</td><td>Example</td></tr>
</thead>
<tbody>
<tr>
<td><code>yl</code></td><td>Copies the character under the cursor</td><td>If your cursor is on the <strong>"H"</strong> in <code>Hey readers,</code> and you type <code>yl</code>, Vim will copy <strong>"H"</strong> (only one character).</td></tr>
<tr>
<td><code>yw</code></td><td>Copies a word (cursor must be at the beginning of the word)</td><td>If your cursor is on the left of <strong>"blog,"</strong> in the sentence, Typing <code>yw</code> will copy <strong>"blog,"</strong> (including the comma).</td></tr>
<tr>
<td><code>yy</code></td><td>Copies the entire line (cursor can be anywhere on the line)</td><td>If your cursor is at any position on line 1 <strong>Hey readers,</strong> Typing <code>yy</code> will copy the entire line</td></tr>
<tr>
<td><code>2yy</code></td><td>Copies two lines, including the current cursor line (cursor can be anywhere on the first line)</td><td>If your cursor is at any position on line 1 <strong>Hey readers,</strong> Typing <code>2yy</code> will copy the entire line along with the next line.</td></tr>
<tr>
<td><code>y{</code></td><td>Copies from the cursor position back to the start of the paragraph</td><td>If your cursor is anywhere inside paragraph 2 (<code>Vim is a powerful text editor…</code>), typing <code>y{</code> will copy everything from the start of the paragraph up to the cursor position.</td></tr>
<tr>
<td><code>y}</code></td><td>Copies from the cursor position to the end of the paragraph</td><td>If your cursor is anywhere inside paragraph 2 (<code>Vim is a powerful text editor…</code>), typing <code>y}</code> will copy everything from the cursor position down to the end of the paragraph.</td></tr>
<tr>
<td><code>yG</code></td><td>Copies everything from the current line to the end of the file (cursor must be at the line where you want the copy operation to start)</td><td>If your cursor is at the beginning of this line “<strong>Vim is a powerful text editor…”,</strong> typing yG will copy this line and everything below it until the end of the file</td></tr>
</tbody>
</table>
</div><h3 id="heading-2-cut-change"><strong>2. Cut (Change)</strong></h3>
<p>Cutting in Vim is done with the "change" commands: a change deletes the selected text and switches to Insert Mode so you can type a replacement. Just like with copying, the cursor's position is important when using cut commands.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Description</td><td>Example</td></tr>
</thead>
<tbody>
<tr>
<td><code>cl</code></td><td>Cuts the character under the cursor</td><td>If your cursor is on the <strong>"H"</strong> in <code>"Hey readers,"</code> and you type <code>cl</code>, Vim will delete <strong>"H"</strong> and switch to insert mode, allowing you to type a replacement.</td></tr>
<tr>
<td><code>cw</code></td><td>Cuts a word (cursor must be at the beginning of the word)</td><td>If your cursor is on the <strong>"blog,"</strong> in the sentence, typing <code>cw</code> will delete <strong>"blog,"</strong> (including the comma) and switch to insert mode, allowing you to type a replacement.</td></tr>
<tr>
<td><code>caw</code></td><td>Cuts a word along with its surrounding whitespace (cursor can be anywhere in the word)</td><td>If your cursor is anywhere inside <strong>"blog,"</strong>, typing <code>caw</code> will delete <strong>"blog,"</strong> along with the adjacent whitespace and switch to insert mode.</td></tr>
<tr>
<td><code>cc</code></td><td>Cuts the entire line (cursor can be anywhere on the line)</td><td>If your cursor is at any position on line 1 (<code>Hey readers,</code>), typing <code>cc</code> will delete the whole line and switch to insert mode.</td></tr>
<tr>
<td><code>2cc</code></td><td>Cuts two lines, including the current cursor line (cursor can be anywhere on the first line)</td><td>If your cursor is at any position on line 1 (<code>Hey readers,</code>), typing <code>2cc</code> will delete this line along with the next line and switch to insert mode.</td></tr>
<tr>
<td><code>c{</code></td><td>Cuts from the cursor back to the beginning of the current paragraph</td><td>If your cursor is anywhere inside paragraph 2 (<code>Vim is a powerful text editor…</code>), typing <code>c{</code> will delete everything from the cursor position to the start of the paragraph and switch to insert mode.</td></tr>
<tr>
<td><code>c}</code></td><td>Cuts from the cursor forward to the end of the current paragraph</td><td>If your cursor is anywhere inside paragraph 2 (<code>Vim is a powerful text editor…</code>), typing <code>c}</code> will delete everything from the cursor position to the end of the paragraph and switch to insert mode.</td></tr>
<tr>
<td><code>cG</code></td><td>Cuts everything from the current line to the end of the file (cursor must be at the line where you want the cut operation to start)</td><td>If your cursor is at the beginning of this line (<code>Vim is a powerful text editor…</code>), typing <code>cG</code> will delete this line and everything below it until the end of the file, then switch to insert mode.</td></tr>
</tbody>
</table>
</div><h3 id="heading-3-paste"><strong>3. Paste</strong></h3>
<p>To paste the copied or cut text, use the following commands. The text is pasted relative to the <strong>current cursor position</strong> (for whole-line yanks, it lands on the line below or above the current line).</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>p</code> (Lowercase)</td><td>Pastes the copied or cut text <strong>after</strong> the cursor</td></tr>
<tr>
<td><code>P</code> (Uppercase)</td><td>Pastes the copied or cut text <strong>before</strong> the cursor</td></tr>
</tbody>
</table>
</div><h3 id="heading-4-delete"><strong>4. Delete</strong></h3>
<p>Deleting text in Vim allows you to remove unwanted text while remaining in command mode. The cursor must be positioned correctly to delete the intended text. Once deleted, you can still paste the deleted text to a new location.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Description</td><td>Example</td></tr>
</thead>
<tbody>
<tr>
<td><code>dl</code></td><td>Deletes the letter under the cursor (place the cursor on the letter you want to delete)</td><td>If your cursor is on <strong>"H"</strong> in <code>"Hey readers,"</code> and you type <code>dl</code>, Vim will delete <strong>"H"</strong>.</td></tr>
<tr>
<td><code>dw</code></td><td>Deletes from the cursor to the start of the next word (cursor must be at the beginning of the word to delete the whole word)</td><td>If your cursor is on the <strong>"b"</strong> of <strong>"blog,"</strong> in the sentence, typing <code>dw</code> will delete <strong>"blog"</strong> (the comma stays, since Vim treats punctuation as a separate word).</td></tr>
<tr>
<td><code>daw</code></td><td>Deletes a word along with its surrounding whitespace (cursor can be anywhere inside the word)</td><td>If your cursor is anywhere inside <strong>"blog,"</strong>, typing <code>daw</code> will delete the word together with its adjacent whitespace (trailing if present, otherwise leading).</td></tr>
<tr>
<td><code>dd</code></td><td>Deletes the entire line (cursor can be anywhere on the line)</td><td>If your cursor is at any position on line 1 (<code>Hey readers,</code>), typing <code>dd</code> will delete the whole line.</td></tr>
<tr>
<td><code>2dd</code></td><td>Deletes two lines, including the current cursor line (cursor can be anywhere on the first line)</td><td>If your cursor is at any position on line 1 (<code>Hey readers,</code>), typing <code>2dd</code> will delete this line along with the next line.</td></tr>
<tr>
<td><code>d{</code></td><td>Deletes from the cursor back to the beginning of the current paragraph</td><td>If your cursor is anywhere inside paragraph 2 (<code>Vim is a powerful text editor…</code>), typing <code>d{</code> will delete everything from the cursor position to the start of the paragraph.</td></tr>
<tr>
<td><code>d}</code></td><td>Deletes from the cursor forward to the end of the current paragraph</td><td>If your cursor is anywhere inside paragraph 2 (<code>Vim is a powerful text editor…</code>), typing <code>d}</code> will delete everything from the cursor position to the end of the paragraph.</td></tr>
<tr>
<td><code>dG</code></td><td>Deletes everything from the current line to the end of the file (cursor must be at the line where you want the delete operation to start)</td><td>If your cursor is at the beginning of this line (<code>Vim is a powerful text editor…</code>), typing <code>dG</code> will delete this line and everything below it until the end of the file.</td></tr>
</tbody>
</table>
</div><h3 id="heading-5-other-useful-commands"><strong>5. Other Useful Commands</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Commands</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>gg</code></td><td>Moves the cursor to the first line of the file</td></tr>
<tr>
<td><code>G</code></td><td>Moves the cursor to the last line of the file</td></tr>
<tr>
<td><code>:se nu</code></td><td>Displays line numbers in the file</td></tr>
<tr>
<td><code>:se nonu</code></td><td>Hides line numbers in the file</td></tr>
<tr>
<td><code>:u</code></td><td>Undoes the last action</td></tr>
<tr>
<td><code>:10</code></td><td>Jumps to line 10 (for example)</td></tr>
</tbody>
</table>
</div><p>Note that <strong>delete</strong>, <strong>cut (change)</strong>, and <strong>copy (yank)</strong> all store the text in Vim’s unnamed register, not in the system clipboard: you can paste it elsewhere within Vim with <code>p</code> or <code>P</code>, but not in other applications. To reach the system clipboard, use the <code>"+</code> register (for example, <code>"+yy</code> to copy a line), which requires a Vim build with clipboard support.</p>
<h2 id="heading-search-and-replace-commands"><strong>Search and Replace Commands</strong></h2>
<p>Vim provides powerful search and replace functionality that allows you to find specific words or patterns and replace them efficiently. Understanding how to search and replace text is key to improving your productivity when editing large files.</p>
<p>Below is a breakdown of the various search and replace commands in Vim.</p>
<h3 id="heading-search-commands"><strong>Search Commands</strong></h3>
<ul>
<li><p><strong>Search Forward</strong> (<code>/</code>): When you want to search for a word or pattern below the cursor, use the <code>/</code> command. This will search forward in the file.</p>
</li>
<li><p><strong>Search Backward</strong> (<code>?</code>): Similarly, if you want to search for a word or pattern above the cursor, use the <code>?</code> command. This will search backward in the file.</p>
</li>
</ul>
<p>After performing a search, you can navigate through the search results:</p>
<ul>
<li><p><code>n</code>: Go to the next match in the same direction (forward if <code>/</code>, backward if <code>?</code>).</p>
</li>
<li><p><code>N</code>: Go to the previous match in the opposite direction (backward if <code>/</code>, forward if <code>?</code>).</p>
</li>
</ul>
<h3 id="heading-replace-commands"><strong>Replace Commands</strong></h3>
<p>Once you've located the word or pattern you want to replace, Vim provides several commands for replacing text.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command (In command mode)</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>/search_word</code></td><td>Searches for the given word and moves the cursor to its first occurrence below the current cursor position.</td></tr>
<tr>
<td><code>:s/search_word/replace_word</code></td><td>Replaces the first occurrence of <code>search_word</code> with <code>replace_word</code> in the current line.</td></tr>
<tr>
<td><code>:s/search_word/replace_word/g</code></td><td>Replaces all occurrences of <code>search_word</code> with <code>replace_word</code> in the current line.</td></tr>
<tr>
<td><code>:%s/search_word/replace_word</code></td><td>Replaces the first occurrence of <code>search_word</code> with <code>replace_word</code> on each line of the file.</td></tr>
<tr>
<td><code>:%s/search_word/replace_word/g</code></td><td>Replaces all occurrences of <code>search_word</code> with <code>replace_word</code> in the entire file.</td></tr>
</tbody>
</table>
</div><p>Here’s an example:</p>
<p>In Vim, the <code>/Tanishka</code> pattern searches for an exact, case-sensitive match of the word "Tanishka."</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736795075490/1ecbc6f4-65ef-46a9-841d-dbb3251f8ec7.png" alt="1ecbc6f4-65ef-46a9-841d-dbb3251f8ec7" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>To replace "Tanishka" with another word, like "Linux," you can use the substitution command like this: <code>:s/Tanishka/Linux</code>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736794982127/c9dffdba-c250-4601-8e3b-950d875908f8.png" alt="c9dffdba-c250-4601-8e3b-950d875908f8" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>By default, this command replaces only the first occurrence of "Tanishka" in the line where the cursor is located.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736794997175/9831423d-4aab-43a1-9b06-408fb5dd4828.png" alt="9831423d-4aab-43a1-9b06-408fb5dd4828" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If you want to replace all occurrences of "Tanishka" in the same line, you need to add the <code>g</code> (global) flag after the replacement string like this: <code>:s/Tanishka/Linux/g</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736795009744/c53d0de2-b7b4-4154-baaa-3d28fb3c29db.png" alt="c53d0de2-b7b4-4154-baaa-3d28fb3c29db" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This ensures that every instance of "Tanishka" in the current line is replaced with "Linux."</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736795017539/dac817e5-8130-44d4-8f09-d887e42cc859.png" alt="dac817e5-8130-44d4-8f09-d887e42cc859" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Similarly, the <code>%</code> symbol is used to specify the <strong>entire file</strong> when performing a substitution. Here's how it works in combination with the substitution command:</p>
<ol>
<li><p><strong>Replace the first occurrence in each line of the file:</strong></p>
<ul>
<li><code>:%s/Tanishka/Linux</code>: This command replaces only the first occurrence of "Tanishka" in each line of the file.</li>
</ul>
</li>
<li><p><strong>Replace all occurrences in the entire file:</strong></p>
<ul>
<li><code>:%s/Tanishka/Linux/g</code>: The addition of the <code>g</code> (global) flag ensures that all occurrences of "Tanishka" in every line of the file are replaced with "Linux."</li>
</ul>
</li>
</ol>
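<p>Vim’s <code>:s</code> command shares its <code>s/pattern/replacement/flags</code> syntax with the stream editor <code>sed</code>, which is handy when you want to apply the same kind of substitution non-interactively. Here’s a rough sketch using a throwaway file (the file name and contents are invented for the demo):</p>

```shell
# Create a sample file with repeated occurrences of the pattern
printf 'Tanishka uses Vim\nTanishka Tanishka\n' > demo.txt

# Like :%s/Tanishka/Linux  -- replace the first occurrence on each line
sed 's/Tanishka/Linux/' demo.txt

# Like :%s/Tanishka/Linux/g  -- replace every occurrence on each line
sed 's/Tanishka/Linux/g' demo.txt
```

<p>Note that <code>sed</code> prints the result without touching <code>demo.txt</code>, whereas Vim’s <code>:s</code> edits the buffer in place.</p>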
<h2 id="heading-how-to-read-files-using-more-and-less"><strong>How to Read Files using</strong> <code>more</code> <strong>and</strong> <code>less</code></h2>
<h3 id="heading-the-cat-command">The <code>cat</code> command</h3>
<p>The <code>cat</code> command (short for “concatenate”) is commonly used to print a file’s contents to the terminal.</p>
<p>For example:</p>
<pre><code class="lang-bash">cat file.txt <span class="hljs-comment"># Displays content of file</span>
</code></pre>
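<p><code>cat</code> also has a few handy flags; for instance, <code>-n</code> numbers each output line. A quick sketch (file name and contents invented for the demo):</p>

```shell
# Create a small sample file
printf 'first line\nsecond line\n' > notes.txt

cat notes.txt      # prints the file as-is
cat -n notes.txt   # prints the file with each line prefixed by its number
```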
<p>While the <code>cat</code> command is a straightforward tool for viewing file contents, its simplicity often falls short when working with large files or when precise navigation is required. That’s where the <code>more</code> and <code>less</code> commands come into play, offering enhanced functionality for viewing and navigating text efficiently.</p>
<h3 id="heading-the-more-command">The <code>more</code> Command</h3>
<p>The <code>more</code> command allows you to view files one screen at a time, making it a significant upgrade from <code>cat</code> when dealing with large files. But it comes with limitations in terms of backward navigation and advanced features.</p>
<p>Here’s the syntax for <code>more</code>:</p>
<pre><code class="lang-bash">more [FILENAME]
</code></pre>
<p>And here’s an example:</p>
<pre><code class="lang-bash">more file.txt <span class="hljs-comment"># Displays content of file.txt one page at a time</span>
</code></pre>
<p>Keys used while viewing:</p>
<ol>
<li><p>Spacebar: Moves forward by one page</p>
</li>
<li><p>Enter: Moves forward by one line</p>
</li>
<li><p>b: Moves back by one page</p>
</li>
<li><p>q: Quits and returns to the shell</p>
</li>
</ol>
<h3 id="heading-the-less-command">The <code>less</code> Command</h3>
<p>The <code>less</code> command is often considered a superior alternative to <code>more</code> due to its advanced navigation capabilities and flexibility. Unlike <code>more</code>, <code>less</code> allows both forward and backward navigation, making it ideal for reviewing large files or logs.</p>
<p>Here’s its syntax:</p>
<pre><code class="lang-bash">less [FILENAME]
</code></pre>
<p>And here’s an example:</p>
<pre><code class="lang-bash">less file.txt <span class="hljs-comment"># Displays content of file.txt one page at a time</span>
</code></pre>
<p>Keys used while viewing:</p>
<ol>
<li><p>Spacebar: Moves forward by one page</p>
</li>
<li><p>Enter: Moves forward by one line</p>
</li>
<li><p>b: Moves back by one page</p>
</li>
<li><p>Up/Down arrow key: Moves up or down by one line</p>
</li>
<li><p>q: Quit and exit less</p>
</li>
</ol>
<p>The key difference between <code>more</code> and <code>less</code> is that <code>less</code> allows bidirectional navigation, so it’s typically the more convenient choice.</p>
<h2 id="heading-text-filters"><strong>Text Filters</strong></h2>
<p>A <strong>text filter</strong> in Linux is a command-line utility that processes text data by modifying, extracting, or formatting it before outputting the result.</p>
<h3 id="heading-horizontal-filters">Horizontal filters</h3>
<p>Horizontal filtering focuses on extracting, manipulating, or displaying specific lines of a file or command output. Common tools include <code>head</code>, <code>tail</code>, and <code>grep</code>.</p>
<ol>
<li><p><code>head</code>: The head command displays the first few lines of a file. By default, it shows the first 10 lines. Here’s its syntax:</p>
<pre><code class="lang-bash"> head [OPTIONS] [FILENAME]
</code></pre>
<p> And here’s an example of how to use it:</p>
<pre><code class="lang-bash"> head file.txt <span class="hljs-comment"># Displays first ten lines from file.txt</span>
 head -n 5 file.txt <span class="hljs-comment"># Displays first five lines from file.txt</span>
</code></pre>
</li>
<li><p><code>tail</code>: The tail command displays the last few lines of a file. By default, it shows the last 10 lines. Here’s its syntax:</p>
<pre><code class="lang-bash"> tail [OPTIONS] [FILENAME]
</code></pre>
<p> And here’s an example:</p>
<pre><code class="lang-bash"> tail file.txt <span class="hljs-comment"># Displays last ten lines from file.txt</span>
 tail -n 5 file.txt <span class="hljs-comment"># Displays last five lines from file.txt</span>
</code></pre>
</li>
<li><p><code>grep</code>: The grep command searches for patterns within a file or input. It filters out lines that match a given pattern. Here’s its syntax:</p>
<pre><code class="lang-bash"> grep [OPTIONS] [PATTERN] [FILENAME]
</code></pre>
<p> Options:</p>
<ul>
<li><p><code>-i</code>: Case-insensitive search.</p>
</li>
<li><p><code>-v</code>: Invert the match (exclude matching lines).</p>
</li>
<li><p><code>-n</code>: Show line numbers of matches.</p>
</li>
</ul>
</li>
</ol>
<p>Example:</p>
<pre><code class="lang-bash">grep Tanishka data.txt <span class="hljs-comment"># Displays lines that contain 'Tanishka'</span>
grep -i Tanishka data.txt <span class="hljs-comment"># Displays lines that contain 'Tanishka', regardless of case</span>
grep -v Tanishka data.txt <span class="hljs-comment"># Displays lines that do not contain 'Tanishka'</span>
grep -n Tanishka data.txt <span class="hljs-comment"># Displays lines that contain 'Tanishka', along with their line numbers</span>
</code></pre>
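<p>These filters also compose naturally in a pipeline. A small self-contained demonstration (file name and contents invented for the demo):</p>

```shell
# Build a 20-line sample file: "line 1" .. "line 20"
seq 1 20 | sed 's/^/line /' > sample.txt

head -n 3 sample.txt               # first three lines
tail -n 3 sample.txt               # last three lines
grep -n 'line 1' sample.txt        # matching lines with their line numbers
head -n 10 sample.txt | tail -n 1  # filters chain: the tenth line
```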
<h3 id="heading-vertical-filters">Vertical Filters</h3>
<ol>
<li><p><code>cut</code>: The cut command displays selected parts of lines from each file based on delimiters, byte positions, or character fields. Here’s its syntax:</p>
<pre><code class="lang-bash"> cut [OPTION] [FILENAME]
</code></pre>
<p> It also comes with various options:</p>
<ul>
<li><p><code>-c</code>: Extract specific characters.</p>
</li>
<li><p><code>-b</code>: Extract specific bytes.</p>
</li>
<li><p><code>-d</code>: Specify a custom delimiter (default is tab).</p>
<ul>
<li><code>cut -d ":" -f 2 file.txt</code> → Second field separated by <code>:</code>.</li>
</ul>
</li>
<li><p><code>-f</code>: Extract specific fields.</p>
<ul>
<li><code>cut -d "," -f 1,3 file.csv</code> → Fields 1 and 3 from a CSV.</li>
</ul>
</li>
</ul>
</li>
</ol>
<p>Example:</p>
<pre><code class="lang-bash">cut -c 1-10 Sample.txt <span class="hljs-comment"># Displays characters from position 1 to 10</span>
cut -c 5 Sample.txt <span class="hljs-comment"># Displays character at position 5</span>
cut -c 3,5 Sample.txt <span class="hljs-comment"># Displays characters at positions 3 and 5 only</span>
cut -d <span class="hljs-string">" "</span> -f 1 Sample.txt <span class="hljs-comment"># Displays first field separated by a space</span>
cut -d <span class="hljs-string">" "</span> -f 2 Sample.txt <span class="hljs-comment"># Displays second field separated by a space</span>
cut -d <span class="hljs-string">" "</span> -f 3 Sample.txt <span class="hljs-comment"># Displays third field separated by a space</span>
cut -d <span class="hljs-string">" "</span> -f 1-3 Sample.txt <span class="hljs-comment"># Displays first through third fields separated by a space</span>
cut -d <span class="hljs-string">" "</span> -f 1,3 Sample.txt <span class="hljs-comment"># Displays first and third fields separated by a space</span>
cut -d <span class="hljs-string">":"</span> -f 5 /etc/passwd <span class="hljs-comment"># Displays fifth field separated by : in /etc/passwd</span>
</code></pre>
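<p>You can see <code>cut</code>’s field selection in action with a single colon-delimited record in the style of <code>/etc/passwd</code> (the sample data below is invented for the demo):</p>

```shell
# One /etc/passwd-style record: fields separated by ":"
printf 'tanishka:x:1001:1001:Tanishka:/home/tanishka:/bin/bash\n' > users.txt

cut -d ':' -f 1 users.txt    # first field: the username
cut -d ':' -f 7 users.txt    # seventh field: the login shell
cut -c 1-8 users.txt         # first eight characters of the line
```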
<h2 id="heading-text-summarization-tool-wc"><strong>Text Summarization Tool:</strong> <code>wc</code></h2>
<p>The <code>wc</code> (word count) command is used to display the number of lines, words, characters, or bytes in a file or input. It is a simple yet powerful utility you can use to summarize text content.</p>
<p>Here’s its syntax:</p>
<pre><code class="lang-bash">wc [OPTION] [FILENAME]
</code></pre>
<p>And here are its options:</p>
<ul>
<li><p><code>-l</code>: Displays the number of lines.</p>
</li>
<li><p><code>-w</code>: Displays the number of words.</p>
</li>
<li><p><code>-c</code>: Displays the number of bytes.</p>
</li>
<li><p><code>-m</code>: Displays the number of characters (useful for multibyte characters).</p>
</li>
<li><p><code>-L</code>: Displays the length of the longest line.</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-bash">wc Sample.txt <span class="hljs-comment"># Displays line count, word count, and byte count in Sample.txt</span>
wc -w Sample.txt <span class="hljs-comment"># Displays number of words in Sample.txt</span>
wc -l Sample.txt <span class="hljs-comment"># Displays number of lines in Sample.txt</span>
wc -L Sample.txt <span class="hljs-comment"># Displays length of the longest line in Sample.txt</span>

wc -c Sample.txt <span class="hljs-comment"># Displays number of bytes in Sample.txt (actual storage size)</span>
wc -m Sample.txt <span class="hljs-comment"># Displays number of characters in Sample.txt (regardless of encoding)</span>

<span class="hljs-comment"># Suppose above.txt contains: ABCD😄</span>
wc -c above.txt <span class="hljs-comment"># "ABCD" = 4 bytes + "😄" = 4 bytes (in UTF-8), so 8 bytes</span>
wc -m above.txt <span class="hljs-comment"># "ABCD" = 4 characters + "😄" = 1 character, so 5 characters</span>
</code></pre>
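<p>You can verify the byte-versus-character distinction yourself: in UTF-8 the emoji occupies four bytes but counts as a single character (note that <code>wc -m</code> needs a UTF-8 locale to count multibyte characters correctly):</p>

```shell
# Write "ABCD😄" with no trailing newline
printf 'ABCD😄' > above.txt

wc -c < above.txt   # bytes: 4 (ABCD) + 4 (😄 in UTF-8) = 8
wc -m < above.txt   # characters: 4 (ABCD) + 1 (😄) = 5, in a UTF-8 locale
```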
<h2 id="heading-final-words">Final Words</h2>
<p>In this article, we covered the basics of using Vim, a powerful and flexible text editor. We started with how to open a file in Vim, then explored its modes, how to navigate through files, edit text, and use features like search and replace to save time. We also looked at file viewers, text filters, and the <code>wc</code> summarization tool.</p>
<p>If you're new to Linux and want to build a strong foundation, check <a target="_blank" href="https://www.freecodecamp.org/news/guide-to-rhel-linux-basics/">my previous article</a> where I cover the basics of Linux, including essential commands and tips for beginners. It’s a perfect starting point to complement what you’ve learned about Vim here!</p>
<p>Keep practising these commands, and soon they'll become second nature to you. Mastery comes with repetition, so continue experimenting and applying these fundamentals in real-world scenarios.</p>
<p>Stay tuned for more articles. Get ready to take your RHEL skills to the next level.</p>
<p><a target="_blank" href="https://linktr.ee/tanishkamakode">Let’s connect!</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Create a Basic CI/CD Pipeline with Webhooks on Linux ]]>
                </title>
                <description>
                    <![CDATA[ In the fast-paced world of software development, delivering high-quality applications quickly and reliably is crucial. This is where CI/CD (Continuous Integration and Continuous Delivery/Deployment) comes into play. CI/CD is a set of practices and to... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/create-a-basic-cicd-pipeline-with-webhooks-on-linux/</link>
                <guid isPermaLink="false">67995e567a54c877fce42276</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ linux for beginners ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Python ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Python 3 ]]>
                    </category>
                
                    <category>
                        <![CDATA[ python beginner ]]>
                    </category>
                
                    <category>
                        <![CDATA[ ci-cd ]]>
                    </category>
                
                    <category>
                        <![CDATA[ CI/CD ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Juan P. Romano ]]>
                </dc:creator>
                <pubDate>Tue, 28 Jan 2025 22:46:46 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737640144719/9035597c-0a69-4146-93cc-8bd659384169.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In the fast-paced world of software development, delivering high-quality applications quickly and reliably is crucial. This is where <strong>CI/CD</strong> (Continuous Integration and Continuous Delivery/Deployment) comes into play.</p>
<p>CI/CD is a set of practices and tools designed to automate and streamline the process of integrating code changes, testing them, and deploying them to production. By adopting CI/CD, your team can reduce manual errors, speed up release cycles, and ensure that your code is always in a deployable state.</p>
<p>In this tutorial, we’ll focus on a beginner-friendly approach to setting up a basic CI/CD pipeline using Bitbucket, a Linux server, and Python with Flask. Specifically, we’ll create an automated process that pulls the latest changes from a Bitbucket repository to your Linux server whenever there’s a push or merge to a specific branch.</p>
<p>This process will be powered by Bitbucket webhooks and a simple Flask-based Python server that listens for incoming webhook events and triggers the deployment.</p>
<p>It’s important to note that CI/CD is a vast and complex field, and this tutorial is designed to provide a foundational understanding rather than to be an exhaustive guide.</p>
<p>We’ll cover the basics of setting up a CI/CD pipeline using tools that are accessible to beginners. Just keep in mind that real-world CI/CD systems often involve more advanced tools and configurations, such as containerization, orchestration, and multi-stage testing environments.</p>
<p>By the end of this tutorial, you’ll have a working example of how to automate deployments using Bitbucket, Linux, and Python, which you can build upon as you grow more comfortable with CI/CD concepts.</p>
<h3 id="heading-table-of-contents">Table of Contents:</h3>
<ol>
<li><p><a class="post-section-overview" href="#heading-why-is-cicd-important">Why is CI/CD Important?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-1-set-up-a-webhook-in-bitbucket">Step 1: Set Up a Webhook in Bitbucket</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-set-up-the-flask-listener-on-your-linux-server">Step 2: Set Up the Flask Listener on Your Linux Server</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-3-expose-the-flask-app-optional">Step 3: Expose the Flask App (Optional)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-4-test-the-setup">Step 4: Test the Setup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-5-security-considerations">Step 5: Security Considerations</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-wrapping-up">Wrapping Up</a></p>
</li>
</ol>
<h2 id="heading-why-is-cicd-important">Why is CI/CD Important?</h2>
<p>CI/CD has become a cornerstone of modern software development for several reasons. First and foremost, it accelerates the development process. By automating repetitive tasks like testing and deployment, developers can focus more on writing code and less on manual processes. This leads to faster delivery of new features and bug fixes, which is especially important in competitive markets where speed can be a differentiator.</p>
<p>Another key benefit of CI/CD is reduced errors and improved reliability. Automated testing ensures that every code change is rigorously checked for issues before it’s integrated into the main codebase. This minimizes the risk of introducing bugs that could disrupt the application or require costly fixes later. Automated deployment pipelines also reduce the likelihood of human error during the release process, ensuring that deployments are consistent and predictable.</p>
<p>CI/CD also fosters better collaboration among team members. In traditional development workflows, integrating code changes from multiple developers can be a time-consuming and error-prone process. With CI/CD, code is integrated and tested frequently, often multiple times a day. This means that conflicts are detected and resolved early, and the codebase remains in a stable state. As a result, teams can work more efficiently and with greater confidence, even when multiple contributors are working on different parts of the project simultaneously.</p>
<p>Finally, CI/CD supports continuous improvement and innovation. By automating the deployment process, teams can release updates to production more frequently and with less risk. This enables them to gather feedback from users faster and iterate on their products more effectively.</p>
<h3 id="heading-what-well-cover-in-this-tutorial">What We’ll Cover in This Tutorial</h3>
<p>In this tutorial, we’ll walk through the process of setting up a simple CI/CD pipeline that automates the deployment of code changes from a Bitbucket repository to a Linux server. Here’s what you’ll learn:</p>
<ol>
<li><p>How to configure a Bitbucket repository to send webhook notifications whenever there’s a push or merge to a specific branch.</p>
</li>
<li><p>How to set up a Flask-based Python server on your Linux server to listen for incoming webhook events.</p>
</li>
<li><p>How to write a script that pulls the latest changes from the repository and deploys them to the server.</p>
</li>
<li><p>How to test and troubleshoot your automated deployment process.</p>
</li>
</ol>
<p>By the end of this tutorial, you’ll have a working example of a basic CI/CD pipeline that you can customize and expand as needed. Let’s get started!</p>
<h2 id="heading-step-1-set-up-a-webhook-in-bitbucket"><strong>Step 1: Set Up a Webhook in Bitbucket</strong></h2>
<p>Before starting with the setup, let’s briefly explain what a <strong>webhook</strong> is and how it fits into our CI/CD process.</p>
<p>A webhook is a mechanism that allows one system to notify another system about an event in real-time. In the context of Bitbucket, a webhook can be configured to send an HTTP request (often a POST request with payload data) to a specified URL whenever a specific event occurs in your repository, such as a push to a branch or a pull request merge.</p>
<p>In our case, the webhook will notify our Flask-based Python server (running on your Linux server) whenever there’s a push or merge to a specific branch. This notification will trigger a script on the server to pull the latest changes from the repository and deploy them automatically. Essentially, the webhook acts as the bridge between Bitbucket and your server, enabling seamless automation of the deployment process.</p>
<p>Now that you understand the role of a webhook, let’s set one up in Bitbucket:</p>
<ol>
<li><p>Log in to Bitbucket and navigate to your repository.</p>
</li>
<li><p>On the left-hand sidebar, click on <strong>Settings</strong>.</p>
</li>
<li><p>Under the <strong>Workflow</strong> section, find and click on <strong>Webhooks</strong>.</p>
</li>
<li><p>Click the <strong>Add webhook</strong> button.</p>
</li>
<li><p>Enter a name for your webhook (for example, "Automatic Pull").</p>
</li>
<li><p>In the <strong>URL</strong> field, provide the URL to your server where the webhook will send the request. If you’re running a Flask app locally, this would be something like <code>http://your-server-ip/pull-repo</code>. (For production environments, it’s highly recommended to use HTTPS to secure the communication between Bitbucket and your server.)</p>
</li>
<li><p>In the <strong>Triggers</strong> section, choose the events you want to listen to. For this example, we will select <strong>Push</strong> (and optionally, <strong>Pull Request Merged</strong> if you want to deploy after merges, too).</p>
</li>
<li><p>Save the webhook with a self-explanatory name so it’s easy to identify later.</p>
</li>
</ol>
<p>Once the webhook is set up, Bitbucket will send a POST request to the specified URL every time the selected event occurs. In the next steps, we’ll set up a Flask server to handle these incoming requests and trigger the deployment process.</p>
<p>Here is what you should see when you set up the Bitbucket webhook:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738092826221/e0d96fd3-d843-4064-a08d-4de95b985800.png" alt="Bitbucket screen showing the creation of a webhook, which lets your server pull the modifications when you push or merge in your repository." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-step-2-set-up-the-flask-listener-on-your-linux-server"><strong>Step 2: Set Up the Flask Listener on Your Linux Server</strong></h2>
<p>In the next step, you’ll set up a simple web server on your Linux machine that will listen for the webhook from Bitbucket. When it receives the notification, it will execute a <code>git pull</code> or a force pull (in case of local changes) to update the repository.</p>
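<p>Before wiring this into Flask, it helps to see the force pull on its own. The sketch below builds a throwaway “remote” and two clones, pushes a change from one, and then runs the same <code>fetch</code> + <code>reset --hard</code> sequence the listener will execute (all paths, names, and the <code>test</code> branch are placeholders for the demo):</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A bare repository standing in for Bitbucket
git init -q --bare remote.git

# "work" plays the role of the deployment checkout on the server
git clone -q remote.git work 2>/dev/null
cd work
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git branch -M test                 # the branch the webhook watches
git push -q origin test
git -C "$tmp/remote.git" symbolic-ref HEAD refs/heads/test  # make "test" the default branch

# A second clone pushes a new version, as a developer would
cd "$tmp" && git clone -q remote.git other 2>/dev/null && cd other
git checkout -q test
echo 'v2' > app.txt
git add app.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'update'
git push -q origin test

# The deployment checkout force-pulls: exactly the two commands the listener runs
cd "$tmp/work"
git fetch -q
git reset -q --hard origin/test
cat app.txt
```

<p>The <code>reset --hard origin/test</code> step discards any local edits in the checkout, which is why the article calls it a force pull: the server copy always ends up identical to the remote branch.</p>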
<h3 id="heading-install-flask"><strong>Install Flask:</strong></h3>
<p>To create the Flask application, first install Flask by running:</p>
<pre><code class="lang-bash">pip install flask
</code></pre>
<h3 id="heading-create-the-flask-app"><strong>Create the Flask App:</strong></h3>
<p>Create a new Python script (for example, <code>app_repo_pull.py</code>) on your server and add the following code:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask
<span class="hljs-keyword">import</span> subprocess

app = Flask(__name__)

<span class="hljs-meta">@app.route('/pull-repo', methods=['POST'])</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">pull_repo</span>():</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-comment"># Fetch the latest changes from the remote repository</span>
        subprocess.run([<span class="hljs-string">"git"</span>, <span class="hljs-string">"-C"</span>, <span class="hljs-string">"/path/to/your/repository"</span>, <span class="hljs-string">"fetch"</span>], check=<span class="hljs-literal">True</span>)
        <span class="hljs-comment"># Force reset the local branch to match the remote 'test' branch</span>
        subprocess.run([<span class="hljs-string">"git"</span>, <span class="hljs-string">"-C"</span>, <span class="hljs-string">"/path/to/your/repository"</span>, <span class="hljs-string">"reset"</span>, <span class="hljs-string">"--hard"</span>, <span class="hljs-string">"origin/test"</span>], check=<span class="hljs-literal">True</span>)  <span class="hljs-comment"># Replace 'test' with your branch name</span>
        <span class="hljs-keyword">return</span> <span class="hljs-string">"Force pull successful"</span>, <span class="hljs-number">200</span>
    <span class="hljs-keyword">except</span> subprocess.CalledProcessError:
        <span class="hljs-keyword">return</span> <span class="hljs-string">"Failed to force pull the repository"</span>, <span class="hljs-number">500</span>

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    app.run(host=<span class="hljs-string">'0.0.0.0'</span>, port=<span class="hljs-number">5000</span>)
</code></pre>
<p>Here’s what this code does:</p>
<ul>
<li><p><code>subprocess.run(["git", "-C", "/path/to/your/repository", "fetch"])</code>: This command fetches the latest changes from the remote repository without affecting the local working directory.</p>
</li>
<li><p><code>subprocess.run(["git", "-C", "/path/to/your/repository", "reset", "--hard", "origin/test"])</code>: This command performs a hard reset, forcing the local repository to match the remote <code>test</code> branch. Replace <code>test</code> with the name of your branch.</p>
</li>
</ul>
<p>Make sure to replace <code>/path/to/your/repository</code> with the actual path to your local Git repository.</p>
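<p>Note that the listener above force-pulls on every delivery, no matter which branch was pushed. If you want to act only on pushes to a specific branch, you can inspect the JSON payload that Bitbucket sends. Here’s a minimal sketch — the <code>push.changes[].new.name</code> structure is an assumption based on Bitbucket Cloud’s documented push payload, so verify it against your own webhook deliveries:</p>
<pre><code class="lang-python"># Sketch: extract the branch names contained in a Bitbucket Cloud push payload.
# The "push" -> "changes" -> "new" -> "name" path is an assumption based on
# Bitbucket Cloud's documented webhook payload; confirm it for your setup.
def pushed_branches(payload: dict) -> list:
    changes = payload.get("push", {}).get("changes", [])
    # Entries whose "new" is None describe deleted branches, so skip them
    return [c["new"]["name"] for c in changes if c.get("new")]
</code></pre>
<p>In the Flask route, you could then run the pull only when your deployment branch appears in <code>pushed_branches(request.get_json())</code>.</p>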
<h2 id="heading-step-3-expose-the-flask-app-optional"><strong>Step 3: Expose the Flask App (Optional)</strong></h2>
<p>If you want the Flask app to be accessible from outside your server, you need to expose it publicly. For this, you can set up a reverse proxy with NGINX. Here's how to do that:</p>
<p>First, install NGINX if you don't have it already by running this command:</p>
<pre><code class="lang-bash">sudo apt-get install nginx
</code></pre>
<p>Next, you’ll need to configure NGINX to proxy requests to your Flask app. Open the NGINX configuration file:</p>
<pre><code class="lang-bash">sudo nano /etc/nginx/sites-available/default
</code></pre>
<p>Modify the configuration to include this block:</p>
<pre><code class="lang-bash">server {
    listen 80;
    server_name your-server-ip;

    location /pull-repo {
        proxy_pass http://localhost:5000;
        proxy_set_header Host <span class="hljs-variable">$host</span>;
        proxy_set_header X-Real-IP <span class="hljs-variable">$remote_addr</span>;
        proxy_set_header X-Forwarded-For <span class="hljs-variable">$proxy_add_x_forwarded_for</span>;
        proxy_set_header X-Forwarded-Proto <span class="hljs-variable">$scheme</span>;
    }
}
</code></pre>
<p>Now just reload NGINX to apply the changes:</p>
<pre><code class="lang-bash">sudo systemctl reload nginx
</code></pre>
<h2 id="heading-step-4-test-the-setup"><strong>Step 4: Test the Setup</strong></h2>
<p>Now that everything is set up, go ahead and start the Flask app by executing this Python script:</p>
<pre><code class="lang-bash">python3 app_repo_pull.py
</code></pre>
<p>Now to test if everything is working:</p>
<ol>
<li><p><strong>Make a commit</strong>: Push a commit to the <code>test</code> branch in your Bitbucket repository. This action will trigger the webhook.</p>
</li>
<li><p><strong>Webhook trigger</strong>: The webhook will send a POST request to your server. The Flask app will receive this request, perform a force pull from the <code>test</code> branch, and update the local repository.</p>
</li>
<li><p><strong>Verify the pull</strong>: Check the log output of your Flask app or inspect the local repository to verify that the changes have been pulled and applied successfully.</p>
</li>
</ol>
<h2 id="heading-step-5-security-considerations"><strong>Step 5: Security Considerations</strong></h2>
<p>When exposing a Flask app to the internet, securing your server and application is crucial to protect it from unauthorized access, data breaches, and attacks. Here are the key areas to focus on:</p>
<h4 id="heading-1-use-a-secure-server-with-proper-firewall-rules"><strong>1. Use a Secure Server with Proper Firewall Rules</strong></h4>
<p>A secure server is one that is configured to minimize exposure to external threats. This involves using firewall rules, minimizing unnecessary services, and ensuring that only required ports are open for communication.</p>
<h5 id="heading-example-of-a-secure-server-setup"><strong>Example of a secure server setup:</strong></h5>
<ul>
<li><p><strong>Minimal software</strong>: Only install the software you need (for example, Python, Flask, NGINX) and remove unnecessary services.</p>
</li>
<li><p><strong>Operating system updates</strong>: Ensure your server's operating system is up-to-date with the latest security patches.</p>
</li>
<li><p><strong>Firewall configuration</strong>: Use a firewall to control incoming and outgoing traffic and limit access to your server.</p>
</li>
</ul>
<p>For example, a basic <strong>UFW (Uncomplicated Firewall)</strong> configuration on Ubuntu might look like this:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Allow SSH (port 22) for remote access</span>
sudo ufw allow ssh

<span class="hljs-comment"># Allow HTTP (port 80) and HTTPS (port 443) for web traffic</span>
sudo ufw allow http
sudo ufw allow https

<span class="hljs-comment"># Enable the firewall</span>
sudo ufw <span class="hljs-built_in">enable</span>

<span class="hljs-comment"># Check the status of the firewall</span>
sudo ufw status
</code></pre>
<p>In this case:</p>
<ul>
<li><p>The firewall allows incoming SSH connections on port 22, HTTP on port 80, and HTTPS on port 443.</p>
</li>
<li><p>Any unnecessary ports or services should be blocked by default to limit exposure to attacks.</p>
</li>
</ul>
<h5 id="heading-additional-firewall-rules"><strong>Additional Firewall Rules:</strong></h5>
<ul>
<li><p><strong>Limit access to webhook endpoint</strong>: Ideally, only allow traffic to the webhook endpoint from Bitbucket's IP addresses to prevent external access. You can set this up in your firewall or using your web server (for example, NGINX) by only accepting requests from Bitbucket's IP range.</p>
</li>
<li><p><strong>Deny all other incoming traffic</strong>: For any service that does not need to be exposed to the internet (for example, database ports), ensure those ports are blocked.</p>
</li>
</ul>
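<p>In NGINX, restricting the webhook location to Bitbucket’s address ranges can be sketched like this (the CIDR below is only an example — always check Atlassian’s currently published IP ranges before copying it):</p>
<pre><code class="lang-bash">location /pull-repo {
    # Example range only -- look up Atlassian's current published IPs
    allow 104.192.136.0/21;
    deny all;

    proxy_pass http://localhost:5000;
}
</code></pre>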
<h4 id="heading-2-add-authentication-to-the-flask-app"><strong>2. Add Authentication to the Flask App</strong></h4>
<p>Since your Flask app will be publicly accessible via the webhook URL, you should consider adding authentication to ensure only authorized users (such as Bitbucket's servers) can trigger the pull.</p>
<h5 id="heading-basic-authentication-example"><strong>Basic Authentication Example:</strong></h5>
<p>You can use a simple token-based authentication to secure your webhook endpoint. Here’s an example of how to modify your Flask app to require an authentication token:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask, request, abort
<span class="hljs-keyword">import</span> subprocess

app = Flask(__name__)

<span class="hljs-comment"># Define a secret token for webhook verification</span>
SECRET_TOKEN = <span class="hljs-string">'your-secret-token'</span>

<span class="hljs-meta">@app.route('/pull-repo', methods=['POST'])</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">pull_repo</span>():</span>
    <span class="hljs-comment"># Check if the request contains the correct token</span>
    token = request.headers.get(<span class="hljs-string">'X-Hub-Signature'</span>)
    <span class="hljs-keyword">if</span> token != SECRET_TOKEN:
        abort(<span class="hljs-number">403</span>)  <span class="hljs-comment"># Forbidden if the token is incorrect</span>

    <span class="hljs-keyword">try</span>:
        subprocess.run([<span class="hljs-string">"git"</span>, <span class="hljs-string">"-C"</span>, <span class="hljs-string">"/path/to/your/repository"</span>, <span class="hljs-string">"fetch"</span>], check=<span class="hljs-literal">True</span>)
        subprocess.run([<span class="hljs-string">"git"</span>, <span class="hljs-string">"-C"</span>, <span class="hljs-string">"/path/to/your/repository"</span>, <span class="hljs-string">"reset"</span>, <span class="hljs-string">"--hard"</span>, <span class="hljs-string">"origin/test"</span>], check=<span class="hljs-literal">True</span>)
        <span class="hljs-keyword">return</span> <span class="hljs-string">"Force pull successful"</span>, <span class="hljs-number">200</span>
    <span class="hljs-keyword">except</span> subprocess.CalledProcessError:
        <span class="hljs-keyword">return</span> <span class="hljs-string">"Failed to force pull the repository"</span>, <span class="hljs-number">500</span>

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    app.run(host=<span class="hljs-string">'0.0.0.0'</span>, port=<span class="hljs-number">5000</span>)
</code></pre>
<h5 id="heading-how-it-works"><strong>How it works:</strong></h5>
<ul>
<li><p>The <code>X-Hub-Signature</code> is a custom header that you add to the request when setting up the webhook in Bitbucket.</p>
</li>
<li><p>Only requests with the correct token will be allowed to trigger the pull. If the token is missing or incorrect, the request is rejected with a <code>403 Forbidden</code> response.</p>
</li>
</ul>
<p>You can also use more complex forms of authentication, such as OAuth or HMAC (Hash-based Message Authentication Code), but this simple token approach works for many cases.</p>
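<p>If you do opt for the HMAC approach, the idea is that the sender signs the raw request body with a shared secret and your server recomputes the signature. Here’s a minimal sketch — the <code>sha256=...</code> header format is modeled on the common webhook convention, so confirm what your Bitbucket instance actually sends:</p>
<pre><code class="lang-python">import hashlib
import hmac

SECRET = b'your-secret-token'  # the same secret configured on the webhook

def verify_signature(payload: bytes, signature_header: str) -> bool:
    # Recompute the HMAC-SHA256 of the raw request body with the shared secret
    expected = 'sha256=' + hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, which avoids timing attacks
    return hmac.compare_digest(expected, signature_header)
</code></pre>
<p>In the Flask route you would call <code>verify_signature(request.get_data(), request.headers.get('X-Hub-Signature', ''))</code> before running any Git commands.</p>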
<h4 id="heading-3-use-https-for-secure-communication"><strong>3. Use HTTPS for Secure Communication</strong></h4>
<p>It’s crucial to encrypt the data transmitted between your Flask app and the Bitbucket webhook, as well as any sensitive data (such as tokens or passwords) being transmitted over the network. This ensures that attackers cannot intercept or modify the data.</p>
<h5 id="heading-why-https"><strong>Why HTTPS?</strong></h5>
<ul>
<li><p><strong>Data encryption</strong>: HTTPS encrypts the communication, ensuring that sensitive data like your authentication token is not exposed to man-in-the-middle attacks.</p>
</li>
<li><p><strong>Trust and integrity</strong>: HTTPS helps ensure that the data received by your server hasn’t been tampered with.</p>
</li>
</ul>
<h5 id="heading-using-lets-encrypt-to-secure-your-flask-app-with-ssl"><strong>Using Let’s Encrypt to Secure Your Flask App with SSL:</strong></h5>
<ol>
<li><strong>Install Certbot</strong> (the tool for obtaining Let’s Encrypt certificates):</li>
</ol>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
</code></pre>
<p><strong>Obtain a free SSL certificate for your domain</strong>:</p>
<pre><code class="lang-bash">sudo certbot --nginx -d your-domain.com
</code></pre>
<p>This command will automatically configure NGINX to use HTTPS with a free SSL certificate from Let’s Encrypt.</p>
<p><strong>Ensure HTTPS is used</strong>: Make sure that your Flask app or NGINX configuration forces all traffic to use HTTPS. You can do this by setting up a redirection rule in NGINX:</p>
<pre><code class="lang-bash">server {
    listen 80;
    server_name your-domain.com;

    <span class="hljs-comment"># Redirect HTTP to HTTPS</span>
    <span class="hljs-built_in">return</span> 301 https://<span class="hljs-variable">$host</span><span class="hljs-variable">$request_uri</span>;
}

server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    <span class="hljs-comment"># Other Nginx configuration...</span>
}
</code></pre>
<p><strong>Automatic Renewal</strong>: Let’s Encrypt certificates are valid for 90 days, so it’s important to set up automatic renewal:</p>
<pre><code class="lang-bash">sudo certbot renew --dry-run
</code></pre>
<p>This command tests the renewal process to make sure everything is working.</p>
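<p>Certbot’s packages usually install a systemd timer or cron job for you, but if you need to schedule renewal manually, a crontab entry along these lines works (the schedule is just an example):</p>
<pre><code class="lang-bash"># Attempt renewal twice a day; certbot only renews certificates close to expiry
0 */12 * * * certbot renew --quiet
</code></pre>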
<h4 id="heading-4-logging-and-monitoring"><strong>4. Logging and Monitoring</strong></h4>
<p>Implement logging and monitoring for your Flask app to track any unauthorized attempts, errors, or unusual activity:</p>
<ul>
<li><p><strong>Log requests</strong>: Log all incoming requests, including the IP address, request headers, and response status, so you can monitor for any suspicious activity.</p>
</li>
<li><p><strong>Use monitoring tools</strong>: Set up tools like <strong>Prometheus</strong>, <strong>Grafana</strong>, or <strong>New Relic</strong> to monitor server performance and app health.</p>
</li>
</ul>
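<p>As a starting point, a few lines with Python’s standard <code>logging</code> module are enough to keep a simple audit trail (the file name and log format here are just examples):</p>
<pre><code class="lang-python">import logging

# Write a timestamped line per request to a local log file (example settings)
logging.basicConfig(
    filename="webhook.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def log_webhook(ip: str, path: str, status: int) -> None:
    # Record who called which endpoint and what status we returned
    logging.info("request from %s to %s -> %d", ip, path, status)
</code></pre>
<p>Calling <code>log_webhook(request.remote_addr, request.path, 200)</code> from the Flask route gives you a record you can grep through when something looks off.</p>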
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>In this tutorial, we explored how to set up a simple, beginner-friendly CI/CD pipeline that automates deployments using Bitbucket, a Linux server, and Python with Flask. Here’s a recap of what you’ve learned:</p>
<ol>
<li><p><strong>CI/CD Fundamentals</strong>: We discussed the basics of Continuous Integration (CI) and Continuous Delivery/Deployment (CD), which are essential practices for automating the integration, testing, and deployment of code. You learned how CI/CD helps speed up development, reduce errors, and improve collaboration among developers.</p>
</li>
<li><p><strong>Setting Up Bitbucket Webhooks</strong>: You learned how to configure a Bitbucket webhook to notify your server whenever there’s a push or merge to a specific branch. This webhook serves as a trigger to initiate the deployment process automatically.</p>
</li>
<li><p><strong>Creating a Flask-based Webhook Listener</strong>: We showed you how to set up a Flask app on your Linux server to listen for incoming webhook requests from Bitbucket. This Flask app receives the notifications and runs the necessary Git commands to pull and deploy the latest changes.</p>
</li>
<li><p><strong>Automating the Deployment Process</strong>: Using Python and Flask, we automated the process of pulling changes from the Bitbucket repository and performing a force pull to ensure the latest code is deployed. You also learned how to configure the server to expose the Flask app and accept requests securely.</p>
</li>
<li><p><strong>Security Considerations</strong>: We covered critical security steps to protect your deployment process:</p>
<ul>
<li><p><strong>Firewall Rules</strong>: We discussed configuring firewall rules to limit exposure and ensure only authorized traffic (from Bitbucket) can access your server.</p>
</li>
<li><p><strong>Authentication</strong>: We added token-based authentication to ensure only authorized requests can trigger deployments.</p>
</li>
<li><p><strong>HTTPS</strong>: We explained how to secure the communication between your server and Bitbucket using SSL certificates from Let's Encrypt.</p>
</li>
<li><p><strong>Logging and Monitoring</strong>: Lastly, we recommended setting up logging and monitoring to keep track of any unusual activity or errors.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-next-steps"><strong>Next Steps</strong></h3>
<p>By the end of this tutorial, you now have a working example of an automated deployment pipeline. While this is a basic implementation, it serves as a foundation you can build on. As you grow more comfortable with CI/CD, you can explore advanced topics like:</p>
<ul>
<li><p>Multi-stage deployment pipelines</p>
</li>
<li><p>Integration with containerization tools like Docker</p>
</li>
<li><p>More complex testing and deployment strategies</p>
</li>
<li><p>Use of orchestration tools like Kubernetes for scaling</p>
</li>
</ul>
<p>CI/CD practices are continually evolving, and by mastering the basics, you’ve set yourself up for success as you expand your skills in this area. Happy automating and thank you for reading!</p>
<p>You can <a target="_blank" href="https://github.com/jpromanonet/ci_cd_fcc/tree/main">fork the code from here</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Essential CLI/TUI Tools for Developers ]]>
                </title>
                <description>
                    <![CDATA[ As developers, we spend a lot of time in our terminals. And there are tons of great CLI/TUI tools that can boost our productivity (as well as some that are just fun to use). From managing Git repositories and navigating file systems to monitoring sys... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/essential-cli-tui-tools-for-developers/</link>
                <guid isPermaLink="false">6798fd7b4666e531b7dd7586</guid>
                
                    <category>
                        <![CDATA[ terminal ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ command line ]]>
                    </category>
                
                    <category>
                        <![CDATA[ cli ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Alex Pliutau ]]>
                </dc:creator>
                <pubDate>Tue, 28 Jan 2025 15:53:31 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738077620615/22e3c744-d609-4469-ae10-ef8ad4b515a1.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>As developers, we spend a lot of time in our terminals. And there are tons of great CLI/TUI tools that can boost our productivity (as well as some that are just fun to use). From managing Git repositories and navigating file systems to monitoring system performance and even playing retro games, the command line offers a powerful and versatile environment.</p>
<p>In this article, we’ll go through a collection of CLI/TUI tools that have been widely adopted in the developer community, spanning various categories such as version control, system utilities, text editors, and more. I wanted to give you a diverse selection that caters to different needs and workflows.</p>
<p>For each tool, I’ll include an overview, highlighting its key features and use cases, along with clear and concise installation instructions for various operating systems, ensuring you can quickly get up and running with these valuable command-line companions.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-kubernetes-tools">Kubernetes Tools</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-container-tools">Container Tools</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-file-and-text-tools">File and Text Tools</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-git-tools">Git Tools</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-development-tools">Development Tools</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-networking-tools">Networking Tools</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-workstation-tools">Workstation Tools</a></p>
</li>
</ul>
<h2 id="heading-kubernetes-tools"><strong>Kubernetes Tools</strong></h2>
<h3 id="heading-k9shttpsgithubcomderailedk9s-kubernetes-cli-to-manage-your-clusters-in-style"><a target="_blank" href="https://github.com/derailed/k9s"><strong>k9s</strong></a> — Kubernetes CLI To Manage Your Clusters In Style</h3>
<p>K9s is a must-have tool for anyone working with Kubernetes. Its intuitive terminal-based UI, real-time monitoring capabilities, and powerful command options make it a standout in the world of Kubernetes management tools.</p>
<p>The K9s project is designed to continually watch your Kubernetes cluster for changes and offer commands to interact with the observed resources. This makes it easier to manage applications, especially in a complex, multi-cluster environment. The project’s aim is to make Kubernetes management more accessible and less daunting, especially for those who are not Kubernetes experts.</p>
<p>Just launch k9s in your terminal and start exploring the Kubernetes resources with ease.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*tkfwKS01NCnUBE-N.png" alt="K9s interface" width="600" height="400" loading="lazy"></p>
<p>To install K9s:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install derailed/k9s/k9s

<span class="hljs-comment"># via snap for Linux</span>
snap install k9s --devmode

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install k9s

<span class="hljs-comment"># via go install</span>
go install github.com/derailed/k9s@latest
</code></pre>
<h3 id="heading-kubectxhttpsgithubcomahmetbkubectx-switch-between-contexts-clusters-on-kubectl-faster"><a target="_blank" href="https://github.com/ahmetb/kubectx"><strong>kubectx</strong></a> — switch between contexts (clusters) on kubectl faster.</h3>
<p>Kubectx is the most popular tool for switching Kubernetes contexts, but it has the fewest features! It displays all the contexts in your Kubernetes config as a selectable list and lets you pick one. That’s it!</p>
<p>This project comes with 2 tools:</p>
<ul>
<li><p><strong>kubectx</strong> is a tool that helps you switch between contexts (clusters) on kubectl faster.</p>
</li>
<li><p><strong>kubens</strong> is a tool to switch between Kubernetes namespaces (and configure them for kubectl) easily.</p>
</li>
</ul>
<p>These tools make it very easy to switch between Kubernetes clusters and namespaces if you work with many of them daily. Here you can see it in action:</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*g442WF-cXW-z1dKQ.gif" alt="0*g442WF-cXW-z1dKQ" width="600" height="400" loading="lazy"></p>
<p>To install kubectx:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install kubectx

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install kubectx

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S kubectx

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install kubens kubectx
</code></pre>
<h3 id="heading-kubescapehttpsgithubcomkubescapekubescape-kubernetes-security-platform-for-your-ide-cicd-pipelines-and-clusters"><a target="_blank" href="https://github.com/kubescape/kubescape"><strong>kubescape</strong></a> — Kubernetes security platform for your IDE, CI/CD pipelines, and clusters.</h3>
<p>I hope you take the security of your Kubernetes clusters seriously. If so, <strong>kubescape</strong> is really great for testing if your Kubernetes cluster is deployed securely according to multiple frameworks.</p>
<p>Kubescape can scan clusters, YAML files, and Helm charts, detecting misconfigurations based on multiple frameworks.</p>
<p>I usually use it in my CI/CD to scan for vulnerabilities automatically when changing Kubernetes manifests or Helm templates.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*Ft2r01ij9Rxj2-V0.png" alt="kubescape scan" width="600" height="400" loading="lazy"></p>
<p>To install kubescape:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install kubescape

<span class="hljs-comment"># via apt for Debian</span>
sudo add-apt-repository ppa:kubescape/kubescape
sudo apt update
sudo apt install kubescape

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install kubescape
</code></pre>
<h2 id="heading-container-tools"><strong>Container Tools</strong></h2>
<h3 id="heading-ctophttpsgithubcombcicenctop-a-top-like-interface-for-container-metrics"><a target="_blank" href="https://github.com/bcicen/ctop"><strong>ctop</strong></a> — A top-like interface for container metrics.</h3>
<p><strong>ctop</strong> is basically a better version of <code>docker stats</code>. It provides a concise and condensed overview of real-time metrics for multiple containers. It comes with built-in support for Docker and runC, and connectors for other container and cluster systems are planned for future releases.</p>
<p>Using ctop is simple. Once you have the tool open, you’ll see all of your currently active containers listed.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*EJ5kdlEs5M5QxDBy.gif" alt="ctop in action" width="600" height="400" loading="lazy"></p>
<p>To install ctop:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install ctop

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S ctop

<span class="hljs-comment"># via scoop for Windows</span>
scoop install ctop
</code></pre>
<h3 id="heading-lazydockerhttpsgithubcomjesseduffieldlazydocker-a-simple-terminal-ui-for-both-docker-and-docker-compose"><a target="_blank" href="https://github.com/jesseduffield/lazydocker"><strong>lazydocker</strong></a> — A simple terminal UI for both docker and docker-compose.</h3>
<p>While Docker's command-line interface is powerful, sometimes you might want a more visual approach without the overhead of a full GUI. This is especially true when managing Docker containers on a headless Linux server where installing a web-based GUI might be undesirable.</p>
<p>Lazydocker was created by <a target="_blank" href="https://github.com/jesseduffield">Jesse Duffield</a> to help make managing Docker containers a bit easier. Simply put, Lazydocker is a terminal UI (written in Go) for the docker and docker-compose commands.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*Cbmx4ShRSO7ccVy2.gif" alt="lazydocker in action" width="600" height="400" loading="lazy"></p>
<p>To install lazydocker:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install lazydocker

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install lazydocker

<span class="hljs-comment"># via go install</span>
go install github.com/jesseduffield/lazydocker@latest
</code></pre>
<h3 id="heading-divehttpsgithubcomwagoodmandive-a-tool-for-exploring-each-layer-in-a-docker-image"><a target="_blank" href="https://github.com/wagoodman/dive"><strong>dive</strong></a> — A tool for exploring each layer in a Docker image.</h3>
<p>A Docker image is made up of layers, and with every layer you add on, more space will be taken up by the image. Therefore, the more layers in the image, the more space the image will require.</p>
<p>That’s where <strong>dive</strong> shines: it helps you explore your Docker image and the contents of each layer. It can also help you find ways to shrink the size of your Docker/OCI image.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*swo_hrKJ9EV7hyMs.gif" alt="0*swo_hrKJ9EV7hyMs" width="600" height="400" loading="lazy"></p>
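<p>Once dive shows you where the space goes, a common fix is to chain related <code>RUN</code> steps so that cleanup happens in the same layer. A hedged sketch (the base image and package below are placeholders):</p>
<pre><code class="lang-dockerfile"># Each RUN creates a layer; chaining the cleanup into the same RUN keeps
# the apt cache from being baked into the image
FROM debian:bookworm-slim
RUN apt-get update &amp;&amp; \
    apt-get install -y --no-install-recommends curl &amp;&amp; \
    rm -rf /var/lib/apt/lists/*
</code></pre>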
<p>To install dive:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install dive

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S dive

<span class="hljs-comment"># via go install</span>
go install github.com/wagoodman/dive@latest
</code></pre>
<h2 id="heading-file-and-text-tools"><strong>File and Text Tools</strong></h2>
<h3 id="heading-jqhttpsgithubcomjqlangjq-command-line-json-processor"><a target="_blank" href="https://github.com/jqlang/jq"><strong>jq</strong></a> — Command-line JSON processor.</h3>
<p>You may be aware of this one already as it’s well known in the developer community.</p>
<p>Unfortunately, shells such as Bash can’t interpret and work with JSON directly. That’s where you can use <strong>jq</strong> as a command-line JSON processor that’s similar to sed, awk, grep, and so on for JSON data. It’s written in portable C and doesn’t have any runtime dependencies. This lets you slice, filter, map, and transform structured data with ease.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*uwysqWprpmrLrJQP.png" alt="0*uwysqWprpmrLrJQP" width="600" height="400" loading="lazy"></p>
<p>To install jq, you can download the latest releases from the <a target="_blank" href="https://github.com/jqlang/jq/releases">GitHub release page.</a></p>
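<p>For a quick taste of what jq can do, here’s a small example that pulls values out of a JSON document (the sample data is made up):</p>
<pre><code class="lang-bash"># Print each element of the "tags" array as raw text (one per line)
echo '{"name": "freeCodeCamp", "tags": ["linux", "cli"]}' | jq -r '.tags[]'
</code></pre>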
<h3 id="heading-bathttpsgithubcomsharkdpbat-a-cat1-clone-with-wings"><a target="_blank" href="https://github.com/sharkdp/bat"><strong>bat</strong></a> — A cat(1) clone with wings.</h3>
<p>This is the most used CLI tool on my machine currently. A few years ago it was <strong>cat</strong>, which is great but doesn’t provide syntax highlighting or Git integration.</p>
<p>Bat’s syntax highlighting supports many programming and markup languages, helping you make your code more readable directly in the terminal. Git integration lets you see modifications in relation to the index, highlighting the lines you’ve added or changed.</p>
<p>Simply run <code>bat filename</code> and enjoy its output.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:656/0*L02HhsqDcq2_G_z4.png" alt="Bat example" width="600" height="400" loading="lazy"></p>
<p>To install bat:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install bat

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install bat <span class="hljs-comment"># the binary may be installed as batcat on Debian/Ubuntu</span>

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S bat

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install bat
</code></pre>
<h3 id="heading-ripgrephttpsgithubcomburntsushiripgrep-recursively-search-directories-for-a-regex-pattern-while-respecting-your-gitignore"><a target="_blank" href="https://github.com/BurntSushi/ripgrep"><strong>ripgrep</strong></a> — Recursively search directories for a regex pattern while respecting your gitignore.</h3>
<p><strong>ripgrep</strong> is definitely becoming a popular alternative (if not the most popular) to the <strong>grep</strong> command. Even some editors like <a target="_blank" href="https://code.visualstudio.com/updates/v1_11">Visual Studio Code</a> are using ripgrep to power their search offerings.</p>
<p>Its major selling points are recursive search by default and raw speed.</p>
<p>I now rarely use grep on my personal machine, as ripgrep is much faster.</p>
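For illustration, here are the invocations I reach for most often (the patterns and file types below are just examples):
<pre><code class="lang-bash">rg 'TODO'                 # recursive search from the current directory, respecting .gitignore
rg -n 'func main' -t go   # show line numbers, search only Go files
rg -i 'error' -g '*.log'  # case-insensitive, restricted to files matching a glob
rg -l 'deprecated'        # list only the names of matching files
</code></pre>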
<p>To install ripgrep:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install ripgrep

<span class="hljs-comment"># via apt for Debian</span>
sudo apt-get install ripgrep

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S ripgrep

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install ripgrep
</code></pre>
<h2 id="heading-git-tools"><strong>Git Tools</strong></h2>
<h3 id="heading-lazygithttpsgithubcomjesseduffieldlazygit-simple-terminal-ui-for-git-commands"><a target="_blank" href="https://github.com/jesseduffield/lazygit"><strong>lazygit</strong></a> — Simple terminal UI for git commands.</h3>
<p><strong>lazygit</strong> is another great terminal UI for Git commands developed by <a target="_blank" href="https://github.com/jesseduffield"><strong>Jesse Duffield</strong></a> using Go.</p>
<p>I don’t mind using the Git CLI directly for simple things, but it is famously verbose for more advanced use cases. I am just too lazy to memorize longer commands.</p>
<p>And lazygit has made me a more productive Git user than ever.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*ykEtn2HQ9QgU40jx.png" alt="lazygit interface" width="600" height="400" loading="lazy"></p>
<p>To install lazygit:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install jesseduffield/lazygit/lazygit

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S lazygit

<span class="hljs-comment"># via scoop for Windows</span>
scoop install lazygit
</code></pre>
<h2 id="heading-development-tools"><strong>Development Tools</strong></h2>
<h3 id="heading-atachttpsgithubcomjulien-cpsnatac-a-simple-api-client-postman-like-in-your-terminal"><a target="_blank" href="https://github.com/Julien-cpsn/ATAC"><strong>ATAC</strong></a> — A simple API client (Postman-like) in your terminal.</h3>
<p>ATAC stands for Arguably a Terminal API Client. It’s based on popular clients like Postman, Insomnia, and Bruno, but it runs inside your terminal without needing any particular graphical environment.</p>
<p>It works best for developers who need an offline, cross-platform API client right at their fingertips, in the terminal.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*NoOMeMxkELNFI9RS.png" alt="ATAC" width="600" height="400" loading="lazy"></p>
<p>To install ATAC:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew tap julien-cpsn/atac
brew install atac

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S atac
</code></pre>
<h3 id="heading-k6httpsgithubcomgrafanak6-a-modern-load-testing-tool-using-go-and-javascript"><a target="_blank" href="https://github.com/grafana/k6"><strong>k6</strong></a> — A modern load testing tool, using Go and JavaScript.</h3>
<p>I’ve used many load-testing tools in my career, such as <a target="_blank" href="https://github.com/tsenart/vegeta">vegeta</a> or even <a target="_blank" href="https://httpd.apache.org/docs/2.4/programs/ab.html">ab</a> in the past. But now I mostly use <strong>k6</strong>, as it has everything I need and has a great GUI and TUI.</p>
<p>Why it works well for me:</p>
<ul>
<li><p>k6 has really good <a target="_blank" href="https://k6.io/docs/">documentation</a></p>
</li>
<li><p>Many integrations available: Swagger, JMeter scripts, and so on.</p>
</li>
<li><p>Results reporting is quite good</p>
</li>
</ul>
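<p>A k6 test is just a JavaScript file that you pass to <code>k6 run</code>. Here’s a minimal sketch (the URL and load profile are placeholders):</p>
<pre><code class="lang-bash">cat > load-test.js &lt;&lt;'EOF'
import http from 'k6/http';
import { sleep } from 'k6';

// 10 virtual users for 30 seconds
export const options = { vus: 10, duration: '30s' };

export default function () {
  http.get('https://test.k6.io');
  sleep(1);
}
EOF

k6 run load-test.js
</code></pre>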
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737552000859/df5af273-3706-4d41-9dbe-717d2f2d18b7.webp" alt="K6 interface" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>To install k6:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install k6

<span class="hljs-comment"># via apt for Debian</span>
sudo apt-get install k6

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install k6
</code></pre>
<h3 id="heading-httpiehttpsgithubcomhttpiecli-modern-user-friendly-command-line-http-client-for-the-api-era"><a target="_blank" href="https://github.com/httpie/cli"><strong>httpie</strong></a> — modern, user-friendly command-line HTTP client for the API era.</h3>
<p>Don’t get me wrong, curl is great, but not very human-friendly.</p>
<p>HTTPie has a simple and expressive syntax, supports JSON and form data, handles authentication and headers, and displays colorized and formatted output.</p>
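A taste of the syntax (<code>pie.dev</code> is HTTPie’s own test service; the header and field names are just examples):
<pre><code class="lang-bash">http PUT pie.dev/put X-API-Token:123 name=John   # Header:value sets a header, key=value a JSON field
http -f POST pie.dev/post name=John              # -f sends the fields form-encoded instead of JSON
http GET pie.dev/get q==search                   # key==value appends a query parameter
http --offline GET pie.dev/get q==search         # print the request that would be sent, without sending it
</code></pre>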
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*Bqi3gBKgIkeEPEI_.gif" alt="0*Bqi3gBKgIkeEPEI_" width="600" height="400" loading="lazy"></p>
<p>To install httpie:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install httpie

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install httpie

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -Syu httpie

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install httpie
</code></pre>
<h3 id="heading-asciinemahttpsgithubcomasciinemaasciinema-terminal-session-recorder"><a target="_blank" href="https://github.com/asciinema/asciinema"><strong>asciinema</strong></a> — Terminal session recorder.</h3>
<p>I call it a terminal YouTube :)</p>
<p>asciinema is a great tool when you want to share your terminal sessions with someone else, instead of recording heavy videos.</p>
<p>I use it often when I develop some CLI tools and want to share the demo of how they work (on GitHub, for example).</p>
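The whole workflow is only a few commands (the file name is arbitrary):
<pre><code class="lang-bash">asciinema rec demo.cast    # start recording the current shell; exit the shell to stop
asciinema play demo.cast   # replay the recording right in the terminal
asciinema upload demo.cast # share it via asciinema.org and get a link
</code></pre>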
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*Exg2XuZlIPaJJ-iB.png" alt="0*Exg2XuZlIPaJJ-iB" width="600" height="400" loading="lazy"></p>
<p>To install asciinema:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install asciinema

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install asciinema

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S asciinema
</code></pre>
<h2 id="heading-networking"><strong>Networking</strong></h2>
<h3 id="heading-doggohttpsgithubcommr-karandoggo-a-command-line-dns-client"><a target="_blank" href="https://github.com/mr-karan/doggo">doggo</a> — A command-line DNS client.</h3>
<p>It's totally inspired by <strong>dog</strong>, which is written in Rust.</p>
<p>In the past I would use <strong>dig</strong> to inspect the DNS, but its output is often verbose and difficult to parse visually.</p>
<p><strong>doggo</strong> addresses these shortcomings by offering two key improvements:</p>
<ul>
<li><p>doggo provides JSON output support for easy scripting and parsing.</p>
</li>
<li><p>doggo offers a human-readable output format that uses color-coding and a tabular layout to present DNS information clearly and concisely.</p>
</li>
</ul>
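<p>Some example queries (the domain and resolver below are placeholders):</p>
<pre><code class="lang-bash">doggo example.com               # A-record lookup with the pretty tabular output
doggo example.com MX @9.9.9.9   # query MX records against a specific resolver
doggo --json example.com | jq   # machine-readable output, easy to pipe into scripts
</code></pre>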
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737552264803/bb902365-bc0d-4a56-9a87-6b065ee5608a.png" alt="bb902365-bc0d-4a56-9a87-6b065ee5608a" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>To install doggo:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install doggo

<span class="hljs-comment"># via scoop for Windows</span>
scoop install doggo

<span class="hljs-comment"># via go install</span>
go install github.com/mr-karan/doggo/cmd/doggo@latest
</code></pre>
<h3 id="heading-gpinghttpsgithubcomorfgping-ping-but-with-a-graph"><a target="_blank" href="https://github.com/orf/gping"><strong>gping</strong></a> — Ping, but with a graph.</h3>
<p>The well-known <strong>ping</strong> command is not the most interesting to look at, and interpreting its output in a useful way can be difficult.</p>
<p><strong>gping</strong> gives a plot of the ping latency to a host, and the most useful feature is the ability to run concurrent pings to multiple hosts and plot all of them on the same graph.</p>
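Graphing several hosts at once is as simple as listing them (the hosts here are examples):
<pre><code class="lang-bash">gping github.com 1.1.1.1 8.8.8.8   # plot latency to all three hosts on one graph
</code></pre>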
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*IPi1TOpiMnWPN1VU.gif" alt="0*IPi1TOpiMnWPN1VU" width="600" height="400" loading="lazy"></p>
<p>To install gping:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install gping

<span class="hljs-comment"># via Chocolatey for Windows</span>
choco install gping

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install gping
</code></pre>
<h2 id="heading-workstation"><strong>Workstation</strong></h2>
<h3 id="heading-tmuxhttpsgithubcomtmuxtmuxwiki-a-terminal-multiplexer"><a target="_blank" href="https://github.com/tmux/tmux/wiki"><strong>tmux</strong></a> — A terminal multiplexer.</h3>
<p>Why is tmux such a big deal?</p>
<p>You may have run into situations where you need to view multiple terminal consoles at the same time. For example, you may have a few servers running (for example, web, database, debugger) and you might want to monitor all the output coming from these servers in real-time to validate behavior or run commands.</p>
<p>Before tmux, you might have just opened a few different tabs in the terminal and switched between them to see the output.</p>
<p>Thankfully, there’s an easier way — <strong>tmux</strong>.</p>
<p>In a nutshell, here are some of its most popular features:</p>
<ul>
<li><p>Window/Pane management</p>
</li>
<li><p>Session management with persistence</p>
</li>
<li><p>Sharable sessions with other users</p>
</li>
<li><p>Scriptable configurations</p>
</li>
</ul>
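<p>For example, a minimal session workflow looks like this (the session name is arbitrary; <code>Ctrl-b</code> is the default prefix key):</p>
<pre><code class="lang-bash">tmux new -s dev        # start a named session
# inside tmux: Ctrl-b %  splits the pane vertically, Ctrl-b "  splits horizontally
# Ctrl-b d detaches, leaving everything running in the background
tmux ls                # list running sessions
tmux attach -t dev     # reattach later, exactly where you left off
</code></pre>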
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*u8o0WxutrPXxg6FG.png" alt="0*u8o0WxutrPXxg6FG" width="600" height="400" loading="lazy"></p>
<p>To install tmux:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install tmux

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install tmux

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S tmux
</code></pre>
<h3 id="heading-zellijhttpsgithubcomzellij-orgzellij-a-terminal-workspace-with-batteries-included"><a target="_blank" href="https://github.com/zellij-org/zellij"><strong>zellij</strong></a> — A terminal workspace with batteries included.</h3>
<p>Since I listed tmux here, it also makes sense to include a new competitor, <strong>Zellij</strong>, which has been gaining traction in the developer community. Both have their own unique features and purposes.</p>
<p>Compared to traditional terminal multiplexers, zellij offers a more user-friendly interface, modern design elements, built-in layout systems, and a plugin system, making it easier for newcomers to get started.</p>
<p>I still like tmux. It has a special place in my heart because it has served a great purpose for years. But zellij is another good option.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*VwAit4tO1IjxH9dp.gif" alt="0*VwAit4tO1IjxH9dp" width="600" height="400" loading="lazy"></p>
<p>To install zellij:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install zellij

<span class="hljs-comment"># via apt for Debian</span>
sudo apt install zellij

<span class="hljs-comment"># via pacman for Arch Linux</span>
sudo pacman -S zellij
</code></pre>
<h3 id="heading-btophttpsgithubcomaristocratosbtop-a-monitor-of-resources"><a target="_blank" href="https://github.com/aristocratos/btop"><strong>btop</strong></a> — A monitor of resources.</h3>
<p>I can’t live without btop, and it’s installed on all my machines via my personal <a target="_blank" href="https://github.com/plutov/dotfiles">dotfiles</a>. I now rarely use the built-in OS GUIs to check resource utilization on my host machine, because <strong>btop</strong> does it much better.</p>
<p>I use it to quickly see what’s using the most memory, to monitor and kill processes, and more.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*HbuJrCbT6xVApLoh.png" alt="0*HbuJrCbT6xVApLoh" width="600" height="400" loading="lazy"></p>
<p>To install btop:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># via Homebrew for macOS</span>
brew install btop

<span class="hljs-comment"># via snap for Debian</span>
sudo snap install btop
</code></pre>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>These CLIs/TUIs should work well in any modern terminal. I personally use <a target="_blank" href="https://ghostty.org/">Ghostty</a> currently and it works great, but other popular options like <strong>iTerm2, Kitty</strong>, and the default terminal applications on macOS and Linux should also provide a seamless experience. The key is to ensure your terminal supports features like 256-color palettes and UTF-8 encoding for optimal display of these tools.</p>
<p>There are a huge number of CLIs/TUIs out there, and I couldn’t list them all (though I tried to pick some of the best). This selection is a starting point for exploring the rich ecosystem of command-line tools available to developers. I encourage you to explore further, discover new tools that fit your specific needs, and contribute back to the community by sharing your findings.</p>
<p><a target="_blank" href="https://packagemain.tech">Explore more articles on packagemain.tech</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Getting Started with RHEL: A Beginner’s Guide to Linux Basics ]]>
                </title>
                <description>
                    <![CDATA[ Imagine an operating system so reliable that it powers the world’s biggest servers, the fastest supercomputers, and even the cloud infrastructure of leading tech companies. Welcome to Red Hat Enterprise Linux (RHEL) — the backbone of modern IT system... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/guide-to-rhel-linux-basics/</link>
                <guid isPermaLink="false">67813ff98eada5cd9c2f7dc7</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Tanishka Makode ]]>
                </dc:creator>
                <pubDate>Fri, 10 Jan 2025 15:42:49 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738685031729/c2dbcf09-c903-4eeb-b97a-0deb3a50385b.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Imagine an operating system so reliable that it powers the world’s biggest servers, the fastest supercomputers, and even the cloud infrastructure of leading tech companies. Welcome to Red Hat Enterprise Linux (RHEL) — the backbone of modern IT systems.</p>
<p>Whether you’re a complete novice exploring Linux for the first time or an experienced professional looking to brush up on your basics, you’re in the right place. This tutorial is your starting point to uncover the power, stability, and versatility of RHEL.</p>
<p>But what makes RHEL stand out in the crowded Linux ecosystem? Why do companies like Google, Amazon, and NASA rely on it? Let’s dive in and explore everything you need to know to begin your journey with Red Hat Enterprise Linux.</p>
<h3 id="heading-what-well-cover">What we’ll cover:</h3>
<ol>
<li><p><a class="post-section-overview" href="#heading-a-little-backstory">A Little Backstory</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-makes-linux-special">What Makes Linux Special?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-why-red-hat-enterprise-linux">Why Red Hat Enterprise Linux?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-your-practice-environment-for-linux-commands">How to Set Up Your Practice Environment for Linux Commands</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-introduction-to-basic-linux-commands">Introduction to Basic Linux Commands</a></p>
</li>
</ol>
<h2 id="heading-a-little-backstory">A Little Backstory</h2>
<p>Have you ever wondered where it all began? Before there was Linux, there was <strong>UNIX</strong>, a revolutionary operating system created in the 1970s that changed the way computers worked. Designed for stability, multitasking, and scalability, UNIX became the foundation upon which modern operating systems were built.</p>
<p>Fast forward to 1991, when a 21-year-old Finnish computer science student named Linus Torvalds at the University of Helsinki decided to create his own operating system kernel as a hobby project. Little did he know, this hobby would evolve into Linux, a game-changing open-source operating system that would redefine the tech world.</p>
<p>Now here’s the fun part: how did Linux get its name? Originally, Linus wanted to call it “Freax” (a combination of “free,” “freak,” and “Unix”). But when he uploaded the project files to a server managed by his friend Ari Lemmke, Ari thought “Freax” didn’t sound appealing enough. So, without telling Linus, Ari named the directory “Linux” instead — a clever blend of <strong>Linus + Unix</strong>. And the rest, as they say, is history.</p>
<h2 id="heading-what-makes-linux-special">What Makes Linux Special?</h2>
<p>Unlike traditional operating systems, Linux was open-source, meaning anyone could view, modify, and distribute the code freely. This sparked a wave of innovation, allowing developers around the world to create their own versions of Linux, tailored to different needs.</p>
<p>What truly sets Linux apart is the global community of developers and enthusiasts who constantly improve and innovate. This collaborative approach ensures that Linux stays at the forefront of technology, evolving with the needs of its users.</p>
<p>Today, Linux isn’t just one operating system — it’s an entire family of distributions (or distros). From user-friendly versions like Ubuntu and Fedora to enterprise-grade solutions like RHEL, there’s a Linux for everyone. In this article, we’ll focus on RHEL and why it’s a great choice for certain projects.</p>
<p>If you want to explore the fascinating variety of Linux distros, you can check out this <a target="_blank" href="https://en.wikipedia.org/wiki/Linux_distribution">Wikipedia page on Linux distributions</a> to see just how diverse the Linux ecosystem is.</p>
<h3 id="heading-why-red-hat-enterprise-linux">Why Red Hat Enterprise Linux?</h3>
<p>Red Hat Enterprise Linux (RHEL) is like the reliable, no-nonsense friend you call when you're organizing a big, important event.</p>
<p>Sure, you could ask your fun but unpredictable friends (like open-source Linux distros) to help, but there's always a chance they'll forget the chairs or crash halfway through the party.</p>
<p>RHEL, on the other hand, is built for stability and comes with a professional support team that’s on call 24/7 to fix anything that goes wrong. It’s tested thoroughly to make sure it works perfectly with all the tools and gadgets big companies use, so there are no surprises.</p>
<p>RHEL’s blend of reliability, security, performance, and support makes it the go-to operating system for enterprises, cementing its importance in the IT landscape.</p>
<p>Here’s a summary of RHEL’s benefits and features:</p>
<h4 id="heading-1-enterprise-grade-stability-and-reliability"><strong>1. Enterprise-Grade Stability and Reliability</strong></h4>
<p>RHEL is designed to meet the demands of mission-critical workloads, ensuring systems run consistently and predictably. Its long lifecycle support ensures businesses can rely on it without worrying about frequent upgrades or compatibility issues. This makes it an ideal choice for applications where downtime is unacceptable.</p>
<h4 id="heading-2-comprehensive-security-features"><strong>2. Comprehensive Security Features</strong></h4>
<p>Security is paramount in enterprise environments, and RHEL excels with robust features such as SELinux (Security-Enhanced Linux) and regular security updates. The proactive approach to identifying and addressing vulnerabilities helps organizations comply with industry regulations and maintain the integrity of their systems.</p>
<h4 id="heading-3-scalability-and-performance-optimization"><strong>3. Scalability and Performance Optimization</strong></h4>
<p>RHEL is optimized to deliver high performance for a wide range of hardware architectures and workloads, including cloud, on-premises, and hybrid setups. Its ability to scale efficiently makes it suitable for small-scale applications as well as large data centers and enterprise-grade workloads.</p>
<h4 id="heading-4-extensive-ecosystem-and-professional-support"><strong>4. Extensive Ecosystem and Professional Support</strong></h4>
<p>RHEL benefits from Red Hat’s extensive ecosystem of certified hardware, software, and cloud providers. Enterprises have access to a wealth of tested and certified solutions, along with 24/7 support from Red Hat. This ensures any technical issues are resolved promptly, minimizing downtime and enhancing productivity.</p>
<h2 id="heading-how-to-set-up-your-practice-environment-for-linux-commands">How to Set Up Your Practice Environment for Linux Commands</h2>
<p>Before we jump into learning and practising Linux commands, you’ll need to set up an environment where you can run these commands. Here are three great options to consider:</p>
<h3 id="heading-1-using-the-terminal-on-your-linux-machine">1. <strong>Using the Terminal on Your Linux Machine</strong></h3>
<p>If you’re already using Linux, the terminal is your go-to interface for interacting with the system. All Linux commands are executed here, and it’s the ideal environment to start practising.</p>
<p>You can open the terminal and directly type your commands to see them in action.</p>
<h3 id="heading-2-using-vmware-or-oracle-virtualbox">2. Using <strong>VMware or Oracle VirtualBox</strong></h3>
<p>If you don’t want to install Linux directly on your main machine, using a virtual machine (VM) is a great solution. Virtualization tools like VMware or Oracle VirtualBox allow you to run a full Linux distribution as a guest operating system without affecting your primary system. This way, you can experiment freely in an isolated environment.</p>
<p><strong>How to use a VM:</strong></p>
<ul>
<li><p>Install <a target="_blank" href="https://blogs.vmware.com/workstation/2024/05/vmware-workstation-pro-now-available-free-for-personal-use.html">VMware Workstation Player</a> or <a target="_blank" href="https://www.virtualbox.org/wiki/Downloads?">Oracle VirtualBox</a> on your computer.</p>
</li>
<li><p>Download the RHEL ISO Image. You can obtain the RHEL ISO by following these steps:</p>
<ol>
<li><p>Register for a Red Hat Developer Account (it’s free):</p>
<ul>
<li><p>Go to the Red Hat Developer Program.</p>
</li>
<li><p>Create an account (it’s free for individual developers).</p>
</li>
<li><p>After registering, sign in to your Red Hat account.</p>
</li>
</ul>
</li>
<li><p>Download the ISO:</p>
<ul>
<li><p>Visit the RHEL Download Page after logging in.</p>
</li>
<li><p>Choose the ISO image for RHEL (you may select the latest version).</p>
</li>
<li><p>Click Download and save the ISO file on your local system.</p>
</li>
</ul>
</li>
</ol>
</li>
</ul>
<p>Once your VM is running, you can use it to practice commands and explore Linux.</p>
<h3 id="heading-3-killercoda-an-online-linux-environment">3. <strong>KillerCoda: An Online Linux Environment</strong></h3>
<p>If you’re looking for an entirely online solution, KillerCoda is a fantastic option. It provides an interactive Linux terminal right in your browser, so you don’t need to install anything on your local machine.</p>
<p>Visit the <a target="_blank" href="https://killercoda.com/pawelpiwosz/course/linuxFundamentals">KillerCoda</a> website and you will see scenario-based lessons.</p>
<p>Now you should be all set.</p>
<h2 id="heading-introduction-to-basic-linux-commands">Introduction to Basic Linux Commands</h2>
<p>One of the key features that makes Linux so versatile is its command-line interface (CLI). This is where you can interact with the system by typing commands. These commands allow you to perform a variety of tasks like managing files, directories, system resources, and much more.</p>
<p>Now, we’ll explore some essential Linux commands that every beginner should know. These commands are simple yet powerful tools that can help you navigate and manage your Linux environment efficiently.</p>
<h3 id="heading-basic-linux-commands">Basic Linux Commands</h3>
<p><strong>1.</strong> <code>echo</code></p>
<p>The <code>echo</code> command is used to display text or variables to the terminal. It is one of the most commonly used commands in Linux and is helpful for displaying messages, variable values, and even system information.</p>
<p><code>echo</code> syntax:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> [OPTION] [STRING]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> Hello <span class="hljs-comment"># Prints 'Hello' on terminal</span>
<span class="hljs-built_in">echo</span> -n Hey <span class="hljs-comment"># Does not output a trailing newline</span>

name=<span class="hljs-string">"Tanishka"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, <span class="hljs-variable">$name</span>"</span> <span class="hljs-comment"># Prints variables</span>
</code></pre>
<p>The <code>-e</code> option enables the interpretation of escape sequences.</p>
<p>Here are some escape sequences you can use with <code>echo -e</code>:</p>
<ol>
<li><p><code>\n</code> – New line: Moves the output to the next line.</p>
</li>
<li><p><code>\t</code> – Tab: Adds a tab space.</p>
</li>
<li><p><code>\v</code> – Vertical Tab: Moves the cursor down to the next line while keeping its horizontal position.</p>
</li>
<li><p><code>\b</code> – Backspace: Moves the cursor back one character, so the following character overwrites it.</p>
</li>
<li><p><code>\\</code> – Backslash: Prints a backslash.</p>
</li>
</ol>
<p>Example:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> -e <span class="hljs-string">"Hello World\nThis is a new line."</span>
<span class="hljs-comment"># Hello World</span>
<span class="hljs-comment"># This is a new line.</span>

<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"Hello World\tThis is tabbed."</span>
<span class="hljs-comment"># Hello World    This is tabbed.</span>

<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"Hello\vWorld\vThis is vertically spaced."</span>
<span class="hljs-comment"># Hello</span>
<span class="hljs-comment">#       World</span>
<span class="hljs-comment">#             This is vertically spaced.</span>

<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"This is a backslash: \\"</span>
<span class="hljs-comment"># This is a backslash: \</span>
</code></pre>
<p><strong>2. whoami</strong></p>
<p>The <code>whoami</code> command is used to display the username of the currently logged-in user.</p>
<p><code>whoami</code> syntax:</p>
<pre><code class="lang-bash">whoami
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">whoami <span class="hljs-comment">#tanishkamakode</span>
</code></pre>
<p><strong>3.</strong> <code>pwd</code></p>
<p>The <code>pwd</code> command is used to display the current working directory.</p>
<p><code>pwd</code> syntax:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">pwd</span> [OPTION]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">pwd</span> <span class="hljs-comment"># /home/tanishkamakode</span>
<span class="hljs-built_in">pwd</span> -L <span class="hljs-comment"># Displays the logical working directory, keeping any symlinks (shortcut paths) as typed</span>
<span class="hljs-built_in">pwd</span> -P <span class="hljs-comment"># Displays the physical working directory, resolving any symlinks to the real path</span>
</code></pre>
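<p>To see the difference between <code>-L</code> and <code>-P</code>, try running them through a symlink (the paths below are just a demo; the exact <code>-P</code> output may differ if <code>/tmp</code> itself is a symlink on your system):</p>
<pre><code class="lang-bash">mkdir -p /tmp/pwd-demo/real
ln -sfn /tmp/pwd-demo/real /tmp/pwd-demo/link
cd /tmp/pwd-demo/link
pwd -L   # /tmp/pwd-demo/link (the path as you typed it)
pwd -P   # .../pwd-demo/real  (the symlink resolved to the real directory)
</code></pre>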
<p><strong>4.</strong> <code>ls</code></p>
<p>The <code>ls</code> command is used to list the files and directories in the current working directory or specified directory.</p>
<p><code>ls</code> syntax:</p>
<pre><code class="lang-bash">ls [OPTION] [PATH]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">ls <span class="hljs-comment"># Lists files and directories at current working directory</span>
</code></pre>
<p>Here are some options you can use with <code>ls</code>:</p>
<ul>
<li><p><code>ls -l</code>: Lists detailed information about files and directories</p>
</li>
<li><p><code>ls -lh</code>: Lists detailed information about files and directories with size in human readable format</p>
</li>
<li><p><code>ls -a</code>: Lists all hidden files</p>
</li>
<li><p><code>ls -R</code>: Lists directory contents recursively</p>
</li>
</ul>
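<p>A quick demo of the most common options (the file names are arbitrary):</p>
<pre><code class="lang-bash">mkdir ls-demo
cd ls-demo
touch visible.txt .hidden.txt
ls       # visible.txt
ls -a    # .  ..  .hidden.txt  visible.txt
ls -lh   # long listing, with sizes like 4.0K instead of raw byte counts
</code></pre>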
<p><strong>5.</strong> <code>date</code></p>
<p>The <code>date</code> command is used to display or set the system date and time.</p>
<p><code>date</code> syntax:</p>
<pre><code class="lang-bash">date [OPTION] [FORMAT_SPECIFIER]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">date <span class="hljs-comment"># Displays current date and time</span>
date +<span class="hljs-string">"%d/%m/%Y"</span> <span class="hljs-comment"># Displays date month year</span>
date +<span class="hljs-string">"%H:%M:%S"</span> <span class="hljs-comment"># Displays hours minutes seconds</span>
date -u <span class="hljs-comment"># Displays date in UTC time</span>
date --<span class="hljs-built_in">set</span> <span class="hljs-string">"2024-06-05"</span> <span class="hljs-comment"># Sets the system date to the given YYYY-MM-DD (requires root)</span>
date -d <span class="hljs-string">"yesterday"</span> <span class="hljs-comment"># Displays yesterday's date</span>
date -d <span class="hljs-string">"tomorrow"</span> <span class="hljs-comment"># Displays tomorrow's date</span>
date -d <span class="hljs-string">"7 days"</span> <span class="hljs-comment"># Displays date of 7 days from today</span>
</code></pre>
<p>Options you can use with the date command:</p>
<ol>
<li><p><code>-u</code>: Displays date and time in UTC.</p>
</li>
<li><p><code>-d</code>: Displays the date/time described by a string (e.g., "yesterday", "7 days ago") without changing the system clock.</p>
</li>
<li><p><code>%d</code>: Day of the month (01 to 31).</p>
</li>
<li><p><code>%m</code>: Month of the year (01 to 12).</p>
</li>
<li><p><code>%y</code>: Last two digits of the year (00 to 99).</p>
</li>
<li><p><code>%Y</code>: Full year (for example, 2025).</p>
</li>
</ol>
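<p>Combining <code>-d</code> with a format specifier gives predictable output, which is handy in scripts. The examples below use GNU <code>date</code>, as found on RHEL and other Linux systems:</p>
<pre><code class="lang-bash">date -u -d "2024-06-05 13:45:00" +"%d/%m/%Y"   # 05/06/2024
date -u -d "2024-06-05 13:45:00" +"%H:%M:%S"   # 13:45:00
date -u -d "2024-06-05 + 7 days" +"%d/%m/%Y"   # 12/06/2024
</code></pre>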
<p><strong>6.</strong> <code>cal</code></p>
<p>The <code>cal</code> command is used to display calendar details. If no options are given, the current month is displayed.</p>
<p><code>cal</code> syntax:</p>
<pre><code class="lang-bash">cal [OPTIONS]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">cal <span class="hljs-comment"># Displays current month calendar</span>
cal --highlight <span class="hljs-comment"># Highlights current date</span>
</code></pre>
<p>Options you can use with the <code>cal</code> command:</p>
<ol>
<li><p><code>--highlight</code>: Highlights the current date in the calendar.</p>
</li>
<li><p><code>-3</code>: Displays the previous, current, and next month.</p>
</li>
<li><p><code>-m</code>: Uses Monday as the first day of the week.</p>
</li>
<li><p><code>-y</code>: Displays the calendar for the entire year.</p>
</li>
<li><p><code>-A [N]</code>: Displays N months ahead of the current month.</p>
</li>
<li><p><code>-B [N]</code>: Displays N months before the current month.</p>
</li>
<li><p><code>cal [year]</code>: Displays the calendar for the entire year.</p>
</li>
<li><p><code>cal [month] [year]</code>: Displays the calendar for the specified month and year.</p>
</li>
</ol>
<pre><code class="lang-bash">cal 2024 <span class="hljs-comment"># Displays the calendar for all months of 2024</span>
cal 06 2024 <span class="hljs-comment"># Displays the calendar for the 6th month (June) of 2024</span>
cal -m <span class="hljs-comment"># Displays the current month with Monday as the first day of the week</span>
cal -A 3 <span class="hljs-comment"># Displays 3 months ahead of the current month</span>
cal -B 2 <span class="hljs-comment"># Displays 2 months before the current month</span>
</code></pre>
<p><strong>7.</strong> <code>nl</code></p>
<p>The <code>nl</code> command is used to add line numbers to file content.</p>
<p><code>nl</code> syntax:</p>
<pre><code class="lang-bash">nl [OPTIONS] [FILENAME]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">nl file.txt <span class="hljs-comment"># Displays file content with line numbers</span>
nl -b a file.txt <span class="hljs-comment"># Numbers all lines</span>
nl -b t file.txt <span class="hljs-comment"># Numbers non-empty lines only</span>
nl -s <span class="hljs-string">') '</span> file.txt <span class="hljs-comment"># Adds a separator between the line number and the content -</span>
<span class="hljs-comment"># 1) First line</span>
<span class="hljs-comment"># 2) Second line</span>
</code></pre>
<p>As you can see in the code above, there are other options you can use with the <code>nl</code> command, too.</p>
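<p>A few more GNU coreutils <code>nl</code> options worth knowing (a hedged sketch; flags may differ on non-GNU systems):</p>

```shell
printf 'alpha\nbeta\n' > demo.txt
nl -w 3 -n rz demo.txt   # Right-aligned, zero-padded 3-digit numbers (001, 002, ...)
nl -v 10 -i 5 demo.txt   # Starts numbering at 10 and increments by 5 (10, 15, ...)
rm demo.txt
```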
<h3 id="heading-file-creation-and-handling-commands">File Creation and Handling Commands</h3>
<p><strong>1.</strong> <code>touch</code></p>
<p>The <code>touch</code> command is used to create an empty file or update the last modified time if a file exists.</p>
<p><code>touch</code> syntax:</p>
<pre><code class="lang-bash">touch [OPTIONS] [FILENAME]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">touch file.txt <span class="hljs-comment"># Creates a single file - file.txt</span>
touch file1.txt file2.txt file3.txt <span class="hljs-comment"># Creates multiple files</span>
touch file{1..10}.txt <span class="hljs-comment"># Creates files with given range names (file1.txt file2.txt up to file10.txt)</span>
</code></pre>
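<p><code>touch</code> can also set timestamps explicitly. A hedged sketch using GNU <code>touch</code> (the <code>-t</code> argument format is <code>[[CC]YY]MMDDhhmm[.ss]</code>):</p>

```shell
touch -t 202501011230 old.txt   # Sets access and modification time to 2025-01-01 12:30
touch -r old.txt copy.txt       # Gives copy.txt the same timestamps as old.txt
touch -a old.txt                # Updates the access time only (to now)
touch -m old.txt                # Updates the modification time only (to now)
```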
<p><strong>2.</strong> <code>cat</code></p>
<p>The <code>cat</code> command concatenates files and also displays the content of files.</p>
<p><code>cat</code> syntax:</p>
<pre><code class="lang-bash">cat [OPTIONS] [FILENAME]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">cat file.txt <span class="hljs-comment"># Displays content of file.txt</span>
cat file1.txt file2.txt &gt; merged.txt <span class="hljs-comment"># Overwrites merged.txt with the combined content of both files</span>
cat file1.txt &gt;&gt; file2.txt <span class="hljs-comment"># Appends content of first file to second file</span>
cat -n file.txt <span class="hljs-comment"># Displays the content along with line numbers</span>

cat &gt; file.txt OR cat &gt;&gt; file.txt <span class="hljs-comment"># &gt; for overwriting, &gt;&gt; for appending</span>
<span class="hljs-comment"># This allows you to create a new file with a prompt to enter the content</span>
<span class="hljs-comment"># If the file already exists, the terminal will read the content you enter.</span>
<span class="hljs-comment"># Once you’re done writing content,</span>
<span class="hljs-comment"># press Ctrl + D (end-of-file) on a new line.</span>
<span class="hljs-comment"># Ctrl + C also works, but make sure you press it on a new line,</span>
<span class="hljs-comment"># or else the current line's content will not be appended to the file.</span>
</code></pre>
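<p>A few display-oriented <code>cat</code> options (a hedged sketch, assuming GNU <code>cat</code>):</p>

```shell
printf 'a\n\n\n\nb\tc\n' > file.txt
cat -E file.txt   # Marks the end of each line with $
cat -s file.txt   # Squeezes runs of blank lines into a single blank line
cat -T file.txt   # Shows tab characters as ^I
```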
<h3 id="heading-directory-creation-and-handling">Directory Creation and Handling</h3>
<p><strong>1.</strong> <code>mkdir</code></p>
<p>The <code>mkdir</code> command is used to create a directory.</p>
<p><code>mkdir</code> syntax:</p>
<pre><code class="lang-bash">mkdir [OPTIONS] [DIRECTORYNAME]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">mkdir folder <span class="hljs-comment"># Creates a single directory</span>
mkdir fol1 fol2 fol3 <span class="hljs-comment"># Creates multiple directories</span>
mkdir fol{1..10} <span class="hljs-comment"># Creates directories with given range names</span>
mkdir -p /myData/data <span class="hljs-comment"># Creates nested directories</span>
ls -R /myData <span class="hljs-comment"># Verifies that the nested directories were created</span>
mkdir -v fol1 fol2 fol3 <span class="hljs-comment"># Verbose mode, i.e., prints a confirmation for each directory created</span>
</code></pre>
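<p>One more option worth knowing (a hedged sketch; <code>-m</code> is standard POSIX <code>mkdir</code>): you can set permissions at creation time instead of running <code>chmod</code> afterwards.</p>

```shell
mkdir -m 700 private   # Creates the directory with permissions 700 (owner-only access)
```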
<p><strong>2.</strong> <code>cd</code></p>
<p>The <code>cd</code> command is used to change the directory – that is, to navigate between directories.</p>
<p><code>cd</code> syntax:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> [DIRECTORY]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> myFolder <span class="hljs-comment"># Relative path, starting from current directroy</span>
<span class="hljs-built_in">cd</span> /home/myFolder <span class="hljs-comment"># Absolute path, starting from root /</span>
<span class="hljs-built_in">cd</span> .. <span class="hljs-comment"># Goes to one level above current directory</span>
<span class="hljs-built_in">cd</span> ../.. <span class="hljs-comment"># Goes to two level above current directory</span>
<span class="hljs-built_in">cd</span> OR <span class="hljs-built_in">cd</span> ~ <span class="hljs-comment"># Goes to home directory</span>
<span class="hljs-built_in">cd</span> - <span class="hljs-comment"># Switched to directory you were in previously</span>
</code></pre>
<h3 id="heading-copy-move-and-remove-files-and-directories">Copy, Move, and Remove Files and Directories</h3>
<p><strong>1.</strong> <code>cp</code></p>
<p>The <code>cp</code> command is used to copy files and directories from one location to another.</p>
<p><code>cp</code> syntax:</p>
<pre><code class="lang-bash">cp [OPTIONS] [SOURCE] [DESTINATION]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">cp myFile.txt /home/newFolder <span class="hljs-comment"># Copies myFile.txt to newFolder</span>
cp myFile1.txt myFile2.txt /home/newFolder <span class="hljs-comment"># Copies multiple files to newFolder</span>
cp -r oldData /home/newData <span class="hljs-comment"># Recursively copies content of oldData directory to newData directory</span>
cp -i file.txt /home/Folder <span class="hljs-comment"># Asks for confirmation before overwriting a file.txt that already exists in Folder</span>
cp -v oldData /home/newData <span class="hljs-comment"># Verbose output, i.e., confirmation of copying the directory</span>
cp -f file.txt /newFolder <span class="hljs-comment"># Copies the file forcefully, removing an unwritable destination file if needed</span>
</code></pre>
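<p>Two further <code>cp</code> options, shown as a hedged sketch (both are in GNU <code>cp</code>; <code>-u</code> is a GNU extension):</p>

```shell
cp -p file.txt backup.txt   # Preserves mode, ownership, and timestamps
cp -u file.txt backup.txt   # Copies only if the source is newer than the destination
```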
<p><strong>2.</strong> <code>mv</code></p>
<p>The <code>mv</code> command is used to move files and directories from one location to another. It is also used to rename a file or a directory.</p>
<p><code>mv</code> syntax:</p>
<pre><code class="lang-bash">mv [OPTIONS] [SOURCE] [DESTINATION]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">mv myFile.txt /home/newFolder <span class="hljs-comment"># Moves myFile to newFolder</span>
mv myFile1.txt myFile2.txt /home/newFolder <span class="hljs-comment"># Moves multiple files to newFolder</span>
mv -i file.txt /home/sample <span class="hljs-comment"># Asks before overwriting a file.txt that already exists in sample</span>
mv -v oldData /home/sample <span class="hljs-comment"># Verbose mode i.e. confirmation of moving directory oldData in sample directory</span>
mv oldFile.txt newFile.txt <span class="hljs-comment"># Renames the file</span>
</code></pre>
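<p>A related option (a hedged sketch; <code>-n</code> is supported by GNU <code>mv</code>) is no-clobber mode, the opposite of <code>-i</code>:</p>

```shell
mv -n file.txt /home/sample   # Never overwrites an existing destination file
```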
<p><strong>3.</strong> <code>rm</code></p>
<p>The <code>rm</code> command is used to remove files and directories.</p>
<p><code>rm</code> syntax:</p>
<pre><code class="lang-bash">rm [OPTIONS] [FILENAME OR DIRECTORYNAME]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">rm file.txt <span class="hljs-comment"># Removes single file</span>
rm file1.txt file2.txt <span class="hljs-comment"># Removes multiple files</span>
rm emptyDir <span class="hljs-comment"># Won't work on a directory, even an empty one (the next command handles this case!)</span>
rm -r myData <span class="hljs-comment"># Removes a non-empty directory recursively</span>
rm -r -i myData <span class="hljs-comment"># Asks for confirmation before deleting each file</span>
rm -r -f myData <span class="hljs-comment"># Removes a non-empty directory recursively without confirmation</span>
rm -r -f -v myData <span class="hljs-comment"># Removes a non-empty directory recursively without confirmation in verbose mode</span>
</code></pre>
<p><strong>4.</strong> <code>rmdir</code></p>
<p>The <code>rmdir</code> command is used to remove empty directories only.</p>
<p><code>rmdir</code> syntax:</p>
<pre><code class="lang-bash">rmdir [OPTIONS] [DIRECTORYNAME]
</code></pre>
<p>Example:</p>
<pre><code class="lang-bash">rmdir emptyDir <span class="hljs-comment"># Removes empty directory emptyDir</span>
rmdir myDir1 myDir2 myDir3 <span class="hljs-comment"># Removes multiple empty directories</span>
rmdir myDir <span class="hljs-comment"># Won't work for non-empty directories (Use 'rm -r dir_name' for this case!)</span>
</code></pre>
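<p><code>rmdir</code> can also clean up a whole chain of empty directories with <code>-p</code>, mirroring <code>mkdir -p</code>:</p>

```shell
mkdir -p a/b/c   # Creates a nested tree of empty directories
rmdir -p a/b/c   # Removes c, then b, then a (each must be empty)
```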
<h2 id="heading-final-words">Final Words</h2>
<p>Congratulations! You've successfully learned the basics of Red Hat Enterprise Linux (RHEL) and the essential commands that form the foundation of Linux systems.</p>
<p>Keep practicing these commands, and soon they'll become second nature to you. Mastery comes with repetition, so continue experimenting and applying these fundamentals in real-world scenarios.</p>
<p>Stay tuned for more articles. Get ready to take your RHEL skills to the next level.</p>
<p><a target="_blank" href="https://linktr.ee/tanishkamakode">Let’s connect!</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build a Simple Secure Chat System Using Netcat ]]>
                </title>
                <description>
                    <![CDATA[ In this hands-on tutorial, you'll learn how to harness the power of Netcat to build practical networking tools. We’ll start with basic message transmission. Then you'll progress to creating a file transfer system, and you’ll ultimately develop a secu... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-a-simple-secure-chat-system-with-netcat/</link>
                <guid isPermaLink="false">671a491036e11cfa5033debb</guid>
                
                    <category>
                        <![CDATA[ cybersecurity ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Netcat ]]>
                    </category>
                
                    <category>
                        <![CDATA[ shell ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Ubuntu ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Hang Hu ]]>
                </dc:creator>
                <pubDate>Thu, 24 Oct 2024 13:18:08 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729729356682/acf8ca42-3aaa-4ca1-9ebc-10f0f658c678.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In this hands-on tutorial, you'll learn how to harness the power of Netcat to build practical networking tools.</p>
<p>We’ll start with basic message transmission. Then you'll progress to creating a file transfer system, and you’ll ultimately develop a secure chat application with encryption.</p>
<p>Here’s what we’ll cover:</p>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-install-netcat">Install Netcat</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-your-first-network-connection">Your First Network Connection</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-build-a-simple-file-transfer-tool">How to Build a Simple File Transfer Tool</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-create-a-secure-chat-system">How to Create a Secure Chat System</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-practice-your-skills">Practice Your Skills</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we start, you'll need:</p>
<ul>
<li><p>A Linux-based system: I recommend Ubuntu. Alternatively, you can use the <a target="_blank" href="https://labex.io/tutorials/linux-online-linux-playground-372915">Online Linux Terminal</a> if you don't have Linux installed.</p>
</li>
<li><p>Basic terminal knowledge (how to use <code>cd</code> and <code>ls</code>)</p>
</li>
</ul>
<p>Don't worry if you're new to networking - I’ll explain everything as we go!</p>
<h2 id="heading-install-netcat"><strong>Install Netcat</strong></h2>
<p><a target="_blank" href="https://nc110.sourceforge.io/">Netcat</a> is like a digital "pipe" between computers – anything you put in one end comes out the other. Before we start using it, let's get it installed on your system.</p>
<p>Open your terminal and run these commands:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Update your system's package list</span>
sudo apt update

<span class="hljs-comment"># Install Netcat</span>
sudo apt install netcat -y <span class="hljs-comment"># On newer Ubuntu releases, use: sudo apt install netcat-openbsd -y</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729583277378/d5fb06c5-3163-4885-b163-2cdde4fa434b.png" alt="Update your system's package list" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>To check if the installation worked, run:</p>
<pre><code class="lang-bash">nc -h
</code></pre>
<p>You should see a message starting with "OpenBSD netcat". If you do, great! If not, try running the installation commands again.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729583329155/812f32b9-2ca7-41f6-aead-07ffacbf161c.png" alt="check if the installation worked" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-your-first-network-connection"><strong>Your First Network Connection</strong></h2>
<p>Before we dive into building tools, let's understand what a network connection actually is. Think of it like a phone call: one person needs to wait for the call (the listener), and another person needs to make the call (the connector).</p>
<p>In networking, we use "ports" to make these connections. You can think of ports like different phone lines – they let multiple conversations happen at the same time.</p>
<p>Let's try making our first connection:</p>
<ol>
<li>Open a terminal window and create a listener:</li>
</ol>
<pre><code class="lang-bash">nc -l 12345
</code></pre>
<p>What did we just do? The <code>-l</code> tells Netcat to "listen" for a connection, and <code>12345</code> is the port number we chose. Your terminal will look like it's frozen – that's normal! It's waiting for someone to connect.</p>
<ol start="2">
<li>Open another terminal window and connect to your listener:</li>
</ol>
<pre><code class="lang-bash">nc localhost 12345
</code></pre>
<p>Here, <code>localhost</code> means "this computer" – we're connecting to ourselves for practice. If you want to connect to another computer, you can replace <code>localhost</code> with its IP address.</p>
<p>Now try typing a message (like "hi") in either window and press Enter. Cool, right? The message appears in the other window! This is exactly how basic network communication works.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729583496294/da70a4bc-5626-493c-8a52-385b708593f4.png" alt="making our first connection" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>To stop the connection, press <code>Ctrl+C</code> in both windows.</p>
<h3 id="heading-what-just-happened"><strong>What Just Happened?</strong></h3>
<p>You just created your first network connection! The first terminal was like someone waiting by a phone, and the second terminal was like someone calling that phone. When they connected, they could send messages back and forth.</p>
<h2 id="heading-how-to-build-a-simple-file-transfer-tool"><strong>How to Build a Simple File Transfer Tool</strong></h2>
<p>Now that we understand basic connections, let's build something more useful: a tool to transfer files between computers.</p>
<p>First, let's create a test file to send:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Create a file with some content</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"This is my secret message"</span> &gt; secret.txt
</code></pre>
<p>To transfer this file, we'll need two terminals again, but this time we'll use them differently:</p>
<ol>
<li>In the first terminal, set up the receiver:</li>
</ol>
<pre><code class="lang-bash">nc -l 12345 &gt; received_file.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729583969994/d3159a2f-37f7-4cab-a23b-3788ed85876f.png" alt="transfer file" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This tells Netcat to:</p>
<ul>
<li><p>Listen for a connection (<code>-l</code>)</p>
</li>
<li><p>Save whatever it receives to a file called <code>received_file.txt</code> (<code>&gt;</code>)</p>
</li>
</ul>
<ol start="2">
<li>In the second terminal, send the file:</li>
</ol>
<pre><code class="lang-bash">nc localhost 12345 &lt; secret.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729583998676/ab865ded-626a-47aa-9df6-83e56aa5f5d1.png" alt="send the file" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>The <code>&lt;</code> tells Netcat to send the contents of our file.</p>
<ol start="3">
<li>Press Ctrl+C in both terminals to stop the transfer. Then check if it worked:</li>
</ol>
<pre><code class="lang-bash">cat received_file.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729584030360/32fa9916-ffdb-49da-abcf-57569c9b2f79.png" alt="received file" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You should see your message!</p>
<p>This is similar to our chat system, but instead of typing messages, we're:</p>
<ol>
<li><p>Taking content from a file</p>
</li>
<li><p>Sending it through our network connection</p>
</li>
<li><p>Saving it to a new file on the other end</p>
</li>
</ol>
<p>Think of it like sending a document through a fax machine!</p>
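<p>One practical tip (plain coreutils, nothing Netcat-specific): after a transfer, compare checksums on both ends to confirm the file arrived intact.</p>

```shell
sha256sum secret.txt          # Run this on the sending machine
sha256sum received_file.txt   # Run this on the receiver; the two hashes should match
```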
<h2 id="heading-how-to-create-a-secure-chat-system"><strong>How to Create a Secure Chat System</strong></h2>
<p>Our previous examples sent everything as plain text – anyone could read it if they intercepted the connection. Let's make something more secure by adding encryption.</p>
<p>First, let's understand what encryption does:</p>
<ul>
<li><p>It's like putting your message in a locked box</p>
</li>
<li><p>Only someone with the right key can open it</p>
</li>
<li><p>Even if someone sees the box, they can't read your message</p>
</li>
</ul>
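<p>Before wiring encryption into a chat script, you can try the same OpenSSL pipeline on its own (a hedged sketch using the same flags as the scripts in this section; <code>chatpassword</code> is just an example password):</p>

```shell
# Encrypt a message with a password-derived key (AES-256-CBC, base64-encoded output)
echo "hello" | openssl enc -aes-256-cbc -salt -base64 -pbkdf2 \
    -pass pass:chatpassword > msg.enc

# Decrypt it with the same password; prints the original message
openssl enc -aes-256-cbc -d -salt -base64 -pbkdf2 \
    -pass pass:chatpassword < msg.enc
```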
<p>We'll create two scripts: one for sending messages and one for receiving them.</p>
<ol>
<li>Create the sender script:</li>
</ol>
<pre><code class="lang-bash">nano secure_sender.sh
</code></pre>
<p>Copy this code into the file:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Secure Chat - Type your messages below"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Press Ctrl+C to exit"</span>

<span class="hljs-keyword">while</span> <span class="hljs-literal">true</span>; <span class="hljs-keyword">do</span>
  <span class="hljs-comment"># Get the message</span>
  <span class="hljs-built_in">read</span> message

  <span class="hljs-comment"># Encrypt and send it</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$message</span>"</span> | openssl enc -aes-256-cbc -salt -base64 \
    -pbkdf2 -pass pass:chatpassword 2&gt;/dev/null | \
    nc -N localhost 12345
<span class="hljs-keyword">done</span>
</code></pre>
<p>This script will:</p>
<ol>
<li><p>Read messages from user input.</p>
</li>
<li><p>Encrypt them using OpenSSL's AES-256-CBC encryption (a strong encryption standard).</p>
</li>
<li><p>Send the encrypted message to the specified port.</p>
</li>
</ol>
<p>Press Ctrl+X, then Y, then Enter to save.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729584825701/bcd3c3fb-5cd5-40f7-8105-14d1fff069ec.png" alt="Create the sender script" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<ol start="2">
<li>Create the receiver script:</li>
</ol>
<pre><code class="lang-bash">nano secure_receiver.sh
</code></pre>
<p>Copy this code:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Waiting for messages..."</span>

<span class="hljs-keyword">while</span> <span class="hljs-literal">true</span>; <span class="hljs-keyword">do</span>
  <span class="hljs-comment"># Receive and decrypt messages</span>
  nc -l 12345 | openssl enc -aes-256-cbc -d -salt -base64 \
    -pbkdf2 -pass pass:chatpassword 2&gt;/dev/null
<span class="hljs-keyword">done</span>
</code></pre>
<p>This script will:</p>
<ol>
<li><p>Listen for incoming encrypted messages.</p>
</li>
<li><p>Decrypt them using the same encryption key.</p>
</li>
<li><p>Display the decrypted messages.</p>
</li>
</ol>
<p>Save this file too.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729584787785/c1e329ad-cbbe-49e8-918f-25d683884972.png" alt="Create the receiver script" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<ol start="3">
<li>Make both scripts executable:</li>
</ol>
<pre><code class="lang-bash">chmod +x secure_sender.sh secure_receiver.sh
</code></pre>
<ol start="4">
<li>Try it out:</li>
</ol>
<ul>
<li><p>In one terminal: <code>./secure_receiver.sh</code></p>
</li>
<li><p>In another terminal: <code>./secure_sender.sh</code></p>
</li>
</ul>
<p>Type a message in the sender terminal. The receiver will show your decrypted message!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729584834464/1f902a94-3a79-404b-bc9f-af9d8d2471e0.png" alt="Type a message in the sender terminal" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-enhancing-our-chat-system"><strong>Enhancing Our Chat System</strong></h3>
<p>Now that we have a working basic chat system, let's make it more user-friendly and informative. We'll add features like timestamps, color-coded messages, and encryption status updates. This enhanced version will help you better understand what's happening during the encryption and transmission process.</p>
<p>If you're comfortable with the basic version, try this improved version:</p>
<ol>
<li>Create an enhanced sender script (save it as <code>secure_sender_v2.sh</code>):</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment"># Set up color codes for better visibility</span>
GREEN=<span class="hljs-string">'\033[0;32m'</span>
BLUE=<span class="hljs-string">'\033[0;34m'</span>
NC=<span class="hljs-string">'\033[0m'</span> <span class="hljs-comment"># No Color</span>

<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${GREEN}</span>Secure Chat Sender - Started at <span class="hljs-subst">$(date)</span><span class="hljs-variable">${NC}</span>"</span>
<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${BLUE}</span>Type your messages below. Press Ctrl+C to exit<span class="hljs-variable">${NC}</span>"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"----------------------------------------"</span>

<span class="hljs-keyword">while</span> <span class="hljs-literal">true</span>; <span class="hljs-keyword">do</span>
    <span class="hljs-comment"># Show prompt with timestamp</span>
    <span class="hljs-built_in">echo</span> -ne <span class="hljs-string">"<span class="hljs-variable">${GREEN}</span>[<span class="hljs-subst">$(date +%H:%M:%S)</span>]<span class="hljs-variable">${NC}</span> Your message: "</span>

    <span class="hljs-comment"># Get the message</span>
    <span class="hljs-built_in">read</span> message

    <span class="hljs-comment"># Skip if message is empty</span>
    <span class="hljs-keyword">if</span> [ -z <span class="hljs-string">"<span class="hljs-variable">$message</span>"</span> ]; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">continue</span>
    <span class="hljs-keyword">fi</span>

    <span class="hljs-comment"># Add timestamp to message</span>
    timestamped_message=<span class="hljs-string">"[<span class="hljs-subst">$(date +%H:%M:%S)</span>] <span class="hljs-variable">$message</span>"</span>

    <span class="hljs-comment"># Show encryption status</span>
    <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${BLUE}</span>Encrypting and sending message...<span class="hljs-variable">${NC}</span>"</span>

    <span class="hljs-comment"># Encrypt and send it, showing the encrypted form</span>
    encrypted=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$timestamped_message</span>"</span> | openssl enc -aes-256-cbc -salt -base64 \
        -pbkdf2 -iter 10000 -pass pass:chatpassword 2&gt;/dev/null)

    <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${BLUE}</span>Encrypted form:<span class="hljs-variable">${NC}</span> <span class="hljs-variable">${encrypted:0:50}</span>..."</span> <span class="hljs-comment"># Show first 50 chars</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$encrypted</span>"</span> | nc -N localhost 12345

    <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${GREEN}</span>Message sent successfully!<span class="hljs-variable">${NC}</span>"</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"----------------------------------------"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<ol start="2">
<li>Create an enhanced receiver script (save as <code>secure_receiver_v2.sh</code>):</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment"># Set up color codes for better visibility</span>
GREEN=<span class="hljs-string">'\033[0;32m'</span>
BLUE=<span class="hljs-string">'\033[0;34m'</span>
YELLOW=<span class="hljs-string">'\033[1;33m'</span>
NC=<span class="hljs-string">'\033[0m'</span> <span class="hljs-comment"># No Color</span>

<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${GREEN}</span>Secure Chat Receiver - Started at <span class="hljs-subst">$(date)</span><span class="hljs-variable">${NC}</span>"</span>
<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${BLUE}</span>Waiting for messages... Press Ctrl+C to exit<span class="hljs-variable">${NC}</span>"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"----------------------------------------"</span>

<span class="hljs-keyword">while</span> <span class="hljs-literal">true</span>; <span class="hljs-keyword">do</span>
    <span class="hljs-comment"># Receive and show the encrypted message</span>
    <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${BLUE}</span>Waiting for next message...<span class="hljs-variable">${NC}</span>"</span>

    encrypted=$(nc -l 12345)

    <span class="hljs-comment"># Skip if received nothing</span>
    <span class="hljs-keyword">if</span> [ -z <span class="hljs-string">"<span class="hljs-variable">$encrypted</span>"</span> ]; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">continue</span>
    <span class="hljs-keyword">fi</span>

    <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${YELLOW}</span>Received encrypted message:<span class="hljs-variable">${NC}</span> <span class="hljs-variable">${encrypted:0:50}</span>..."</span> <span class="hljs-comment"># Show first 50 chars</span>
    <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${BLUE}</span>Decrypting...<span class="hljs-variable">${NC}</span>"</span>

    <span class="hljs-comment"># Decrypt and display the message</span>
    decrypted=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$encrypted</span>"</span> | openssl enc -aes-256-cbc -d -salt -base64 \
        -pbkdf2 -iter 10000 -pass pass:chatpassword 2&gt;/dev/null)

    <span class="hljs-comment"># Check if decryption was successful</span>
    <span class="hljs-keyword">if</span> [ $? -eq 0 ]; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"<span class="hljs-variable">${GREEN}</span>Decrypted message:<span class="hljs-variable">${NC}</span> <span class="hljs-variable">$decrypted</span>"</span>
    <span class="hljs-keyword">else</span>
        <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"\033[0;31mError: Failed to decrypt message<span class="hljs-variable">${NC}</span>"</span>
    <span class="hljs-keyword">fi</span>

    <span class="hljs-built_in">echo</span> <span class="hljs-string">"----------------------------------------"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<ol start="3">
<li>Make the enhanced scripts executable:</li>
</ol>
<pre><code class="lang-bash">chmod +x secure_sender_v2.sh secure_receiver_v2.sh
</code></pre>
<p>Try running both versions to see how the additional feedback helps you better understand the encryption and communication process.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729585592733/01d25154-37a9-4c14-95ac-410c513956b9.png" alt="Enhancing Our Chat System" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>The enhanced version (v2) adds several improvements:</p>
<ul>
<li><p>Colorized output for better readability.</p>
</li>
<li><p>Timestamps for each message.</p>
</li>
<li><p>Status updates showing the encryption/decryption process.</p>
</li>
<li><p>Error handling for failed decryption attempts.</p>
</li>
<li><p>Preview of encrypted messages before sending/after receiving.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This tutorial taught you how to use Netcat as a versatile networking tool. We started with basic message sending, progressed to building a simple file transfer system, and then created a secure chat system with encryption.</p>
<p>You've gained hands-on experience with:</p>
<ul>
<li><p>Setting up network listeners and connections</p>
</li>
<li><p>Transferring files securely between systems</p>
</li>
<li><p>Implementing basic encryption for secure communication</p>
</li>
<li><p>Adding user-friendly features like timestamps and status updates</p>
</li>
</ul>
<p>The skills you've learned here form a solid foundation for understanding network communication and can be applied to more complex networking projects. To practice the operations from this tutorial, try <a target="_blank" href="https://labex.io/labs/linux-using-netcat-for-simple-network-communication-392039">the interactive hands-on lab</a>.</p>
<h2 id="heading-practice-your-skills"><strong>Practice Your Skills</strong></h2>
<p>Now that you've learned the basics of Netcat and built a secure chat system, let's put your skills to the test with a real-world scenario. Try the "<a target="_blank" href="https://labex.io/labs/linux-receive-messages-using-netcat-392102"><strong>Receive Messages Using Netcat</strong></a>" lab challenge where you'll play the role of a junior interstellar communications analyst. Your mission: intercept and log signals from an alien civilization using your newfound Netcat knowledge.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Self-host a Container Registry ]]>
                </title>
                <description>
                    <![CDATA[ A container registry is a storage catalog from where you can push and pull container images. There are many public and private registries available to developers such as Docker Hub, Amazon ECR, and Google Cloud Artifact Registry. But sometimes, inste... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-self-host-a-container-registry/</link>
                <guid isPermaLink="false">670ea63e203bba3017cc96ff</guid>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ containers ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ nginx ]]>
                    </category>
                
                    <category>
                        <![CDATA[ SSL ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Alex Pliutau ]]>
                </dc:creator>
                <pubDate>Tue, 15 Oct 2024 17:28:30 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728918386211/cf6fd053-453e-4257-abcd-16942c345845.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>A container registry is a storage catalog from where you can push and pull container images.</p>
<p>There are many public and private registries available to developers such as <a target="_blank" href="https://hub.docker.com/">Docker Hub</a>, <a target="_blank" href="https://aws.amazon.com/ecr/">Amazon ECR</a>, and <a target="_blank" href="https://cloud.google.com/artifact-registry/docs">Google Cloud Artifact Registry</a>. But sometimes, instead of relying on an external vendor, you might want to host your images yourself. This gives you more control over how the registry is configured and where the container images are hosted.</p>
<p>This article is a hands-on tutorial that’ll teach you how to self-host a Container Registry.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-what-is-a-container-image">What is a Container Image?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-a-container-registry">What is a Container Registry?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-why-you-might-want-to-self-host-a-container-registry">Why you might want to self-host a Container Registry</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-self-host-a-container-registry">How to self-host a Container Registry</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-1-install-docker-and-docker-compose-on-the-server">Step 1: Install Docker and Docker Compose on the server</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-configure-and-run-the-registry-container">Step 2: Configure and run the registry container</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-3-run-nginx-for-handling-tls">Step 3: Run NGINX for handling TLS</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-ready-to-go">Ready to go!</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-other-options">Other options</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<p>You will get the most out of this article if you’re already familiar with tools like Docker and NGINX, and have a general understanding of what a container is.</p>
<h2 id="heading-what-is-a-container-image">What is a Container Image?</h2>
<p>Before we talk about container registries, let's first understand what a container image is. In a nutshell, a container image is a package that includes all of the files, libraries, and configuration needed to run a container. Images are composed of <a target="_blank" href="https://docs.docker.com/get-started/docker-concepts/building-images/understanding-image-layers/">layers</a>, where each layer represents a set of file system changes that add, remove, or modify files.</p>
<p>The most common way to create a container image is to use a <strong>Dockerfile</strong>.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># build an image</span>
docker build -t pliutau/hello-world:v0 .

<span class="hljs-comment"># check the images locally</span>
docker images
<span class="hljs-comment"># REPOSITORY            TAG   IMAGE ID       CREATED          SIZE</span>
<span class="hljs-comment"># pliutau/hello-world   v0    9facd12bbcdd   22 seconds ago   11MB</span>
</code></pre>
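<p>The <strong>Dockerfile</strong> itself isn't shown in this demo. As an illustration only, a hypothetical minimal one for a static binary might look like this:</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># hypothetical Dockerfile for the hello-world demo image</span>
FROM alpine:3.20

<span class="hljs-comment"># copy a pre-built static binary into the image</span>
COPY hello-world /usr/local/bin/hello-world

ENTRYPOINT ["/usr/local/bin/hello-world"]
</code></pre>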
<p>This creates a container image that is stored on your local machine. But what if you want to share this image with others or use it on a different machine? This is where container registries come in.</p>
<h2 id="heading-what-is-a-container-registry">What is a Container Registry?</h2>
<p>A container registry is a storage catalog from which you can push and pull container images. The images are grouped into repositories, which are collections of related images with the same name. For example, on the Docker Hub registry, <a target="_blank" href="https://hub.docker.com/_/nginx">nginx</a> is the name of the repository that contains the different versions of the NGINX images.</p>
<p>Some registries are public, meaning that the images hosted on them are accessible to anyone on the Internet. Public registries such as <a target="_blank" href="https://hub.docker.com/">Docker Hub</a> are a good option to host open-source projects.</p>
<p>On the other hand, private registries provide a way to incorporate security and privacy into enterprise container image storage, either hosted in the cloud or on-premises. These private registries often come with advanced security features and technical support.</p>
<p>There is a growing list of private registries available, such as <a target="_blank" href="https://aws.amazon.com/ecr/">Amazon ECR</a>, <a target="_blank" href="https://cloud.google.com/artifact-registry/docs">GCP Artifact Registry</a>, and <a target="_blank" href="https://github.com/features/packages">GitHub Container Registry</a>; Docker Hub also offers a private repository feature.</p>
<p>As a developer, you interact with a container registry when using the <code>docker push</code> and <code>docker pull</code> commands.</p>
<pre><code class="lang-bash">docker push docker.io/pliutau/hello-world:v0

<span class="hljs-comment"># In case of Docker Hub we could also skip the registry part</span>
docker push pliutau/hello-world:v0
</code></pre>
<p>Let's look at the anatomy of a container image URL:</p>
<pre><code class="lang-bash">docker pull docker.io/pliutau/hello-world:v0@sha256:dc11b2...
                |            |            |          |
                ↓            ↓            ↓          ↓
             registry    repository      tag       digest
</code></pre>
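<p>As an aside, the registry, repository, and tag parts can be split out with plain shell parameter expansion. A small illustrative sketch (not part of the original tutorial):</p>
<pre><code class="lang-bash"><span class="hljs-comment"># split an image reference into its parts with parameter expansion</span>
ref=<span class="hljs-string">"docker.io/pliutau/hello-world:v0"</span>

registry=<span class="hljs-string">"${ref%%/*}"</span>        <span class="hljs-comment"># everything before the first slash</span>
rest=<span class="hljs-string">"${ref#*/}"</span>
repository=<span class="hljs-string">"${rest%%:*}"</span>     <span class="hljs-comment"># everything before the colon</span>
tag=<span class="hljs-string">"${rest##*:}"</span>            <span class="hljs-comment"># everything after the last colon</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"$registry | $repository | $tag"</span>
<span class="hljs-comment"># docker.io | pliutau/hello-world | v0</span>
</code></pre>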
<h2 id="heading-why-you-might-want-to-self-host-a-container-registry">Why You Might Want to Self-host a Container Registry</h2>
<p>Sometimes, instead of relying on a provider like AWS or GCP, you might want to host your images yourself. This keeps your infrastructure internal and makes you less reliant on external vendors. In some heavily regulated industries, this is even a requirement.</p>
<p>A self-hosted registry runs on your own servers, giving you more control over how the registry is configured and where the container images are hosted. At the same time it comes with a cost of maintaining and securing the registry.</p>
<h2 id="heading-how-to-self-host-a-container-registry">How to Self-host a Container Registry</h2>
<p>There are several open-source container registry solutions available. The most popular one, called <a target="_blank" href="https://hub.docker.com/_/registry">registry</a>, is officially supported by Docker and implements the storage and distribution of container images and artifacts. This means that you can run your own registry inside a container.</p>
<p>Here are the main steps to run a registry on a server:</p>
<ul>
<li><p>Install Docker and Docker Compose on the server.</p>
</li>
<li><p>Configure and run the <strong>registry</strong> container.</p>
</li>
<li><p>Run <strong>NGINX</strong> for handling TLS and forwarding requests to the registry container.</p>
</li>
<li><p>Set up SSL certificates and configure a domain.</p>
</li>
</ul>
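<p>Putting these steps together, the files on the server end up in a layout like this (inferred from the mounts used in this demo):</p>
<pre><code class="lang-bash">.
├── compose.yaml                <span class="hljs-comment"># registry + nginx services</span>
├── registry/
│   ├── registry.password       <span class="hljs-comment"># htpasswd file</span>
│   └── data/                   <span class="hljs-comment"># image storage</span>
└── nginx/
    ├── nginx.conf
    └── certs/                  <span class="hljs-comment"># fullchain.pem, privkey.pem</span>
</code></pre>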
<h3 id="heading-step-1-install-docker-and-docker-compose-on-the-server">Step 1: Install Docker and Docker Compose on the server</h3>
<p>You can use any server that supports Docker. For example, you can use a DigitalOcean Droplet with Ubuntu. For this demo I used Google Cloud Compute to create a VM with Ubuntu.</p>
<pre><code class="lang-bash">neofetch

<span class="hljs-comment"># OS: Ubuntu 20.04.6 LTS x86_64</span>
<span class="hljs-comment"># CPU: Intel Xeon (2) @ 2.200GHz</span>
<span class="hljs-comment"># Memory: 3908MiB</span>
</code></pre>
<p>Once we're inside our VM, we should install Docker and Docker Compose. Docker Compose is optional, but it makes it easier to manage multi-container applications.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># install docker engine and docker-compose</span>
sudo snap install docker

<span class="hljs-comment"># verify the installation</span>
docker --version
docker-compose --version
</code></pre>
<h3 id="heading-step-2-configure-and-run-the-registry-container">Step 2: Configure and run the registry container</h3>
<p>Next, we need to configure our registry container. The following <strong>compose.yaml</strong> file creates a registry container with bind mounts for the image data and the password file.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">registry:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">registry:latest</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">REGISTRY_AUTH:</span> <span class="hljs-string">htpasswd</span>
      <span class="hljs-attr">REGISTRY_AUTH_HTPASSWD_REALM:</span> <span class="hljs-string">Registry</span> <span class="hljs-string">Realm</span>
      <span class="hljs-attr">REGISTRY_AUTH_HTPASSWD_PATH:</span> <span class="hljs-string">/auth/registry.password</span>
      <span class="hljs-attr">REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY:</span> <span class="hljs-string">/data</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-comment"># Mount the password file</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./registry/registry.password:/auth/registry.password</span>
      <span class="hljs-comment"># Mount the data directory</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./registry/data:/data</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">5000</span>
</code></pre>
<p>The password file defined in <strong>REGISTRY_AUTH_HTPASSWD_PATH</strong> is used to authenticate users when they push or pull images from the registry. Create it with the <strong>htpasswd</strong> command, and also create a folder for storing the images.</p>
<pre><code class="lang-bash">mkdir -p ./registry/data

<span class="hljs-comment"># install htpasswd</span>
sudo apt install apache2-utils

<span class="hljs-comment"># create a password file. username: busy, password: bee</span>
htpasswd -Bbn busy bee &gt; ./registry/registry.password
</code></pre>
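<p>If you want to sanity-check the generated entry, <strong>htpasswd</strong> can verify a password against the file:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># verify that the password "bee" matches the stored hash for user "busy"</span>
htpasswd -vb ./registry/registry.password busy bee
</code></pre>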
<p>Now we can start the registry container. If you see a message like the one below, then everything is working as it should:</p>
<pre><code class="lang-bash">docker-compose up

<span class="hljs-comment"># a successful run should output something like this:</span>
<span class="hljs-comment"># registry | level=info msg="listening on [::]:5000"</span>
</code></pre>
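<p>At this point the registry only listens inside the Docker network (the compose file publishes container port 5000 on an ephemeral host port). As a quick smoke test, you can look up the mapped port and query the Registry HTTP API:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># find the ephemeral host port mapped to the registry's port 5000</span>
docker-compose port registry 5000

<span class="hljs-comment"># query the catalog endpoint (replace 49153 with the port printed above)</span>
curl -u busy:bee http://localhost:49153/v2/_catalog
<span class="hljs-comment"># an empty registry returns: {"repositories":[]}</span>
</code></pre>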
<h3 id="heading-step-3-run-nginx-for-handling-tls">Step 3: Run NGINX for handling TLS</h3>
<p>As mentioned earlier, we can use NGINX to handle TLS and forward requests to the registry container.</p>
<p>The Docker Registry requires a valid, trusted SSL certificate to work. You can use something like <a target="_blank" href="https://letsencrypt.org/">Let's Encrypt</a> or obtain one manually. Make sure you have a domain name pointing to your server (<strong>registry.pliutau.com</strong> in my case). For this demo I already obtained the certificates using <a target="_blank" href="https://certbot.eff.org/">certbot</a> and put them in the <strong>./nginx/certs</strong> directory.</p>
<p>Since we're running our Docker Registry in a container, we can run NGINX in a container as well by adding the following service to the <strong>compose.yaml</strong> file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">registry:</span>
    <span class="hljs-comment"># ...</span>
  <span class="hljs-attr">nginx:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">registry</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-comment"># mount the nginx configuration</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./nginx/nginx.conf:/etc/nginx/nginx.conf</span>
      <span class="hljs-comment"># mount the certificates obtained from Let's Encrypt</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./nginx/certs:/etc/nginx/certs</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"443:443"</span>
</code></pre>
<p>Our <strong>nginx.conf</strong> file could look like this:</p>
<pre><code class="lang-nginx">worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream registry {
        server registry:5000;
    }

    server {
        server_name registry.pliutau.com;
        listen 443 ssl;

        ssl_certificate /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        location / {
            <span class="hljs-comment"># important setting for large images</span>
            client_max_body_size                1000m;

            proxy_pass                          http://registry;
            proxy_set_header  Host              $http_host;
            proxy_set_header  X-Real-IP         $remote_addr;
            proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
            proxy_read_timeout                  900;
        }
    }
}
</code></pre>
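<p>Before starting everything, it can be worth validating the configuration. NGINX's <code>-t</code> flag checks the syntax without serving traffic:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># run a one-off nginx container against the mounted config and certs</span>
docker-compose run --rm nginx nginx -t
</code></pre>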
<h3 id="heading-ready-to-go">Ready to go!</h3>
<p>After these steps, we can run our registry and NGINX containers.</p>
<pre><code class="lang-bash">docker-compose up
</code></pre>
<p>Now, on the client side, you can push and pull images from your registry. But first, we need to log in to the registry.</p>
<pre><code class="lang-bash">docker login registry.pliutau.com

<span class="hljs-comment"># Username: busy</span>
<span class="hljs-comment"># Password: bee</span>
<span class="hljs-comment"># Login Succeeded</span>
</code></pre>
<p>Time to build and push our image to our self-hosted registry:</p>
<pre><code class="lang-bash">docker build -t registry.pliutau.com/pliutau/hello-world:v0 .

docker push registry.pliutau.com/pliutau/hello-world:v0
<span class="hljs-comment"># v0: digest: sha256:a56ea4... size: 738</span>
</code></pre>
<p>On your server you can check the uploaded images in the data folder:</p>
<pre><code class="lang-bash">ls -la ./registry/data/docker/registry/v2/repositories/
</code></pre>
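<p>You can also list the hosted repositories through the Registry HTTP API, now served over TLS by NGINX:</p>
<pre><code class="lang-bash">curl -u busy:bee https://registry.pliutau.com/v2/_catalog
<span class="hljs-comment"># {"repositories":["pliutau/hello-world"]}</span>
</code></pre>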
<h3 id="heading-other-options">Other options</h3>
<p>Following the example above, you can also run the registry on Kubernetes. Or you could use <a target="_blank" href="https://goharbor.io/">Harbor</a>, an open-source registry that provides advanced security features and is compatible with Docker and Kubernetes.</p>
<p>Also, if you want to have a UI for your self-hosted registry, you could use a project like <a target="_blank" href="https://github.com/Joxit/docker-registry-ui">joxit/docker-registry-ui</a> and run it in a separate container.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Self-hosted Container Registries allow you to have complete control over your registry and the way it's deployed. At the same time it comes with a cost of maintaining and securing the registry.</p>
<p>Whatever your reasons for running a self-hosted registry, you now know how it's done. From here you can compare the different options and choose the one that best fits your needs.</p>
<p>You can find the full source code for this demo on <a target="_blank" href="https://github.com/plutov/packagemain/tree/master/26-self-hosted-container-registry">GitHub</a>. Also, you can watch it as a video on <a target="_blank" href="https://www.youtube.com/watch?v=TGLfQZ9qRaI">our YouTube channel</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
