<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ logging - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ logging - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Thu, 14 May 2026 04:32:55 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/logging/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Build and Deploy a LogAnalyzer Agent using LangChain ]]>
                </title>
                <description>
                    <![CDATA[ Modern systems generate huge volumes of logs. Application logs, server logs, and infrastructure logs often contain the first clues when something breaks. The problem is not a lack of data, but the effort required to read and understand it. Engineers ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-build-and-deploy-a-loganalyzer-agent-using-langchain/</link>
                <guid isPermaLink="false">69837cd8f119ce39fb6041f1</guid>
                
                    <category>
                        <![CDATA[ ai-agent ]]>
                    </category>
                
                    <category>
                        <![CDATA[ llm ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Manish Shivanandhan ]]>
                </dc:creator>
                <pubDate>Wed, 04 Feb 2026 17:07:36 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770224778776/7d5c3a27-adc2-4cde-94d5-4ac7db892673.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Modern systems generate huge volumes of logs.</p>
<p>Application logs, server logs, and infrastructure logs often contain the first clues when something breaks. The problem is not a lack of data, but the effort required to read and understand it.</p>
<p>Engineers usually scroll through thousands of lines, search for error codes, and try to connect events across time. This is slow and error-prone, especially during incidents.</p>
<p>A LogAnalyzer Agent solves this problem by acting like a calm, experienced engineer who reads logs for you and explains what is going on.</p>
<p>In this article, you’ll learn how to build such an agent using <a target="_blank" href="https://fastapi.tiangolo.com/">FastAPI</a>, <a target="_blank" href="https://github.com/langchain-ai/langchain">LangChain</a>, and an OpenAI model.</p>
<p>We’ll walk through the backend, the log analysis logic, and a simple web UI that lets you upload a log file and get insights in seconds. We’ll also upload this app to Sevalla so that you can share your project with the world.</p>
<p>You just need some basic knowledge of Python and HTML/CSS/JavaScript to finish this tutorial.</p>
<p><a target="_blank" href="https://github.com/manishmshiva/loganalyzer">Here is the full code</a> for reference.</p>
<h2 id="heading-what-well-cover">What We’ll Cover</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-what-a-loganalyzer-agent-actually-does">What a LogAnalyzer Agent Actually Does</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-high-level-architecture">High-Level Architecture</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-designing-a-prompt-that-works">Designing a Prompt That Works</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-handling-large-log-files-safely">Handling Large Log Files Safely</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-analyzing-logs-with-langchain-and-openai">Analyzing Logs with LangChain and OpenAI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-building-the-fastapi-backend">Building the FastAPI Backend</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-a-simple-and-clean-web-ui">Creating a Simple and Clean Web UI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-running-the-application-locally">Running the Application Locally</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deployment-to-sevalla">Deployment to Sevalla</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-what-a-loganalyzer-agent-actually-does">What a LogAnalyzer Agent Actually Does</h2>
<p>A LogAnalyzer Agent takes raw log text as input and produces human-friendly analysis as output.</p>
<p>Instead of returning a list of errors, it explains the main failures, the likely root cause, and what to do next. This is important because logs are written for machines, not for people under pressure.</p>
<p>In this project, the agent behaves like a senior site reliability engineer. It reads logs in chunks, identifies patterns, and summarizes them in simple language. The intelligence comes from a language model, while the reliability comes from careful handling of input and chunking.</p>
<h2 id="heading-high-level-architecture">High-Level Architecture</h2>
<p>The system has three main parts.</p>
<p>The first part is a web UI built with plain HTML, CSS, and JavaScript. This UI allows a user to upload a text file and start analysis. </p>
<p>The second part is a FastAPI backend that receives the file, validates it, and coordinates the analysis. </p>
<p>The third part is the analysis engine itself, which uses LangChain and an OpenAI model to interpret the logs.</p>
<p>The flow is simple: the browser sends a log file to the backend. The backend reads the file, splits it into manageable pieces, and sends each piece to the language model with a clear prompt. The responses are combined and sent back to the browser as a single analysis.</p>
<h2 id="heading-designing-a-prompt-that-works">Designing a Prompt That Works</h2>
<p>The heart of any AI agent is the prompt. A weak prompt gives vague answers, while a strong prompt produces useful insights.</p>
<p>In this project, the prompt tells the model to act like a senior site reliability engineer. It asks for four things: main errors, likely root cause, practical next steps, and suspicious patterns.</p>
<p>Here is the prompt template used in the backend:</p>
<pre><code class="lang-python">log_analysis_prompt_text = <span class="hljs-string">"""
You are a senior site reliability engineer.
Analyze the following application logs.
1. Identify the main errors or failures.
2. Explain the likely root cause in simple terms.
3. Suggest practical next steps to fix or investigate.
4. Mention any suspicious patterns or repeated issues.
Logs:
{log_data}
Respond in clear paragraphs. Avoid jargon where possible.
"""</span>
</code></pre>
<p>This prompt is simple but effective. It gives the model a role, a clear task, and constraints on the output style. Asking for clear paragraphs helps ensure the response is readable and useful for non-experts as well.</p>
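<p>As a quick sanity check, you can format the template with a small sample chunk and inspect exactly what the model will receive. (The sample log line below is made up for illustration.)</p>

```python
# The prompt template from above, reproduced so this demo is self-contained.
log_analysis_prompt_text = """
You are a senior site reliability engineer.
Analyze the following application logs.
1. Identify the main errors or failures.
2. Explain the likely root cause in simple terms.
3. Suggest practical next steps to fix or investigate.
4. Mention any suspicious patterns or repeated issues.
Logs:
{log_data}
Respond in clear paragraphs. Avoid jargon where possible.
"""

# A made-up log line standing in for a real chunk.
sample_chunk = "2026-02-04 17:05:12 ERROR db: connection pool exhausted"

# .format() substitutes the chunk into the {log_data} placeholder.
formatted_prompt = log_analysis_prompt_text.format(log_data=sample_chunk)
print(formatted_prompt)
```

<p>Printing the formatted prompt before wiring up the model is a cheap way to catch placeholder typos early.</p>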
<h2 id="heading-handling-large-log-files-safely">Handling Large Log Files Safely</h2>
<p>Language models have input limits. You can’t send a large log file in one request and expect good results. To handle this, the backend splits the log text into smaller chunks. Each chunk overlaps slightly with the next to preserve context.</p>
<p>We’ll use the <code>RecursiveCharacterTextSplitter</code> from LangChain for this purpose. It ensures that chunks aren’t cut in awkward places and that important lines aren’t lost.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.text_splitter <span class="hljs-keyword">import</span> RecursiveCharacterTextSplitter

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">split_logs</span>(<span class="hljs-params">log_text: str</span>):</span>
    <span class="hljs-string">"""Split log text into manageable chunks"""</span>
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=<span class="hljs-number">2000</span>,
        chunk_overlap=<span class="hljs-number">200</span>
    )
    <span class="hljs-keyword">return</span> splitter.split_text(log_text)
</code></pre>
<p>This approach allows the agent to scale to large files while staying within model limits. Each chunk is analyzed independently, and the results are later combined.</p>
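<p>To build intuition for what the splitter does, here is a simplified, dependency-free sketch of overlapping chunking. (The real <code>RecursiveCharacterTextSplitter</code> additionally prefers to break on newlines and other separators rather than at fixed character positions, so treat this only as an illustration of the overlap idea.)</p>

```python
# Simplified overlap-based chunking: fixed-size windows that share
# `overlap` characters with the previous chunk.
def split_with_overlap(text: str, chunk_size: int = 2000, overlap: int = 200):
    chunks = []
    step = chunk_size - overlap  # advance less than a full chunk each time
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# A 5000-character input yields three chunks of 2000, 2000, and 1400 chars.
chunks = split_with_overlap("x" * 5000)
print(len(chunks))  # 3
```

<p>The overlap means the tail of each chunk reappears at the head of the next one, so an error whose context straddles a boundary is still visible in at least one chunk.</p>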
<h2 id="heading-analyzing-logs-with-langchain-and-openai">Analyzing Logs with LangChain and OpenAI</h2>
<p>Once the logs are split, each chunk is passed through the language model using the prompt template. The model used here is a lightweight but capable option, configured with a low temperature to keep responses focused and consistent.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain_openai <span class="hljs-keyword">import</span> ChatOpenAI

llm = ChatOpenAI(
    temperature=<span class="hljs-number">0.2</span>,
    model=<span class="hljs-string">"gpt-4o-mini"</span>
)
</code></pre>
<p>The analysis function loops over all chunks, formats the prompt, invokes the model, and stores the result.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">analyze_logs</span>(<span class="hljs-params">log_text: str</span>):</span>
    <span class="hljs-string">"""Analyze logs by splitting and processing each chunk"""</span>
    chunks = split_logs(log_text)
    combined_analysis = []

    <span class="hljs-keyword">for</span> chunk <span class="hljs-keyword">in</span> chunks:
        formatted_prompt = log_analysis_prompt_text.format(log_data=chunk)
        result = llm.invoke(formatted_prompt)
        combined_analysis.append(result.content)

    <span class="hljs-keyword">return</span> <span class="hljs-string">"\n\n"</span>.join(combined_analysis)
</code></pre>
<p>This design keeps the logic easy to understand. Each chunk produces a small analysis, and the final output is a stitched-together explanation of the whole log file.</p>
<h2 id="heading-building-the-fastapi-backend">Building the FastAPI Backend</h2>
<p>FastAPI is a good choice for this project because it’s fast, simple, and easy to read. The backend exposes three endpoints. The root endpoint serves the HTML UI. The <code>/analyze</code> endpoint accepts a log file and returns the analysis. And the <code>/health</code> endpoint is used to check if the service is running and properly configured.</p>
<p>The analyze endpoint performs several important checks. It ensures that the file is a text file, verifies that it isn’t empty, and handles errors gracefully. This prevents unnecessary calls to the model and improves user experience.</p>
<pre><code class="lang-python"><span class="hljs-meta">@app.post("/analyze")</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">analyze_log_file</span>(<span class="hljs-params">file: UploadFile = File(<span class="hljs-params">...</span>)</span>):</span>
    <span class="hljs-string">"""Analyze uploaded log file"""</span>
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> file.filename.endswith(<span class="hljs-string">".txt"</span>):
        <span class="hljs-keyword">return</span> JSONResponse(
            status_code=<span class="hljs-number">400</span>,
            content={<span class="hljs-string">"error"</span>: <span class="hljs-string">"Only .txt log files are supported"</span>}
        )

    <span class="hljs-keyword">try</span>:
        content = <span class="hljs-keyword">await</span> file.read()
        log_text = content.decode(<span class="hljs-string">"utf-8"</span>, errors=<span class="hljs-string">"ignore"</span>)
        <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> log_text.strip():
            <span class="hljs-keyword">return</span> JSONResponse(
                status_code=<span class="hljs-number">400</span>,
                content={<span class="hljs-string">"error"</span>: <span class="hljs-string">"Log file is empty"</span>}
            )
        insights = analyze_logs(log_text)
        <span class="hljs-keyword">return</span> {<span class="hljs-string">"analysis"</span>: insights}
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">return</span> JSONResponse(
            status_code=<span class="hljs-number">500</span>,
            content={<span class="hljs-string">"error"</span>: <span class="hljs-string">f"Error analyzing logs: <span class="hljs-subst">{str(e)}</span>"</span>}
        )
</code></pre>
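<p>The same checks can be expressed as a small standalone helper, which is easy to unit-test in isolation. (This helper is illustrative only: the actual endpoint above returns <code>JSONResponse</code> objects, while this sketch returns an error string, or <code>None</code> when the upload looks usable.)</p>

```python
# Illustrative stand-in for the endpoint's validation logic.
def validate_upload(filename: str, content: bytes):
    # Reject anything that isn't a .txt file.
    if not filename.endswith(".txt"):
        return "Only .txt log files are supported"
    # Decode leniently, as the endpoint does, then reject empty files.
    text = content.decode("utf-8", errors="ignore")
    if not text.strip():
        return "Log file is empty"
    return None  # file is acceptable

print(validate_upload("app.log", b"ERROR: boom"))  # rejected: wrong extension
print(validate_upload("app.txt", b"   \n"))        # rejected: empty
print(validate_upload("app.txt", b"ERROR: boom"))  # None -> accepted
```

<p>Keeping validation cheap and local like this means malformed uploads never reach the model, which saves both latency and API cost.</p>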
<p>This careful handling makes the agent more robust and production-friendly.</p>
<h2 id="heading-creating-a-simple-and-clean-web-ui">Creating a Simple and Clean Web UI</h2>
<p>A good agent isn’t useful if people can’t interact with it easily. The frontend in this project is a single HTML file with embedded CSS and JavaScript. It focuses on clarity and speed rather than complexity.</p>
<p>The UI allows users to choose a log file, see the file name, click an analyze button, and view results in a formatted area. A loading spinner provides feedback while the analysis is running. Errors are shown clearly, without technical noise.</p>
<p>The upload and analysis logic is handled by a small JavaScript function that sends the file to the backend using a fetch request.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">async</span> function uploadLog() {
    const fileInput = document.getElementById(<span class="hljs-string">"logFile"</span>);
    const file = fileInput.files[<span class="hljs-number">0</span>];

    <span class="hljs-keyword">if</span> (!file) {
        alert(<span class="hljs-string">"Please select a log file first"</span>);
        <span class="hljs-keyword">return</span>;
    }
    const formData = new FormData();
    formData.append(<span class="hljs-string">"file"</span>, file);
    const response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"/analyze"</span>, {
        method: <span class="hljs-string">"POST"</span>,
        body: formData
    });
    const data = <span class="hljs-keyword">await</span> response.json();
    document.getElementById(<span class="hljs-string">"result"</span>).textContent = data.analysis;
}
</code></pre>
<p>This minimal approach keeps the frontend easy to maintain and adapt.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779422013/7bd95a67-66fb-44ee-a2d7-413ebb076676.png" alt="Log Analyzer UI" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-running-the-application-locally">Running the Application Locally</h2>
<p>To run this project, you need Python, a virtual environment, and an OpenAI API key. The API key is loaded from a <code>.env</code> file to keep secrets out of code. Once dependencies are installed, you can start the server using Uvicorn.</p>
<pre><code class="lang-python"><span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    <span class="hljs-keyword">import</span> os
    <span class="hljs-keyword">import</span> uvicorn
    port = int(os.getenv(<span class="hljs-string">"PORT"</span>, <span class="hljs-number">8000</span>))
    uvicorn.run(app, host=<span class="hljs-string">"0.0.0.0"</span>, port=port)
</code></pre>
<p>After starting the server, you can open the browser, upload a log file, and see the agent in action.</p>
<h2 id="heading-deployment-to-sevalla">Deployment to Sevalla</h2>
<p>You can choose any cloud provider, like AWS, DigitalOcean, or others, to host your service. I’ll be using Sevalla for this example.</p>
<p><a target="_blank" href="https://sevalla.com/">Sevalla</a> is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.</p>
<p>Most platforms charge you for creating cloud resources. Sevalla comes with $20 of free credit, so we won’t incur any costs for this example.</p>
<p>Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new change to the repository is automatically deployed.</p>
<p><a target="_blank" href="https://app.sevalla.com/login">Log in</a> to Sevalla and click on Applications → Create new application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779432416/f9ae505d-505e-4378-9bdc-087cfa0cde78.png" alt="Create Application" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You can see the option to link your GitHub repository to create a new application. Use the default settings. Then click <strong>Create application</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779452276/793b0eab-f832-4e8f-9534-c9b79adca8e1.png" alt="Application Settings" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now we have to add our OpenAI API key to the environment variables. Click on the <strong>Environment variables</strong> section once the application is created, and save the <code>OPENAI_API_KEY</code> value as an environment variable.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779460335/d914c14e-96e7-4bb0-83aa-da3e5cdb0c22.png" alt="Environment Variables" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now we’re ready to deploy our application. Click on <strong>Deployments</strong> and click <strong>Deploy now</strong>. It will take 2–3 minutes for the deployment to complete.</p>
<p>Once done, click on <strong>Visit app</strong>. You’ll see the application served via a URL ending with <code>sevalla.app</code>. This is your new root URL. You can replace <code>localhost:8000</code> with this URL and start using it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779473800/7f4b450f-95cb-4e45-8cb7-1fcedffa54ef.png" alt="Final UI" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Congrats! Your log analyzer service is now live. You can find a sample log file in the GitHub repository that you can use to test the service.</p>
<p>You can extend this by adding other capabilities and pushing your code to GitHub. Sevalla will automatically deploy your application to production.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building a LogAnalyzer Agent is a practical way to apply language models to real engineering problems. Logs are everywhere, and understanding them quickly can save hours during incidents. By combining FastAPI, LangChain, and a clear prompt, you can turn raw text into actionable insight.</p>
<p>The key ideas are simple: split large inputs, guide the model with a strong role and task, and present results in a clean interface. With these principles, you can adapt this agent to many other analysis tasks beyond logs.</p>
<p><em>Hope you enjoyed this article. Learn more about me by</em> <a target="_blank" href="https://manishshivanandhan.com/"><strong><em>visiting my website</em></strong></a><em>.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ What Are Logs in Programming? ]]>
                </title>
                <description>
                    <![CDATA[ Have you ever run a program, and it crashed? No error messages, no hints, just silence. How do you figure out what went wrong? That's where logging saves the day. Logs keep track of what’s happening inside your code so that when things go wrong, you ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/what-are-logs-in-programming/</link>
                <guid isPermaLink="false">67abe1c4e2819029e77c7459</guid>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Python ]]>
                    </category>
                
                    <category>
                        <![CDATA[ error handling ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Syeda Maham Fahim ]]>
                </dc:creator>
                <pubDate>Tue, 11 Feb 2025 23:48:20 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738685115991/600c01b6-b031-4ce9-a77a-5d88fcdaa68a.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Have you ever run a program, and it crashed? No error messages, no hints, just silence. How do you figure out what went wrong? That's where logging saves the day.</p>
<p>Logs keep track of what’s happening inside your code so that when things go wrong, you don’t have to guess. They’re similar to <code>print</code> or <code>console.log</code>, but more powerful.</p>
<p>In this tutorial, I will use Python to create and walk you through some logging code examples.</p>
<p>Before we talk about logs, let’s look at the different severity levels you’ll use and encounter.</p>
<h2 id="heading-types-of-errors">Types of Errors</h2>
<p>When you’re building a production-level application, you need to record and report problems based on their severity. Python’s logging module defines several standard severity levels, and the most important ones are:</p>
<ul>
<li><p><strong>DEBUG:</strong> Detailed information, typically useful for diagnosing problems.</p>
</li>
<li><p><strong>INFO:</strong> General information about the program’s progress.</p>
</li>
<li><p><strong>WARNING:</strong> Something unexpected happened, but it’s not critical.</p>
</li>
<li><p><strong>ERROR:</strong> An error occurred, but the program can still run.</p>
</li>
<li><p><strong>CRITICAL:</strong> A very serious error that may stop the program from running.</p>
</li>
</ul>
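<p>Under the hood, Python’s <code>logging</code> module maps these names to increasing numeric values, which is what makes threshold-based filtering possible:</p>

```python
import logging

# The five standard levels, from least to most severe.
levels = [logging.DEBUG, logging.INFO, logging.WARNING,
          logging.ERROR, logging.CRITICAL]
print(levels)  # [10, 20, 30, 40, 50]
```

<p>A logger with its level set to <code>WARNING</code> (30) simply drops any record whose numeric level is below 30.</p>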
<h2 id="heading-what-is-logging">What is Logging?</h2>
<p>Now, let’s get straight to the point and understand what logging is.</p>
<p>In simple terms, logs or logging is the act of recording information about everything your program does. The recorded information could be anything, from basic details like which functions were called to more detailed ones like tracking errors or performance issues.</p>
<h3 id="heading-why-do-we-need-logging">Why Do We Need Logging?</h3>
<p>You might be thinking, "If logs are printing errors, info, and so on, I can just use print statements. Why do I need logging?" Well, <code>print</code> works, but logging gives you more control:</p>
<p>↳ It can store messages in a file.<br>↳ It has different levels (info, warning, error, and so on).<br>↳ You can filter messages based on importance.<br>↳ It helps in debugging without cluttering your code.</p>
<p>These are things <code>print</code> statements can't do effectively.</p>
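<p>Here is a small demonstration of that control. By routing a logger’s output into an in-memory string buffer (a trick used for demos and tests, not for production), you can watch level filtering in action, something a bare <code>print</code> can’t give you:</p>

```python
import io
import logging

buffer = io.StringIO()                      # capture output in memory
logger = logging.getLogger("demo")          # a named logger for the demo
logger.setLevel(logging.WARNING)            # only WARNING and above pass
handler = logging.StreamHandler(buffer)
logger.addHandler(handler)

logger.info("routine detail")               # below the threshold: dropped
logger.error("something broke")             # above the threshold: recorded

output = buffer.getvalue()
print(output)
```

<p>With <code>print</code> you would have to wrap every call in an <code>if</code> to get the same behavior; with logging, one <code>setLevel</code> call controls it all.</p>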
<h2 id="heading-how-to-add-logs-in-python">How to Add Logs in Python</h2>
<p>In Python, the <code>logging</code> module is built specifically for logging purposes.</p>
<p>Let’s set up some logs to see how they work.</p>
<h3 id="heading-step-1-import-the-logging-module">Step 1: Import the Logging Module</h3>
<p>To start using logging, we need to import the module:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> logging
</code></pre>
<h3 id="heading-step-2-log-messages">Step 2: Log Messages</h3>
<p>Now, you can start logging messages in your program. You can use different log levels based on the importance of the message. As a reminder, those levels are (from least to most urgent):</p>
<ul>
<li><p>DEBUG</p>
</li>
<li><p>INFO</p>
</li>
<li><p>WARNING</p>
</li>
<li><p>ERROR</p>
</li>
<li><p>CRITICAL</p>
</li>
</ul>
<p>Let’s log a simple message at each level:</p>
<pre><code class="lang-python">logging.debug(<span class="hljs-string">"This is a debug message"</span>)
logging.info(<span class="hljs-string">"This is an info message"</span>)
logging.warning(<span class="hljs-string">"This is a warning message"</span>)
logging.error(<span class="hljs-string">"This is an error message"</span>)
logging.critical(<span class="hljs-string">"This is a critical message"</span>)
</code></pre>
<p>When you run this, you’ll see messages printed to the console, similar to this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738500126070/a2a395c3-5cbe-4f94-bea2-d871cfc1529e.png" alt="Terminal showing Python log messages." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You might wonder why you don’t see the <strong>DEBUG</strong> and <strong>INFO</strong> messages. The default logging level prevents this.</p>
<p>By default, the logging level is set to <code>WARNING</code>. This means that only messages with a severity of <code>WARNING</code> or higher will be displayed (that is, <code>WARNING</code>, <code>ERROR</code>, and <code>CRITICAL</code>).</p>
<h3 id="heading-step-3-set-up-the-basic-configuration"><strong>Step 3:</strong> Set Up the Basic Configuration</h3>
<p>To see the <code>debug</code> and <code>info</code> messages, we need to set the logging level to <code>DEBUG</code> before running the code.</p>
<p>This means we need to configure logging. To do this, use the <code>basicConfig</code> method:</p>
<pre><code class="lang-python">logging.basicConfig(level=logging.DEBUG)
</code></pre>
<p>This basic configuration allows you to log messages at the <strong>DEBUG</strong> level or higher. You can change the level depending on the type of logs you want.</p>
<p>Now, all logs are printing:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738500423798/96b65689-f0e4-4663-9d1a-1dc7147e964e.png" alt="log messages: debug, info, warning, error, critical." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-4-log-to-a-file">Step 4: Log to a File</h3>
<p>Now, let’s save these logs in a file so we can keep track of errors, as well as when they occurred. To do this, update the configuration:</p>
<pre><code class="lang-python">logging.basicConfig(filename=<span class="hljs-string">'data_log.log'</span>, level=logging.DEBUG,
                    format=<span class="hljs-string">'%(asctime)s - %(levelname)s - %(message)s'</span>)
</code></pre>
<p>Here:</p>
<ul>
<li><p><code>asctime</code> – The time when the event occurred.</p>
</li>
<li><p><code>levelname</code> – The type of the log (for example, <strong>DEBUG</strong>, <strong>INFO</strong>).</p>
</li>
<li><p><code>message</code> – The message we display.</p>
</li>
</ul>
<p>Now, when you run the program, the log file will generate and save your logs, showing the exact timing, error type, and message. Like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738500713832/7895f1db-8740-494a-86dd-86020f4f5569.png" alt="Log file with debug, info, warning, error, critical messages" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-how-to-use-loggers-for-more-control">How to Use Loggers for More Control</h2>
<p>If you’re working on a large project, you might want a utility logger that you can use anywhere in the code. Let’s create this custom logger.</p>
<p>First, we’ll update the <code>basicConfig</code> call to include the filename and line number in each record, and to make sure everything is written correctly, even special characters:</p>
<pre><code class="lang-python">logging.basicConfig(
    filename=log_file,  <span class="hljs-comment"># path to the log file (built in the full example below)</span>
    level=logging.DEBUG,
    format=<span class="hljs-string">'%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'</span>,
    filemode=<span class="hljs-string">'w'</span>,
    encoding=<span class="hljs-string">'utf-8'</span>
)
</code></pre>
<p>Explanation:</p>
<ul>
<li><p><code>encoding='utf-8'</code> — Ensures special characters are logged.</p>
</li>
<li><p><code>%(filename)s:%(lineno)d</code> — Logs the filename and line number where the log was generated.</p>
</li>
</ul>
<p>Now, let’s set up a custom console logger:</p>
<pre><code class="lang-python">console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_formatter = logging.Formatter(<span class="hljs-string">'%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'</span>)
console_handler.setFormatter(console_formatter)

logging.getLogger().addHandler(console_handler)
</code></pre>
<p>This setup does the following:</p>
<ul>
<li><p><code>console_handler</code>: Sends log messages to the console (by default, standard error).</p>
</li>
<li><p><code>console_formatter</code>: Formats the log message with time, level, filename, line number, and the message.</p>
</li>
<li><p><code>logging.getLogger().addHandler(console_handler)</code>: Adds the custom handler to the root logger, so the log messages are printed to the console.</p>
</li>
</ul>
<h3 id="heading-full-example-code">Full Example Code</h3>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> datetime

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">setup_daily_logger</span>():</span>
    base_dir = os.path.dirname(os.path.abspath(__file__))
    log_dir = os.path.join(base_dir, <span class="hljs-string">'logs'</span>)  
    os.makedirs(log_dir, exist_ok=<span class="hljs-literal">True</span>)


    current_time = datetime.now().strftime(<span class="hljs-string">"%m_%d_%y_%I_%M_%p"</span>)
    log_file = os.path.join(log_dir, <span class="hljs-string">f"<span class="hljs-subst">{current_time}</span>.log"</span>)


    logging.basicConfig(
        filename=log_file,
        level=logging.DEBUG,
        format=<span class="hljs-string">'%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'</span>, 
        filemode=<span class="hljs-string">'w'</span>,
        encoding=<span class="hljs-string">'utf-8'</span> 
    )


    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)
    console_formatter = logging.Formatter(<span class="hljs-string">'%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s'</span>)  <span class="hljs-comment"># Added line number</span>
    console_handler.setFormatter(console_formatter)


    logging.getLogger().addHandler(console_handler)


    <span class="hljs-keyword">return</span> logging.getLogger(__name__)
</code></pre>
<h3 id="heading-what-happens-now">What Happens Now?</h3>
<p>Now, each time you run the program, a new log file with a unique timestamp will be created in the <code>logs</code> folder.</p>
<p>Like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738503550743/1ad9fb99-762a-4ca9-a189-58d044955617.png" alt="custom log used in app.py" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
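<p>The timestamp pattern used for those filenames can be checked in isolation. For a fixed example date (a made-up one, just for the demo):</p>

```python
from datetime import datetime

# The same strftime pattern as setup_daily_logger():
# month_day_year_hour(12h)_minute_AM/PM
stamp = datetime(2025, 2, 11, 14, 30).strftime("%m_%d_%y_%I_%M_%p")
log_name = f"{stamp}.log"
print(log_name)  # 02_11_25_02_30_PM.log
```

<p>Because the minute is part of the name, two runs in the same minute would share a file; adding <code>%S</code> for seconds is a simple tweak if that matters to you.</p>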
<p>These logs will give you a clear picture of your program’s behavior and help with debugging.</p>
<p>I hope this article helped you get a clearer picture of logs and their importance in programming.</p>
<h1 id="heading-practical-real-world-examples">Practical Real-World Examples</h1>
<p>Now that you understand what logs are and how to set them up in Python, let’s look at real-world use cases.</p>
<h2 id="heading-1-bot-scraping-koreas-largest-property-website">1. Bot: Scraping Korea’s Largest Property Website</h2>
<p>Here’s an example of a bot designed to scrape Korea’s biggest property website.</p>
<ul>
<li><p>The logs show every step the bot takes, making it easier to track progress.</p>
</li>
<li><p>If an error occurs at any step, it gets recorded in the log file.</p>
</li>
<li><p>Even if the bot crashes, I can check the logs to pinpoint where things went wrong.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739037891010/69a8b5ae-d202-4466-add0-bb2ace28230a.png" alt="Log file with INFO messages showing city and town extraction details." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739037833210/bf9ceba0-2caf-48c6-bdb8-ac2d9eb901bd.png" alt="Log file with INFO messages showing city and town extraction details." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>One of the methods in this bot’s class uses logging to track whether the bot correctly selects the province.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739038058017/6153c909-477d-4cd6-b493-124b96bc595f.png" alt="select_province function that utilizes logging" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Here:</p>
<ul>
<li><p>If an error or warning occurs, it’s saved in the log file.</p>
</li>
<li><p>Later, you can review the logs and find out exactly what happened.</p>
</li>
</ul>
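<p>The pattern in the screenshot can be sketched like this (the province names and helper logic here are hypothetical, purely for illustration):</p>

```python
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical sketch: log each step, and record failures so the
# log file tells you exactly where the bot went wrong.
def select_province(name, known=("Seoul", "Busan", "Jeju")):
    logging.info("Selecting province: %s", name)
    if name not in known:
        logging.error("Province not found: %s", name)
        return False
    logging.info("Province selected: %s", name)
    return True
```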
<h2 id="heading-2-bot-scraping-facebook-groups">2. Bot: Scraping Facebook Groups</h2>
<p>Now, let’s see how logging helps in a Facebook group scraper.</p>
<h5 id="heading-error-tracking">Error Tracking</h5>
<ul>
<li><p>At one point, the bot failed due to an error.</p>
</li>
<li><p>Since we had logging in place, the error was saved in the log file.</p>
</li>
<li><p>This allows you to quickly find out what went wrong.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739038507530/9662bed7-a124-4dd8-94a9-9d657ec022a1.png" alt="Error log file" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Here, you see the exact filename and line number where the error occurs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739038826232/ce717b49-e532-4c5f-a40d-955591aa27a2.png" alt="Logs file shows success logs" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Once we identified and fixed the issue, the bot started working again.</p>
<h5 id="heading-debugging-made-easy">Debugging Made Easy</h5>
<ul>
<li><p>The logs recorded every detail of the bot’s execution.</p>
</li>
<li><p>This can save you hours of debugging because you’ll know exactly where the error occurred.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Logging is one of those things no one thinks about until something breaks. But when it does, logs become your best friend.</p>
<p>Remember:</p>
<ul>
<li><p>Logging isn’t just for error tracking—it helps you monitor your program’s flow.</p>
</li>
<li><p>Instead of guessing what went wrong, check the logs. The answer is usually right there.</p>
</li>
</ul>
<p>Make sure to add logging to your code. You’ll thank yourself later!</p>
<p><strong>Stay Connected - @syedamahamfahim 🐬</strong></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ The Syslog Handbook – How to Collect and Redirect Logs to a Remote Server ]]>
                </title>
                <description>
                    <![CDATA[ By Serhii Orlivskyi If you're in information technology, you'll likely agree that logging is important. It helps you monitor a system, troubleshoot issues, and generally provides useful feedback about the system’s state. But it’s important to do logg... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/what-is-syslog-handbook/</link>
                <guid isPermaLink="false">66d460ee36c45a88f96b7cfd</guid>
                
                    <category>
                        <![CDATA[ Linux ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Thu, 29 Feb 2024 19:40:35 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/02/The-Syslog-Handbook-Cover.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Serhii Orlivskyi</p>
<p>If you're in information technology, you'll likely agree that logging is important. It helps you monitor a system, troubleshoot issues, and generally provides useful feedback about the system’s state. But it’s important to do logging right.</p>
<p>In this handbook, I'll explain what the syslog protocol is and how it works. You'll learn about syslog's message formats, how to configure rsyslog to redirect messages to a centralized remote server both using TLS and over a local network, how to redirect data from applications to syslog, how to use Docker with syslog, and more.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ol>
<li><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a>  </li>
<li><a class="post-section-overview" href="#heading-introduction">Introduction</a></li>
<li><a class="post-section-overview" href="#heading-what-is-syslog">What is syslog?</a><ul>
<li><a class="post-section-overview" href="#heading-syslog-protocol">Syslog protocol</a></li>
<li><a class="post-section-overview" href="#heading-syslog-daemons">Syslog daemons</a></li>
<li><a class="post-section-overview" href="#heading-syslog-message-formats">Syslog message formats</a><ul>
<li><a class="post-section-overview" href="#heading-rfc3164-format">RFC3164 format</a></li>
<li><a class="post-section-overview" href="#heading-rfc5424-format">RFC5424 format</a></li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-syslog-log-levels">Syslog log levels</a></li>
<li><a class="post-section-overview" href="#heading-syslog-facilities">Syslog facilities</a></li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-how-to-configure-rsyslog-to-redirect-messages-to-a-centralized-remote-server-using-tls">How to Configure rsyslog to Redirect Messages to a Centralized Remote Server using TLS</a><ul>
<li><a class="post-section-overview" href="#heading-update-rsyslog">Update rsyslog</a></li>
<li><a class="post-section-overview" href="#heading-install-dependencies">Install dependencies</a></li>
<li><a class="post-section-overview" href="#heading-configure-the-exporting-rsyslog-server">Configure the exporting rsyslog server</a></li>
<li><a class="post-section-overview" href="#heading-only-forward-logs-generated-by-certain-programs">Only forward logs generated by certain programs</a></li>
<li><a class="post-section-overview" href="#heading-specify-correct-domain-names-and-certificate-paths-in-your-configuration">Specify correct domain names and certificate paths in your configuration</a></li>
<li><a class="post-section-overview" href="#heading-install-certbot-certificates">Install certbot certificates</a></li>
<li><a class="post-section-overview" href="#heading-give-access-to-certificates-to-rsyslog">Give access to certificates to rsyslog</a></li>
<li><a class="post-section-overview" href="#heading-configure-accepting-rsyslog-server">Configure accepting rsyslog server</a></li>
<li><a class="post-section-overview" href="#heading-ensure-firewall-is-not-blocking-your-traffic">Ensure firewall is not blocking your traffic</a></li>
<li><a class="post-section-overview" href="#heading-restart-rsyslog">Restart rsyslog</a></li>
<li><a class="post-section-overview" href="#heading-test-the-configuration">Test the configuration</a></li>
<li><a class="post-section-overview" href="#heading-how-to-store-remote-logs-in-a-separate-file">How to store remote logs in a separate file</a></li>
<li><a class="post-section-overview" href="#heading-performance-considerations">Performance considerations</a></li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-how-to-configure-rsyslog-to-redirect-messages-to-a-centralized-remote-server-over-a-local-network">How to Configure rsyslog to Redirect Messages to a Centralized Remote Server Over a Local Network</a><ul>
<li><a class="post-section-overview" href="#heading-export-the-server-setup">Export the server setup</a></li>
<li><a class="post-section-overview" href="#heading-accept-the-server-setup">Accept the server setup</a></li>
<li><a class="post-section-overview" href="#heading-restart-rsyslog-and-test">Restart rsyslog and test</a></li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-other-possibilities-for-log-forwarding">Other Possibilities for Log Forwarding</a></li>
<li><a class="post-section-overview" href="#heading-how-to-redirect-data-from-applications-to-syslog">How to Redirect Data from Applications to syslog</a><ul>
<li><a class="post-section-overview" href="#heading-standalone-host-application-and-syslog">Standalone host application and syslog</a><ul>
<li><a class="post-section-overview" href="#heading-redirecting-logs-to-syslog-when-running-in-foreground">Redirecting logs to syslog when running in foreground</a></li>
<li><a class="post-section-overview" href="#heading-redirecting-logs-to-syslog-when-running-in-background-with-systemctl">Redirecting logs to syslog when running in background with systemctl</a></li>
<li><a class="post-section-overview" href="#heading-redirecting-logs-from-existing-log-files">Redirecting logs from existing log files</a></li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-docker-and-syslog">Docker and syslog</a><ul>
<li><a class="post-section-overview" href="#heading-configuring-a-single-docker-container">Configuring a single Docker container</a></li>
<li><a class="post-section-overview" href="#heading-configuring-a-docker-service-through-docker-compose-file">Configuring a Docker service through docker-compose file</a></li>
<li><a class="post-section-overview" href="#heading-configuring-a-default-for-every-container-through-the-docker-daemon">Configuring a default for every container through the Docker daemon</a></li>
<li><a class="post-section-overview" href="#heading-enabling-applications-inside-docker-to-log-to-syslog-directly">Enabling applications inside Docker to log to syslog directly</a></li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-how-to-use-logging-libraries-for-your-programming-language-to-log-to-syslog">How to Use Logging Libraries for Your Programming Language to log to syslog</a><ul>
<li><a class="post-section-overview" href="#heading-nodejs-client">Node.js client</a></li>
<li><a class="post-section-overview" href="#heading-python-client">Python client</a></li>
</ul>
</li>
</ul>
</li>
<li><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>In this guide, we will discuss syslog and its associated concepts. While I will explain most of the topics we come across, you should have foundational knowledge of the following:</p>
<ul>
<li>using the Linux terminal (such as navigating the directory tree, creating and editing files, changing file permissions, etc.) </li>
<li>a basic understanding of networks (domain name, host, IP address, TLS/SSL, TLS certificate, private/public key, and so on).</li>
</ul>
<h2 id="heading-introduction">Introduction</h2>
<p>Every system/application might provide its logs in different formats. If you have to work with many such systems and maintain them, it's important to deal with logs in a centralized, manageable, and scalable way.</p>
<p>First of all, it's useful to gather all the logs from the applications on your machine into one place for later processing.</p>
<p>Having collected all the logs in one place, you can now move on to processing them. But what if your machine is just a single node out of a group of servers? In this case, local log processing gives you insights about this single node but certainly not all of them. </p>
<p>Now you may very well want to transfer all the gathered logs to a central server which parses all the records, discovers any issues and inconsistencies, fires alerts, and finally stores the logs for future forensic analysis. </p>
<p>Note the convenience of having a central point of access to all your logs. You don't have to run around from machine to machine, searching for appropriate information and manually overlaying different log files.</p>
<p>So, to achieve the above, you can leverage the syslog protocol and use a very popular syslog daemon called rsyslog to collect all the logs and forward them to a remote server for further processing in a secure and reliable fashion. </p>
<p>And that’s exactly the example that I want to present in this tutorial to showcase a common and important use case of syslog. I'll give those who are not familiar with it a first taste of the problems it can solve.</p>
<p>So we'll explore this scenario, with examples of redirecting logs from host applications, Docker containers, and Node.js and Python clients, in this article. </p>
<p>But first, you need to understand the terminology around syslog as it's often shrouded in myths, mystery, and filled to the brim with confusion. Well, maybe I'm being overly dramatic here, but you get my point: terminology is important.</p>
<h2 id="heading-what-is-syslog">What is syslog?</h2>
<p>Nowadays there is a lot of uncertainty as to what the word syslog actually refers to, so let’s clear it up:</p>
<h3 id="heading-syslog-protocol">Syslog protocol</h3>
<p>Syslog is a system logging protocol that is a standard for message logging. It defines a common message format to allow systems to send messages to a logging server, whether it is set up locally or over the network. The logging server will then handle further processing or transport of the messages. </p>
<p>As long as the format of your messages is compliant with this protocol, you just have to pass them to a logging server (or, put differently, a logging daemon which we will talk about shortly) and forget about them. </p>
<p>The transport of the messages, rotation, processing, and enrichment will, from that point on, be handled by the logging server and the infrastructure it connects to. Your application does not have to know or deal with any of that. Thus we get a decoupled architecture (log handling is separated from the application).</p>
<p>But the main point of the syslog protocol is, of course, standardization. First of all, it is much easier to parse the logs when all the applications adhere to some common standard that generates the logs in the same format (or more or less the same, but let’s not jump the gun just yet). </p>
<p>If your logs have a common format, it’s first of all easy to filter the records by a particular time window or by the respective log levels (also referred to as severity levels. For example: info, warning, error, and so on). </p>
<p>Secondly, you may have a lot of different applications that implement log transport themselves. In that case, you'd have to spend quite some time skimming through the docs, figuring out how to configure file logging, log rotation, or log forwarding for every application instead of just configuring it once in your syslog server and expecting all your applications to simply submit their logs to it.</p>
<h3 id="heading-syslog-daemons">Syslog daemons</h3>
<p>Now that you understand that syslog is a protocol that specifies the common format of log messages produced by the applications, we can talk a bit about syslog daemons. They're essentially logging servers or log shippers designed to take in log records, convert them to syslog format (if they are not already converted), and handle data transformations, enrichment, and transport to various destinations.</p>
<p>One of the original older implementations of a syslog daemon for Linux was referred to simply as <strong>syslog</strong> (leading to much confusion) or <strong>sysklogd</strong>. Later, more modern and commonly used implementations such as <strong>rsyslog</strong> or <strong>syslog-ng</strong> emerged. These were also made for Linux specifically. </p>
<p>But if you are interested in a cross platform syslog daemon, which can also be used on MacOS, Android, or even Windows, you can take a look at <strong>nxlog</strong>.</p>
<p>In the later sections of this handbook, we will see multiple practical examples of working with syslog. For this we will use rsyslog, which is a lightweight and highly performant syslog daemon with a wide range of features. It typically comes preinstalled on many Linux distributions (both Debian- and RedHat-based).</p>
<p>Rsyslog, like many other syslog daemons, listens to a <code>/dev/log</code> unix socket by default. It forwards the incoming data to a <code>/var/log/syslog</code> file on Debian-based distributions or to <code>/var/log/messages</code> on RedHat-based systems. </p>
<p>Some specific log messages are also stored in other files in <code>/var/log</code>, but the bottom line is that all of this can, of course, be configured to suit your needs.</p>
<p>Now you know about the true meaning of syslog and syslog daemons. But there is one important caveat. In many cases (and during the course of this guide as well) saying syslog colloquially refers to the syslog daemon as well as the infrastructure around it (Unix sockets, files in <code>/var/log</code>, as well as other daemons if the messages are forwarded across the network). So, saying “publish a message to syslog” means sending a message to the <code>/dev/log</code> Unix socket where it will be intercepted and processed by a syslog daemon according to its settings.</p>
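<p>To make “publish a message to syslog” concrete, here's a self-contained Python sketch. On a real Linux machine you would pass <code>/dev/log</code> as the address; here we bind our own Unix datagram socket as a stand-in so the example runs anywhere:</p>

```python
import logging
import os
import socket
import tempfile
from logging.handlers import SysLogHandler

# Stand-in for /dev/log: a Unix datagram socket we control ourselves.
sock_path = os.path.join(tempfile.mkdtemp(), "log")
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(sock_path)

logger = logging.getLogger("demo")
logger.addHandler(SysLogHandler(address=sock_path))
logger.warning("hello syslog")

# The daemon receives a PRI-prefixed datagram, e.g. <12>hello syslog
# (facility user = 1, severity warning = 4, so PRI = 1 * 8 + 4 = 12).
data = server.recv(1024).decode()
print(data)
```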
<h3 id="heading-syslog-message-formats">Syslog message formats</h3>
<p>Syslog defines a certain format (structure) of the log records. Applications working with syslog should adhere to this format when logging to <code>/dev/log</code>. From there, syslog daemons will pick up the messages, and parse and process them according to their configuration.</p>
<p>There are two types of syslog formats: the original old BSD format which came from the early versions of BSD Unix systems and became a standard with RFC3164 specification, as well as a newer one from RFC5424.</p>
<h4 id="heading-rfc3164-format">RFC3164 format</h4>
<p>This format consists of the following 3 parts: PRI, HEADER (TIMESTAMP, HOSTNAME), MSG (TAG, CONTENT). Here is a more concrete example (taken directly from RFC3164, by the way):</p>
<pre><code>&lt;<span class="hljs-number">34</span>&gt;Oct <span class="hljs-number">11</span> <span class="hljs-number">22</span>:<span class="hljs-number">14</span>:<span class="hljs-number">15</span> mymachine su: <span class="hljs-string">'su root'</span> failed <span class="hljs-keyword">for</span> lonvick on /dev/pts/<span class="hljs-number">8</span>
</code></pre><p>Let's see what's going on here:</p>
<ul>
<li><code>&lt;34&gt;</code> (PRI) – priority of the log record which consists of the facility level multiplied by 8 plus the severity level. We will talk about facilities and severity levels soon, but in the example above we get: a facility number 4 (34 // 8 = 4) and a critical severity level (34 % 8 = 2).</li>
<li><code>Oct 11 22:14:15</code> (TIMESTAMP) – timestamp in local time without the year, sub-second, or timezone portions. It follows the format string “Mmm dd hh:mm:ss”.</li>
<li><code>mymachine</code> (HOSTNAME) – hostname, IPv4, or IPv6 address of the machine that the message originates from.</li>
<li><code>su</code> (TAG) – Name of the program or process that generated the message. Any non-alphanumeric character terminates the TAG field and is assumed to be a starting part of the next (CONTENT) field. In our case, it is a colon (“:”) character. But it could also have been just a space, or even square brackets with the PID (process id) inside, such as “[123]”.</li>
<li><code>: 'su root' failed for lonvick on /dev/pts/8</code> (CONTENT) – An actual message of the log record.</li>
</ul>
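<p>The PRI arithmetic above is easy to verify in code. A small sketch:</p>

```python
# PRI = facility * 8 + severity, so decoding is integer division and modulo.
def decode_pri(pri):
    return pri // 8, pri % 8  # (facility, severity)

facility, severity = decode_pri(34)
print(facility, severity)  # 4 (auth) and 2 (critical), matching the example
```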
<p>As you can see, RFC3164 doesn’t provide a lot of structural information, and has some limitations and inconveniences such as a restricted timestamp or certain variability and uncertainty (for example, in the delimiters after the TAG field). Also, the RFC3164 format stipulates that only ASCII encoding is supported. </p>
<p>All the above is actually the result of RFC3164 not being a set-in-stone strict standard, but rather a best-effort generalization of some syslog implementations that already existed at the time.</p>
<h4 id="heading-rfc5424-format">RFC5424 format</h4>
<p>RFC5424 presents an upgraded and more structured format which deals with some of the problems found in RFC3164.</p>
<p>It consists of the following parts: HEADER (PRI, VERSION, TIMESTAMP, HOSTNAME, APP-NAME, PROCID, MSGID), STRUCTURED DATA (SD-ELEMENTS (SD-ID, SD-PARAM)), MSG. Below is an example:</p>
<pre><code>&lt;<span class="hljs-number">34</span>&gt;<span class="hljs-number">1</span> <span class="hljs-number">2003</span><span class="hljs-number">-10</span><span class="hljs-number">-11</span>T22:<span class="hljs-number">14</span>:<span class="hljs-number">15.003</span>Z mymachine.example.com su - ID47 - BOM<span class="hljs-string">'su root'</span> failed <span class="hljs-keyword">for</span> lonvick on /dev/pts/<span class="hljs-number">8</span>
</code></pre><ul>
<li><code>&lt;34&gt;</code> (PRI) – priority of the log record. Combination of severity and facility. Same as for RFC3164.</li>
<li><code>1</code> (VERSION) – version of the syslog protocol specification. This number is supposed to be incremented for any future specification that makes changes to the HEADER part.</li>
<li><code>2003-10-11T22:14:15.003Z</code> (TIMESTAMP) – a timestamp with year, sub-second information, and timezone portions. It follows the ISO 8601 standard format as described in RFC3339 with some minor restrictions, like not using leap seconds, always requiring the “T” delimiter, and upper casing every character in the timestamp. NILVALUE (“-”) will be used if the syslog application cannot obtain system time (that is, it doesn’t have access to the time on the server).</li>
<li><code>mymachine.example.com</code> (HOSTNAME) – FQDN, hostname, or the IP address of the log originator. NILVALUE may also be used when the syslog application does not know the originating host name.</li>
<li><code>su</code> (APP-NAME) – device or application that produced the message. NILVALUE may be used when the syslog application is not aware of the application name of the log producer.</li>
<li><code>-</code> (PROCID) – implementation-dependent value often used to provide a process name or process ID of the application that generated the message. A NILVALUE should be used when this field is not provided.</li>
<li><code>ID47</code> (MSGID) – field used to identify the type of message. Should contain NILVALUE when not used.</li>
<li><code>-</code> (STRUCTURED DATA) – provides sections with key-value pairs conveying additional metadata about the message. NILVALUE should be used when structured data is not provided. Examples: “[exampleSection@32473 iut="3" eventSource="Application" eventID="1011"][exampleSection2@32473 class="high"]”. In practice the STRUCTURED DATA part was rarely used and the metadata information was usually put into the MSG part that many applications structure as a JSON.</li>
<li><code>BOM'su root' failed for lonvick on /dev/pts/8</code> (MSG) – actual message of the log record. The “BOM” at the beginning is an unprintable character which signifies that the rest of the payload is UTF8 encoded. If this character is not present, then other encodings like ASCII can be assumed by the syslog daemons.</li>
</ul>
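<p>To see how the parts fit together, here's a sketch that assembles an RFC5424-style record from the fields described above, using NILVALUE (“-”) for PROCID, MSGID, and STRUCTURED DATA as in the example:</p>

```python
from datetime import datetime, timezone

# HEADER (PRI, VERSION, TIMESTAMP, HOSTNAME, APP-NAME, PROCID, MSGID),
# then STRUCTURED DATA, then MSG.
def rfc5424(pri, app, msg, host="mymachine.example.com"):
    # RFC3339-style UTC timestamp with millisecond precision
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return f"<{pri}>1 {ts} {host} {app} - - - {msg}"

print(rfc5424(34, "su", "'su root' failed for lonvick on /dev/pts/8"))
```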
<p>RFC5424 has a more convenient format used for a timestamp and much more structural parts which you can use for specifying all sorts of metadata for the log messages. </p>
<p>Also, the specification was made to be extendable with the VERSION field. Even though I am not aware of any particular syslog specification extensions that make use of the latter and increment the version, the possibility is always there. </p>
<p>Finally, the new format supports UTF8 encoding and not just ASCII.</p>
<p>Notice that if a message directed to <code>/dev/log</code> does not follow one of the described syslog formats, it will still be processed by daemons such as rsyslog. Rsyslog will try to parse such records according to either its defaults or custom <a target="_blank" href="https://www.rsyslog.com/doc/configuration/templates.html">templates</a>. </p>
<p>Templates are a separate topic on their own that needs its own article, so we will not be focusing on them now. </p>
<p>By default, rsyslog will treat such messages as unstructured data and process them as they are. It will try filling in the gaps like a timestamp field, severity level, and so on to the best of its ability and in accordance with its default parameters (for example, timestamp will become current time, log level will be “user”, and severity level will be “info”).</p>
<p>When inspecting the log records in <code>/var/log/messages</code> or <code>/var/log/syslog</code> (depending on the system) you will probably see a different format from those described above. For rsyslog it looks like this:</p>
<pre><code>Feb <span class="hljs-number">19</span> <span class="hljs-number">10</span>:<span class="hljs-number">01</span>:<span class="hljs-number">43</span> mymachine systemd[<span class="hljs-number">1</span>]: systemd-hostnamed.service: Deactivated successfully.
</code></pre><p>This is just the format that rsyslog uses to display ingested messages that it saved to disk and not a standard syslog format. You can find this format in the rsyslog.conf file or in the <a target="_blank" href="https://www.rsyslog.com/doc/configuration/templates.html#:~:text=to%20define%20them%3A-,RSYSLOG_TraditionalFileFormat,-%2D%20The%20%E2%80%9Cold%20style">official documentation</a> under the name <code>RSYSLOG_TraditionalFileFormat</code>. But you can always configure how rsyslog outputs its messages yourself using <a target="_blank" href="https://www.rsyslog.com/doc/configuration/templates.html">templates</a>.</p>
<p>One important aspect to understand is that rsyslog processes messages as they come. It receives and immediately forwards them to the specified destinations or saves them locally to files, such as <code>/var/log/messages</code>. Once the messages are fully processed, rsyslog does not retain any metadata about them apart from what it stored to log files. </p>
<p>This means that if records in <code>/var/log/messages</code> are stored in the traditional rsyslog format presented above, they will not keep, for example, their initial PRI value. While PRI and other data are accessible to rsyslog internally when processing and routing messages, not all of this information is by default stored in the log files.</p>
<h3 id="heading-syslog-log-levels">Syslog log levels</h3>
<p>Syslog supports the following log levels, referred to as severity levels in syslog's terminology:</p>
<table><colgroup><col><col><col></colgroup><tbody><tr><td><p><span>Code</span></p></td><td><p><span>Log level&nbsp;</span></p></td><td><p><span>Key word</span></p></td></tr><tr><td><p><span>0</span></p></td><td><p><span>Emergency&nbsp;</span></p></td><td><p><span>emerg</span></p></td></tr><tr><td><p><span>1</span></p></td><td><p><span>Alert&nbsp;</span></p></td><td><p><span>alert</span></p></td></tr><tr><td><p><span>2</span></p></td><td><p><span>Critical&nbsp;</span></p></td><td><p><span>crit</span></p></td></tr><tr><td><p><span>3</span></p></td><td><p><span>Error&nbsp;</span></p></td><td><p><span>err</span></p></td></tr><tr><td><p><span>4</span></p></td><td><p><span>Warning&nbsp;</span></p></td><td><p><span>warning</span></p></td></tr><tr><td><p><span>5</span></p></td><td><p><span>Notice&nbsp;</span></p></td><td><p><span>notice</span></p></td></tr><tr><td><p><span>6</span></p></td><td><p><span>Informational&nbsp;</span></p></td><td><p><span>info</span></p></td></tr><tr><td><p><span>7</span></p></td><td><p><span>Debug&nbsp;</span></p></td><td><p><span>debug</span></p></td></tr></tbody></table>

<p>These levels allow you to categorize messages by the severity (importance) criteria, with emergency being the highest level.</p>
<h3 id="heading-syslog-facilities">Syslog facilities</h3>
<p>Syslog facilities represent the origin of a message. You can often use them for filtering and categorizing log records by the system that generated them.</p>
<p>Note that syslog facilities (as well as severity levels, actually) are not strictly normative, so different facilities and levels may be used by different operating systems and distributions. Many details here are historically rooted and not always utility-based.</p>
<table><colgroup><col><col><col></colgroup><tbody><tr><td><p><span>Code</span></p></td><td><p><span>Keyword</span></p></td><td><p><span>Description</span></p></td></tr><tr><td><p><span>0</span></p></td><td><p><span>kern</span></p></td><td><p><span>Kernel messages</span></p></td></tr><tr><td><p><span>1</span></p></td><td><p><span>user</span></p></td><td><p><span>General user-level messages. This facility is typically used by default if no other is specified</span></p></td></tr><tr><td><p><span>2</span></p></td><td><p><span>mail</span></p></td><td><p><span>Mail system messages. Generated by running mail servers and clients if any.</span></p></td></tr><tr><td><p><span>3</span></p></td><td><p><span>daemon</span></p></td><td><p><span>System daemon message not related to kernel but to other background services</span></p></td></tr><tr><td><p><span>4</span></p></td><td><p><span>auth</span></p></td><td><p><span>General security/authentication messages (generated by tools like su, login, ftpd etc. 
that ask for user credentials)</span></p></td></tr><tr><td><p><span>5</span></p></td><td><p><span>syslog</span></p></td><td><p><span>Messages generated by the syslog daemon itself</span></p></td></tr><tr><td><p><span>6</span></p></td><td><p><span>lpr</span></p></td><td><p><span>Messages of line printer subsystem generated by printing services</span></p></td></tr><tr><td><p><span>7</span></p></td><td><p><span>news</span></p></td><td><p><span>Network news subsystem messages (can be used when network devices generate syslog messages)</span></p></td></tr><tr><td><p><span>8</span></p></td><td><p><span>uucp</span></p></td><td><p><span>UUCP (Unix-to-Unix Copy Protocol) subsystem messages</span></p></td></tr><tr><td><p><span>9</span></p></td><td><p><span>cron</span></p></td><td><p><span>Cron daemon messages related to the scheduled tasks (errors, scheduled tasks results etc)</span></p></td></tr><tr><td><p><span>10</span></p></td><td><p><span>authpriv</span></p></td><td><p><span>Private security/authentication messages. The messages with this level are routed to a separate file with more restricted permissions</span></p></td></tr><tr><td><p><span>11</span></p></td><td><p><span>ftp</span></p></td><td><p><span>File Transfer Protocol subsystem messages</span></p></td></tr><tr><td><p><span>12</span></p></td><td><p><span>ntp</span></p></td><td><p><span>Network Time Protocol subsystem messages</span></p></td></tr><tr><td><p><span>13</span></p></td><td><p><span>security/log audit</span></p></td><td><p><span>Messages generated by auditing (sub)systems. Facility is also used by various security/authorization tools</span></p></td></tr><tr><td><p><span>14</span></p></td><td><p><span>console/log alert</span></p></td><td><p><span>Alert messages that require special attention; generated by alerting systems. 
Can also be used by various security/authorization tools or any other application that needs to relay an important alert message</span></p></td></tr><tr><td><p><span>15</span></p></td><td><p><span>solaris-cron/clock</span></p></td><td><p><span>Messages to a clock daemon like cron. Pretty much the same as facility 9; the difference is historic rather than functional.</span></p></td></tr><tr><td><p><span>16-23</span></p></td><td><p><span>local0-local7</span></p></td><td><p><span>Local facilities reserved for custom use by processes that do not fit into the categories defined above</span></p></td></tr></tbody></table>

<p>Note that the syslog protocol specification defines only the codes for facilities. The keywords may be used by syslog daemons for readability.</p>
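<p>As a side note, the facility code never travels in a message on its own: syslog combines it with the severity level (0-7) into a single PRI number, computed as facility × 8 + severity, which prefixes every message. A quick shell check of that arithmetic:</p>
<pre><code># PRI = facility * 8 + severity (per the syslog protocol)
facility=1    # "user"
severity=6    # "informational"
pri=$((facility * 8 + severity))
echo "$pri"   # prints 14; on the wire the PRI appears wrapped in angle brackets at the very start of the message
</code></pre>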
<p>Now that you have seen the full list of facilities, pay attention to ones like “security”, “authpriv”, “log audit”, and “log alert”: an application can log to different facilities depending on the nature of the message. </p>
<p>For example, an application might typically log to the “user” facility, but once it receives an important alert, it might log to facility 14 (log alert). Or in case of some authentication/authorization notice, it may direct it to “auth” facility, and so on.</p>
<p>If you have a custom application and are wondering which facility would be best suited for it, you can use the “user” facility (code 1) or the custom local facilities (codes 16-23). </p>
<p>The main difference is that “user” is a general-purpose facility that aggregates logs from many different user applications, while the local facilities give your application a dedicated channel. Be aware, though, that other software on your machine might just as well use one of the local facilities.</p>
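<p>To make this concrete, here is a sketch of an rsyslog rule that routes everything an application logs to <code>local0</code> into its own file (the facility choice and the file path are assumptions for illustration):</p>
<pre><code># Classic selector syntax: everything logged to the local0 facility goes to its own file
local0.*    /var/log/myapp.log

# Equivalent RainerScript form
if $syslogfacility-text == 'local0' then {
  action(type="omfile" file="/var/log/myapp.log")
  stop
}
</code></pre>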
<h2 id="heading-how-to-configure-rsyslog-to-redirect-messages-to-a-centralized-remote-server-using-tls">How to Configure <code>rsyslog</code> to Redirect Messages to a Centralized Remote Server using TLS</h2>
<p>Let’s now look at a practical example that I mentioned at the beginning. This might not appear to be the most basic use case – especially for those who are not familiar with syslog daemons – but it's quite a common scenario. I hope it will help you learn a lot of useful things along the way.</p>
<p>Now I'll walk you through the steps you'll need to take to forward the syslog data from one server to another that will play the role of a centralized log aggregator. In this example, we will be sending logs as they flow in, using the TCP protocol with certificates for encryption and identity verification.</p>
<p>In the following examples, I assume that you have a centralized server for accepting the syslog data and one or more exporting servers that forward their syslog messages to that central accepting node. I'll also assume that all the servers are discoverable by their respective domain names and are running Debian-based or RedHat-based Linux distributions.</p>
<p>So, let's dive in and get started.</p>
<h3 id="heading-update-rsyslog">Update rsyslog</h3>
<p>As rsyslog typically comes preinstalled on most common Linux distros, I won't cover the installation process here. Just make sure your rsyslogd is recent enough to take advantage of its wide range of features.</p>
<p>Run the following command across all your servers:</p>
<pre><code>rsyslogd -v
</code></pre><p>And ensure that the version in the output is 6 or higher.</p>
<p>If this is not the case, run the following commands to update your daemon:</p>
<p>For Debian-based distributions:</p>
<pre><code>sudo apt-get update
sudo apt-get install --only-upgrade rsyslog
sudo systemctl restart rsyslog
</code></pre><p>For RedHat-based:</p>
<pre><code>sudo yum update rsyslog
sudo systemctl restart rsyslog
</code></pre><p>Or use <code>dnf</code> instead of <code>yum</code> on CentOS 8/RHEL 8.</p>
<h3 id="heading-install-dependencies">Install dependencies</h3>
<p>To handle the secure forwarding of the messages over the network using TLS, we will need to install the <code>rsyslog-gnutls</code> module. If you prefer to compile rsyslog from source, you will have to specify a respective flag when building. But if you use package managers you can simply run the following for every server:</p>
<p>For Debian-based distributions: </p>
<pre><code>sudo apt-get update
sudo apt-get install rsyslog-gnutls
sudo systemctl restart rsyslog
</code></pre><p>For RedHat based:</p>
<pre><code>sudo yum install epel-release
sudo yum install rsyslog-gnutls
sudo systemctl restart rsyslog
</code></pre><h3 id="heading-configure-the-exporting-rsyslog-server">Configure the exporting rsyslog server</h3>
<p>Now, we will create an rsyslog configuration file for the nodes that are going to be exporting their logs to the central server. To do so, create the file in rsyslog's config directory:</p>
<pre><code>sudo touch /etc/rsyslog.d/<span class="hljs-keyword">export</span>-syslog.conf
</code></pre><p>Ensure that the file is readable by the <code>syslog</code> user on Debian-based distributions (<code>chown syslog:adm /etc/rsyslog.d/export-syslog.conf</code> or <code>chmod 644 /etc/rsyslog.d/export-syslog.conf</code>). Note that on RedHat-based distros like CentOS rsyslog runs under root, so there shouldn’t be any permissions issues.</p>
<p>Now open the created file and add the following configuration:</p>
<pre><code># <span class="hljs-built_in">Set</span> certificate files
<span class="hljs-built_in">global</span>(
  DefaultNetstreamDriverCAFile=<span class="hljs-string">"&lt;path_to_your_ca.pem&gt;"</span>
  DefaultNetstreamDriverCertFile=<span class="hljs-string">"&lt;path_to_your_cert.pem&gt;"</span>
  DefaultNetstreamDriverKeyFile=<span class="hljs-string">"&lt;path_to_your_private_key.pem&gt;"</span>
)

# <span class="hljs-built_in">Set</span> up the forwarding action <span class="hljs-keyword">for</span> all messages
*.* action(
  type=<span class="hljs-string">"omfwd"</span>
  StreamDriver=<span class="hljs-string">"gtls"</span>
  StreamDriverMode=<span class="hljs-string">"1"</span>
  StreamDriverPermittedPeers=<span class="hljs-string">"&lt;domain_name_of_your_accepting_central_server&gt;"</span>
  StreamDriverAuthMode=<span class="hljs-string">"x509/name"</span>
  target=<span class="hljs-string">"&lt;domain_name_or_ip_of_your_accepting_central_server&gt;"</span> port=<span class="hljs-string">"514"</span> protocol=<span class="hljs-string">"tcp"</span>
  action.resumeRetryCount=<span class="hljs-string">"100"</span> # you may change the queue and retry params <span class="hljs-keyword">as</span> you see fit
  queue.type=<span class="hljs-string">"linkedList"</span> queue.size=<span class="hljs-string">"10000"</span>
)
</code></pre><p>The above configuration will forward all messages ingested by rsyslog to your remote server. If you need more fine-grained control, refer to the subsection below.</p>
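<p>One refinement worth considering: with the purely in-memory queue above, messages still waiting to be forwarded are lost if rsyslog stops or the central server stays unreachable for a long time. Giving the queue a filename makes it disk-assisted, so it spills to disk under pressure and survives restarts. A sketch (the size values are assumptions — tune them to your load):</p>
<pre><code>*.* action(
  type="omfwd"
  target="&lt;domain_name_or_ip_of_your_accepting_central_server&gt;" port="514" protocol="tcp"
  action.resumeRetryCount="-1"     # retry indefinitely instead of eventually discarding
  queue.type="linkedList"
  queue.filename="fwd-queue"       # setting a filename enables disk assistance
  queue.maxDiskSpace="1g"          # cap for the on-disk part of the queue
  queue.saveOnShutdown="on"        # persist in-memory messages across restarts
)
</code></pre>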
<h3 id="heading-only-forward-logs-generated-by-certain-programs">Only forward logs generated by certain programs</h3>
<p>If you want to forward messages for a certain program only, you can specify the following condition instead of <code>*.*</code> before <code>action</code> in the configuration above:</p>
<pre><code><span class="hljs-keyword">if</span> $programname == <span class="hljs-string">'&lt;your_program_name&gt;'</span> then
# ...right here goes your action and all the rest
</code></pre><p>If you want to specify more than one program name, add multiple conditions using <code>or</code>:</p>
<pre><code><span class="hljs-keyword">if</span> ($programname == <span class="hljs-string">'&lt;your_program_name1&gt;'</span> or $programname == <span class="hljs-string">'&lt;your_program_name2&gt;'</span> or $programname == <span class="hljs-string">'&lt;your_program_name3&gt;'</span>) then
# ...right here goes your action and all the rest
</code></pre><p>For more information refer to the RainerScript documentation for rsyslog <a target="_blank" href="https://www.rsyslog.com/doc/rainerscript/control_structures.html#if-else-if-else">here</a>.</p>
<h3 id="heading-specify-correct-domain-names-and-certificate-paths-in-your-configuration">Specify correct domain names and certificate paths in your configuration</h3>
<p>Now, let’s go back to our configuration file. It will use TLS (as you can see in <code>StreamDriverMode="1"</code>) and forward all the data to <code>target</code> on port 514, which is the default port for syslog. </p>
<p>To make this configuration valid, you will need to replace <code>&lt;domain_name_of_your_accepting_central_server&gt;</code> and <code>&lt;domain_name_or_ip_of_your_accepting_central_server&gt;</code> with the respective domain name of your central accepting server (for example: <code>my-central-server.company.com</code>) as well as specify correct paths to certificates in the <code>global</code> section.</p>
<p>Note that, since on Debian-like distros rsyslog typically runs under the <code>syslog</code> user, you will have to ensure that the certificates themselves and all the directories in their path are readable and accessible by this user (for directories this means that both “r” and “x” permission bits must be set).</p>
<p>On RedHat-based systems, on the other hand, rsyslog often runs as root, so there is no need to tweak the file permissions.</p>
<p>To check under which user your rsyslog runs, run the following:</p>
<pre><code>sudo ps aux | grep rsyslog
</code></pre><p>Then look at the username in the first column of the line running rsyslogd.</p>
<p>If you don’t have SSL certificates yet, read the next two subsections about installing certs with Let's Encrypt and providing access to rsyslog. If you already have all the needed certificates and permissions, you can safely skip these steps.</p>
<h3 id="heading-install-certbot-certificates">Install certbot certificates</h3>
<p>First, you'll need to install certbot. For Debian-based systems, run the following:</p>
<pre><code>sudo apt-get install certbot
</code></pre><p>If you get an error that the package is not found, run <code>sudo apt-get update</code> and try again.</p>
<p>For RedHat-based systems:</p>
<pre><code>sudo yum install epel-release
sudo yum install certbot
</code></pre><p>Ensure that no server is running on port 80 and then run certbot in standalone mode, specifying your domain name with the <code>-d</code> flag to get your SSL certificates:</p>
<pre><code class="lang-js">sudo certbot certonly --standalone -d &lt;your_domain_name&gt;
# For example: sudo certbot certonly --standalone -d my-server1.mycompany.com
</code></pre>
<p>Follow the prompts of certbot, and in the end you will receive your SSL certificates that will be stored at <code>/etc/letsencrypt/live/&lt;your_domain_name&gt;/</code>.</p>
<p>Confirm that there are no problems during the certificate renewal process like this:</p>
<pre><code>sudo certbot renew --dry-run
</code></pre><p>Certificates will be automatically renewed by certbot, so you don’t have to worry about manually updating them every time. If you installed certbot as described above, it will use a systemd timer or create a cron job to handle renewals (you can verify this with <code>systemctl list-timers | grep certbot</code>).</p>
<h3 id="heading-give-access-to-certificates-to-rsyslog">Give access to certificates to rsyslog</h3>
<p>If you are running a Debian-based system, then, as mentioned above, you have to grant the <code>syslog</code> user the necessary privileges to access certbot certificates and keys. This is because the <code>/etc/letsencrypt/live</code> directory with certbot-generated files is restricted to root user only. </p>
<p>So, we will copy the certificates and keys over to standard certs and keys locations. For Debian-based distributions, these are <code>/etc/ssl/certs</code> and <code>/etc/ssl/private</code>, respectively. Then we'll change the permissions of these files.</p>
<p>First, create a group that will have access to SSL certificates:</p>
<pre><code>sudo groupadd sslcerts
</code></pre><p>Add the <code>syslog</code> user to this group:</p>
<pre><code>sudo usermod -a -G sslcerts syslog
</code></pre><p>Add permissions and ownership for the <code>/etc/ssl/private</code> directory to the created group:</p>
<pre><code>sudo chown root:sslcerts /etc/ssl/private
sudo chmod <span class="hljs-number">750</span> /etc/ssl/private
</code></pre><p>Now, we'll create a script that copies the certificate files from Let's Encrypt’s live directory to <code>/etc/ssl</code>. Run the following:</p>
<pre><code>sudo touch /etc/letsencrypt/renewal-hooks/deploy/move-ssl-certs.sh
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/move-ssl-certs.sh
</code></pre><p>Note that by creating the script in the <code>/etc/letsencrypt/renewal-hooks/deploy</code> directory, it will automatically run after every certificate renewal. This way, you won’t have to worry about manually moving certificates and granting permissions every time they expire.</p>
<p>Open the created file and add the following content, replacing <code>&lt;your-domain-name&gt;</code> with the domain of your machine which corresponds to the directory created by certbot in <code>/etc/letsencrypt/live</code>:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash

# Define the source and destination directories
DOMAIN_NAME=<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">your-domain-name</span>&gt;</span>
LE_LIVE_PATH="/etc/letsencrypt/live/$DOMAIN_NAME"
SSL_CERTS_PATH="/etc/ssl/certs"
SSL_PRIVATE_PATH="/etc/ssl/private"

# Copy the full chain and private key to the respective directories
cp "$LE_LIVE_PATH/fullchain.pem" "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-fullchain.pem"
cp "$LE_LIVE_PATH/cert.pem" "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-cert.pem"
cp "$LE_LIVE_PATH/privkey.pem" "$SSL_PRIVATE_PATH/$DOMAIN_NAME-letsencrypt-privkey.pem"

# Set ownership and permissions
chown root:sslcerts "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-fullchain.pem"
chown root:sslcerts "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-cert.pem"
chown root:sslcerts "$SSL_PRIVATE_PATH/$DOMAIN_NAME-letsencrypt-privkey.pem"

chmod 644 "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-fullchain.pem"
chmod 644 "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-cert.pem"
chmod 640 "$SSL_PRIVATE_PATH/$DOMAIN_NAME-letsencrypt-privkey.pem"</span>
</code></pre><p>Now, execute the created script to actually copy the certificates into <code>/etc/ssl</code> and give the <code>syslog</code> user access to them:</p>
<pre><code>sudo /etc/letsencrypt/renewal-hooks/deploy/move-ssl-certs.sh
</code></pre><p>Lastly, go into the rsyslog configuration file <code>/etc/rsyslog.d/export-syslog.conf</code> and change the certificate paths accordingly:</p>
<pre><code># <span class="hljs-built_in">Set</span> certificate files
<span class="hljs-built_in">global</span>(
DefaultNetstreamDriverCAFile=<span class="hljs-string">"/etc/ssl/certs/&lt;your_domain_name&gt;-letsencrypt-fullchain.pem"</span>
  DefaultNetstreamDriverCertFile=<span class="hljs-string">"/etc/ssl/certs/&lt;your_domain_name&gt;-letsencrypt-cert.pem"</span>
  DefaultNetstreamDriverKeyFile=<span class="hljs-string">"/etc/ssl/private/&lt;your_domain_name&gt;-letsencrypt-privkey.pem"</span>
)
</code></pre><p>Note that even though rsyslog typically runs as root on RedHat-based distributions, you may find that it’s not the case for your system. </p>
<p>If it's not, you can apply the same permission changes as we did above. Just note that the recommended default locations for SSL certificates and keys might differ: on CentOS they are <code>/etc/pki/tls/certs</code> and <code>/etc/pki/tls/private</code>. You can also always choose completely different locations if need be.</p>
<h3 id="heading-configure-accepting-rsyslog-server">Configure accepting rsyslog server</h3>
<p>Let’s now configure a central server that will accept the logs from the rest of the machines.</p>
<p>If you haven’t acquired SSL certificates for your server, refer to the section on <a class="post-section-overview" href="#heading-install-certbot-certificates">installing certbot certificates</a>.</p>
<p>If your server is Debian-based, refer to the section on <a class="post-section-overview" href="#heading-give-access-to-certificates-to-rsyslog">giving access to certificates to rsyslog</a>.</p>
<p>Now, similar to configuring the exporting server, create an rsyslog configuration file:</p>
<pre><code>sudo touch /etc/rsyslog.d/<span class="hljs-keyword">import</span>-syslog.conf
</code></pre><p>Open the file and add the following:</p>
<pre><code># <span class="hljs-built_in">Set</span> certificate files
<span class="hljs-built_in">global</span>(
  DefaultNetstreamDriverCAFile=<span class="hljs-string">"/etc/ssl/certs/&lt;your_domain_name&gt;-letsencrypt-fullchain.pem"</span>
  DefaultNetstreamDriverCertFile=<span class="hljs-string">"/etc/ssl/certs/&lt;your_domain_name&gt;-letsencrypt-cert.pem"</span>
  DefaultNetstreamDriverKeyFile=<span class="hljs-string">"/etc/ssl/private/&lt;your_domain_name&gt;-letsencrypt-privkey.pem"</span>
)

# TCP listener
<span class="hljs-built_in">module</span>(
  load=<span class="hljs-string">"imtcp"</span>
  PermittedPeer=[<span class="hljs-string">"&lt;your_peer1&gt;"</span>,<span class="hljs-string">"&lt;your_peer2&gt;"</span>,<span class="hljs-string">"&lt;your_peer3&gt;"</span>]
  StreamDriver.AuthMode=<span class="hljs-string">"x509/name"</span>
  StreamDriver.Mode=<span class="hljs-string">"1"</span>
  StreamDriver.Name=<span class="hljs-string">"gtls"</span>
)

# Start up listener at port <span class="hljs-number">514</span>
input(
  type=<span class="hljs-string">"imtcp"</span>
  port=<span class="hljs-string">"514"</span>
)
</code></pre><p>Note that you need to replace <code>PermittedPeer=["&lt;your_peer1&gt;","&lt;your_peer2&gt;","&lt;your_peer3&gt;"]</code> with an array of the domain names of your export servers, for example: <code>PermittedPeer=["export-server1.company.com","export-server2.company.com","export-server3.company.com"]</code>. Wildcards such as <code>"*.company.com"</code> are also supported.</p>
<p>Also don’t forget to double check and change your certificate paths in the <code>global</code> section as needed. Again, if you are on a RedHat-based system, you may simply reference certificates in Let's Encrypt’s live directory because of the root permissions.</p>
<h3 id="heading-ensure-firewall-is-not-blocking-your-traffic">Ensure firewall is not blocking your traffic</h3>
<p>Make sure that the firewall on your central server doesn’t block incoming traffic on port 514. For example, if you are using <code>iptables</code>:</p>
<p>To check whether the rule already exists:</p>
<pre><code>sudo iptables -C INPUT -p tcp --dport <span class="hljs-number">514</span> -j ACCEPT
</code></pre><p>If the previous command exits with an error (meaning the rule is missing), you can define an accepting rule with:</p>
<pre><code>sudo iptables -A INPUT -p tcp --dport <span class="hljs-number">514</span> -j ACCEPT # rules will apply immediately
iptables-save &gt; <span class="hljs-regexp">/etc/i</span>ptables/rules.v4 # or use <span class="hljs-string">`iptables-save &gt; /etc/sysconfig/iptables`</span> <span class="hljs-keyword">for</span> RedHat-based distributions
</code></pre><h3 id="heading-restart-rsyslog">Restart rsyslog</h3>
<p>Now, after you’ve added the appropriate configurations to all your servers, you have to restart rsyslog on all of them, beginning with your central accepting node:</p>
<pre><code>sudo systemctl restart rsyslog
</code></pre><p>You can check if there are any errors after the rsyslog restart by executing the following:</p>
<pre><code class="lang-js">sudo systemctl status rsyslog
sudo journalctl -u rsyslog | tail <span class="hljs-number">-100</span>
</code></pre>
<p>The first command above will display the status of rsyslog, and the second one will output the last 100 lines of rsyslog’s logs. If you misconfigured something and your setup didn’t work, you should find helpful information there.</p>
<h3 id="heading-test-the-configuration">Test the configuration</h3>
<p>In order to test whether your syslog redirection worked, issue the following command on the central node to start watching for new data coming into syslog:</p>
<p>For Debian-based systems:</p>
<pre><code>tail -f /<span class="hljs-keyword">var</span>/log/syslog
</code></pre><p>For RedHat-based:</p>
<pre><code>tail -f /<span class="hljs-keyword">var</span>/log/messages
</code></pre><p>After that, go to each of your export nodes and run:</p>
<pre><code>logger <span class="hljs-string">"Hello, world!"</span>
</code></pre><p>You should see a “Hello, world!” message from each export server popping up in the syslog of your accepting machine.</p>
<p>If everything worked, then congrats! You have now successfully set up and verified syslog redirection over the network.</p>
<p>Note: press Ctrl+C to exit from the <code>tail -f</code> command executed on the central node.</p>
<p>In a later section, we will consider the same scenario but without certificates in case all your servers are located in a trusted local network. After that, we will finally explore how to redirect actual data from different applications to syslog.</p>
<h3 id="heading-how-to-store-remote-logs-in-a-separate-file">How to store remote logs in a separate file</h3>
<p>But wait a second – before moving on, let’s consider a small useful modification to our setup.</p>
<p>Let’s say you want to set up your central rsyslog in such a way that it will redirect remote traffic to a separate file instead of the typical <code>/var/log/syslog</code> or <code>/var/log/messages</code>.</p>
<p>To do this, make the following changes to your <code>/etc/rsyslog.d/import-syslog.conf</code>:</p>
<p>Add a ruleset property to the <code>input</code> object:</p>
<pre><code>input(
  type=<span class="hljs-string">"imtcp"</span>
  port=<span class="hljs-string">"514"</span>
  ruleset=<span class="hljs-string">"remote"</span>
)
</code></pre><p>Then add the following ruleset at the bottom of the file:</p>
<pre><code>ruleset(name=<span class="hljs-string">"remote"</span>) {
  <span class="hljs-keyword">if</span> $hostname == <span class="hljs-string">'&lt;your_remote_hostname&gt;'</span> then {
    action(type=<span class="hljs-string">"omfile"</span> file=<span class="hljs-string">"/var/log/remote-logs.log"</span>)
    stop
  }
}
</code></pre><p>Change <code>&lt;your_remote_hostname&gt;</code> accordingly. You can also define multiple hostnames with an <code>or</code> as we have seen before. Also feel free to change the path to the output file (that is, <code>/var/log/remote-logs.log</code>) to suit your needs.</p>
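<p>A common variation, sketched here under the assumption that a <code>/var/log/remote</code> directory suits you, is to split remote logs into one file per sending host using a dynamic file template instead of a fixed path:</p>
<pre><code>template(name="RemoteLogs" type="string" string="/var/log/remote/%HOSTNAME%.log")

ruleset(name="remote") {
  action(type="omfile" dynaFile="RemoteLogs")
  stop
}
</code></pre>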
<p>After that, restart rsyslog.</p>
<h3 id="heading-performance-considerations">Performance considerations</h3>
<p>Rsyslog is a very light and performant tool for managing and forwarding your logs over the network. Still, maintaining TCP connections and performing TLS handshakes to validate certificates for every log message (or batch of messages) comes at a cost. </p>
<p>In the next section, you'll learn how to perform TCP and UDP forwarding without TLS certificates. This will typically be a more performant way, but you should only use it in a trusted local network.</p>
<p>As for UDP, even though it’s more performant than TCP, you should use it only if potential data losses are acceptable.</p>
<p>If you don’t need a near real time log delivery, you might be better off storing all your logs in a single file (you can do this with rsyslog or by employing other tools or techniques). Then you can schedule a separate script, which will transfer this file to a central server when it reaches a certain size or when certain time intervals elapse.</p>
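<p>As a sketch of that batch approach (the threshold, the host, and the paths are all hypothetical), a script like the following could run from cron and ship the aggregated file only once it grows past a size limit:</p>
<pre><code>#!/bin/bash
# Hypothetical batch shipper: forward the aggregated log once it exceeds a threshold.
LOG_FILE="/var/log/aggregated.log"   # assumed aggregation target
MAX_BYTES=$((10 * 1024 * 1024))      # 10 MiB threshold; tune to your needs

if [ -f "$LOG_FILE" ]; then
  size=$(stat -c %s "$LOG_FILE")
else
  size=0
fi

if [ "$size" -ge "$MAX_BYTES" ]; then
  # Copy to the central server (placeholder host/path), then truncate on success
  if rsync -az "$LOG_FILE" "central.example.com:/var/log/incoming/"; then
    truncate -s 0 "$LOG_FILE"
  fi
fi
</code></pre>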
<p>In any case, before employing a particular solution, make sure you do a benchmark focusing on load testing your system to discover which approach works best for you.</p>
<h2 id="heading-how-to-configure-rsyslog-to-redirect-messages-to-a-centralized-remote-server-over-a-local-network">How to Configure <code>rsyslog</code> to Redirect Messages to a Centralized Remote Server Over a Local Network</h2>
<p>If your scenario doesn’t involve communications over an untrustworthy network, you might decide not to use certificates for forwarding your syslog records. I mean, TLS handshakes are costly after all! </p>
<p>The configuration we'll discuss now is also quite a bit simpler and involves fewer steps, since most of the effort in the TLS setup went into SSL certificates and their file permissions.</p>
<h3 id="heading-export-the-server-setup">Exporting server setup</h3>
<p>To configure your exporting server to forward syslog data using TCP without encryption, log in to every exporting server and create an rsyslog configuration file:</p>
<pre><code>sudo touch /etc/rsyslog.d/<span class="hljs-keyword">export</span>-syslog.conf
</code></pre><p>Open this file and add the following configuration, replacing <code>&lt;your_remote_server_hostname_or_ip&gt;</code> with the hostname or IP of your central node, which must be discoverable on your network:</p>
<pre><code>*.* action(
  type=<span class="hljs-string">"omfwd"</span>
  target=<span class="hljs-string">"&lt;your_remote_server_hostname_or_ip&gt;"</span>
  port=<span class="hljs-string">"514"</span>
  protocol=<span class="hljs-string">"tcp"</span>
  action.resumeRetryCount=<span class="hljs-string">"100"</span>
  queue.type=<span class="hljs-string">"linkedList"</span>
  queue.size=<span class="hljs-string">"10000"</span>
)
</code></pre><p>If you want to use the UDP protocol, you can simply change <code>protocol="tcp"</code> to <code>protocol="udp"</code>. </p>
<p>In case you are now wondering whether we could use UDP to forward the traffic in our previous example with certificates, the answer is no: TLS handshakes were designed to run over TCP. There is a UDP counterpart called DTLS, but such setups are rare, tricky, and definitely out of the scope of this handbook.</p>
<p>Note that there is an alternative simpler but less flexible syntax for writing the above configuration.</p>
<p>For forwarding over TCP:</p>
<pre><code>*.* @@&lt;your_remote_server_hostname&gt;:<span class="hljs-number">514</span>
</code></pre><p>For forwarding over UDP:</p>
<pre><code>*.* @&lt;your_remote_server_hostname&gt;:<span class="hljs-number">514</span>
</code></pre><p>I am showing you these syntax variations because you may encounter them in other articles. They replicate the syntax of the sysklogd daemon (one of the first syslog daemon implementations, of which rsyslog is a backwards-compatible fork).</p>
<h3 id="heading-accept-the-server-setup">Accepting server setup</h3>
<p>Create an rsyslog configuration file on your accepting server:</p>
<pre><code>sudo touch /etc/rsyslog.d/<span class="hljs-keyword">import</span>-syslog.conf
</code></pre><p>Open the file and add the following contents:</p>
<ul>
<li>To receive TCP traffic:</li>
</ul>
<pre><code><span class="hljs-built_in">module</span>(load=<span class="hljs-string">"imtcp"</span>) # Load the imtcp <span class="hljs-built_in">module</span>
input(type=<span class="hljs-string">"imtcp"</span> port=<span class="hljs-string">"514"</span>) # Listen on TCP port <span class="hljs-number">514</span>
</code></pre><ul>
<li>To receive UDP:</li>
</ul>
<pre><code><span class="hljs-built_in">module</span>(load=<span class="hljs-string">"imudp"</span>) # Load the imudp <span class="hljs-built_in">module</span> <span class="hljs-keyword">for</span> UDP
input(type=<span class="hljs-string">"imudp"</span> port=<span class="hljs-string">"514"</span>) # Listen on UDP port <span class="hljs-number">514</span>
</code></pre><p>Legacy syntax alternatives of the config file for the receiving server would be the following:</p>
<p>For TCP:</p>
<pre><code>$ModLoad imtcp # Load the imtcp <span class="hljs-built_in">module</span> <span class="hljs-keyword">for</span> TCP
$InputTCPServerRun <span class="hljs-number">514</span> # Listen on TCP port <span class="hljs-number">514</span>
</code></pre><p>For UDP:</p>
<pre><code>$ModLoad imudp # Load the imudp <span class="hljs-built_in">module</span> <span class="hljs-keyword">for</span> UDP
$UDPServerRun <span class="hljs-number">514</span> # Listen on UDP port <span class="hljs-number">514</span>
</code></pre><p>Note, though, that while you might sometimes encounter this legacy rsyslog syntax for receiving messages, it is not compatible with sysklogd. </p>
<p>To set up a listener in sysklogd, you would have to set a different special variable in the configuration file as described <a target="_blank" href="https://wiki.gentoo.org/wiki/Sysklogd#Remote_logging">here</a>. But going into details about sysklogd is outside the scope of this article.</p>
<p>Also, I don't recommend that you use the old syntax (neither does the author of rsyslog). It is presented purely for educational purposes, so that you know what it is in case you encounter it.</p>
<p>You can read more about rsyslog’s configuration formats <a target="_blank" href="https://www.rsyslog.com/doc/configuration/conf_formats.html">here</a>.</p>
<p>In case you use a firewall, check that its settings allow incoming UDP or TCP connections on port 514.</p>
<h3 id="heading-restart-rsyslog-and-test">Restart rsyslog and test</h3>
<p>Go to every machine, starting with the accepting server, and restart rsyslog. Then check that there are no errors in its logs:</p>
<pre><code>sudo systemctl restart rsyslog
sudo journalctl -u rsyslog | tail <span class="hljs-number">-100</span>
</code></pre><h2 id="heading-other-possibilities-for-log-forwarding">Other Possibilities for Log Forwarding</h2>
<p>Rsyslog is a very powerful tool with a lot more functionality than we have covered so far. For example, it supports direct forwarding of the logs to Elasticsearch, which is a very performant log storage system. But that's a separate topic which deserves its own article. </p>
<p>For now, I will just give you a taste of what an example rsyslog-to-Elasticsearch configuration might look like:</p>
<pre><code># Note that you will have to install <span class="hljs-string">"rsyslog-elasticsearch"</span> using your package manager like apt or yum
<span class="hljs-built_in">module</span>(load=<span class="hljs-string">"omelasticsearch"</span>) # Load the Elasticsearch output <span class="hljs-built_in">module</span>

# Define a template to construct a <span class="hljs-built_in">JSON</span> message <span class="hljs-keyword">for</span> every rsyslog record, since Elasticsearch works <span class="hljs-keyword">with</span> <span class="hljs-built_in">JSON</span>
template(name=<span class="hljs-string">"plain-syslog"</span>
         type=<span class="hljs-string">"list"</span>) {
           constant(value=<span class="hljs-string">"{"</span>)
             constant(value=<span class="hljs-string">"\"@timestamp\":\""</span>)      property(name=<span class="hljs-string">"timereported"</span> dateFormat=<span class="hljs-string">"rfc3339"</span>)
             constant(value=<span class="hljs-string">"\",\"host\":\""</span>)         property(name=<span class="hljs-string">"hostname"</span>)
             constant(value=<span class="hljs-string">"\",\"severity\":\""</span>)     property(name=<span class="hljs-string">"syslogseverity-text"</span>)
             constant(value=<span class="hljs-string">"\",\"facility\":\""</span>)     property(name=<span class="hljs-string">"syslogfacility-text"</span>)
             constant(value=<span class="hljs-string">"\",\"syslogtag\":\""</span>)    property(name=<span class="hljs-string">"syslogtag"</span>)
             constant(value=<span class="hljs-string">"\",\"message\":\""</span>)      property(name=<span class="hljs-string">"msg"</span> format=<span class="hljs-string">"json"</span>)
           constant(value=<span class="hljs-string">"\"}\n"</span>)
         }

# Redirect all logs to syslog-index <span class="hljs-keyword">of</span> Elasticsearch which listens on localhost:<span class="hljs-number">9200</span>
*.* action(type=<span class="hljs-string">"omelasticsearch"</span>
       server=<span class="hljs-string">"localhost:9200"</span>
       searchIndex=<span class="hljs-string">"syslog-index"</span>
       template=<span class="hljs-string">"plain-syslog"</span>)
</code></pre><p>For more information refer to the <a target="_blank" href="https://www.rsyslog.com/doc/configuration/modules/omelasticsearch.html">docs</a>.</p>
<h2 id="heading-how-to-redirect-data-from-applications-to-syslog">How to Redirect Data from Applications to <code>syslog</code></h2>
<p>So far we have covered configuring a syslog daemon. But how do we actually push logs from real applications into syslog?</p>
<p>Ideally, it would be best if your application already had a syslog integration and could be configured to send the logs to syslog directly. </p>
<p>But what if this is not the case? Well, it certainly is a pity, because manually redirecting <code>stdout</code> and <code>stderr</code> to syslog might come with its challenges and inconveniences. But don’t worry, it’s not that complicated! At least sort of.</p>
<p>Let’s take a look at different scenarios:</p>
<h3 id="heading-standalone-host-application-and-syslog">Standalone host application and syslog</h3>
<p>First of all, let’s consider that you already have an application running locally on your host machine (no containerization). There are multiple ways to redirect its logs in this case.</p>
<p>Instead of using general example commands and constantly repeating “adjust the command above accordingly” (which I am quite tired of at this point), I am going to install a real application and show the redirection with concrete examples.</p>
<p>The application of choice will be the Mosquitto broker, since I am very fond of MQTT, but you can use whatever you like, as it’s just an example.</p>
<p>Oh, and by the way, Mosquitto does provide a direct integration with syslog. It just requires a small change (<code>log_dest syslog</code>) in its configuration file. But we will not be looking into this, since our assumption is that we are working with an application incapable of interfacing with syslog directly.</p>
<p>Here's how to install the broker on Debian-based systems:</p>
<pre><code>sudo apt-get update
sudo apt-get install mosquitto
</code></pre><p>And here's RedHat-based installation:</p>
<pre><code>sudo yum install epel-release
sudo yum install mosquitto
</code></pre><p>After the installation, Mosquitto might automatically start running in the background, so I stop it with <code>sudo systemctl stop mosquitto</code>.</p>
<h4 id="heading-redirecting-logs-to-syslog-when-running-in-foreground">Redirecting logs to syslog when running in foreground</h4>
<p>You can run Mosquitto in the foreground and redirect all its logs to syslog using the “info” severity level and the local0 facility:</p>
<pre><code>sudo mosquitto -c /etc/mosquitto/mosquitto.conf -v <span class="hljs-number">2</span>&gt;&amp;<span class="hljs-number">1</span> | sudo logger -t mosquitto -p local0.info
</code></pre><ul>
<li>The <code>-c</code> option specifies a non-default Mosquitto configuration file and may be omitted. </li>
<li><code>-v</code> specifies a verbose mode which produces more output.</li>
<li>The <code>-t</code> flag provided to the <code>logger</code> command sets the TAG field, which typically represents the program name.</li>
</ul>
<p>Note that the default facility of the <code>logger</code> tool is <code>user</code> and the default severity level is <code>notice</code>.</p>
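<p>These defaults matter because syslog encodes the facility and severity of every record into a single priority value, PRI = facility * 8 + severity (rsyslog exposes it as the <code>%PRI%</code> property). A quick sketch of the arithmetic, using the standard numeric codes:</p>

```shell
# PRI = facility * 8 + severity (numeric codes from RFC 3164)
# logger's defaults, user.notice: facility user = 1, severity notice = 5
echo $(( 1 * 8 + 5 ))    # 13
# local0.info, as used above: facility local0 = 16, severity info = 6
echo $(( 16 * 8 + 6 ))   # 134
```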
<p>Forwarding all the output into syslog with a common severity level is good and all, but it would make more sense to be able to distinguish at least between info and error messages. </p>
<p>To be able to distinguish those, you will have to write a custom Bash script. Below you can see an example. Note that the strange-looking <code>&gt; &gt;(...)</code> part is a Bash feature called process substitution.</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash

# Define your application command here
command=<span class="hljs-string">"/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf -v"</span>
programname=<span class="hljs-string">"mosquitto"</span>

# Use process substitution to handle stdout and stderr separately
{
    $command <span class="hljs-number">2</span>&gt; &gt;(<span class="hljs-keyword">while</span> read line; <span class="hljs-keyword">do</span> logger -p local0.error -t <span class="hljs-string">"$programname"</span> <span class="hljs-string">"$line"</span>; done) \
                             &gt; &gt;(<span class="hljs-keyword">while</span> read line; <span class="hljs-keyword">do</span> logger -p local0.info -t <span class="hljs-string">"$programname"</span> <span class="hljs-string">"$line"</span>; done)
}
</code></pre><p>To run the above script, just save it to a file, give it execute permissions with <code>sudo chmod +x /path/to/your_script.sh</code>, and run it with <code>sudo ./your_script.sh</code>.</p>
<p>Something to be aware of is that Mosquitto is not the most suitable application for the example above, since it writes all of its log output to stderr by default. So you will only see messages with “error” severity in the syslog log files.</p>
<p>Now, here is an example of a Bash script that parses each log record from the application’s stdout or stderr and assigns a severity level based on specific substrings in the record (for example, “ERROR”, “WARN”, “INFO”, and so on):</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash

# Define your application command here
command=<span class="hljs-string">"/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf -v"</span>
programname=<span class="hljs-string">"mosquitto"</span>

# Execute command and pipe its stdout and stderr to a <span class="hljs-keyword">while</span> loop
$command <span class="hljs-number">2</span>&gt;&amp;<span class="hljs-number">1</span> | <span class="hljs-keyword">while</span> read line; <span class="hljs-keyword">do</span>
    # Determine the severity level based on the content <span class="hljs-keyword">of</span> the line
    <span class="hljs-keyword">if</span> [[ <span class="hljs-string">"$line"</span> == *<span class="hljs-string">"Error:"</span>* ]]; then
        logger -t <span class="hljs-string">"$programname"</span> -p user.err <span class="hljs-string">"$line"</span> # Forward error messages <span class="hljs-keyword">as</span> errors
    elif [[ <span class="hljs-string">"$line"</span> == *<span class="hljs-string">"Warning"</span>* ]]; then
        logger -t <span class="hljs-string">"$programname"</span> -p user.warning <span class="hljs-string">"$line"</span> # Forward warning messages <span class="hljs-keyword">as</span> warnings
    <span class="hljs-keyword">else</span>
        logger -t <span class="hljs-string">"$programname"</span> -p user.info <span class="hljs-string">"$line"</span> # Forward all other messages <span class="hljs-keyword">as</span> info
    fi
done
</code></pre><h4 id="heading-redirecting-logs-to-syslog-when-running-in-background-with-systemctl">Redirecting logs to syslog when running in background with systemctl</h4>
<p>Many applications run as daemons (in the background). Oftentimes they can be started and managed using the <code>systemctl</code> process management tool or similar.</p>
<p>If you want to redirect the logs of an application that runs as a systemctl daemon to syslog, follow the example below. </p>
<p>Here are the steps you'll need to perform to run the Mosquitto broker in the background:</p>
<p>Step 1: create a custom sh script:</p>
<pre><code>sudo touch /usr/local/bin/mosquitto_with_logger.sh
</code></pre><p>Step 2: open the file and add the following content:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf -v <span class="hljs-number">2</span>&gt;&amp;<span class="hljs-number">1</span> | logger -t mosquitto
</code></pre><p>Step 3: add execute permissions to the script:</p>
<pre><code>sudo chmod +x /usr/local/bin/mosquitto_with_logger.sh
</code></pre><p>Step 4: create a systemd unit file:</p>
<pre><code>sudo touch /etc/systemd/system/mosquitto_syslog.service
</code></pre><p>Step 5: open the file and add the following:</p>
<pre><code>[Unit]
Description=Mosquitto MQTT Broker <span class="hljs-keyword">with</span> custom logging
After=network.target

[Service]
ExecStart=<span class="hljs-regexp">/usr/</span>local/bin/mosquitto_with_logger.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
</code></pre><p>Step 6: reload systemd, enable the custom service to be run on system startup, and finally start it:</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl enable mosquitto_syslog.service
sudo systemctl start mosquitto_syslog.service
</code></pre><h4 id="heading-redirecting-logs-from-existing-log-files">Redirecting logs from existing log files</h4>
<p>In case your application only logs to a file and you want to redirect these logs to syslog, see the following rsyslog configuration file that you can place in <code>/etc/rsyslog.d/</code> with a <code>.conf</code> file extension:</p>
<pre><code><span class="hljs-built_in">module</span>(load=<span class="hljs-string">"imfile"</span> PollingInterval=<span class="hljs-string">"10"</span>) # Load the imfile <span class="hljs-built_in">module</span>

# For info logs
input(type=<span class="hljs-string">"imfile"</span>
      File=<span class="hljs-string">"/path/to/your/app-info.log"</span>
      Tag=<span class="hljs-string">"myapp"</span>
      Severity=<span class="hljs-string">"info"</span>
      Facility=<span class="hljs-string">"local0"</span>)

# For error logs
input(type=<span class="hljs-string">"imfile"</span>
      File=<span class="hljs-string">"/path/to/your/app-error.log"</span>
      Tag=<span class="hljs-string">"myapp"</span>
      Severity=<span class="hljs-string">"error"</span>
      Facility=<span class="hljs-string">"local0"</span>)

# you can put your actions that will forward the data here
# or rely on the actions <span class="hljs-keyword">from</span> the original rsyslog.conf file that imports <span class="hljs-built_in">this</span> file
</code></pre><p>The configuration above assumes that you have separate files for info and error logs. </p>
<p>In principle, you could also forward all the contents from a single file by either assigning a common severity or by parsing each new line in the file and guessing its intended log level. The latter requires rsyslog’s rulesets, similar to the following:</p>
<pre><code><span class="hljs-built_in">module</span>(load=<span class="hljs-string">"imfile"</span> PollingInterval=<span class="hljs-string">"10"</span>) # Load the imfile input <span class="hljs-built_in">module</span>

# Template <span class="hljs-keyword">for</span> formatting messages <span class="hljs-keyword">with</span> original severity and facility
template(name=<span class="hljs-string">"CustomFormat"</span> type=<span class="hljs-string">"string"</span> string=<span class="hljs-string">"&lt;%PRI%&gt;%TIMESTAMP% %HOSTNAME% %msg%\n"</span>)

# Monitor a specific logfile
input(type=<span class="hljs-string">"imfile"</span>
      File=<span class="hljs-string">"/path/to/your/logfile.log"</span>
      Tag=<span class="hljs-string">"myApp"</span>
      Ruleset=<span class="hljs-string">"guessSeverity"</span>)

# Ruleset to parse log entries and guess severity
ruleset(name=<span class="hljs-string">"guessSeverity"</span>) {
    # Use property-based filters to check message content and route accordingly
    <span class="hljs-keyword">if</span> ($msg contains <span class="hljs-string">"Error:"</span>) then {
        action(type=<span class="hljs-string">"omfile"</span>
               File=<span class="hljs-string">"/var/log/errors.log"</span> # Specify the log file <span class="hljs-keyword">for</span> error messages
               Template=<span class="hljs-string">"CustomFormat"</span>
              )
    } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> ($msg contains <span class="hljs-string">"Warning:"</span>) then {
        action(type=<span class="hljs-string">"omfile"</span>
               File=<span class="hljs-string">"/var/log/warnings.log"</span> # Specify the log file <span class="hljs-keyword">for</span> warning messages
               Template=<span class="hljs-string">"CustomFormat"</span>
              )
    } <span class="hljs-keyword">else</span> {
        action(type=<span class="hljs-string">"omfile"</span>
               File=<span class="hljs-string">"/var/log/info.log"</span> # Specify the <span class="hljs-keyword">default</span> log file <span class="hljs-keyword">for</span> other messages
               Template=<span class="hljs-string">"CustomFormat"</span>
              )
    }
}
</code></pre><p>You should ensure that <code>/path/to/your/logfile.log</code> exists before applying the above configuration.</p>
<p>We used rulesets above, which are another nice feature of rsyslog. You can read more about them in the <a target="_blank" href="https://www.rsyslog.com/doc/concepts/multi_ruleset.html">official documentation</a>.</p>
<p>However, the above configuration explicitly sets the destination for processed messages, directing them to different files depending on their severity. If you want to forward the messages to the standard <code>/var/log/messages</code> or <code>/var/log/syslog</code>, you will have to specify it explicitly (and amend or add templates to reflect the appropriate severity levels).</p>
<p>But what if you have many other rules in your main rsyslog config file? You probably don’t want to repeat them in the ruleset above; ideally, you would just parse out the severity level and pass the record on to rsyslog’s main configuration to handle the rest.</p>
<p>Unfortunately, I didn’t find a nice way of doing this. The only approach I know of is a hacky one: resubmitting your record back to rsyslog using the <code>logger</code> utility and the <code>omprog</code> module. I will show this approach anyway for completeness, since it’s a good way to see more rsyslog features. But be aware that it involves some overhead, since you'll basically be invoking rsyslog twice for every record.</p>
<p>To resubmit a record back to rsyslog, include the <code>omprog</code> module:</p>
<pre><code><span class="hljs-built_in">module</span>(load=<span class="hljs-string">"omprog"</span>)
</code></pre><p>And change the actions inside the if-else tree to the following:</p>
<pre><code>action(type=<span class="hljs-string">"omprog"</span>
        Template=<span class="hljs-string">"CustomFormat"</span> # Optional property to format the message
        binary=<span class="hljs-string">"/usr/bin/logger -t myApp -p local0.error"</span>
      )
</code></pre><p>By the way, make sure that the log files are accessible to the user under which rsyslog runs.</p>
<p>I recommend that you keep all the parsing and log redirection logic in rsyslog config files. But if you don’t want to create a separate rsyslog configuration for your specific use case, below you can find a Bash script that does what we have done above.</p>
<p>The script tails a log file, parses each record to assign an appropriate severity level, and forwards these records to syslog:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash

tail -F /path/to/log-file.log | <span class="hljs-keyword">while</span> read line; <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">if</span> [[ <span class="hljs-string">"$line"</span> == *<span class="hljs-string">"Error:"</span>* ]]; then
    logger -p local0.err <span class="hljs-string">"$line"</span> # Forward error messages <span class="hljs-keyword">as</span> errors
  <span class="hljs-keyword">else</span>
    logger -p local0.info <span class="hljs-string">"$line"</span> # Forward other messages <span class="hljs-keyword">as</span> info
  fi
done
</code></pre><p>If you want to test whether the above script works, just create a <code>log-file.log</code>, run the script, and then issue <code>echo "Error: this is a test error message" &gt;&gt; log-file.log</code> in a separate terminal. After that you should see the error message in the rsyslog log file <code>/var/log/messages</code> or <code>/var/log/syslog</code>.</p>
<p>Running the script above directly will block your terminal for as long as it runs. So for practical scenarios, you'll want to dispatch it to the background using, for example, <code>setsid</code> or similar tools.</p>
<p>One important thing before we move on: when testing the above scripts and configurations, be aware that your rsyslog might have a message-deduplication feature enabled. If this is the case, duplicate messages might not get processed. This is a legacy feature, but chances are that it’s still in your configuration (mainly on Debian-based systems). Read more <a target="_blank" href="https://www.rsyslog.com/doc/configuration/action/rsconf1_repeatedmsgreduction.html">here</a>.</p>
<p>In addition, you can drop <a target="_blank" href="https://www.rsyslog.com/discarding-unwanted-messages/">unwanted messages</a>.</p>
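<p>As a sketch, a filter like the following (assuming a hypothetical program named <code>noisy-app</code>) can be placed near the top of your rsyslog configuration to discard matching records before any other rules run; the <code>stop</code> directive is rsyslog’s modern replacement for the legacy <code>~</code> discard action:</p>

```
# Discard all records coming from a hypothetical noisy program
if $programname == 'noisy-app' then stop
```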
<h3 id="heading-docker-and-syslog">Docker and syslog</h3>
<p>By default, the logs that applications running in Docker containers write to stdout/stderr are stored in files under <code>/var/lib/docker/containers</code> (the exact path may depend on your system).</p>
<p>To access the logs for a particular container you can use <code>docker logs &lt;container name or container id&gt;</code>. But what if you want to redirect the stdout and stderr of your containerized applications into syslog directly? Then there are again multiple options. Below I will be using a Mosquitto broker container as an example.</p>
<h4 id="heading-configuring-a-single-docker-container">Configuring a single Docker container</h4>
<p>If you are starting a container using a <code>docker run</code> command, refer to the example below:</p>
<pre><code>docker run -it -d -p <span class="hljs-number">1883</span>:<span class="hljs-number">1883</span> -v /etc/mosquitto/mosquitto.conf:<span class="hljs-regexp">/mosquitto/</span>config/mosquitto.conf --log-driver=syslog --log-opt syslog-address=udp:<span class="hljs-comment">//192.168.0.1:514 --log-opt tag="docker/{{.Name}}/{{.ID}}" eclipse-mosquitto:2</span>
</code></pre><p>In this example:</p>
<ul>
<li><code>--log-driver=syslog</code> specifies that the syslog driver should be used.</li>
<li><code>--log-opt syslog-address=udp://192.168.0.1:514</code> specifies the protocol, address, and port of your syslog server. If you have a syslog server running locally and just want your logs to appear under <code>/var/log</code> on your local machine, then you can omit this option.</li>
<li><code>--log-opt tag="docker-{{.Name}}-{{.ID}}"</code> sets a custom TAG field for the logs from this container. <code>{{.Name}}</code> will resolve to the container name, while <code>{{.ID}}</code> will resolve to the container id. Note that you shouldn’t use slashes (“/”) here, as rsyslog will fail to parse them and will truncate the TAG parts that follow. But it will work with hyphens (“-”). More generally, rsyslog tries its best to parse all possible message formats, and the result might not always be what you expect. You can read more <a target="_blank" href="https://www.rsyslog.com/doc/whitepapers/syslog_parsing.html">here</a>.</li>
<li>The rest of the flags, like <code>-it -d</code>, <code>-p</code>, and <code>-v</code>, are container-specific flags which specify the mode of the container, mapped ports, volumes, and so on. You can read more about them in detail in <a target="_blank" href="https://cedalo.com/blog/mosquitto-docker-configuration-ultimate-guide/">this article</a>.</li>
</ul>
<h4 id="heading-configuring-a-docker-service-through-docker-compose-file">Configuring a Docker service through docker-compose file</h4>
<p>If you are using docker-compose instead of executing <code>docker run</code> directly, here is an example <code>docker-compose.yml</code> file:</p>
<pre><code>version: <span class="hljs-string">'3'</span>
<span class="hljs-attr">services</span>:
  mosquitto:
    image: eclipse-mosquitto:<span class="hljs-number">2</span>
    <span class="hljs-attr">logging</span>:
      driver: syslog
      <span class="hljs-attr">options</span>:
        syslog-address: <span class="hljs-string">"udp://192.168.0.1:514"</span>
        <span class="hljs-attr">tag</span>: <span class="hljs-string">"docker/{{.Name}}/{{.ID}}"</span>
   <span class="hljs-attr">ports</span>:
     - <span class="hljs-number">1883</span>:<span class="hljs-number">1883</span>
     - <span class="hljs-number">8883</span>:<span class="hljs-number">8883</span>
     - <span class="hljs-number">9001</span>:<span class="hljs-number">9001</span>
   <span class="hljs-attr">volumes</span>:
     - ./mosquitto/config:<span class="hljs-regexp">/mosquitto/</span>config
</code></pre><p>Pay attention to the <code>driver</code>, <code>syslog-address</code>, and <code>tag</code> directives, which are similar to the <code>docker run</code> example above.</p>
<h4 id="heading-configuring-a-default-for-every-container-through-the-docker-daemon">Configuring a default for every container through the Docker daemon</h4>
<p>If you don't want to specify log driver options in every docker-compose file or every time you use a <code>docker run</code> command, you can set the following configuration in <code>/etc/docker/daemon.json</code>, which will apply globally:</p>
<pre><code>{
  <span class="hljs-string">"log-driver"</span>: <span class="hljs-string">"syslog"</span>,
  <span class="hljs-string">"log-opts"</span>: {
    <span class="hljs-string">"syslog-address"</span>: <span class="hljs-string">"udp://192.168.0.1:514"</span>,
    <span class="hljs-string">"tag"</span>: <span class="hljs-string">"docker-{{.Name}}-{{.ID}}"</span>
  }
}
</code></pre><p>After that, restart docker with <code>sudo systemctl restart docker</code> or <code>sudo service docker restart</code>.</p>
<h4 id="heading-enabling-applications-inside-docker-to-log-to-syslog-directly">Enabling applications inside Docker to log to syslog directly</h4>
<p>If you have an application which is able to forward its logs to syslog directly (such as Mosquitto) and you want to use it in a container, then you will have to map the host’s <code>/dev/log</code> socket to <code>/dev/log</code> inside the container.</p>
<p>For that, you can use the <code>volumes</code> section of <code>docker-compose.yml</code> or the <code>-v</code> flag of the <code>docker run</code> command.</p>
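<p>As a minimal sketch, the mapping in a <code>docker-compose.yml</code> might look like this (assuming Mosquitto’s configuration sets <code>log_dest syslog</code>):</p>

```
services:
  mosquitto:
    image: eclipse-mosquitto:2
    volumes:
      # Share the host's syslog socket with the container
      - /dev/log:/dev/log
```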
<h3 id="heading-how-to-use-logging-libraries-for-your-programming-language-to-log-to-syslog">How to Use Logging Libraries for Your Programming Language to Log to syslog</h3>
<p>Now, what if you are developing an application or need to create some custom aggregation script which forwards messages from certain apps or devices to syslog?</p>
<p>To give a simple real world example, you might want to build a control console for your IoT devices. </p>
<p>Let’s say you have a bunch of devices that connect to an MQTT broker. Whenever those devices generate log messages, they publish them to a certain MQTT topic. </p>
<p>In this case you might want to create a custom script that subscribes to this topic, receives messages from it, and forwards them to syslog for further storage and processing. This way you will gather all your logs in one place, with the ability to visualize and manage them with tools such as Splunk or the Elastic Stack, or to run statistics and reports on them.</p>
<p>Below, I am going to show you how to fire messages from your Node.js or Python application to syslog. This will enable you to implement your custom applications that work with syslog.</p>
<p>Note that the example scenario above gave a practical use case for what we will see in this section. But we will not explore it further, since it would require a bit more time and effort and might lead us away from the main point of this guide.</p>
<p>But if you are interested in managing something like the above, you can easily extend the scripts I show below by using the MqttJS library and connecting to the broker with Node.js as described <a target="_blank" href="https://cedalo.com/blog/nodejs-mqtt-usage-with-mosquitto-example/">here</a> or using Paho MQTT Python client as shown in this <a target="_blank" href="https://cedalo.com/blog/configuring-paho-mqtt-python-client-with-examples/">tutorial</a>.</p>
<h4 id="heading-nodejs-client">Node.js client</h4>
<p>Unfortunately, there aren’t many popular, well-maintained, battle-tested libraries for syslog logging in Node.js. But one good option is winston, a flexible general-purpose logging library. It's quite usable in both small and large-scale projects.</p>
<p>When installing winston, you will also have to install a custom transport called <code>winston-syslog</code>:</p>
<pre><code>npm install winston winston-syslog
</code></pre><p>Here is a usage example:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> winston = <span class="hljs-built_in">require</span>(<span class="hljs-string">'winston'</span>);
<span class="hljs-built_in">require</span>(<span class="hljs-string">'winston-syslog'</span>).Syslog;

<span class="hljs-keyword">const</span> logger = winston.createLogger({
  <span class="hljs-attr">levels</span>: winston.config.syslog.levels,
  <span class="hljs-attr">format</span>: winston.format.printf(<span class="hljs-function">(<span class="hljs-params">info</span>) =&gt;</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-string">`<span class="hljs-subst">${info.message}</span>`</span>;
  }),
  <span class="hljs-attr">transports</span>: [
    <span class="hljs-keyword">new</span> winston.transports.Syslog({
      <span class="hljs-attr">app_name</span>: <span class="hljs-string">'MyNodeApp'</span>,
      <span class="hljs-attr">facility</span>: <span class="hljs-string">'local0'</span>,
      <span class="hljs-attr">type</span>: <span class="hljs-string">'RFC5424'</span>,
      <span class="hljs-attr">protocol</span>: <span class="hljs-string">'unix'</span>, <span class="hljs-comment">// Use Unix socket</span>
      <span class="hljs-attr">path</span>: <span class="hljs-string">'/dev/log'</span>, <span class="hljs-comment">// Path to the Unix socket for syslog</span>
    })
  ]
});

<span class="hljs-comment">// Log messages of various severity levels</span>
<span class="hljs-comment">// When using emerg level you might get some warnings in your terminal</span>
<span class="hljs-comment">// But don't panic - this is expected, since it's the most severe level</span>
logger.emerg(<span class="hljs-string">'This is an emergency message.'</span>);
logger.alert(<span class="hljs-string">'This is an alert message.'</span>);
logger.crit(<span class="hljs-string">'This is a crit message.'</span>);
logger.error(<span class="hljs-string">'This is an error message.'</span>);
logger.warning(<span class="hljs-string">'This is a warning message.'</span>);
logger.notice(<span class="hljs-string">'This is a notice message.'</span>);
logger.info(<span class="hljs-string">'This is an informational message.'</span>);
logger.debug(<span class="hljs-string">'This is a debug message.'</span>);


logger.end();
</code></pre>
<p>Note that if you remove the <code>format</code> property from the object passed to <code>createLogger</code>, you will see a JSON payload consisting of a message and severity level for messages in syslog. That’s the default format of records parsed by <code>winston-syslog</code>.</p>
<h4 id="heading-python-client">Python client</h4>
<p>In the case of Python, you don’t even have to install any third-party dependencies, as Python already comes with two quite capable modules: <code>syslog</code> and <code>logging</code>. You can use either one.</p>
<p>The former is tailored to working with syslog specifically, while the latter can also handle other log transports (stdout, files, and so on) and can often be seamlessly extended to work with syslog in existing projects.</p>
<p>Here is an example of using the “syslog” library:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> syslog

<span class="hljs-comment"># Log a single info message</span>
<span class="hljs-comment"># Triggers an implicit call to openlog() with no parameters</span>
syslog.syslog(syslog.LOG_INFO, <span class="hljs-string">"This is an info message."</span>)

<span class="hljs-comment"># You can also set the facility</span>
syslog.openlog(ident=<span class="hljs-string">"MyPythonApp"</span>, facility=syslog.LOG_LOCAL0)

<span class="hljs-comment"># messages with different severity levels and LOG_LOCAL0 facility</span>
syslog.syslog(syslog.LOG_EMERG, <span class="hljs-string">"This is an emergency message."</span>)
syslog.syslog(syslog.LOG_ALERT, <span class="hljs-string">"This is an alert message."</span>)
syslog.syslog(syslog.LOG_CRIT, <span class="hljs-string">"This is a critical message."</span>)
syslog.syslog(syslog.LOG_ERR, <span class="hljs-string">"This is an error message."</span>)
syslog.syslog(syslog.LOG_WARNING, <span class="hljs-string">"This is a warning message."</span>)
syslog.syslog(syslog.LOG_NOTICE, <span class="hljs-string">"This is a notice message."</span>)
syslog.syslog(syslog.LOG_INFO, <span class="hljs-string">"This is an informational message."</span>)
syslog.syslog(syslog.LOG_DEBUG, <span class="hljs-string">"This is a debug message."</span>)

<span class="hljs-comment"># Close the log if necessary (usually handled automatically at program exit)</span>
syslog.closelog()
</code></pre>
<p>And here is an example of using the “logging” library. Note that “logging” has a predefined set of log levels which does not fully align with syslog severity levels (for example, levels like “crit”, “emerg”, and “notice” are missing by default). You can, however, extend it when needed, but we will keep it simple here. For more information, refer <a target="_blank" href="https://docs.python.org/3/library/logging.handlers.html#:~:text=LOG_LOCAL7-,mapPriority,-(levelname)">here</a>:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> logging.handlers

<span class="hljs-comment"># Create a logger</span>
logger = logging.getLogger(<span class="hljs-string">'MyPythonApp'</span>) <span class="hljs-comment"># Set application name</span>
logger.setLevel(logging.INFO) <span class="hljs-comment"># Set the default log level</span>

<span class="hljs-comment"># Create a SysLogHandler</span>
syslog_handler = logging.handlers.SysLogHandler(address=<span class="hljs-string">'/dev/log'</span>, facility=logging.handlers.SysLogHandler.LOG_LOCAL0)

<span class="hljs-comment"># Optional: format the log message</span>
<span class="hljs-comment"># Set a format that can be parsed by rsyslog.</span>
<span class="hljs-comment"># The one presented below is a simplification of RFC3164</span>
<span class="hljs-comment"># Note that PRI value will be prepended to the beginning automatically</span>
formatter = logging.Formatter(<span class="hljs-string">"%(name)s: %(message)s"</span>) 
syslog_handler.setFormatter(formatter)

<span class="hljs-comment"># Add the SysLogHandler to the logger</span>
logger.addHandler(syslog_handler)

<span class="hljs-comment"># Log messages with standard logging levels</span>
logger.debug(<span class="hljs-string">'This is a debug message.'</span>)
logger.info(<span class="hljs-string">'This is an informational message.'</span>)
logger.warning(<span class="hljs-string">'This is a warning message.'</span>)
logger.error(<span class="hljs-string">'This is an error message.'</span>)
logger.critical(<span class="hljs-string">'This is a critical message.'</span>)
</code></pre>
<p>Alternatively, you can use a third-party library such as “loguru”. The built-in libraries are quite powerful and sufficient for most use cases, but if you are already using a library like loguru in your project, you can extend it to work with syslog:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> loguru <span class="hljs-keyword">import</span> logger
<span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> logging.handlers

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SyslogHandler</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, appname, address=<span class="hljs-string">'/dev/log'</span>, facility=logging.handlers.SysLogHandler.LOG_USER</span>):</span>
        self.appname = appname
        self.handler = logging.handlers.SysLogHandler(address=address, facility=facility)
        self.loglevel_map = {
            <span class="hljs-string">"TRACE"</span>: logging.DEBUG,
            <span class="hljs-string">"DEBUG"</span>: logging.DEBUG,
            <span class="hljs-string">"INFO"</span>: logging.INFO,
            <span class="hljs-string">"SUCCESS"</span>: logging.INFO,
            <span class="hljs-string">"WARNING"</span>: logging.WARNING,
            <span class="hljs-string">"ERROR"</span>: logging.ERROR,
            <span class="hljs-string">"CRITICAL"</span>: logging.CRITICAL,
        }

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">write</span>(<span class="hljs-params">self, message</span>):</span>
        <span class="hljs-comment"># Extract the log level, message text, and other necessary information.</span>
        loglevel = self.loglevel_map.get(message.record[<span class="hljs-string">"level"</span>].name, logging.INFO)
        logmsg = <span class="hljs-string">f"<span class="hljs-subst">{self.appname}</span>: <span class="hljs-subst">{message.record[<span class="hljs-string">'message'</span>]}</span>"</span>

        <span class="hljs-comment"># Create a log record that the standard logging system can understand.</span>
        record = logging.LogRecord(name=self.appname, level=loglevel, pathname=<span class="hljs-string">""</span>, lineno=<span class="hljs-number">0</span>, msg=logmsg, args=(), exc_info=<span class="hljs-literal">None</span>)
        self.handler.emit(record)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">flush</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">pass</span>

<span class="hljs-comment"># Configure Loguru to use the above defined SyslogHandler</span>
appname = <span class="hljs-string">"MyPythonApp"</span>
logger.add(SyslogHandler(appname), format=<span class="hljs-string">"{message}"</span>)

<span class="hljs-comment"># Now you can log messages and they will be directed to syslog</span>
logger.info(<span class="hljs-string">"This is an informational message sent to syslog."</span>)
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this handbook, you learned all about syslog. We clarified the confusing terminology, explored its use cases, and worked through plenty of usage examples.</p>
<p>The main points to take away are:</p>
<ul>
<li>Syslog is a protocol describing the common format of message exchange between applications and syslog daemons. The latter take on message parsing, enrichments, transport, and storage.</li>
<li>People colloquially refer to the infrastructure of syslog daemons, their configuration, log storage files, and accepting sockets as “syslog”. “Redirect logs to syslog” means redirecting the logs to the <code>/dev/log</code> socket, where they will be picked up by a syslog daemon, processed, and saved according to its configuration.</li>
<li>There are two standard syslog formats: the obsolete RFC3164 and the newer RFC5424.</li>
<li>Some well-known syslog daemons include sysklogd (Linux), rsyslog (Linux), syslog-ng (Linux), and nxlog (cross-platform).</li>
<li>Rsyslog and other log daemons can forward logs from one server to another. You can use this to create a log collecting infrastructure with a central server processing all the logs coming from the rest of the nodes.</li>
<li>Even though it incurs some overhead, it’s important to forward logs using TLS whenever they are transported over an untrusted network.</li>
<li>The rsyslog daemon is a lightweight and powerful tool with many features. It can collect messages from different sources, including files and network. It can process this data using customizable templates and rulesets, and then either save it to disk or forward it elsewhere. Rsyslog can also directly integrate with Elasticsearch, among other capabilities.</li>
<li>It is possible to forward the logs of an application to syslog even if it does not provide a native integration. You can do this for standalone host applications, containerized systems, or through an aggregation script written in a programming language of your choice.</li>
<li>The output of standalone apps (stdout and stderr) can be captured and redirected to the Linux <code>logger</code> utility. Docker provides a separate syslog driver for logging, while many programming languages have dedicated logging libraries.</li>
</ul>
<p>Thanks for reading, and happy logging!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Debug Coding Problems When Building Your Own Projects ]]>
                </title>
                <description>
                    <![CDATA[ Ah, the joy of coding! There you are, cruising through your project, when suddenly – bam! – you hit a bug. It's like hitting a wall in a maze. But fear not, fellow coder, for I bring you the trusty map to navigate the treacherous bug-infested waters ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/debug-coding-problems-in-your-projects/</link>
                <guid isPermaLink="false">66c8c8d0fe21816c4cb75d13</guid>
                
                    <category>
                        <![CDATA[ debugging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ self-improvement  ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Chris Blakely ]]>
                </dc:creator>
                <pubDate>Tue, 20 Feb 2024 21:47:09 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2024/08/pexels-pixabay-144243--1-.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Ah, the joy of coding! There you are, cruising through your project, when suddenly – bam! – you hit a bug. It's like hitting a wall in a maze.</p>
<p>But fear not, fellow coder, for I bring you the trusty map to navigate the treacherous bug-infested waters of programming. Whether you're a self-learner aiming to land that dream job in tech or just tinkering for fun, here's how to become a debugging ninja.</p>
<h2 id="heading-look-for-errors-in-your-ide">Look for Errors in Your IDE</h2>
<p>Your Integrated Development Environment (IDE) isn't just a fancy text editor – it's your first line of defense against bugs. </p>
<p>TypeScript, for example, is like that friend who points out the pothole you're about to step into – it helps catch errors early with its type-checking prowess. </p>
<p>Imagine you accidentally try to add a number to a string. TypeScript waves a big red flag, saving you from a facepalm moment later. It's one of the many reasons we adore TS.</p>
<p><strong>Example</strong>: You declare <code>let age: number = 'twenty';</code>. TypeScript will frown upon this, telling you that 'twenty' is not a number. It's like having a guardian angel for your code.</p>
<h2 id="heading-try-and-isolate-the-area">Try and Isolate the Area</h2>
<p>Before you start pulling out your hair, try to play detective and isolate where the crime scene is. </p>
<p>Is the bug lurking in the backend, hiding in the frontend, conspiring in the database, or chilling in the infrastructure? </p>
<p>When you're working locally, it's usually one of the first three suspects. And here's a hot tip: the network tab in your browser's developer tools is like your police scanner, helping you pinpoint the location of the distress call.</p>
<p><strong>Example</strong>: Let's say you send out a request to GET /users and it returns a 500 status. That's the server telling you, "Mate, I've got problems." It's a backend issue. But if the call comes back with a 200 status and your UI is still playing hide and seek with the data, then the bug's hosting a party in your frontend. The network tab just handed you the address.</p>
<p>By narrowing down the location of your issue, you can focus your debugging efforts more efficiently. It's like knowing whether to raid the castle, the dragon's lair, or the dark forest. Happy hunting!</p>
<h2 id="heading-look-for-errors-in-the-browser-console">Look for Errors in the Browser Console</h2>
<p>The browser console is your Sherlock Holmes magnifying glass for web projects. While the network tab tells you where the distress call came from, the console tab is like tracking where the villain has been, helping you spot those pesky code misfires. </p>
<p><strong>Example</strong>: Your React app isn't fetching data. A quick peek in the console tab shows an "undefined" error, and a line number. This is where your problem is at. Elementary, my dear Watson!</p>
<h2 id="heading-add-consolelog-to-different-functions">Add <code>console.log()</code> to Different Functions</h2>
<p>Ah, the humble <code>console.log()</code>, the print statement that could. When in doubt, log it out. It's like dropping breadcrumbs through your code to see how far Little Red Riding Hood gets before she meets the Big Bad Bug.</p>
<p><strong>Example</strong>: Unsure if your function is receiving the expected data? <code>console.log('Data:', data)</code> at the start of the function. No data? Now you know where the problem starts.</p>
<h2 id="heading-use-try-catch-blocks">Use Try-Catch Blocks</h2>
<p>Try-catch blocks are your safety net, allowing your code to perform daring feats without crashing your app. They let you gracefully handle errors by catching them before they wreak havoc. They also let you specify your own custom error messages for a given block of code, helping you to find the problem area.</p>
<p><strong>Example</strong>: Wrap your API call in a try-catch. If the call fails, the catch block catches the error, allowing you to console.log it or display a friendly message to the user. </p>
<p>Here's what a try-catch block looks like in JS:</p>
<pre><code class="lang-javascript">  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">displayUsers</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> users = getUsers();
    } <span class="hljs-keyword">catch</span> (error) {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"oh crap"</span>, error);
    }
  }
</code></pre>
<h2 id="heading-search-google-or-use-chatgpt-to-help-with-error-messages">Search Google or Use ChatGPT to Help With Error Messages</h2>
<p>Stuck on an error message? Google and ChatGPT are your library and librarian. Just copy and paste the error into the search bar and watch a plethora of solutions unfold. It's like asking the hive mind: someone, somewhere, has had your problem before.</p>
<p><strong>Example</strong>: Getting a "TypeError: Cannot read property 'map' of undefined"? A quick search reveals you might be trying to use <code>.map()</code> on something that's not an array. Oops!</p>
<h2 id="heading-test-often">Test Often</h2>
<p>The mantra "test early, test often" will save you heaps of time. By testing small bits of code as you go, you catch bugs early, when they're easier to squash. It's like cleaning as you cook – it makes the final cleanup so much easier.</p>
<p><strong>Example</strong>: Just added a new feature? Test it out before moving on. Does it work as expected? Great! No? Time to debug while the code is still fresh in your mind.</p>
<h2 id="heading-try-a-different-approach">Try a Different Approach</h2>
<p>If you're banging your head against the wall with a problem, maybe it's time to climb over it instead. Don't get too attached to your code. Be willing to refactor or even start from scratch if it means a cleaner, more elegant solution.</p>
<p><strong>Example</strong>: If your code is more tangled than a bowl of spaghetti, stepping back and rethinking your approach might reveal a simpler, more efficient path.</p>
<p>Debugging is part art, part science, and entirely a test of patience. But with these strategies in your toolkit, you'll be squashing bugs with the best of them. Happy coding, and may your bug hunts be short and your code be clean!</p>
<h2 id="heading-real-life-scenario">Real Life Scenario</h2>
<p>Let's take a real-life scenario. I have a React, Node, Postgres app that displays users in the browser. The code, as far as I know, should be working, but I'm not seeing the users displayed on the frontend.</p>
<h3 id="heading-step-1-check-the-console">Step 1 – Check the Console</h3>
<p>Let's have a nosey at the Chrome dev tools console and see what's going on. </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/02/step1.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>"Look ma, errors!"</em></p>
<p>Ah, the plot thickens in the saga of "Why isn't this thing working?". Let's dive into the drama unfolding in your console and break down the breadcrumbs left behind by our mischievous friend, the bug.</p>
<p>First up, we have our leading clue: <code>GET http://localhost:3000/api/users 500 (Internal Server Error)</code>. This line is the equivalent of a scream in the night in our detective story. It tells us that our backend is in distress, possibly tied to a nefarious SQL query or a rogue piece of logic in our API route. </p>
<p>The server's cry for help is loud and clear: "Internal Server Error." Classic move by the backend, really.</p>
<p>Now, our supporting cast makes their entrance with <code>ResponseError: Response returned an error code</code>. This is the big reveal. The issue isn't just a server having a bad day – it's a ResponseError caught red-handed by <code>UsersApi.request</code>, and even tells us where the error line is (UserApi.ts:83).</p>
<h3 id="heading-step-2-check-the-backend-terminal">Step 2 – Check the backend Terminal</h3>
<p>Our journey into investigating the bug has brought us to the backend, where we are greeted with this:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/02/step2.PNG" alt="Image" width="600" height="400" loading="lazy"></p>
<p>If you see this and your first instinct is to run away and hide, don't worry – that was mine too. But fear not! There are plenty of clues that point us to the issue. </p>
<p>When a backend error occurs, this is what's known as a <strong>stack trace</strong> – basically all the errors, info, line numbers, and so on that the runtime encountered, in one big block of text. Thanks, runtime!</p>
<p>What we do here is look for key words, recognisable files, or anything that's readable by humans. Did you spot anything? Let's dive deeper:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/02/step2-1.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>Digging deeper into the errors</em></p>
<p>The highlighted parts in <strong>yellow</strong> indicate that there was an error in <code>userController.ts</code>, specifically in the <code>getAllUsers()</code> function. If we read further, the highlighted parts in <strong>red</strong> point us to the error message:</p>
<pre><code class="lang-bash">Authentication failed against database server at `dpg-cn9kr28l6cac73a1a7eg-a.frankfurt-postgres.render.com`, 
the provided database credentials <span class="hljs-keyword">for</span> `dmin` are not valid.\n\nPlease make sure to provide valid database credentials <span class="hljs-keyword">for</span> the database server
</code></pre>
<p>Hurray! Now we know the error. We have spelled "admin" incorrectly in our database connection string, meaning the connection failed. Doh! After we fix this, the error is resolved:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2024/02/step4.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>Error resolved</em></p>
<h3 id="heading-step-3-verify-the-fix">Step 3: Verify the Fix</h3>
<p>Now that we've added a fix, we can verify it by checking the browser to see if everything's working. In this case, checking the UI is enough, but for more complex flows you can check that the API is returning the correct status code (in this case, 200).</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I hope this article has shed some light on how you can debug your projects. </p>
<p>If you are looking for more debugging insights, and real industry-level projects to build, you can check out my YouTube channel, where we build and deploy full stack applications using React, Node, and some other cool tech. Hope to see you there!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Why Your Company Should Be Using Log Management Software ]]>
                </title>
                <description>
                    <![CDATA[ By Andrej Kovacevic Logging, safeguarding, and utilizing data is a massive part of any software development operation. And chronological records of events have become an invaluable tool for determining future decisions. Today, we're fortunate enough ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/why-you-should-use-log-management-software/</link>
                <guid isPermaLink="false">66d45db533b83c4378a517be</guid>
                
                    <category>
                        <![CDATA[ data ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Thu, 23 Feb 2023 21:24:21 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2022/12/log-management-apple-mac-tech.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Andrej Kovacevic</p>
<p>Logging, safeguarding, and utilizing data is a massive part of any software development operation. And chronological records of events have become an invaluable tool for informing future decisions.</p>
<p>Today, we're fortunate enough to have system logs: collections of files that record what happens within our systems.</p>
<p>These files hold data crucial to software development and system operations. And it's important to properly organize these records for any tech use case, whether IT systems or software development.</p>
<table><colgroup><col></colgroup><tbody><tr><td><p><span>Key Takeaways</span></p><ul><li><p><span>Logs are computer-generated files that record activity within a system that can include reports, messages, errors, requests, and file transfers.&nbsp;</span></p></li><li><p><span>Log management systems collect, analyze, monitor, index, retain, and report these data logs.</span></p></li><li><p><span>Log management systems are what IT professionals need to streamline operational processes, gain real-time data insights, and improve cybersecurity.&nbsp;</span></p></li></ul></td></tr></tbody></table>

<h2 id="heading-what-is-log-management">What is Log Management?</h2>
<p>Applications and programs have data logs, including information gathered over a given period. Logs are computer-generated files that record processes or activities within a software or operating system. These files keep track of any events that occur within a network or system. An event log is a file <a target="_blank" href="https://www.humio.com/glossary/event-log/">where these activities are recorded</a>.</p>
<p>These can include error reports, messages, file transfers, and requests, which typically have time stamps. These time stamps give developers and IT professionals a better view of what and when an event occurred.</p>
<p>In short, log management refers to the practice of constantly gathering, processing, storing, analyzing, and synthesizing data.</p>
<p>All these processes are necessary for optimizing system performance, identifying technical issues, enhancing resource management, improving compliance, and tightening security.</p>
<p>Log management falls under the following categories:</p>
<ul>
<li><strong>Collection:</strong> This includes any cluster data from applications, users, servers, endpoints, operating systems, or other significant sources within the company.</li>
<li><strong>Analysis:</strong> This tool analyzes the log collection extracted from the log server. Its purpose is to proactively identify security threats, bugs, and other issues.</li>
<li><strong>Monitoring:</strong> This tracks activities and events while providing time stamps for accurate documentation.</li>
<li><strong>Indexing:</strong> Commonly used to assist the IT team with searching, filtering, sorting, and analyzing all logs.</li>
<li><strong>Retention:</strong> This tool determines the duration of retained log data within the log file.</li>
<li><strong>Reporting:</strong> This feature automates audit logs reports concerning resource allocation, operational performance, and security and regulatory compliance.</li>
</ul>
<h2 id="heading-how-do-log-management-systems-help">How Do Log Management Systems Help?</h2>
<p>By definition, a log management system or LMS is <a target="_blank" href="https://www.sciencedirect.com/topics/computer-science/log-management">a software solution</a> for gathering, sorting, and storing event logs and log data. Event logs and data come from several sources, and collecting and sorting them can take time.</p>
<p>Log management system software makes storing data in one centralized database easier for developers, IT teams, and security officers. This automates the process of file indexing, making each event searchable.</p>
<p>IT teams have easier access to data required to make resource allocation, security, and network integrity decisions. Log management tools help manage the high volumes of organization log data created across the organization, effectively streamlining activities within the network.</p>
<p>These tools assist in distinguishing:</p>
<ul>
<li>Which information and data need logging</li>
<li>What format to use for logging</li>
<li>What is the correct time to save log data</li>
<li>What method to use for disposing or destroying expendable data</li>
</ul>
<h2 id="heading-why-is-log-management-critical">Why is Log Management Critical?</h2>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/12/event-log-example.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Image via PAOLO / Adobe Stock</em></p>
<p>The best log management strategies and systems provide real-time insights into system operations and integrity. Log management solutions <a target="_blank" href="https://wp.nyu.edu/dispatch/why-log-management-is-absolutely-crucial-to-a-stable-it-system/">are most effective</a> when they give you the following:</p>
<ul>
<li>Merged data storage using a unified log aggregation</li>
<li><a target="_blank" href="https://www.forbes.com/sites/forbestechcouncil/2022/03/22/why-centralized-log-management-matters/?sh=717de980b2f5">Boosted security</a> through real-time surveillance, minimized attack surface, improved detection, and quicker response times</li>
<li>Better visibility and observable variables in every part of the enterprise using the same event log</li>
<li>Quicker and more accurate troubleshooting ability using advanced network analytics</li>
<li>Amplified customer experience using log data, predictive modeling, and data analysis</li>
</ul>
<h2 id="heading-what-are-the-common-challenges-in-log-management">What Are the Common Challenges in Log Management?</h2>
<p>Typically, when there is a proliferation of devices connected to a system, it causes a sudden outburst of data. That outburst, especially in cases where a system shifts to the cloud, creates more complexities in log management for organizations.</p>
<p>The best way to tell if a log management solution is effective is if it addresses the following challenges:</p>
<h3 id="heading-1-latency">1. Latency</h3>
<p>Cataloging within the log file can be very time-consuming and computationally expensive. It can cause delays between entering data into the system and including data in search visualizations and results. A good log management system should give low latency to achieve faster transactions.</p>
<h3 id="heading-2-uniformity">2. Uniformity</h3>
<p>Because a log management solution extracts data from various systems, applications, and hosts, data characteristics can vary. As they need to be collated into one system, these data must have the same format. </p>
<p>When dealing with uniformly presented data, information security and IT teams can readily generate insights and analysis to improve business services and operations.</p>
<h3 id="heading-3-volume">3. Volume</h3>
<p>Event logs can generate data at an incredible rate. For most companies, the sheer amount of data constantly produced takes enormous effort to organize. </p>
<p>Effective log management software must be able to handle the collection, formatting, analysis, and storage of a ridiculous amount of data. It must also be able to do all that while providing timely insights.</p>
<h3 id="heading-4-high-it-workload">4. High IT Workload</h3>
<p>Manual log management is incredibly labor-intensive, expensive, and time-consuming. Proper digital log management software assists in automating these tedious tasks to reduce the burden on IT professionals.</p>
<h2 id="heading-what-to-look-for-in-log-management-systems">What to Look for in Log Management Systems</h2>
<p>Considering the tremendous amounts of data generated in any organization today, manually managing logs is impossible for IT professionals. That’s why you should look for advanced log management software and systems. Specifically, the software is invaluable in automating key aspects such as data gathering, formatting, and analysis.</p>
<p>Here are the key things developers and IT organizations should look into before investing in log management software.</p>
<h3 id="heading-1-automation-prioritization-to-decrease-it-workload">1. Automation prioritization to decrease IT workload</h3>
<p>Log management can drain resources from the IT team because it is time-consuming. Most of these systematic data collection and analysis tasks can be automated using advanced tooling.</p>
<p>You want to avoid overstraining your IT department with time-consuming processes as much as possible. Sophisticated log management tools have automation tools to mitigate this problem.</p>
<p>For instance, suppose your system has a bug that could potentially crash critical processes. With IT professionals unburdened by tedious tasks, they can quickly detect the bug and create a patch for it.</p>
<h3 id="heading-2-unified-systems-to-boost-access-and-improve-security">2. Unified systems to boost access and improve security</h3>
<p>When you have log management centralized, it improves data access while strengthening an organization's security. Connecting and saving data in a central location allows for faster anomaly detection and action.</p>
<p>A centralized log management system <a target="_blank" href="https://www.forbes.com/sites/forbestechcouncil/2022/03/22/why-centralized-log-management-matters/?sh=8fcb133b2f55">helps companies stay ahead</a> of vulnerabilities and mitigate any system damage. It is also another line of defense against cyber criminals trying to break into your system via those vulnerabilities and malware.</p>
<p>For example, one of your technologies has an outdated operating system that could result in a security vulnerability. With a centralized log, it is easier for IT professionals to detect this problem and quickly resolve it.</p>
<h3 id="heading-3-flexibility-and-scalability-by-leveraging-the-cloud">3. Flexibility and scalability by leveraging the cloud</h3>
<p>Since the data landscape is consistently growing, a cloud-based solution makes the most sense for log management. Cloud-based log management tools are ideal for their scalability: they let you increase or decrease processing and storage based on your current needs.</p>
<p>In any business involving technology, data management and organization is paramount. An effective log management software is critical in managing event logs and streamlining operations.</p>
<p>A typical example of what slows companies down as they scale goes back to <a target="_blank" href="https://www.sciencedirect.com/topics/engineering/operational-task">operational tasks</a>. When operational tasks are streamlined, it provides the space required to improve internal systems.</p>
<h2 id="heading-log-management-systems-are-not-optional">Log Management Systems Are Not Optional</h2>
<p>As you have already guessed, log management systems are an absolute necessity for IT professionals. Without them, your IT team will always be swamped with operational tasks, and vulnerabilities will slip through the cracks. </p>
<p>These days it is no longer a question of if you have a log management system but what kind you are using.</p>
<p><em>Feature image via Unsplash (Campaign Creators).</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How Logging Works in Laravel Applications ]]>
                </title>
                <description>
                    <![CDATA[ Logs are records of the events happening with your application. Laravel, by default, writes log information into the laravel.log file that ships with a fresh Laravel installation. The file is housed within the storage > logs directory.  In this tutor... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/laravel-logging/</link>
                <guid isPermaLink="false">66ba2fc97787ec7052709314</guid>
                
                    <category>
                        <![CDATA[ lavarel ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Sule-Balogun Olanrewaju ]]>
                </dc:creator>
                <pubDate>Fri, 06 Jan 2023 19:38:55 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2023/01/how-logging-works-laravel.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Logs are records of the events happening with your application. Laravel, by default, writes log information into the laravel.log file that ships with a fresh Laravel installation. The file is housed within the <code>storage &gt; logs</code> directory. </p>
<p>In this tutorial, you'll learn the following:</p>
<ul>
<li>Introduction to logging</li>
<li>Understanding log configurations</li>
<li>Channel drivers for log files</li>
<li>Formatting log messages</li>
</ul>
<p><img src="https://www.freecodecamp.org/news/content/images/2023/01/Screenshot-2023-01-05-at-23.26.03.png" alt="Image" width="600" height="400" loading="lazy">
<em>Log file image</em></p>
<h2 id="heading-introduction-to-logging">Introduction to Logging</h2>
<p>Laravel provides a log of what's happening in your application. The log service is built upon the Monolog open-source library. </p>
<p>The logging service is robust: it allows you to write log messages to files and send critical ones to your team on Slack (if configured), as well as to sockets, databases, and other web services. </p>
<p>Your team decides which channel to write log information to, as Laravel supports several channels out of the box. Based on the severity of the log information, writes can also be sent to multiple channels at once. You'll see how to do this in the configuration section.</p>
<h2 id="heading-how-to-configure-your-laravel-logs">How to Configure Your Laravel Logs</h2>
<p>Laravel's log configuration is located in the <strong>config &gt; logging.php</strong> file. You can choose from several log channels based on your preferences, such as stack, single, daily, syslog, slack, papertrail, and so on. </p>
<p>The channels are where you send log information. The default channel for every project is usually <strong>stack</strong>. You can change it by defining <code>LOG_CHANNEL</code> in your env file or by changing the fallback channel name passed as the second parameter to <code>env()</code> in the same <strong>logging.php</strong> file.</p>
<pre><code><span class="hljs-string">'default'</span> =&gt; env(<span class="hljs-string">'LOG_CHANNEL'</span>, <span class="hljs-string">'stack'</span>)
</code></pre><p>The stack channel has its driver set to <code>stack</code> and lists the channels it aggregates in an array. A channel set to <code>single</code> writes all logs to a single log file, while <code>daily</code> auto-generates a new log file every day. </p>
<p>You can also combine multiple channels, for example <code>'channels' =&gt; ['daily', 'slack']</code>. The <code>ignore_exceptions</code> option is a boolean (<code>true</code> or <code>false</code>). </p>
<p>I highly recommend using the daily channel, as this helps you keep track of daily logs by auto-generating a new log file every day (laravel-2023-01-15.log, laravel-2023-01-16.log and so on) without having to clear logs for the previous day. </p>
<p>The daily option keeps your log files organized by day for as long as you want. It also makes it easier to spot recurring errors that occur across different days.</p>
<pre><code class="lang-php"> <span class="hljs-string">'channels'</span> =&gt; [        
    <span class="hljs-string">'stack'</span> =&gt; [ 
        <span class="hljs-string">'driver'</span> =&gt; <span class="hljs-string">'stack'</span>,            
        <span class="hljs-string">'channels'</span> =&gt; [<span class="hljs-string">'daily'</span>],    
        <span class="hljs-string">'ignore_exceptions'</span> =&gt; <span class="hljs-literal">false</span>,        
        ],
....],
</code></pre>
<h2 id="heading-channel-drivers-for-log-files">Channel Drivers for Log Files</h2>
<p>Here’s the list of the channel drivers Laravel offers:</p>
<ol>
<li><code>single</code>: The single driver ensures log information is sent to a single file. The driver sends logs to <strong>storage &gt; logs &gt; laravel.log</strong> by default.</li>
<li><code>daily</code>: The driver ensures that logs are written daily. The beauty of this is that a new log file is auto-generated every day, so, unlike with the single driver, there's no need to frequently clear out the previous day's log information. The drawback is that you can accumulate many log files, so you should clear out unused files every week or month.</li>
</ol>
<p>Within the logs directory, you often get logs like this:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/12/Screenshot-2022-12-26-at-13.29.50.png" alt="Image" width="600" height="400" loading="lazy">
<em>Daily log file</em></p>
<ol start="3">
<li><code>slack</code>: The slack driver ensures that all logs are sent to Slack. You need to configure Slack credentials (username, webhook URL) for error logging to work. This is super helpful as it allows your team to stay updated about what's happening right in a Slack channel. </li>
</ol>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/12/Screenshot-2022-12-26-at-13.19.26.png" alt="Image" width="600" height="400" loading="lazy"></p>
<ol start="4">
<li><p><code>syslog</code>: Logs using this driver are sent to the system log. Where the system log lives depends on the server's operating system.</p>
</li>
<li><p><code>errorlog</code>: Logs using this driver are sent to the error log file configured on the web server's operating system.</p>
</li>
<li><p><code>monolog</code>: This driver provides support for all Monolog handlers.</p>
</li>
<li><p><code>custom</code>: This driver helps create a custom channel based on user preference. It could be to a third-party service that needs log reports.</p>
</li>
<li><p><code>stack</code>: The stack driver aggregates multiple channels into a single channel.</p>
</li>
<li><p><code>null</code>: All log messages get discarded by the driver.</p>
</li>
</ol>
<h2 id="heading-how-to-format-log-messages">How to Format Log Messages</h2>
<p>If you need a refresher on how facades work or how to create one, you should refer to this article about <a target="_blank" href="https://www.freecodecamp.org/news/how-to-use-facades-in-laravel/">how facades work in Laravel</a>.  </p>
<p>Laravel has a <code>Log</code> facade that helps with writing logs. Import the facade at the top of the file to use any log level.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>

<span class="hljs-keyword">use</span> <span class="hljs-title">Illuminate</span>\<span class="hljs-title">Support</span>\<span class="hljs-title">Facades</span>\<span class="hljs-title">Log</span>; 


Log::info($message);
</code></pre>
<p>You can also reference the <code>Log</code> facade from the root namespace with a leading backslash, so you won't need to import anything. This is suitable if you're only logging in a single place.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>

\Log::info($message);
</code></pre>
<p>Recent Laravel releases have also improved logging: you can skip the <code>Log</code> facade entirely and call the global <code>info()</code> helper from anywhere in your application.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>

info($message);
</code></pre>
<p>Other logging levels used to write information include <strong>alert, emergency, critical, warning, error, notice, and debug</strong>.</p>
<p>Within a file, you can log any of the data types or messages and even format the output of text you wish to write to the log file. </p>
<h3 id="heading-how-to-format-strings-booleans-and-integers">How to format strings, booleans, and integers:</h3>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>

<span class="hljs-keyword">use</span> <span class="hljs-title">Illuminate</span>\<span class="hljs-title">Support</span>\<span class="hljs-title">Facades</span>\<span class="hljs-title">Log</span>;


Log::warning(<span class="hljs-string">'There is a warning'</span>); 

Log::error(<span class="hljs-literal">false</span>);

Log::notice(<span class="hljs-number">500</span>);
</code></pre>
<h3 id="heading-how-to-format-to-an-array">How to format to an array:</h3>
<p>You can also log arrays. Here, <code>json_decode</code> converts the JSON string into PHP data, and passing <code>true</code> as the second argument makes it return an associative array (key and value pairs) instead of an object. You can then write the resulting array to the log file.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>

$person = <span class="hljs-string">'{"Peter":35, "John":37, "Yinka":43}'</span>; 

$data = json_decode($person, <span class="hljs-literal">true</span>);

info($data);
</code></pre>
<h3 id="heading-how-to-format-to-an-object">How to format to an object:</h3>
<p>You can also write JSON objects to the log file when working with logs. Use <code>json_encode</code> to encode PHP values into a JSON string.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span> 

$data = [<span class="hljs-string">"Peter"</span>=&gt; <span class="hljs-number">35</span>, <span class="hljs-string">"John"</span>=&gt; <span class="hljs-number">37</span>, <span class="hljs-string">"Yinka"</span> =&gt; <span class="hljs-number">43</span>];

info(json_encode($data));
</code></pre>
<h3 id="heading-how-to-concatenate-string-with-array-or-objects">How to concatenate string with array or objects:</h3>
<p>This is helpful when you want to add a descriptive string to track the log information. You can do this using the concatenation operator (<code>.</code>).</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>

$persons = [<span class="hljs-string">"Peter"</span>=&gt; <span class="hljs-number">35</span>, <span class="hljs-string">"John"</span>=&gt; <span class="hljs-number">37</span>, <span class="hljs-string">"Yinka"</span> =&gt; <span class="hljs-number">43</span>];

info(<span class="hljs-string">'The person info '</span> . json_encode($persons));
</code></pre>
<h3 id="heading-how-to-write-to-dedicated-channels">How to write to dedicated channels:</h3>
<p>This method is helpful when you need to write to a specific channel other than the default log channel. To do this, specify the channel name when calling the <code>Log</code> facade.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>


\Log::channel(<span class="hljs-string">'slack'</span>)-&gt;info(<span class="hljs-string">'registration successful'</span>);
</code></pre>
<p>The snippet above ensures the write operation is done on the Slack channel. Also, the stack method allows logging on multiple channels.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>


\Log::stack([<span class="hljs-string">'single'</span>, <span class="hljs-string">'slack'</span>])-&gt;info(<span class="hljs-string">'registration successful!'</span>);
</code></pre>
<p>You can learn more about custom channels via factories and monolog channel customization from the <a target="_blank" href="https://laravel.com/docs/master/logging">official documentation</a>.</p>
<h2 id="heading-wrapping-up">Wrapping up</h2>
<p>In this article, you have learnt about logging, configuring logs in your Laravel application, available channel drivers, and how to write log files in different formats. </p>
<p>You should now have a better understanding of Laravel logging. Keep learning, and happy coding!</p>
<p>You can find me on <a target="_blank" href="https://www.linkedin.com/in/suleolanrewaju/">LinkedIn</a> and <a target="_blank" href="https://twitter.com/bigdevlarry">Twitter</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build a Logging Web App with Server-Sent Events, RxJS, and Express ]]>
                </title>
                <description>
                    <![CDATA[ By Shayan Say you're working on your new great idea – a web or mobile app, and a back end server. Nothing too complicated so far. Until you realize that you need to stream data from your server to these clients.  Usually, when working on this, the fi... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-a-logging-web-app-with-server-sent-events-rxjs-and-express/</link>
                <guid isPermaLink="false">66d460ee052ad259f07e4b38</guid>
                
                    <category>
                        <![CDATA[ Express ]]>
                    </category>
                
                    <category>
                        <![CDATA[ full stack ]]>
                    </category>
                
                    <category>
                        <![CDATA[ JavaScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ node ]]>
                    </category>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Wed, 09 Feb 2022 18:06:54 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2022/02/Frame-11.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Shayan</p>
<p>Say you're working on your great new idea – a web or mobile app, and a back-end server. Nothing too complicated so far. Until you realize that you need to stream data from your server to these clients. </p>
<p>Usually, when working on this, the first thing that comes to mind is to use one of the cool kids on the block, like WebSockets, SocketIO, or even a paid service that takes care of it for you. </p>
<p>But there's another method that's usually left out, and you might not have heard about it yet. It's called SSE, short for Server-Sent Events. </p>
<p>SSE has a special place in my heart because of its simplicity. It's lightweight, efficient, and very powerful. </p>
<p>To explain SSE in detail and how I use it, I will go over a small side project of mine that I think is an excellent showcase of SSE. I'll be using Typescript, Express, and RxJS, so get your environment ready and buckle up as we are about to dive into some code.</p>
<p>Before we get started, there is something that you should know about SSE. As its name suggests, Server-Sent Events is uni-directional from your server to the client. This may be a deal-breaker if your client needs to stream back data to the server. But this is not the case in many scenarios, and we can just rely on REST to send data to the server.</p>
<h2 id="heading-whats-the-project">What's the Project?</h2>
<p>The idea of this project is simple: I have a bunch of scripts running around on Raspberry Pis, droplets on Digital Ocean, and other places that are not easily accessible to me. So I want a way to print out logs and view them from anywhere.</p>
<p>As a solution, I would like a basic web app to push my logs and have a direct link to my session that I can open on any device or even share with others.</p>
<p>There are a couple of things to keep in mind before we proceed. </p>
<p>First, logs coming from my scripts are not that frequent, and the overhead of using HTTP is negligible for my use case. Because of this, I decided to publish my logs over a basic REST API and use SSE on the client side to subscribe to the incoming logs.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/Frame-8-1.png" alt="Image" width="600" height="400" loading="lazy">
<em>Logging Example</em></p>
<p>Second, this tool is mainly for quickly debugging things I'm working on. There are many production-ready and enterprise tools out there that I could use instead. But I wanted something very light and easy to use.</p>
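<p>The publish-over-REST approach described above can be sketched from a script's point of view. The following is a hypothetical publisher, not code from the project itself – the endpoint URL and port are assumptions for illustration:</p>

```typescript
// Build the JSON body the REST route expects ({ content: string }),
// matching the Log shape used on the server.
function buildLogBody(content: string): string {
  return JSON.stringify({ content });
}

// Hypothetical publisher: POST one log line to the server.
// The URL and port are assumptions, not from the original project.
async function publishLog(
  content: string,
  url = "http://localhost:3000/"
): Promise<boolean> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildLogBody(content),
  });
  return res.ok;
}
```

<p>Any script that can issue an HTTP POST – a cron job on a Raspberry Pi, a deploy hook on a droplet – can publish logs this way, which is the point of keeping the ingestion side plain REST.</p>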
<h2 id="heading-lets-write-some-server-side-code">Let's Write Some Server-side Code</h2>
<p>The server-side setup is straightforward. So let's start with a diagram to give you an idea of the setup before explaining everything in detail.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/Frame-10-1.png" alt="Image" width="600" height="400" loading="lazy">
<em>Server Diagram</em></p>
<p>If we think of our backend server as a pipeline, on one end we have a series of publishers – in our case, the scripts publishing logs. On the other end, we have some clients subscribing to these logs.</p>
<p>To connect these two ends, I will be using an RxJS Subject. It will allow me to publish anything from the publishers over REST and then subscribe to these events and forward the messages to the clients over SSE.</p>
<p>To get started, let's define our Log interface. To keep things simple, I will only define a content field that will hold our log information.</p>
<pre><code class="lang-ts"><span class="hljs-keyword">interface</span> Log {
  content: <span class="hljs-built_in">string</span>;
}
</code></pre>
<h3 id="heading-how-to-set-up-rxjs">How to set up RxJS</h3>
<p>Let's import RxJS, create a new Subject for our Logs, and define a function to publish our logs to this Subject. </p>
<p>Of course, we could export our Subject and directly call it from our router, but I prefer to abstract away the implementation and only provide the emit function to the rest of my code.</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> { Subject } <span class="hljs-keyword">from</span> <span class="hljs-string">'rxjs'</span>;

<span class="hljs-comment">// Log Subject</span>
<span class="hljs-keyword">const</span> NewLog$ = <span class="hljs-keyword">new</span> Subject&lt;Log&gt;();

<span class="hljs-comment">/**
 * Emit a new log to the RxJS subject
 * @param log
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">emitNewLog</span>(<span class="hljs-params">log: Log</span>): <span class="hljs-title">void</span> </span>{
    NewLog$.next(log);
}
</code></pre>
<p>Finally, let's define a new route on our Express server that accepts new logs from our clients and passes them to the <code>emitNewLog</code> function we just created.</p>
<pre><code class="lang-ts">app.post(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">req: Request, res: Response</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> content = req.body.content;
  <span class="hljs-keyword">const</span> log: Log = { content: content };
  emitNewLog(log);
  <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).json({ ok: <span class="hljs-literal">true</span> });
});
</code></pre>
<p>We are now done with the publishing side. What's left is to define our SSE route, subscribe to the RxJS Subject, and deliver the logs to our client.</p>
<h3 id="heading-how-to-set-up-the-sse-route">How to set up the SSE Route</h3>
<p>Let's define a new route for our SSE connection. To enable SSE, we need to flush a couple of headers back to our client. </p>
<p>We want the <strong>‘Connection’</strong> set to <strong>‘keep-alive’</strong>, <strong>‘Cache-Control’</strong> set to ‘<strong>no-cache</strong>’, and <strong>‘Content-Type’</strong> set to <strong>‘text/event-stream’</strong>. This way our client will understand that this is an SSE route.</p>
<p>In addition, I have added <strong>‘Access-Control-Allow-Origin’</strong> for CORS and <strong>‘X-Accel-Buffering’</strong> set to <strong>‘no’</strong> to keep <a target="_blank" href="https://www.nginx.com/">Nginx</a> from messing with this route. Finally, we can flush the headers back to our client to kickstart the event stream.</p>
<pre><code class="lang-ts">app.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">req: Request, res: Response</span>) =&gt;</span> {
  res.setHeader(<span class="hljs-string">'Cache-Control'</span>, <span class="hljs-string">'no-cache'</span>);
  res.setHeader(<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'text/event-stream'</span>);
  res.setHeader(<span class="hljs-string">'Connection'</span>, <span class="hljs-string">'keep-alive'</span>);
  res.setHeader(<span class="hljs-string">'Access-Control-Allow-Origin'</span>, <span class="hljs-string">'*'</span>);
  res.setHeader(<span class="hljs-string">'X-Accel-Buffering'</span>, <span class="hljs-string">'no'</span>);
  res.flushHeaders();
});
</code></pre>
<p>We can now start streaming data by writing something into our response. </p>
<p>SSE provides a text-based protocol that we can use to help our clients differentiate between the event types. Each one of our events should look like the following:</p>
<pre><code class="lang-ts">event: ${event name}\n
data: ${event data}\n\n
</code></pre>
<p>To make my life a bit easier, I have created a helper function to take care of serialization for us.</p>
<pre><code class="lang-ts"><span class="hljs-comment">/**
 * SSE message serializer
 * @param event: Event name
 * @param data: Event data
 */</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">serializeEvent</span>(<span class="hljs-params">event: <span class="hljs-built_in">string</span>, data: <span class="hljs-built_in">any</span></span>): <span class="hljs-title">string</span> </span>{
  <span class="hljs-keyword">const</span> jsonString = <span class="hljs-built_in">JSON</span>.stringify(data);
  <span class="hljs-keyword">return</span> <span class="hljs-string">`event: <span class="hljs-subst">${event}</span>\ndata: <span class="hljs-subst">${jsonString}</span>\n\n`</span>;
}
</code></pre>
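<p>To make the framing concrete, here is a hypothetical inverse of <code>serializeEvent</code> (not part of the original project) – effectively what <code>EventSource</code> does for us in the browser when it splits the stream into events:</p>

```typescript
// Hypothetical inverse of serializeEvent: split one SSE frame back into
// its event name and parsed JSON payload. Assumes a single-line data
// field, which matches the serializer above.
function deserializeEvent(frame: string): { event: string; data: unknown } {
  const lines = frame.trim().split("\n");
  const event = lines[0].replace(/^event: /, "");
  const data = JSON.parse(lines[1].replace(/^data: /, ""));
  return { event, data };
}
```

<p>Seeing both directions side by side makes it clear that SSE is just newline-delimited text over a long-lived HTTP response.</p>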
<p>We can now subscribe to the RxJS Subject we created earlier, serialize each new log, and write it as a <strong>NEW_LOG</strong> event to our connection.</p>
<pre><code class="lang-ts">app.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">req: Request, res: Response</span>) =&gt;</span> {
  res.setHeader(<span class="hljs-string">'Cache-Control'</span>, <span class="hljs-string">'no-cache'</span>);
  res.setHeader(<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'text/event-stream'</span>);
  res.setHeader(<span class="hljs-string">'Connection'</span>, <span class="hljs-string">'keep-alive'</span>);
  res.setHeader(<span class="hljs-string">'Access-Control-Allow-Origin'</span>, <span class="hljs-string">'*'</span>);
  res.setHeader(<span class="hljs-string">'X-Accel-Buffering'</span>, <span class="hljs-string">'no'</span>);
  res.flushHeaders();

  NewLog$.subscribe(<span class="hljs-function">(<span class="hljs-params">log: Log</span>) =&gt;</span> {
    res.write(serializeEvent(<span class="hljs-string">'NEW_LOG'</span>, log));
  });

});
</code></pre>
<p>Finally, we have to make sure to unsubscribe from our observer when the SSE connection is closed. Putting all of these together, we should have something like this:</p>
<pre><code class="lang-ts">app.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">req: Request, res: Response</span>) =&gt;</span> {
  res.setHeader(<span class="hljs-string">'Cache-Control'</span>, <span class="hljs-string">'no-cache'</span>);
  res.setHeader(<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'text/event-stream'</span>);
  res.setHeader(<span class="hljs-string">'Connection'</span>, <span class="hljs-string">'keep-alive'</span>);
  res.setHeader(<span class="hljs-string">'Access-Control-Allow-Origin'</span>, <span class="hljs-string">'*'</span>);
  res.setHeader(<span class="hljs-string">'X-Accel-Buffering'</span>, <span class="hljs-string">'no'</span>);
  res.flushHeaders();

  <span class="hljs-keyword">const</span> stream$ = NewLog$.subscribe(<span class="hljs-function">(<span class="hljs-params">log: Log</span>) =&gt;</span> {
    res.write(serializeEvent(<span class="hljs-string">'NEW_LOG'</span>, log));
  });

  req.on(<span class="hljs-string">'close'</span>, <span class="hljs-function">() =&gt;</span> {
    stream$.unsubscribe();
  });
});
</code></pre>
<p>That’s it! We are done with our backend server and it’s time to move to the frontend code.</p>
<h2 id="heading-write-the-client-code">Write the Client Code</h2>
<p>Subscribing to our SSE route on the browser is very straightforward. First, let’s move to our client code and create a new instance of the <strong>EventSource</strong> interface and pass our endpoint to the constructor.</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> eventSource = <span class="hljs-keyword">new</span> EventSource(<span class="hljs-string">"/"</span>);
</code></pre>
<p>Then, we can add event listeners for the events we want to subscribe to (in our case, <strong>NEW_LOG</strong>) and define a callback method to handle our log.</p>
<pre><code class="lang-js">eventSource.addEventListener(
   <span class="hljs-string">"NEW_LOG"</span>, <span class="hljs-function">(<span class="hljs-params">event</span>) =&gt;</span> {
       <span class="hljs-keyword">const</span> log = <span class="hljs-built_in">JSON</span>.parse(event.data);
       <span class="hljs-comment">// use the data to update the UI</span>
    }, <span class="hljs-literal">false</span>
);
</code></pre>
<p>And finally, we can close the connection whenever we are done listening to these events.</p>
<pre><code class="lang-js">eventSource.close();
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>As you can see, Server-Sent Events make it very easy to stream content from the server to the client. They are especially helpful because most modern browsers provide a built-in <code>EventSource</code> interface, and we can easily polyfill it for those that don't. </p>
<p>In addition, SSE automatically handles reconnecting for us if the client loses its connection to the server. This makes it a valid alternative to SocketIO and WebSockets in scenarios where we only need uni-directional event streaming from the server.</p>
<p>If you are further interested in this project, I have added a couple of extra functionalities to the code that we just went over and a web GUI that you can check out here: <a target="_blank" href="https://logsnag.com/console">LogSnag Console</a>.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/02/Frame-9-1.png" alt="Image" width="600" height="400" loading="lazy">
<em>Console Demo</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Add Sentry to Your Node.js Project with TypeScript ]]>
                </title>
                <description>
<![CDATA[ Sentry.io is an external monitoring and logging service which can help you identify and triage errors in your code.  These logs provide information such as a stack trace, breadcrumbs, and (assuming this is a web application) browser data. This can he... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-add-sentry-to-your-node-js-project-with-typescript/</link>
                <guid isPermaLink="false">66ac7ef210d8e430980ae9dc</guid>
                
                    <category>
                        <![CDATA[ error handling ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ node js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Naomi Carrigan ]]>
                </dc:creator>
                <pubDate>Tue, 28 Sep 2021 16:24:26 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2021/09/pexels-pixabay-366283.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Sentry.io is an external monitoring and logging service which can help you identify and triage errors in your code. </p>
<p>These logs provide information such as a stack trace, breadcrumbs, and (assuming this is a web application) browser data. This can help you triage issues and resolve bugs faster, with less investigative overhead.</p>
<h2 id="heading-how-to-prepare-your-sentry-account">How to Prepare Your Sentry Account</h2>
<p>Begin by navigating to <a target="_blank" href="https://sentry.io">Sentry</a> and clicking "Get Started". You will be taken to the account creation screen:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-83.png" alt="Sentry's account creation screen." width="600" height="400" loading="lazy"></p>
<p>You can either sign up with OAuth or create separate credentials for Sentry. If you choose to create separate credentials, you'll need to enter an organization name now (this can be changed later). I used my username as my organization name.  </p>
<p>Once you create your account, Sentry will take you through a tutorial to set up your first project. Click "I'm Ready" to be taken to the first step.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-84.png" alt="Sentry's &quot;Choose your project's platform&quot; screen" width="600" height="400" loading="lazy"></p>
<p>For our purposes, the <code>NODE.JS</code> option is the platform you should select. Then click "Create Project".</p>
<p>This takes you to the instructions for preparing the SDK to integrate with your codebase. Leave that page open as you will need your <code>dsn</code> value.</p>
<h2 id="heading-how-to-use-sentry-in-your-code">How to Use Sentry in Your Code</h2>
<p>Your next step is to install the necessary Sentry packages:</p>
<pre><code class="lang-bash">npm install @sentry/node @sentry/integrations
</code></pre>
<p>The <code>@sentry/node</code> package is the core SDK for your Node.js project, and the <code>@sentry/integrations</code> package contains a tool you will use for mapping the file path.</p>
<p>Your Sentry tooling should be loaded as early as possible in your code flow. Ideally, this means you should initialize it within the entry point for your application (that is, <code>index.ts</code>). </p>
<p>Start by importing the packages:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> Sentry <span class="hljs-keyword">from</span> <span class="hljs-string">"@sentry/node"</span>;
<span class="hljs-keyword">import</span> { RewriteFrames } <span class="hljs-keyword">from</span> <span class="hljs-string">"@sentry/integrations"</span>;
</code></pre>
<p>The first import pulls in the Sentry-Node tooling, and the second gives you access to the <code>RewriteFrames</code> integration. This integration allows you to adjust the pathing of the stack trace, which is necessary for properly pointing to your compiled JavaScript files.</p>
<p> Now you need to instantiate the Sentry monitor and provide the configuration:</p>
<pre><code class="lang-ts">Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: <span class="hljs-number">1.0</span>,
  integrations: [
    <span class="hljs-keyword">new</span> RewriteFrames({
      root: <span class="hljs-built_in">global</span>.__dirname,
    }),
  ],
});
</code></pre>
<p>Here you have passed a configuration object to the <code>Sentry.init()</code> method (which is used to instantiate and initialise the Sentry process). To break these options down:</p>
<ul>
<li><code>dsn</code> is a unique URL used to connect your Sentry instance to your dashboard. We will explore this a bit later.</li>
<li><code>tracesSampleRate</code> determines the percent of events the monitor should send to the dashboard. A value of <code>1.0</code> sends 100% of the captured events – but if you find this to be too noisy you can reduce this number.</li>
<li><code>integrations</code> loads the integrations you want to use. In this case, you are loading the <code>RewriteFrames</code> option and setting the <code>root</code> path for your stack traces to <code>global.__dirname</code> (which resolves to the directory you run your application from).</li>
</ul>
<p>Then, anywhere in your code base where you are logging errors (such as a <code>try / catch</code> block or a <code>.catch()</code> chain), add <code>Sentry.captureException(error)</code> (replacing <code>error</code> with the variable that represents your error object) to pass that error off to your Sentry monitor. </p>
<h2 id="heading-how-to-connect-your-code-to-your-dashboard">How to Connect Your Code to Your Dashboard</h2>
<p>Back on that project setup page, you'll see a URL value for the <code>dsn</code> option in the configuration.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-85.png" alt="Example sentry option, showing a valid dsn URL." width="600" height="400" loading="lazy"></p>
<p><strong>Your <code>dsn</code> should be treated as a secret and not shared with anyone.</strong> You can achieve this by adding it to your <code>.env</code> file (assign it to <code>SENTRY_DSN</code> to match with our configuration from the previous step).</p>
<p>The <code>dsn</code> tells Sentry where to send the captured errors, and the dashboard uses it to link those errors to your new project.</p>
<blockquote>
<p>A note for front end projects:<br>Because you do not have access to a <code>.env</code> on the front end, you will need to expose your <code>dsn</code> publicly. We will cover how to handle this in the next step.</p>
</blockquote>
<p>Once this is set up, you can click "View a sample event for this SDK" in the small print at the bottom of the Sentry page. This will generate a fake error event and take you to the dashboard.</p>
<h2 id="heading-how-to-configure-your-sentry-dashboard">How to Configure Your Sentry Dashboard</h2>
<p>The Sentry website will offer you a quick tour of the dashboard, which you can follow if you would like, or you can skip it and continue with this article.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-86.png" alt="The top half of the Sentry Dashboard" width="600" height="400" loading="lazy"></p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-87.png" alt="The bottom half of the Sentry dashboard" width="600" height="400" loading="lazy"></p>
<p>This view shows you the specific details for a captured error event. In this case, it is the sample event generated by Sentry from the previous step.</p>
<p>The top half offers information such as the browser data from the user that triggered the error, the error message, and the error type. The bottom half provides the stack trace and breadcrumbs (actions that took place to trigger this error) – both helpful for reproducing this error in triage.</p>
<p>At the very top you should see your project's name (which defaults to your organization name) and a gear. Click that gear to be taken to your project's settings.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-89.png" alt="The Sentry project settings" width="600" height="400" loading="lazy"></p>
<p>Here you see some configurations for your project. The "Name" determines the name of your project. Changing the "Platform" affects how stack traces are rendered. You are welcome to experiment with the other settings as desired.</p>
<blockquote>
<p>For front-end projects:<br>As mentioned earlier, you will need to expose your DSN publicly. You can set your webpage's URL in the "Allowed Domains" to prevent data being sent from any other source.</p>
</blockquote>
<p>On the side bar are a few additional options. Selecting "Client Keys (DSN)" will take you to a page where you can copy your DSN again, if needed. You can also delete and regenerate it if you accidentally exposed it.</p>
<p>Selecting "Alerts" will allow you to configure how you receive notifications for error events. I have mine set to send to a <a target="_blank" href="https://github.com/nhcarrigan/discord-integrations">Discord Webhook</a>, but you can configure a number of integration options for receiving your alerts.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2021/09/image-91.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Finally you have the main side bar. Here you can configure your organization settings, including renaming your organization or creating additional organizations and projects.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You have now successfully integrated Sentry with your Node.js/TypeScript project. You are ready to start receiving error information, triaging issues, and improving your project's stability. </p>
<p>Feel free to experiment with Sentry's settings and features to personalize your experience to meet your needs. Happy Coding!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Logging in Python – How to Use Logs to Debug Your Django Projects ]]>
                </title>
                <description>
                    <![CDATA[ By Md. Saifur Rahman The only perfect code is code that has never been written. As a developer, you are bound to face errors and will be responsible for debugging them.  If you're coding in Python, you can always look at its error messages to figure ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/logging-in-python-debug-your-django-projects/</link>
                <guid isPermaLink="false">66d46012ffe6b1f641b5fa20</guid>
                
                    <category>
                        <![CDATA[ debugging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Django ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Python ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 31 Aug 2021 20:46:24 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2021/08/Django-Logger.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Md. Saifur Rahman</p>
<p>The only perfect code is code that has never been written. As a developer, you are bound to face errors and will be responsible for debugging them. </p>
<p>If you're coding in Python, you can always look at its error messages to figure out what's going on. But what if an error occurs and you have no idea what's breaking your code? </p>
<p>Something might be strangely wrong in the background, but you are unable to recognize it. You can always turn it off and on again – or even better, you can check the logs.</p>
<h2 id="heading-what-is-logging">What is Logging?</h2>
<p>If an error occurs or your app decides to work strangely, your log files will come in handy. You can traverse through them and find out where exactly the application is having problems and how you can replicate those problems.</p>
<p>By reproducing the problem, you can dig deeper and find a reasonable solution for the errors. Something that might otherwise take several hours to detect might take just a few minutes to diagnose with the help of log files.</p>
<h2 id="heading-how-logging-works-in-django">How Logging Works in Django</h2>
<p>Thankfully, Django has support for logging and most of the hard work has already been done by its developers. Django comes with Python's built-in logging module to leverage system logging. </p>
<p>The Python logging module has four main parts:</p>
<ol>
<li><a class="post-section-overview" href="#heading-1-loggers">Loggers</a></li>
<li><a class="post-section-overview" href="#heading-2-handlers">Handlers</a></li>
<li><a class="post-section-overview" href="#heading-3-filters">Filters</a></li>
<li><a class="post-section-overview" href="#heading-4-formatters">Formatters</a></li>
</ol>
<p>Every component is explained meticulously in the <a target="_blank" href="https://docs.djangoproject.com/en/3.2/topics/logging/">Django Official Documentation</a>. I don't want you to be overwhelmed with its complexity, so I'll explain every single part briefly:</p>
<h3 id="loggers">1. Loggers</h3>

<p>Loggers are basically the entry point of the logging system. This is what you'll actually work with as a developer. </p>
<p>When a message is received by the logger, the log level is compared to the log level of the logger. If it is the same or exceeds the log level of the logger, <strong>the message is sent to the handler for further processing.</strong> The log levels are:</p>
<ul>
<li><strong>DEBUG:</strong> Low-level system information</li>
<li><strong>INFO:</strong> General system information</li>
<li><strong>WARNING:</strong> Information about minor problems</li>
<li><strong>ERROR:</strong> Information about major problems</li>
<li><strong>CRITICAL:</strong> Information about critical problems</li>
</ul>
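<p>The level comparison described above can be sketched with Python's built-in <code>logging</code> module. This is a minimal illustration (the logger name, handler class, and messages are made up): a logger set to <strong>WARNING</strong> drops DEBUG and INFO messages but passes WARNING and above on to its handlers.</p>

```python
import logging

# A logger set to WARNING ignores DEBUG and INFO messages,
# but lets WARNING and above through to its handlers.
logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)
logger.propagate = False  # keep this example self-contained

records = []

class ListHandler(logging.Handler):
    """Collect emitted messages in a list so we can inspect them."""
    def emit(self, record):
        records.append(record.getMessage())

logger.addHandler(ListHandler())

logger.debug("low-level detail")    # below WARNING: dropped
logger.info("general information")  # below WARNING: dropped
logger.warning("minor problem")     # kept
logger.error("major problem")       # kept
```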
<h3 id="handlers">2. Handlers</h3>

<p>Handlers determine what happens to each message in a logger. They have the same log levels as loggers, but for each log level we can define how the messages should be handled. </p>
<p>For example: <strong>ERROR</strong> log level messages can be sent in real-time to the developer, while <strong>INFO</strong> log levels can just be stored in a system file.</p>
<p>In essence, a handler tells the system what to do with the message – like writing it to the screen, to a file, or to a network socket.</p>
<h3 id="filters">3. Filters</h3>

<p>A filter can sit between a <strong>Logger</strong> and a <strong>Handler</strong>. It can be used to filter the log record. </p>
<p>For example: in <strong>CRITICAL</strong> messages, you can set a filter which only allows a particular source to be processed.</p>
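<p>A filter like the CRITICAL example above can be sketched as a <code>logging.Filter</code> subclass attached to a handler. The logger names and messages here are invented for illustration:</p>

```python
import logging

class CriticalFromPayments(logging.Filter):
    """Allow CRITICAL records only when they come from the 'payments' logger."""
    def filter(self, record):
        if record.levelno == logging.CRITICAL:
            return record.name.startswith("payments")
        return True  # non-CRITICAL records pass through untouched

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append((record.name, record.getMessage()))

handler = ListHandler()
handler.addFilter(CriticalFromPayments())

for name in ("payments", "inventory"):
    lg = logging.getLogger(name)
    lg.setLevel(logging.DEBUG)
    lg.propagate = False
    lg.addHandler(handler)

logging.getLogger("payments").critical("charge failed")    # kept
logging.getLogger("inventory").critical("stock mismatch")  # filtered out
logging.getLogger("inventory").error("slow query")         # kept (not CRITICAL)
```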
<h3 id="formatters">4. Formatters</h3>

<p>As the name suggests, formatters describe the format of the text which will be rendered.</p>
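<p>As a quick sketch, a <code>logging.Formatter</code> with a format string controls exactly how each record is rendered as text. The record fields below are invented for illustration:</p>

```python
import logging

# A Formatter controls how each record is rendered as text.
formatter = logging.Formatter("[%(asctime)s]-[%(levelname)s]-[%(name)s] %(message)s")

# Build a record by hand just to show the formatting step.
record = logging.LogRecord(
    name="InventoryValidator", level=logging.ERROR,
    pathname="views.py", lineno=1,
    msg="Product Not Available", args=None, exc_info=None,
)
print(formatter.format(record))
# e.g. [2021-08-29 22:38:29,922]-[ERROR]-[InventoryValidator] Product Not Available
```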
<p>Now that we have covered the basics, let's dig deeper with an actual example. <a target="_blank" href="https://github.com/sa1if3/django-logging-tutorial">Click here for the source code</a>. </p>
<p><strong>Please note that this tutorial assumes that you are already familiar with the basics of Django.</strong></p>
<h2 id="heading-project-setup">Project Setup</h2>
<p>First, create a virtual environment called <strong><code>venv</code></strong> inside your project folder <code>django-logging-tutorial</code> with the command below and activate it.</p>
<pre><code class="lang-bash">mkdir django-logging-tutorial
cd django-logging-tutorial
virtualenv venv
source venv/bin/activate
</code></pre>
<p>Create a new Django project called <code>django_logging_tutorial</code>. Notice that the project folder name uses a dash while the project name uses an underscore (- vs _). We will also run a series of commands quickly to set up our project.</p>
<h2 id="heading-how-to-configure-your-log-files">How to Configure Your Log Files</h2>
<p>Let's first set up the <code>settings.py</code> file of our project. Heads up – notice my comments in the code which will help you understand this process better. </p>
<p>This code is also mentioned in the <a target="_blank" href="https://docs.djangoproject.com/en/3.2/topics/logging/#examples">3rd example of the official documentation</a> and in most of our projects, it will serve just fine. I have slightly modified it to make it more robust.</p>
<pre><code class="lang-python">LOGGING = {
    # The version number of our log
    'version': 1,
    # django uses some of its own loggers for internal operations.
    # In case you want to disable them, just replace the False below with True.
    'disable_existing_loggers': False,
    # A handler for WARNING. It is basically writing the WARNING messages
    # into a file called warning.log
    'handlers': {
        'file': {
            'level': 'WARNING',
            'class': 'logging.FileHandler',
            'filename': BASE_DIR / 'warning.log',
        },
    },
    # A logger for WARNING which has a handler called 'file'.
    # A logger can have multiple handlers.
    'loggers': {
        # notice the blank ''. Usually you would put built-in loggers
        # like django or root here based on your needs.
        '': {
            'handlers': ['file'],  # notice how the 'file' handler defined above is referenced here
            'level': 'WARNING',
            'propagate': True,
        },
    },
}
</code></pre>
<p>If you read my comments above, you might have noticed that the logger name was left blank (<code>''</code>), which essentially means it matches any logger. </p>
<p>Be careful with this approach, as most of our needs can be met with the built-in <a target="_blank" href="https://docs.djangoproject.com/en/3.2/topics/logging/#django-s-logging-extensions">Django loggers</a> like <code>django.request</code> or <code>django.db.backends</code>. </p>
<p>Also, for the sake of simplicity, I only used a file for storing the logs. Depending on your use case you might also choose to drop an email when <strong>CRITICAL</strong> or <strong>ERROR</strong> messages are encountered. </p>
<p>To learn more about this, I would encourage you to read the <a target="_blank" href="https://docs.djangoproject.com/en/3.2/topics/logging/#id4">handler</a> part of the docs. The docs might feel overwhelming at the start, but the more you get used to reading them the more you might discover other interesting or better approaches. </p>
<p>Don't worry if it's your first time working with documentation. There is always a first time for everything.</p>
<p>I've explained most of the code in the comments, but we still haven't touched upon <code>propagate</code> yet. What is it? </p>
<p>When <code>propagate</code> is set to <strong>True</strong>, a child logger will propagate all of its logging calls to its parent. This means that we can define a handler at the root (parent) of the logger tree, and all logging calls in the subtree (children) will go to the handler defined in the parent. </p>
<p>It is also important to note that the logger hierarchy matters here. In our project, though, it won't make a difference either way, since there is no subtree – so we can simply leave it set to <strong>True</strong>.</p>
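<p>Propagation can be sketched in a few lines with plain Python logging (the logger names are made up): a handler attached to the parent receives records logged on the child, until <code>propagate</code> is switched off.</p>

```python
import logging

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

parent = logging.getLogger("shop")
parent.setLevel(logging.WARNING)
parent.propagate = False  # stop at this parent, just for the example
parent.addHandler(ListHandler())

child = logging.getLogger("shop.orders")  # child of "shop" via the dotted name
child.warning("order failed")             # propagates up to "shop"'s handler

child.propagate = False
child.warning("lost message")             # no longer reaches the parent
```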
<h2 id="heading-how-to-trigger-logs-in-python">How to Trigger Logs in Python</h2>
<p>Now, we need to create some log messages so we can try out our configuration in <strong><code>settings.py</code></strong>. </p>
<p>Let's have a default homepage that just displays '<strong>Hello FreeCodeCamp.org Reader :)'</strong> and every time someone visits the page we note down a <strong>WARNING</strong> message in our <code>warning.log</code> file as 'Homepage was accessed at 2021-08-29 22:23:33.551543 hours!'</p>
<p>Go to your app <code>logging_example</code>, and in <code>views.py</code> include the following code. Make sure you have added <code>logging_example</code> to <code>INSTALLED_APPS</code> in <code>settings.py</code>.</p>
<pre><code class="lang-python">from django.http import HttpResponse
import datetime

# import the logging library
import logging

# Get an instance of a logger
logger = logging.getLogger(__name__)

def hello_reader(request):
    logger.warning('Homepage was accessed at ' + str(datetime.datetime.now()) + ' hours!')
    return HttpResponse("&lt;h1&gt;Hello FreeCodeCamp.org Reader :)&lt;/h1&gt;")
</code></pre>
<p>In the project's <code>urls.py</code>, add the following code so that when we access the homepage the right function is called.</p>
<pre><code class="lang-python">from django.contrib import admin
from django.urls import path
from logging_example import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', views.hello_reader, name="hello_reader"),
]
</code></pre>
<h2 id="heading-time-for-some-testing">Time for Some Testing</h2>
<p>Finally, our simple setup is done. All we need to do now is to fire up our server and test our log.</p>
<p>Run the development server with this command:</p>
<pre><code class="lang-bash">python manage.py runserver
</code></pre>
<p>Now, go to your homepage <strong>127.0.0.1:8000</strong> where you will be greeted with the message we have coded. Now check your <code>warning.log</code> file in the path created. Sample output is shown below:</p>
<pre><code>Homepage was accessed at 2021-08-29 22:38:29.922510 hours!
Homepage was accessed at 2021-08-29 22:48:35.088296 hours!
</code></pre>
<p>That's it! Now you know how to perform logging in Django. If you have any questions, just drop me a message. I promise to help :)</p>
<p>If you found my article helpful and want to read more, please check out some Django tutorials at my blog <a target="_blank" href="https://techflow360.com/category/web-development/django/">Techflow360.com</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Use Metrics to Monitor Your Microservices ]]>
                </title>
                <description>
                    <![CDATA[ By Siben Nayak In my previous article, I talked about the importance of logs and the differences between structured and unstructured logging.  Logs are easy to integrate into your application, and they give you the ability to represent any type of da... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/microservice-observability-metrics/</link>
                <guid isPermaLink="false">66d46157bd438296f45cd3c8</guid>
                
                    <category>
                        <![CDATA[ error handling ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ metrics ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Microservices ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Wed, 30 Dec 2020 16:49:38 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/12/Microservice-Observability---Metrics.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Siben Nayak</p>
<p>In my previous <a target="_blank" href="https://www.freecodecamp.org/news/how-to-handle-logs-in-microservices/">article</a>, I talked about the importance of logs and the differences between structured and unstructured logging. </p>
<p>Logs are easy to integrate into your application, and they give you the ability to represent any type of data in the form of strings.</p>
<p>Metrics, on the other hand, are numerical representations of data. These are often used to count or measure a value and are aggregated over a period of time. </p>
<p>Metrics give us insights into the historical and current state of a system. Since they are just numbers, we can also use them to perform statistical analysis and predictions about the system’s future behaviour. </p>
<p>You can also use metrics to trigger alerts and notify you about issues in the system’s behaviour.</p>
<h1 id="heading-logs-vs-metrics">Logs vs. Metrics</h1>
<h2 id="heading-how-logs-and-metrics-are-formatted">How Logs and Metrics are Formatted</h2>
<p>Logs are represented as strings. They can be simple text, JSON payloads, or key-value pairs (like we discussed in structured logging).</p>
<p>A typical log entry looks like this:</p>
<pre><code>[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">54</span>:<span class="hljs-number">41</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> fetching product information - Product Not Available
</code></pre><p>Metrics are represented as numbers. They measure something (like CPU usage, number of errors, and so on) and are numeric in nature.</p>
<p>A typical metric looks like this:</p>
<pre><code>{<span class="hljs-class"><span class="hljs-keyword">class</span></span>=InventoryValidator, exception=Product Not Available, timestamp=<span class="hljs-number">1609306200</span>}
</code></pre><h2 id="heading-the-resolution-of-logs-and-metrics">The Resolution of Logs and Metrics</h2>
<p>Logs contain high-resolution data. This includes complete information about an event and can be used to correlate the flow (or path) that the event took through the system. </p>
<p>In case of errors, logs contain the entire stack trace of the exception, which allows us to view and debug issues originating from downstream systems as well. </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/12/android-stack-trace-error-2.png" alt="Image" width="600" height="400" loading="lazy">
<em>A log entry showing the stacktrace of an error</em></p>
<p>In short, logs can tell you <em>what happened</em> in the system at a certain time.</p>
<p>Metrics contain low-resolution data. This may include a count of parameters (such as requests, errors, and so on) and measures of resources (such as CPU and memory utilization). </p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/12/tracing_aggregated_red_metrics.png" alt="Image" width="600" height="400" loading="lazy">
<em>A Metric showing number of hits to a service</em></p>
<p>In short, metrics can give you <em>a count of something that happened</em> in the system at a certain time.</p>
<h2 id="heading-the-cost-of-logs-and-metrics">The Cost of Logs and Metrics</h2>
<p>Logs are expensive to store. The storage overhead of logs also increases over time and is directly proportional to the increase in traffic.</p>
<p>Metrics have a constant storage overhead. The cost of storage and retrieval of metrics does not increase too much with the increase in traffic. It is, however, dependent on the number of variables we emit with each metric.</p>
<h1 id="heading-cardinality-of-metrics">Cardinality of Metrics</h1>
<p>Metrics are identified by two key pieces of information:</p>
<ul>
<li>A metric name</li>
<li>A set of key-value pairs called tags or labels</li>
</ul>
<p>A permutation of these values provides the metric its cardinality. For example, if we are measuring the CPU utilization of a system with three hosts, the metric has a cardinality value of 3 and can have the following three values:</p>
<pre><code>(name=pod.cpu.utilization, host=A)
(name=pod.cpu.utilization, host=B)
(name=pod.cpu.utilization, host=C)
</code></pre><p>Similarly, if we introduced another tag in the metric that determined the AWS region of the hosts (say, <code>us-west-1</code> and <code>us-west-2</code>), we would now have a metric with a cardinality value of 6.</p>
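<p>One way to see this: cardinality is just the number of distinct tag-value combinations. A quick sketch, reusing the metric and tag names from the example above:</p>

```python
from itertools import product

# Cardinality = number of distinct tag-value combinations for a metric.
hosts = ["A", "B", "C"]
regions = ["us-west-1", "us-west-2"]

series = [
    {"name": "pod.cpu.utilization", "host": host, "region": region}
    for host, region in product(hosts, regions)
]

print(len(series))  # 3 hosts x 2 regions = 6 time series
```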
<h1 id="heading-types-of-metrics">Types of Metrics</h1>
<h2 id="heading-golden-signals">Golden signals</h2>
<p>Golden signals are an effective way of monitoring the overall state of the system and identifying problems.</p>
<ul>
<li><strong>Availability:</strong> State of your system measured from the perspective of clients (for example, the percentage of errors on total requests).</li>
<li><strong>Health:</strong> State of your system measured using periodic pings.</li>
<li><strong>Request Rate:</strong> Rate of incoming requests to the system.</li>
<li><strong>Saturation:</strong> How free or loaded the system is (for example, the queue depth or available memory).</li>
<li><strong>Utilization:</strong> How busy the system is (for example, CPU load or memory usage). This is represented as a percentage.</li>
<li><strong>Error Rate:</strong> Rate of errors being produced in the system.</li>
<li><strong>Latency:</strong> Response time of the system, usually measured in the 95th or 99th percentile.</li>
</ul>
<h2 id="heading-resource-metrics">Resource metrics</h2>
<p>Resource metrics are almost always made available by default from the infrastructure provider (AWS CloudWatch or Kubernetes metrics) and are used to monitor infrastructure health.</p>
<ul>
<li><strong>CPU/Memory Utilization:</strong> Usage of the system’s core resources.</li>
<li><strong>Host Count:</strong> Number of hosts/pods that are running your system (used to detect availability issues due to pod crashes).</li>
<li><strong>Live Threads:</strong> Threads spawned in your service (used to detect issues in multi-threading).</li>
<li><strong>Heap Usage:</strong> Heap memory usage statistics (can help debug memory leaks).</li>
</ul>
<h2 id="heading-business-metrics">Business metrics</h2>
<p>Business metrics can be used to monitor granular interaction with core APIs or functionality in your services.</p>
<ul>
<li><strong>Request Rate:</strong> Rate of requests to the APIs.</li>
<li><strong>Error Rate:</strong> Rate of errors being thrown by the APIs.</li>
<li><strong>Latency:</strong> Time taken to process requests by the APIs.</li>
</ul>
<h1 id="heading-dashboards-and-alerts-for-metrics">Dashboards and Alerts for Metrics</h1>
<p>Since metrics are stored in a time-series database, it’s more efficient and reliable to run queries against them for measuring the state of the system.</p>
<p>You can use these queries to build dashboards for representing the historical state of the system.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/10/Screenshot-2020-10-03-at-3.20.16-PM.png" alt="Image" width="600" height="400" loading="lazy">
<em>A Wavefront dashboard with some important metrics</em></p>
<p>They can also be used to trigger alerts when there is an issue with the system (like an increase in the number of errors observed or a sudden spike in CPU utilization).</p>
<p>Due to their numeric nature, we can also create complex mathematical queries (such as X% of errors in last Y minutes) to monitor system health.</p>
<p>The biggest challenge, however, in handling metrics is deciding the right amount of cardinality that makes the metric useful while also keeping its costs under control. </p>
<p>Emitting too many metrics, or metrics with too many dimensions, can lead to an increase in storage and processing costs. You need to choose the minimum cardinality that is just enough to give a high-level picture of the system.</p>
<h1 id="heading-how-to-use-logs-and-metrics">How to Use Logs and Metrics</h1>
<p>Both logs and metrics have their own pros and cons. However, in any production system, we need to use both logs and metrics together to effectively monitor the system and debug any issues.</p>
<p>Metrics are often the first line of sight into the health of a system. Let's take the example of an e-commerce application like Amazon. The most important metric for such a use-case is the total number of successful and failed orders. </p>
<p>On a normal day, the metric for number of failed orders would remain at zero or some very small number. If there is an issue in the system that causes orders to suddenly start failing, this metric will show an increase in count.</p>
<p>You can create an <em>alert</em> on a combination of two metrics - total orders and failed orders. This will allow you to send a notification when the percentage of failed orders increases beyond a certain threshold (say 5%).</p>
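<p>The alert rule described above boils down to a simple percentage check. A minimal sketch (the 5% threshold and the order counts are illustrative; a real monitoring tool would evaluate this over a time window):</p>

```python
# Fire an alert when the percentage of failed orders crosses a threshold.
def should_alert(total_orders: int, failed_orders: int, threshold_pct: float = 5.0) -> bool:
    if total_orders == 0:
        return False  # no traffic, nothing to alert on
    failure_pct = 100.0 * failed_orders / total_orders
    return failure_pct > threshold_pct

print(should_alert(total_orders=2000, failed_orders=30))   # 1.5% -> False
print(should_alert(total_orders=2000, failed_orders=150))  # 7.5% -> True
```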
<p>Once you are notified about the failing orders, you can then refer to the logs to find the cause of the failures. The logs would contain the error messages leading to the failure, as well as the detailed stacktrace that can identify the root cause of the failure.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this article, we saw the differences between metrics and logs, and how metrics can help us monitor the health of our system more efficiently. Metrics can also be used to create dashboards and alerts using monitoring software like Wavefront and Grafana.</p>
<p>It is also necessary to use both metrics and logs in coordination to accurately detect and debug issues.</p>
<p>Thank you for staying with me so far. Hope you liked the article. You can connect with me on <a target="_blank" href="https://www.linkedin.com/in/theawesomenayak/">LinkedIn</a> where I regularly discuss technology and life. Also take a look at some of my other articles on <a target="_blank" href="https://medium.com/@theawesomenayak">Medium</a>. Happy reading 🙂</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Handle Logs in Microservices ]]>
                </title>
                <description>
                    <![CDATA[ By Siben Nayak Logging is one of the most important parts of software systems. Whether you have just started working on a new piece of software, or your system is running in a large scale production environment, you’ll always find yourself seeking he... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-handle-logs-in-microservices/</link>
                <guid isPermaLink="false">66d46151d14641365a050979</guid>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Microservices ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Thu, 12 Nov 2020 16:31:00 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/11/Microservice-Observability---Logs.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Siben Nayak</p>
<p>Logging is one of the most important parts of software systems. Whether you have just started working on a new piece of software, or your system is running in a large scale production environment, you’ll always find yourself seeking help from log files. </p>
<p>Because of this, logs are the first thing developers look for when something goes wrong, or something doesn’t work as expected.</p>
<p>Logging the right information in the right way makes a developer's life so much easier. And in order to get better at logging, you need to know what and how to log. </p>
<p>In this article, we’ll take a look at some basic logging etiquette that can help you get the most out of your logs.</p>
<h1 id="heading-what-to-log-and-how-logging-works">What to Log and How Logging Works</h1>
<p>Let’s start with an example of an e-commerce system and take a look at logging in its Order Management Service (OMS).</p>
<p>Suppose a customer order fails due to an error from Inventory Management Service (IMS), a downstream service that OMS uses to verify the available inventory.</p>
<p>Let’s assume that OMS has already accepted an order. But during the final verification step, IMS returns the following error because the product is no longer available in the inventory.</p>
<p><code>404 Product Not Available</code></p>
<h2 id="heading-what-to-log">What to Log</h2>
<p>Normally, you would log the error in this way:</p>
<pre><code class="lang-java">log.error(<span class="hljs-string">"Exception in fetching product information - {}"</span>, ex.getResponseBodyAsString())
</code></pre>
<p>This will output a log in the following format:</p>
<pre><code>[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">54</span>:<span class="hljs-number">41</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> fetching product information - Product Not Available
</code></pre><p>Well, there isn’t much information available in this log statement, is there? A log like this doesn’t serve much purpose because it lacks any contextual information about the error.</p>
<p>Can we add more information to this log to make it more relevant for debugging? How about adding the Order Id and Product Id?</p>
<pre><code class="lang-java">log.error(<span class="hljs-string">"Exception in processing Order #{} for Product #{} due to exception - {}"</span>, orderId, productId, ex.getResponseBodyAsString())
</code></pre>
<p>This will output a log in the following format:</p>
<pre><code>[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">54</span>:<span class="hljs-number">41</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> processing Order #<span class="hljs-number">182726</span> <span class="hljs-keyword">for</span> Product #<span class="hljs-number">21</span> due to exception - Product Not Available
</code></pre><p>Now this makes a lot of sense! Looking at the logs, we can understand that an error occurred while processing Order #182726 because Product #21 was not available.</p>
<h2 id="heading-how-to-log">How to Log</h2>
<p>While the above log makes perfect sense for us humans, it may not be the best format for machines. Let’s look at an example to understand why.</p>
<p>Suppose there is some issue in the availability of a certain product (say Product #21) due to which all orders containing that product are failing. You have been assigned the task to find all the failed orders for this product.</p>
<p>You happily do a <code>grep</code> for Product #21 in your logs and excitedly wait for the results. When the search completes, you get something like this:</p>
<pre><code>[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">54</span>:<span class="hljs-number">41</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> processing Order #<span class="hljs-number">182726</span> <span class="hljs-keyword">for</span> Product #<span class="hljs-number">21</span> due to exception - Product Not Available

[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">53</span>:<span class="hljs-number">29</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> processing Order #<span class="hljs-number">972526</span> <span class="hljs-keyword">for</span> Product #<span class="hljs-number">217</span> due to exception - Product Not Available

[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">52</span>:<span class="hljs-number">34</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> processing Order #<span class="hljs-number">46675754</span> <span class="hljs-keyword">for</span> Product #<span class="hljs-number">21</span> due to exception - Product Not Available

[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">52</span>:<span class="hljs-number">13</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> processing Order #<span class="hljs-number">332254</span> <span class="hljs-keyword">for</span> Product #<span class="hljs-number">2109</span> due to exception - Product Not Available
</code></pre><p>Not quite what you were expecting, right? So how can you improve this? Structured logging to the rescue.</p>
<h1 id="heading-what-is-structured-logging">What is Structured Logging?</h1>
<p>Structured logging solves these common problems and allows log analysis tools to provide additional capabilities. Logs written in a structured format are more machine-friendly, meaning they can be easily parsed by a machine. </p>
<p>This can be helpful in the following scenarios:</p>
<ul>
<li>Developers can search logs and correlate events, which is invaluable both during development as well as for troubleshooting production issues.</li>
<li>Business teams can parse these logs and perform analysis over certain fields (for example, unique product count per day) by extracting and summarising these fields.</li>
<li>You can build dashboards (both business and technical) by parsing the logs and performing aggregates over relevant fields.</li>
</ul>
<p>Let’s use our earlier log statement and make a small change to make it structured.</p>
<pre><code class="lang-java">log.error(<span class="hljs-string">"Exception in processing OrderId={} for ProductId={} due to Error={}"</span>, orderId, productId, ex.getResponseBodyAsString())
</code></pre>
<p>This will output a log in the following format:</p>
<pre><code>[<span class="hljs-number">2020</span><span class="hljs-number">-09</span><span class="hljs-number">-27</span>T18:<span class="hljs-number">54</span>:<span class="hljs-number">41</span>,<span class="hljs-number">500</span>+<span class="hljs-number">0530</span>]-[ERROR]-[InventoryValidator]-[<span class="hljs-number">13</span>] Exception <span class="hljs-keyword">in</span> processing OrderId=<span class="hljs-number">182726</span> <span class="hljs-keyword">for</span> ProductId=<span class="hljs-number">21</span> due to <span class="hljs-built_in">Error</span>=Product Not Available
</code></pre><p>Now this log message can be easily parsed by a machine, using “=” as a delimiter to extract the <code>OrderId</code>, <code>ProductId</code> and <code>Error</code> fields. We can now do an exact search over <code>ProductId=21</code> to accomplish our task.</p>
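<p>The exact-match search and per-product counting can also be reproduced outside a log management system. Here is a quick Python sketch (the snippet is illustrative and not part of the article's Java service) that extracts failed orders for a given <code>ProductId</code> from key-value formatted log lines:</p>

```python
import re
from collections import Counter

LOGS = [
    "[2020-09-27T18:54:41,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing OrderId=182726 for ProductId=21 due to Error=Product Not Available",
    "[2020-09-27T18:53:29,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing OrderId=972526 for ProductId=217 due to Error=Product Not Available",
    "[2020-09-27T18:52:34,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing OrderId=46675754 for ProductId=21 due to Error=Product Not Available",
]

def failed_orders(lines, product_id):
    # Exact match on ProductId=<id>: the trailing \b stops ProductId=21
    # from also matching ProductId=217.
    pattern = re.compile(r"OrderId=(\d+) for ProductId=" + re.escape(product_id) + r"\b")
    return [m.group(1) for m in map(pattern.search, lines) if m]

def count_by_product(lines):
    # Rough equivalent of "stats count by ProductId" over the extracted field.
    product = re.compile(r"ProductId=(\d+)")
    return Counter(m.group(1) for m in map(product.search, lines) if m)
```

<p>Calling <code>failed_orders(LOGS, "21")</code> returns only the orders for Product #21, skipping Product #217, which is exactly the precision the unstructured <code>grep</code> above could not give us.</p>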
<p>This also allows us to perform more advanced analytics on the logs, such as preparing a report of all the orders that failed with such errors.</p>
<p>If you use a log management system like Splunk, the query <code>Error=”Product Not Available” | stats count by ProductId</code> can now produce the following result:</p>
<pre><code>+-----------+-------+
| ProductId | count |
+-----------+-------+
| <span class="hljs-number">21</span>        | <span class="hljs-number">5</span>     |
| <span class="hljs-number">27</span>        | <span class="hljs-number">12</span>    |
+-----------+-------+
</code></pre><p>We could also use a JSON layout to print our logs in the JSON format:</p>
<pre><code class="lang-json">{  
    <span class="hljs-attr">"timestamp"</span>:<span class="hljs-string">"2020-09-27T18:54:41,500+0530"</span>  
    <span class="hljs-string">"level"</span>:<span class="hljs-string">"ERROR"</span>  
    <span class="hljs-string">"class"</span>:<span class="hljs-string">"InventoryValidator"</span>  
    <span class="hljs-string">"line"</span>:<span class="hljs-string">"13"</span>  
    <span class="hljs-string">"OrderId"</span>:<span class="hljs-string">"182726"</span>  
    <span class="hljs-string">"ProductId"</span>:<span class="hljs-string">"21"</span>  
    <span class="hljs-string">"Error"</span>:<span class="hljs-string">"Product Not Available"</span>
}
</code></pre>
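<p>The article's examples are Java, but the JSON-layout idea carries over to any stack. As an illustration only, here is a minimal JSON formatter for Python's standard <code>logging</code> module (the <code>fields</code> attribute name is an assumption made for this sketch, not a stdlib convention):</p>

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "class": record.module,
            "line": record.lineno,
            "message": record.getMessage(),
        }
        # Structured fields can be attached per call via `extra={"fields": {...}}`.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)
```

<p>A handler configured with this formatter would let a call like <code>logger.error("Product Not Available", extra={"fields": {"OrderId": "182726", "ProductId": "21"}})</code> emit a line similar to the JSON document above.</p>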
<p>It’s important to understand the approach behind structured logging. There is no fixed standard and it can be done in many different ways.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this article, we saw the pitfalls of unstructured logging and the benefits and advantages offered by structured logging. </p>
<p>Log management systems such as Splunk benefit hugely from well-structured log messages and can offer easy search and analytics on log events.</p>
<p>The biggest challenge with structured logging, however, is establishing a standard set of fields for your software. This can be achieved by following a custom logging model or centralised logging, which ensures that all developers use the same fields in their log messages.</p>
<p>Thank you for staying with me so far. Hope you liked the article. You can connect with me on <a target="_blank" href="https://www.linkedin.com/in/theawesomenayak/">LinkedIn</a> where I regularly discuss technology and life. Also take a look at some of <a target="_blank" href="https://www.freecodecamp.org/news/author/theawesomenayak/">my other articles</a>. Happy reading. 🙂</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ To Log, or Not to Log — An Alternative Strategy to Make Loggers Your Friends ]]>
                </title>
                <description>
                    <![CDATA[ By Stanley Nguyen Logging is universally present in software projects and has many different forms, requirements, and flavors.  Logging is everywhere, from small 1-person-startups to large enterprises. Even a simple algorithmic programming question i... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-use-logs-effectively-in-your-code/</link>
                <guid isPermaLink="false">66d46149d1ffc3d3eb89de76</guid>
                
                    <category>
                        <![CDATA[ error ]]>
                    </category>
                
                    <category>
                        <![CDATA[ error handling ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Wed, 17 Jun 2020 01:28:26 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/06/shakespeare.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Stanley Nguyen</p>
<p>Logging is universally present in software projects and has many different forms, requirements, and flavors. </p>
<p>Logging is everywhere, from small 1-person-startups to large enterprises. Even a simple algorithmic programming question involves some logging along the way. </p>
<p>We rely so much on logging to develop, maintain, and keep our programs up and running. However, not much attention has been paid to how to design logging within our systems.</p>
<p>Oftentimes, logging is treated as an afterthought – it's only sprinkled into source code during implementation like some magic powder that helps lighten the day-to-day operational abyss in our systems.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/06/salt-logger-1.jpg" alt="Image" width="600" height="400" loading="lazy">
<em>Are we all log baes?</em></p>
<p>Just like how any piece of code written will eventually become technical debt – a process that we can only slow down with great discipline – loggers rot at an unbelievable speed. After a while, we find ourselves fixing problems caused by loggers more often than the loggers give us useful information. </p>
<p>So how can we manage this mess with loggers and turn them into one of our allies rather than legacy ghosts haunting us from past development mistakes?</p>
<h2 id="heading-state-of-the-art">“State of The Art”</h2>
<p>Before I dive deeper into my proposed solution, let’s define a concrete problem statement based on my observations.</p>
<p>So what exactly is logging? Here's an interesting and on-point one-liner that I found from <a target="_blank" href="https://www.codeproject.com/Articles/42354/The-Art-of-Logging">Colin Eberhardt’s article</a>:</p>
<blockquote>
<p>Logging is the process of recording application actions and state to a secondary interface.</p>
</blockquote>
<p>This is exactly how logging is woven into systems. We all seem to subconsciously agree that loggers don't belong to any particular layers of our systems. Instead, we consider them to be application-wide and shared amongst different components.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/06/stackoverflow-answer.png" alt="Image" width="600" height="400" loading="lazy">
<em>A well-received answer on Stack Overflow</em></p>
<p>A simple diagram where logging has been fit into a system that is designed with <a target="_blank" href="https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html">clean architecture</a> would look something like this:</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/06/logging-arch.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>We can safely say that logging itself is a subsystem within our application, and that, without careful consideration, it often spirals out of control faster than we think. </p>
<p>While designing logging to be a subsystem within our applications is not wrong, the traditional perception of logging (with 4 to 6 levels of <code>info</code>, <code>warn</code>, <code>error</code>, <code>debug</code>, and so on) often makes developers focus on the wrong thing. It makes us focus on the format rather than the actual purpose of why we are writing logs.</p>
<p>This is one of the reasons why we log errors out without thinking twice about how to handle them. It's also why we log at every step of our code while ironically being unable to debug effectively if there is a production issue.</p>
<p>This is why I am proposing an alternative framework for logging and in turn, how we can design logging into our systems reliably.</p>
<h2 id="heading-the-good-the-bad-and-the-ugly">The Good, The Bad, and The Ugly</h2>
<p>This is a framework for how I think we should strategize our logging. It has three – and only three – categories or concerns for our logs.</p>
<h3 id="heading-first-rule-of-logging-dont-log">First rule of logging: Don’t log</h3>
<p>Overlogging is detrimental to our teams’ productivity and ability to handle business-as-usual operations. </p>
<p>There’re tons of reasons why we should not “Log whenever you can” as advised by some observability fanfare. Logging means more code to maintain, it incurs expense in terms of system performance, and more logging subjects us to more data privacy regulatory audits. </p>
<p>If you need more reasons to refrain from logging, check out <a target="_blank" href="https://sobolevn.me/2020/03/do-not-log">this post by Nikita Sobolev</a> or <a target="_blank" href="https://blog.codinghorror.com/the-problem-with-logging/">this post by Jeff Atwood</a>.</p>
<p>Nevertheless, I’m not advising that you eliminate logs altogether. I think logging, used correctly, can significantly help us keep our systems running reliably. </p>
<p>I’m just proposing we start without logging and work our way up to identify places where we need to log, rather than “log everywhere as we <strong>might</strong> need to look at them”. </p>
<p>My rule of thumb for adding a line of logging is “if we can’t pin down an exact reason or a scenario when we will look at the log, <strong>don’t log</strong>".</p>
<p>With that being said, how can we safely introduce logging when it’s absolutely necessary? How should we structure our logs and format their content? What information is necessary to include in logs?</p>
<h3 id="heading-the-ugly">The Ugly</h3>
<p>This is the first type of log that I want to describe, and it is also the one that I encounter least frequently. (If we find these logs too often, we might have bigger issues in our systems!) </p>
<p>“The Ugly” is the kind of log under catastrophic or unexpected scenarios that requires immediate action (like catastrophic errors that need an application restart). We can argue that, under these circumstances, it makes more sense to use alerting tools like Sentry. </p>
<p>Nevertheless, an error log might still be useful to provide more context surrounding these errors – context that is not available in the stack trace but that could help with reproducing them, like user inputs. </p>
<p>Just like the errors that they accompany, these logs should be kept to a minimum in our code and placed in a single location. They should also be designed/documented in the spec as a required system behavior for error handling. Also, they should be woven into the source code around where the error handling happens. </p>
<p>While format and level for “The Ugly” logs are completely preferential on a team-by-team basis, I would recommend using <code>log.error</code> or <code>log.fatal</code> before a graceful shutdown and restart of the application. You should also attach the full error stack trace and the function or requests’ input data for reproduction if necessary.</p>
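<p>As a minimal Python sketch of the above (the <code>process_order</code> function and the exit code are hypothetical, and the surrounding supervisor is assumed), an “Ugly” handler might look like this:</p>

```python
import logging
import sys
import traceback

log = logging.getLogger("orders")

def handle_request(payload):
    try:
        process_order(payload)  # hypothetical business logic
    except Exception:
        # "The Ugly": a single, well-placed log carrying the full stack
        # trace plus the input data needed to reproduce the failure.
        log.fatal("Unrecoverable error for payload=%r\n%s",
                  payload, traceback.format_exc())
        # Graceful shutdown; a supervisor (systemd, Kubernetes, ...) restarts us.
        sys.exit(1)
```

<p>Note that in Python's <code>logging</code> module, <code>fatal</code> is an alias for <code>critical</code>; which of the two names a team standardises on is, as above, a matter of convention.</p>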
<h3 id="heading-the-bad">The Bad</h3>
<p>“The Bad” is the type of log that addresses expected, handled errors like network issues and user input validation. This type of log only requires developers’ attention if there’s an anomaly. </p>
<p>Together with a monitor set up to alert developers upon an error, these logs are handy to mitigate potential serious infrastructure or security problems.</p>
<p>This type of log should be spec-ed inside error handling technical requirements as well, and can actually be bundled if we are handling expected and unexpected errors in the same code location. </p>
<p>Based on the nature of what they are making “visible” for developers, <code>log.warn</code> or <code>log.error</code> can be used for “The Bad” logs given a team's convention.</p>
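<p>A small Python sketch of a “Bad” log (the <code>validate_quantity</code> helper is made up for illustration): an expected, handled failure is logged at warning level and the application simply moves on:</p>

```python
import logging

log = logging.getLogger("orders")

def validate_quantity(raw_value):
    """Handled, expected failure: bad user input is logged and rejected."""
    try:
        quantity = int(raw_value)
    except ValueError:
        # "The Bad": expected and handled. Only an anomaly in the *rate*
        # of these logs, surfaced by a monitor, should page a developer.
        log.warning("Rejected order quantity=%r: not an integer", raw_value)
        return None
    return quantity
```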
<h3 id="heading-the-good">The Good</h3>
<p>Last but definitely not least, “The Good” is the type of log that should appear most often in our source code – but it's often the most difficult to get right. “The Good” kind of logs are those associated with the “happy” steps of our applications, indicating the success of operations. </p>
<p>Because they indicate the start or successful execution of operations in our system, “The Good” logs are often abused by developers who are seduced by the mantra “Just one more bit of data in the log, we might need it”. </p>
<p>Again, I would circle back to our very first rule of logging: “Don’t log unless you absolutely have to”. To prevent this kind of over-logging from happening, we should document “The Good” as part of our technical requirements complementing the main business logic. </p>
<p>On top of that, for every single one of “The Good” logs that are inside our technical spec, they need to pass the litmus test: are there any circumstances under which we would look at the log (be it a customer support request, an external auditor’s inquiry)? Only this way will <code>log.info</code> not be a dreaded legacy that obscures developers’ vision into our applications.</p>
<h3 id="heading-the-rest-that-you-need-to-know">The Rest (That You Need to Know)</h3>
<p>By now I assume you've noticed that the general theme of my proposed logging strategy revolves around clear and specific documenting of the log's purpose. It is important that we treat logging as part of our requirements, and that we're specific about what keywords and messages we want to tag in each log's context for them to be effectively indexed. </p>
<p>Only by doing that can we be aware of each and every log that we produce, and in turn, have a clear vision into our systems.</p>
<p>As logs are upgraded to first-class citizens with concrete technical requirements in our specs, the implications are that they would need to be:</p>
<ul>
<li>maintained and updated as the business and technical requirements evolve</li>
<li>covered by unit and integration tests</li>
</ul>
<p>This might sound like a lot of extra work to get our logs right. However, I argue that this is the kind of attention and effort logging deserves so it can be useful.</p>
<p><strong>Serve our logs, and we will be rewarded splendidly!</strong></p>
<h2 id="heading-a-practical-migration-guide">A Practical Migration Guide</h2>
<p>I reckon there’s no use for a new logging strategy (or any new strategies/frameworks for that matter) for legacy projects if there’s no way of moving them from their messy state to the proposed ideal. </p>
<p>So I have a three-step general plan for anyone who is frustrated with their system's logs and is willing to invest the time to log more effectively.</p>
<h3 id="heading-identify-the-usual-suspects">Identify The Usual Suspects</h3>
<p>Since the idea is to reduce garbage logs, our first step is to identify where the criminals are hiding. With the powerful text editors and IDEs we have nowadays (or <code>grep</code> if you are reading this in the past through a window-to-the-future), all occurrences of logging can be easily identified. </p>
<p>A document (or a spreadsheet, if you would like to be organised) listing all of these logging occurrences might be necessary if there are too many of them.</p>
<h3 id="heading-convict-them-bad-actors">Convict Them Bad Actors!</h3>
<p>After identifying all suspects, it’s time to weed out the bad apples. Logs which are duplicated or unreachable are low-hanging fruit that we can immediately eliminate from our source code. </p>
<p>For the rest of our logging occurrences, it’s time to involve other stakeholders like the “inception” engineer who started the project (if that is possible), product managers, customer support, or compliance folks to answer the question: Do we need each one of these logs, and if so, what are they being used for?</p>
<h2 id="heading-light-at-the-end-of-the-tunnel">Light At the End of the Tunnel</h2>
<p>Now that we have a narrowed-down list of absolutely necessary logs, turning them into technical requirements with a documented purpose for each one is essential to nail down a contract (or we can call it a specification) for our logging subsystem. Ask yourself what to do when a <code>log.error</code> happens, and who we are <code>log.info</code>-ing for.</p>
<p>After this, it’s just a matter of discipline in the same way that we write and maintain software in general. Let's all work together and make logging awesome!</p>
<p>You can <a target="_blank" href="https://twitter.com/stanley_ngn">reach out to me on Twitter</a> with any questions or comments.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Practice Logging in Python with Logzero ]]>
                </title>
                <description>
                    <![CDATA[ By Davis David Logzero is a Python package created by Chris Hager that simplifies logging with Python 2 and 3. Logzero makes it easier as a print statement to show information and debugging details. If you are wondering what logging is, I recommend t... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/good-logging-practice-in-python-with-logzero/</link>
                <guid isPermaLink="false">66d84eb5ef84e4cc27cfbe31</guid>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Python ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Mon, 13 Apr 2020 21:26:52 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2020/04/logezero-image.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Davis David</p>
<p>Logzero is a Python package created by <a target="_blank" href="https://twitter.com/metachris">Chris Hager</a> that simplifies logging with Python 2 and 3. Logzero makes showing information and debugging details as easy as using a print statement.</p>
<p>If you are wondering <strong>what logging is</strong>, I recommend that you read the previous article I wrote about <a target="_blank" href="https://medium.com/analytics-vidhya/how-to-run-machine-learning-experiments-with-python-logging-module-9030fbee120e">“How to Run Machine Learning Experiments with Python Logging Module”</a>, especially the first 3 sections. </p>
<p>In that article, you will learn:</p>
<ul>
<li>What logging is</li>
<li>Why logging is important</li>
<li>Applications of logging in different technology industries</li>
</ul>
<p>Logzero has different features that make it easier to use in Python projects. Some of these features are:</p>
<ul>
<li>Easy logging to console and/or file.</li>
<li>Provides a fully configured standard Python logger object.</li>
<li>Pretty formatting, including level-specific <strong>colors</strong> in the console.</li>
<li>Works with all kinds of character encodings and special characters.</li>
<li>Compatible with Python 2 and 3.</li>
<li>No further Python dependencies.</li>
</ul>
<h2 id="heading-installation">Installation</h2>
<p>To install logzero with pip run the following:</p>
<pre><code class="lang-python">pip install -U logzero
</code></pre>
<p>You can also install logzero from the public <a target="_blank" href="https://github.com/metachris/logzero">GitHub repo</a>:</p>
<pre><code>git clone https:<span class="hljs-comment">//github.com/metachris/logzero.git</span>
cd logzero
python setup.py install
</code></pre><h2 id="heading-basic-example">Basic Example</h2>
<p>We will start with a basic example. In a Python file, we will import the logger from logzero and try four different logging levels.</p>
<pre><code class="lang-python"><span class="hljs-comment">#import logger from logzero</span>
<span class="hljs-keyword">from</span> logzero <span class="hljs-keyword">import</span> logger

logger.debug(<span class="hljs-string">"hello"</span>)
logger.info(<span class="hljs-string">"info"</span>)
logger.warning(<span class="hljs-string">"warning"</span>)
logger.error(<span class="hljs-string">"error"</span>)
</code></pre>
<p>The output is colored so it's easy to read.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/04/dfdfdf.PNG" alt="Image" width="600" height="400" loading="lazy">
<em>logzero output</em></p>
<p>As you can see, each level has its own color. This means you can identify the level easily by checking the color.</p>
<h2 id="heading-write-logs-to-a-file">Write logs to a file</h2>
<p>Most of the time, Python users tend to write logs to a file. While the system is running, you can save logs to a file and review them for error checks and maintenance purposes. You can also set a file to save all the log entries in logzero.</p>
<p>We will import the logger and logfile from logzero. The logfile method will help us configure the log file to save our log entries.</p>
<p>Now your log entries will be logged into the file named my_logfile.log.</p>
<pre><code class="lang-python"><span class="hljs-comment">#import logger and logfile</span>
<span class="hljs-keyword">from</span> logzero <span class="hljs-keyword">import</span> logger, logfile

<span class="hljs-comment">#set logfile path</span>
logfile(<span class="hljs-string">'my_logfile.log'</span>)

<span class="hljs-comment"># Log messages</span>
logger.info(<span class="hljs-string">"This log message saved in the log file"</span>)
</code></pre>
<p>The output in my_logfile.log contains the logging level label (the info level is labeled “I”), the date, time, Python filename, line number, and the message itself.</p>
<pre><code>[I <span class="hljs-number">200409</span> <span class="hljs-number">23</span>:<span class="hljs-number">49</span>:<span class="hljs-number">59</span> demo:<span class="hljs-number">8</span>] This log message saved <span class="hljs-keyword">in</span> the log file
</code></pre><h2 id="heading-rotating-a-log-file">Rotating a log file</h2>
<p>You don't want a single log file to accumulate all the log entries, as this results in a massive file that is expensive for the system to open and close.</p>
<p>You can use the <strong>maxBytes</strong> and <strong>backupCount</strong> parameters to allow the file to roll over at a predetermined size. When the size is about to be exceeded, the file is closed and a new file is silently opened for output. Rollover occurs whenever the current log file is nearly maxBytes in length. If either maxBytes or backupCount is zero, rollover never occurs.</p>
<p>In the example below, we have set the maxBytes to be <strong>1000000 bytes (1 MB).</strong> This means that when the size exceeds 1MB the file is closed and a new file is opened to save log entries. The number of backups to keep is set to 3.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> logzero

<span class="hljs-comment"># Set a rotating logfile</span>
logzero.logfile(<span class="hljs-string">"my_logfile.log"</span>, maxBytes=<span class="hljs-number">1000000</span>, backupCount=<span class="hljs-number">3</span>)
</code></pre>
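<p>Logzero builds on Python's standard logging module, so the same rollover behavior can be sketched with the stdlib's <code>RotatingFileHandler</code> alone (the tiny <code>maxBytes</code> here is only to make the rotation easy to observe):</p>

```python
import glob
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "my_logfile.log")

# Roll over when the file nears 200 bytes, keeping at most 3 backups.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=3)
logger = logging.getLogger("rotation-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(500):
    logger.info("log entry number %d", i)

# Leaves my_logfile.log plus my_logfile.log.1 .. .3; older backups are deleted.
print(sorted(os.path.basename(p) for p in glob.glob(log_path + "*")))
```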
<h2 id="heading-set-a-minimum-logging-level">Set a Minimum Logging Level</h2>
<p><img src="https://www.freecodecamp.org/news/content/images/2020/04/1_5vfxSz_sdZuPR0CnnBlDLg.png" alt="Image" width="600" height="400" loading="lazy">
<em>Photo by Son Nguyen Kim</em></p>
<p>The logging level sets the importance of a given log message. You can also set a different <strong>log level</strong> for the file handler by using the loglevel argument in the logfile method.</p>
<p>In the example below, we set loglevel to be <code>warning</code>. This means all log entries below the <strong>warning level</strong> will not be saved into a log file.</p>
<pre><code class="lang-python"><span class="hljs-comment">#import logzero package</span>
<span class="hljs-keyword">from</span> logzero <span class="hljs-keyword">import</span> logger, logfile
<span class="hljs-keyword">import</span> logging

<span class="hljs-comment"># You can also set a different loglevel for the file handler</span>
logfile(<span class="hljs-string">"my_logfile.log"</span>, loglevel=logging.WARNING)

<span class="hljs-comment"># Log messages</span>
logger.info(<span class="hljs-string">"This log message saved in the log file"</span>)
logger.warning(<span class="hljs-string">"This log message saved in the log file"</span>)
</code></pre>
<h2 id="heading-set-a-custom-formatter">Set a custom formatter</h2>
<p>How you want the log record to be formatted is up to you, and there are different ways to do it. You can include the date, time, and logging level in your format so that you know when the log was sent and at what level.</p>
<p>The example below shows how you can configure the format of the log records.</p>
<pre><code class="lang-python"><span class="hljs-comment">#import logzero package</span>
<span class="hljs-keyword">import</span> logzero
<span class="hljs-keyword">from</span> logzero <span class="hljs-keyword">import</span> logger, logfile
<span class="hljs-keyword">import</span> logging

<span class="hljs-comment">#set file path</span>
logfile(<span class="hljs-string">"my_logfile.log"</span>)

<span class="hljs-comment"># Set a custom formatter</span>
my_formatter = logging.Formatter(<span class="hljs-string">'%(filename)s - %(asctime)s - %(levelname)s: %(message)s'</span>)
logzero.formatter(my_formatter)

<span class="hljs-comment"># Log messages</span>
logger.info(<span class="hljs-string">"This log message saved in the log file"</span>)
logger.warning(<span class="hljs-string">"This log message saved in the log file"</span>)
</code></pre>
<p>In the example above, we configured the log format to include the <em>filename, date, time, logging level name,</em> and <em>message.</em></p>
<p>This is the output in <code>my_logfile.log</code>:</p>
<pre><code>demo.py - <span class="hljs-number">2020</span>-<span class="hljs-number">04</span>-<span class="hljs-number">10</span> <span class="hljs-number">00</span>:<span class="hljs-number">51</span>:<span class="hljs-number">44</span>,<span class="hljs-number">706</span> - INFO: This log message saved <span class="hljs-keyword">in</span> the log file
demo.py - <span class="hljs-number">2020</span>-<span class="hljs-number">04</span>-<span class="hljs-number">10</span> <span class="hljs-number">00</span>:<span class="hljs-number">51</span>:<span class="hljs-number">44</span>,<span class="hljs-number">707</span> - WARNING: This log message saved <span class="hljs-keyword">in</span> the log file
</code></pre><h2 id="heading-custom-logger-instances">Custom Logger Instances</h2>
<p>Instead of using the default logger, you can also set up dedicated logger instances with <strong>logzero.setup_logger(..)</strong>. It returns a fully configured logger instance that you can customise with parameters such as <em>name, logfile name, formatter, maxBytes, backupCount,</em> and <em>logging level.</em></p>
<p>This is a working example of how to set up logging with a custom logger instance:</p>
<pre><code class="lang-python"><span class="hljs-comment">#import logzero package</span>
<span class="hljs-keyword">from</span> logzero <span class="hljs-keyword">import</span> logger, logfile, setup_logger
<span class="hljs-keyword">import</span> logging

<span class="hljs-comment"># Set a custom formatter</span>
my_formatter = logging.Formatter(<span class="hljs-string">'%(filename)s - %(asctime)s - %(levelname)s: %(message)s'</span>)


<span class="hljs-comment">#create custom logger instance</span>
custom_logger = setup_logger(
    name=<span class="hljs-string">"My Custom Logger"</span>,
    logfile=<span class="hljs-string">"my_logfile.log"</span>,
    formatter=my_formatter,
    maxBytes=<span class="hljs-number">1000000</span>,
    backupCount=<span class="hljs-number">3</span>,
    level=logging.INFO)

<span class="hljs-comment"># Log messages</span>
custom_logger.info(<span class="hljs-string">"This log message saved in the log file"</span>)
custom_logger.warning(<span class="hljs-string">"This log message saved in the log file"</span>)
</code></pre>
<p>In the example above, we set up a custom logger instance called <strong>custom_logger</strong> with several configured parameter values.</p>
<h2 id="heading-wrap-up">Wrap up</h2>
<p>In this article, you've learned the basics of using the logzero Python package, along with some examples. You can learn more about the features available in the <a target="_blank" href="https://logzero.readthedocs.io/en/latest/#">documentation</a>. Now you can start using the logzero package in your next <a target="_blank" href="https://realpython.com/intermediate-python-project-ideas/">Python project</a>.</p>
<p>If you learned something new or enjoyed reading this article, please share it so that others can see it. Until then, see you in the next post! I can also be reached on Twitter <a target="_blank" href="https://twitter.com/Davis_McDavid">@Davis_McDavid</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to use Elasticsearch, Logstash and Kibana to visualise logs in Python in realtime ]]>
                </title>
                <description>
                    <![CDATA[ By Ritvik Khanna What is logging? Let’s say you are developing a software product. It works remotely, interacts with different devices, collects data from sensors and provides a service to the user. One day, something goes wrong and the system is not... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-use-elasticsearch-logstash-and-kibana-to-visualise-logs-in-python-in-realtime-acaab281c9de/</link>
                <guid isPermaLink="false">66c355a9dae03919d93dc02c</guid>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ General Programming ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Python ]]>
                    </category>
                
                    <category>
                        <![CDATA[ tech  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ technology ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Tue, 29 Jan 2019 17:04:34 +0000</pubDate>
                <media:content url="https://cdn-media-1.freecodecamp.org/images/0*-sOdBVARaJLNvu17.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Ritvik Khanna</p>
<h3 id="heading-what-is-logging">What is logging?</h3>
<p>Let’s say you are developing a software product. It works remotely, interacts with different devices, collects data from sensors and provides a service to the user. One day, something goes wrong and the system is not working as expected. It might not be identifying the devices or not receiving any data from the sensors, or might have just gotten a runtime error due to a bug in the code. How can you know for sure?</p>
<p>Now, imagine if there are checkpoints in the system code where, if the system returns an unexpected result, it simply flags it and notifies the developer. This is the concept of logging.</p>
<p>Logging enables developers to understand what the code is actually doing and how the workflow goes. A large part of a software developer's life is monitoring, troubleshooting and debugging. Logging makes this a much easier and smoother process.</p>
<h3 id="heading-visualisation-of-logs">Visualisation of logs</h3>
<p><img src="https://cdn-media-1.freecodecamp.org/images/0*-sOdBVARaJLNvu17.png" alt="Image" width="600" height="400" loading="lazy">
<em><a target="_blank" href="https://www.datalabsagency.com/wp-content/uploads/2014/11/Interactive-Data-Visualisation-Service.png" rel="noopener">source</a></em></p>
<p>Now, if you are an experienced developer who has been creating software for quite a while, you might think that logging is not a big deal and that most of our code gets by with a <code>Debug.Log('____')</code> statement. Well, that is great, but there are some other aspects of logging we can make use of.</p>
<p>Visualisation of specific logged data has the following benefits:</p>
<ul>
<li>Monitor the operations of the system remotely.</li>
<li>Communicate information clearly and efficiently via statistical graphics, plots and information graphics.</li>
<li>Extract knowledge from the data visualised in the form of different graphs.</li>
<li>Take necessary actions to better the system.</li>
</ul>
<p>There are a number of ways we can visualise raw data, and a number of libraries in the Python and R programming languages can help in plotting graphs. You can learn more about them <a target="_blank" href="https://towardsdatascience.com/5-quick-and-easy-data-visualizations-in-python-with-code-a2284bae952f"><strong>here</strong></a>. But in this post, I am not going to discuss the methods mentioned above. Have you ever heard about the <a target="_blank" href="https://www.elastic.co/elk-stack"><strong>ELK stack</strong></a>?</p>
<h3 id="heading-elk-stack">ELK stack</h3>
<p>E — <a target="_blank" href="https://www.elastic.co/products/elasticsearch"><strong>Elasticsearch</strong></a>, L — <a target="_blank" href="https://www.elastic.co/products/logstash"><strong>Logstash</strong></a><strong>,</strong> K — <a target="_blank" href="https://www.elastic.co/products/kibana"><strong>Kibana</strong></a></p>
<p>Let me give a brief introduction. The ELK stack is a collection of three open-source tools that provide real-time insights into data that can be either structured or unstructured. You can search and analyse the data with its tools easily and efficiently.</p>
<p><a target="_blank" href="https://www.elastic.co/products/elasticsearch"><strong>Elasticsearch</strong></a> is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected. Elasticsearch lets you perform and combine many types of searches — structured, unstructured, geo, metric, and so on. It is built on the Java programming language, which enables Elasticsearch to run on different platforms. It lets users explore very large amounts of data at very high speed.</p>
<p><a target="_blank" href="https://www.elastic.co/products/logstash"><strong>Logstash</strong></a> is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favourite “stash” (like Elasticsearch). Data is often scattered or siloed across many systems in many formats. Logstash supports a variety of inputs that pull in events from a multitude of common sources, all at the same time. Easily ingest from your logs, metrics, web applications, data stores, and various AWS services, all in continuous, streaming fashion. Logstash has a pluggable framework featuring over 200 plugins. Mix, match, and orchestrate different inputs, filters, and outputs to work in pipeline harmony.</p>
<p><a target="_blank" href="https://www.elastic.co/products/kibana"><strong>Kibana</strong></a> is an open source analytics and visualisation platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualise your data in a variety of charts, tables, and maps. Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.</p>
<p>To get a better picture of the workflow of how the three softwares interact with each other, refer to the following diagram:</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*OHP01Lidop3GQZbnwg9s4Q.jpeg" alt="Image" width="600" height="400" loading="lazy">
<em><a target="_blank" href="http://Howtodoinjava.com" rel="noopener">source</a></em></p>
<h3 id="heading-implementation">Implementation</h3>
<h4 id="heading-logging-in-python">Logging in Python</h4>
<p>Here, I chose to explain the implementation of logging in Python because it is the most used language for projects involving communication between multiple machines and internet of things. It’ll help give you an overall idea of how it works.</p>
<p>Python provides a logging system as a part of its standard library, so you can quickly add logging to your application.</p>
<pre><code class="lang-py"><span class="hljs-keyword">import</span> logging
</code></pre>
<p>In Python, logging can be done at 5 different levels, each indicating a type of event. They are as follows:</p>
<ul>
<li><strong>Debug</strong> — Designates fine-grained informational events that are most useful to debug an application.</li>
<li><strong>Info</strong> — Designates informational messages that highlight the progress of the application at a coarse-grained level.</li>
<li><strong>Warning</strong> — Designates potentially harmful situations.</li>
<li><strong>Error</strong> — Designates error events that might still allow the application to continue running.</li>
<li><strong>Critical</strong> — Designates very severe error events that will presumably lead the application to abort.</li>
</ul>
<p>Therefore, depending on the problem that needs to be logged, we use the appropriate level.</p>
<blockquote>
<p><strong>Note</strong>: Info and Debug do not get logged by default, as only logs of level Warning and above are logged.</p>
</blockquote>
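<p>This default threshold is easy to verify with a few lines (a quick illustration, separate from the script that follows):</p>

```python
import logging

# The root logger's default effective level is WARNING,
# so debug() and info() calls are silently dropped.
assert logging.getLogger().getEffectiveLevel() == logging.WARNING

logging.debug("dropped")
logging.info("dropped")
logging.warning("this one is emitted")
```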
<p>Now, to give an example and create a set of log statements to visualise, I have created a Python script that logs statements in a specific format with a message.</p>
<pre><code class="lang-py"><span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> random

logging.basicConfig(filename=<span class="hljs-string">"logFile.txt"</span>,
                    filemode=<span class="hljs-string">'a'</span>,
                    format=<span class="hljs-string">'%(asctime)s %(levelname)s-%(message)s'</span>,
                    datefmt=<span class="hljs-string">'%Y-%m-%d %H:%M:%S'</span>)
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">0</span>,<span class="hljs-number">15</span>):
    x=random.randint(<span class="hljs-number">0</span>,<span class="hljs-number">2</span>)
    <span class="hljs-keyword">if</span>(x==<span class="hljs-number">0</span>):
        logging.warning(<span class="hljs-string">'Log Message'</span>)
    <span class="hljs-keyword">elif</span>(x==<span class="hljs-number">1</span>):
        logging.critical(<span class="hljs-string">'Log Message'</span>)
    <span class="hljs-keyword">else</span>:
        logging.error(<span class="hljs-string">'Log Message'</span>)
</code></pre>
<p>Here, the log statements will be appended to a file named <em>logFile.txt</em> in the specified format. I ran the script over three days at different time intervals, creating a file containing logs at random, like below:</p>
<pre><code><span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> CRITICAL-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> CRITICAL-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> CRITICAL-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">09</span>:<span class="hljs-number">01</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> CRITICAL-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> CRITICAL-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> WARNING-Log Message
<span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-09</span> <span class="hljs-number">11</span>:<span class="hljs-number">07</span>:<span class="hljs-number">05</span>,<span class="hljs-number">333</span> ERROR-Log Message
</code></pre><h4 id="heading-setting-up-elasticsearch-logstash-and-kibana">Setting up Elasticsearch, Logstash and Kibana</h4>
<p>First, let’s download the three open-source tools from their respective links: <a target="_blank" href="https://www.elastic.co/downloads/elasticsearch">Elasticsearch</a>, <a target="_blank" href="https://www.elastic.co/downloads/logstash">Logstash</a> and <a target="_blank" href="https://www.elastic.co/downloads/kibana">Kibana</a>. Unzip the files and put all three in the project folder.</p>
<p>Let’s get started.</p>
<p><strong>Step 1</strong> — Set up Kibana and Elasticsearch on the local system. We run Kibana with the following command from the bin folder of Kibana.</p>
<pre><code class="lang-bash">bin\kibana
</code></pre>
<p>Similarly, Elasticsearch is set up like this:</p>
<pre><code class="lang-bash">bin\elasticsearch
</code></pre>
<p>Now, in the two separate terminals we can see both of the modules running. To check that the services are running, open <strong>localhost:5601</strong> for Kibana and <strong>localhost:9200</strong> for Elasticsearch.</p>
<p>After both the services are successfully running we use Logstash and Python programs to parse the raw log data and pipeline it to Elasticsearch from which Kibana queries data.</p>
<p><strong>Step 2</strong> — Now let’s get on with Logstash. Before starting Logstash, we create a Logstash configuration file specifying the details of the input file, the output location, and the filter methods.</p>
<pre><code>input{
 file{
 <span class="hljs-function"><span class="hljs-params">path</span> =&gt;</span> <span class="hljs-string">"full/path/to/log_file/location/logFile.txt"</span>
 start_position =&gt; <span class="hljs-string">"beginning"</span>
 }
}
filter
{
 grok{
 <span class="hljs-function"><span class="hljs-params">match</span> =&gt;</span> {<span class="hljs-string">"message"</span> =&gt; <span class="hljs-string">"%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level}-%{GREEDYDATA:message}"</span>}
 }
    date {
    <span class="hljs-function"><span class="hljs-params">match</span> =&gt;</span> [<span class="hljs-string">"timestamp"</span>, <span class="hljs-string">"ISO8601"</span>]
  }
}
output{
 elasticsearch{
 <span class="hljs-function"><span class="hljs-params">hosts</span> =&gt;</span> [<span class="hljs-string">"localhost:9200"</span>]
 index =&gt; <span class="hljs-string">"index_name"</span>}
stdout{<span class="hljs-function"><span class="hljs-params">codec</span> =&gt;</span> rubydebug}
}
</code></pre><p>This configuration file plays a major role in the ELK stack. Take a look at the <strong>filter{grok{…}}</strong> block. This is a Grok filter plugin. Grok is a great way to parse unstructured log data into something structured and queryable. This tool is perfect for syslog logs, Apache and other web server logs, MySQL logs, and in general any log format that is written for humans rather than for computer consumption. The grok pattern in the code tells Logstash how to parse each line entry in our log file.</p>
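<p>As a rough illustration of what that grok pattern extracts from each line, here is an approximate Python equivalent using a regular expression (an illustration only, not how Logstash works internally):</p>

```python
import re

# Approximate equivalent of the grok pattern
# %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level}-%{GREEDYDATA:message}
# Each field becomes a capture group: the timestamp, the log level, and the message.
pattern = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) ([A-Z]+)-(.*)")

timestamp, level, message = pattern.match(
    "2019-01-09 09:01:05,333 ERROR-Log Message"
).groups()
```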
<p>Now save the file in Logstash folder and start the Logstash service.</p>
<pre><code class="lang-bash">bin\logstash -f logstash-simple.conf
</code></pre>
<blockquote>
<p>To learn more about configuring Logstash, click <a target="_blank" href="https://www.elastic.co/guide/en/logstash/current/configuration.html"><strong>here</strong></a>.</p>
</blockquote>
<p><strong>Step 3</strong> — After this, the parsed data from the log files will be available in Kibana management at <strong>localhost:5601</strong> for creating different visuals and dashboards. To check whether Kibana is receiving any data, run the following command in the management tab of Kibana:</p>
<pre><code class="lang-bash">localhost:9200/_cat/indices?v
</code></pre>
<p>This will display all the indexes. For every visualisation, a new Index pattern has to be selected from dev tools, after which various visualisation techniques are used to create a dashboard.</p>
<h4 id="heading-dashboard-using-kibana">Dashboard Using Kibana</h4>
<p>After setting up everything, now it’s time to create graphs in order to visualise the log data.</p>
<p>After opening the Kibana management homepage, we will be asked to create a new index pattern. Enter <code>index_name*</code> in the <strong>Index pattern field</strong> and select <strong>@timestamp</strong> in the <strong>Time Filter field</strong> name dropdown menu.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*vgcx_wyDnpKiHuQNQZn_pw.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Now to create graphs, we go to the <strong>Visualize</strong> tab.</p>
<p>Select a new visualisation, choose a type of graph and index name, and depending on your axis requirements, create a graph. We can create a histogram with <strong>y-axis</strong> as the <strong>count</strong> and <strong>x-axis</strong> with the <strong>log-level keyword</strong> or the <strong>timestamp.</strong></p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*Yt0dS1APsC5DRb33SkcDYA.gif" alt="Image" width="600" height="400" loading="lazy">
<em>Creating a graph</em></p>
<p>After creating a few graphs, we can add all the required visualisations and create a <strong>Dashboard</strong>, like below:</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/1*sLN0wwMyVaK0YtWB3aI8TQ.png" alt="Image" width="600" height="400" loading="lazy"></p>
<blockquote>
<p>Note — Whenever the logs in the log file get updated or appended to the previous logs, as long as the three services are running the data in elasticsearch and graphs in kibana will automatically update according to the new data.</p>
</blockquote>
<h4 id="heading-wrapping-up">Wrapping up</h4>
<p>Logging can be an aid in fighting errors and debugging programs, instead of using print statements. The logging module divides messages into different levels. This results in a better understanding of the code and of how the call flow goes, without interrupting the program.</p>
<p>The visualisation of data is a necessary step in situations where huge amounts of data are generated every moment. Data-visualisation tools and techniques offer executives and other knowledge workers new approaches to dramatically improve their ability to grasp the information hiding in their data. Rapid identification of error logs, easy comprehension of data and highly customisable data visuals are some of the advantages. It is one of the most constructive ways of organising raw data.</p>
<blockquote>
<p>For further reference you can refer to the official ELK documentation from here — <a target="_blank" href="https://www.elastic.co/learn">https://www.elastic.co/learn</a> and on logging in python — <a target="_blank" href="https://docs.python.org/2/library/logging.html">https://docs.python.org/2/library/logging.html</a></p>
</blockquote>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to save hours of debugging with logs ]]>
                </title>
                <description>
                    <![CDATA[ By Maya Gilad A good logging mechanism helps us in our time of need. When we’re handling a production failure or trying to understand an unexpected response, logs can be our best friend or our worst enemy. Their importance for our ability to handle f... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-save-hours-of-debugging-with-logs-6989cc533370/</link>
                <guid isPermaLink="false">66c35445b1d4339762339fb9</guid>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                    <category>
                        <![CDATA[ General Programming ]]>
                    </category>
                
                    <category>
                        <![CDATA[ software development ]]>
                    </category>
                
                    <category>
                        <![CDATA[ tech  ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Mon, 12 Nov 2018 17:34:22 +0000</pubDate>
                <media:content url="https://cdn-media-1.freecodecamp.org/images/1*mdCjm9D20RnHCIIQlgFr1w.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Maya Gilad</p>
<p>A good logging mechanism helps us in our time of need.</p>
<p>When we’re handling a production failure or trying to understand an unexpected response, logs can be our best friend or our worst enemy.</p>
<p>Their importance for our ability to handle failures is enormous. Yet in our day-to-day work, when we design a new production service or feature, we sometimes overlook their importance and neglect to give them proper attention.</p>
<p>When I started developing, I made a few logging mistakes that cost me many sleepless nights. Now, I know better, and I can share with you a few practices I’ve learned over the years.</p>
<h3 id="heading-not-enough-disk-space">Not enough disk space</h3>
<p>When developing on our local machine, we usually don’t mind using a file handler for logging. Our local disk is quite large and the amount of log entries being written is very small.</p>
<p>That is not the case on our production machines. Their local disks usually have limited free space. In time, the disk won't be able to store the log entries of a production service, so using a file handler will eventually result in losing all new log entries.</p>
<p>If you want your logs to be available on the service’s local disk, <strong>don’t forget to use a rotating file handler.</strong> This limits the maximum space that your logs will consume, and the rotating file handler takes care of overwriting old log entries to make space for new ones.</p>
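<p>With Python’s standard library, for example, this takes only a couple of lines (the file name and size limits below are illustrative):</p>

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("my_service")
logger.setLevel(logging.INFO)

# Cap each file at ~1 MB and keep 3 old backups; the handler renames
# service.log to service.log.1, service.log.2, ... as it rotates.
handler = RotatingFileHandler("service.log", maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("disk usage stays bounded")
```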
<h3 id="heading-eeny-meeny-miny-moe"><strong>Eeny, meeny, miny, moe</strong></h3>
<p><img src="https://cdn-media-1.freecodecamp.org/images/Pt2k019xulsxiiGcOhkWJ6R4Nm3QpN-THYid" alt="Image" width="600" height="400" loading="lazy"></p>
<p>Our production service is usually spread across multiple machines, so searching for a specific log entry requires investigating all of them. When we’re in a hurry to fix our service, there’s no time to waste trying to figure out where exactly the error occurred.</p>
<p>Instead of saving logs on the local disk, <strong>stream them into a centralized logging system.</strong> This allows you to search all of them at the same time.</p>
<p>If you’re using AWS or GCP — you can use their logging agent. The agent will take care of streaming the logs into their logging search engine.</p>
<h3 id="heading-to-log-or-not-log-this-is-the-question">To log or not to log? That is the question…</h3>
<p><img src="https://cdn-media-1.freecodecamp.org/images/ZU2WKnvPTiAv6QIh8TUUQcK92TU5mSXgHjL3" alt="Image" width="600" height="400" loading="lazy"></p>
<p>There is a thin line between too few and too many logs. In my opinion, log entries should be meaningful and only serve the purpose of investigating issues on our production environment. When you’re about to add a new log entry, you should think about how you will use it in the future. Try to answer this question: <strong>What information does the log message provide the developer who will read it?</strong></p>
<p>Too many times I see logs being used for user analytics. Yes, it is much easier to write “user watermelon2018 has clicked the button” to a log entry than to develop a new events infrastructure. But this is not what logs are meant for (and parsing log entries is not fun either, so extracting insights will take time).</p>
<h3 id="heading-a-needle-in-a-haystack">A needle in a haystack</h3>
<p>In the following log output, we see three requests that were processed by our service.</p>
<p>How long did it take to process the second request? Is it 1ms, 4ms or 6ms?</p>
<pre><code><span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">39</span>:<span class="hljs-number">07</span>,<span class="hljs-number">051</span> - simple_example - INFO - entered request <span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> 
<span class="hljs-number">22</span>:<span class="hljs-number">39</span>:<span class="hljs-number">07</span>,<span class="hljs-number">053</span> - simple_example - INFO - entered request <span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> 
<span class="hljs-number">22</span>:<span class="hljs-number">39</span>:<span class="hljs-number">07</span>,<span class="hljs-number">054</span> - simple_example - INFO - ended request <span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> 
<span class="hljs-number">22</span>:<span class="hljs-number">39</span>:<span class="hljs-number">07</span>,<span class="hljs-number">056</span> - simple_example - INFO - entered request <span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> 
<span class="hljs-number">22</span>:<span class="hljs-number">39</span>:<span class="hljs-number">07</span>,<span class="hljs-number">057</span> - simple_example - INFO - ended request <span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> 
<span class="hljs-number">22</span>:<span class="hljs-number">39</span>:<span class="hljs-number">07</span>,<span class="hljs-number">059</span> - simple_example - INFO - ended request
</code></pre><p>Since we don’t have any additional information on each log entry, we cannot be sure which is the correct answer. Having the request id in each log entry could have reduced the number of possible answers to one. Moreover, <strong>having metadata inside each log entry can help us filter the logs</strong> and focus on the relevant entries.</p>
<p>Let’s add some metadata to our log entry:</p>
<pre><code><span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">139</span> - INFO - entered request <span class="hljs-number">1</span> - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">141</span> - INFO - entered request <span class="hljs-number">2</span> - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">142</span> - INFO - ended request id <span class="hljs-number">2</span> - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">143</span> - INFO - req <span class="hljs-number">1</span> invalid request structure - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">144</span> - INFO - entered request <span class="hljs-number">3</span> - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">145</span> - INFO - ended request id <span class="hljs-number">1</span> - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">17</span>:<span class="hljs-number">09</span>,<span class="hljs-number">147</span> - INFO - ended request id <span class="hljs-number">3</span> - simple_example
</code></pre><p>Here the metadata is placed inside the free-text section of the entry, so each developer can apply their own standards and style. This results in complicated searches.</p>
<p>Our metadata should be defined as part of the entry’s fixed structure.</p>
<pre><code><span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">45</span>:<span class="hljs-number">38</span>,<span class="hljs-number">325</span> - simple_example - INFO - user/create - req <span class="hljs-number">1</span> - entered request
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">45</span>:<span class="hljs-number">38</span>,<span class="hljs-number">328</span> - simple_example - INFO - user/login - req <span class="hljs-number">2</span> - entered request
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">45</span>:<span class="hljs-number">38</span>,<span class="hljs-number">329</span> - simple_example - INFO - user/login - req <span class="hljs-number">2</span> - ended request
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">45</span>:<span class="hljs-number">38</span>,<span class="hljs-number">331</span> - simple_example - INFO - user/create - req <span class="hljs-number">3</span> - entered request
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">45</span>:<span class="hljs-number">38</span>,<span class="hljs-number">333</span> - simple_example - INFO - user/create - req <span class="hljs-number">1</span> - ended request
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">22</span>:<span class="hljs-number">45</span>:<span class="hljs-number">38</span>,<span class="hljs-number">335</span> - simple_example - INFO - user/create - req <span class="hljs-number">3</span> - ended request
</code></pre><p>Now each message in the log is pushed aside by our metadata. Since we read from left to right, we should place the message as close as possible to the beginning of the line. In addition, placing the message at the beginning “breaks” the line’s otherwise uniform structure, which helps us identify the message faster.</p>
<pre><code><span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">10</span>:<span class="hljs-number">02</span>,<span class="hljs-number">097</span> - INFO - entered request [user/create] [req: <span class="hljs-number">1</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">10</span>:<span class="hljs-number">02</span>,<span class="hljs-number">099</span> - INFO - entered request [user/login] [req: <span class="hljs-number">2</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">10</span>:<span class="hljs-number">02</span>,<span class="hljs-number">101</span> - INFO - ended request [user/login] [req: <span class="hljs-number">2</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">10</span>:<span class="hljs-number">02</span>,<span class="hljs-number">102</span> - INFO - entered request [user/create] [req: <span class="hljs-number">3</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">10</span>:<span class="hljs-number">02</span>,<span class="hljs-number">104</span> - INFO - ended request [user/create] [req: <span class="hljs-number">1</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">10</span>:<span class="hljs-number">02</span>,<span class="hljs-number">107</span> - INFO - ended request [user/create] [req: <span class="hljs-number">3</span>] - simple_example
</code></pre><p>Placing the timestamp and log level before the message helps us understand the flow of events. The rest of the metadata is used mainly for filtering; at this stage it is no longer needed up front and can be placed at the end of the line.</p>
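<p>A minimal sketch of this layout with Python’s standard <code>logging</code> module (the <code>endpoint</code> and <code>request_id</code> field names are illustrative choices, supplied per call via <code>extra</code>):</p>

```python
import logging
import sys

# Timestamp and level first, then the message, with the filtering
# metadata pushed to the end of the line.
fmt = logging.Formatter(
    "%(asctime)s - %(levelname)s - %(message)s "
    "[%(endpoint)s] [req: %(request_id)s] - %(name)s"
)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(fmt)

logger = logging.getLogger("simple_example")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Each call passes its own metadata; the formatter slots it into place.
logger.info("entered request", extra={"endpoint": "user/create", "request_id": 1})
logger.error("invalid request structure",
             extra={"endpoint": "user/create", "request_id": 1})
```

<p>Because the metadata lives in the format string rather than in the message text, every entry comes out with the same structure, no matter who wrote the logging call.</p>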
<p>An error that is logged under INFO will be lost among all the normal log entries. <strong>Using the entire range of logging levels (ERROR, DEBUG, etc.) can reduce search time significantly</strong>. If you want to read more about log levels, you can continue reading <a target="_blank" href="https://blog.scalyr.com/2017/12/logging-levels/">here</a>.</p>
<pre><code><span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">497</span> - INFO - entered request [user/create] [req: <span class="hljs-number">1</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">500</span> - INFO - entered request [user/login] [req: <span class="hljs-number">2</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">502</span> - INFO - ended request [user/login] [req: <span class="hljs-number">2</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">504</span> - ERROR - invalid request structure [user/create] [req: <span class="hljs-number">1</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">506</span> - INFO - entered request [user/create] [req: <span class="hljs-number">3</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">507</span> - INFO - ended request [user/create] [req: <span class="hljs-number">1</span>] - simple_example
<span class="hljs-number">2018</span><span class="hljs-number">-10</span><span class="hljs-number">-21</span> <span class="hljs-number">23</span>:<span class="hljs-number">12</span>:<span class="hljs-number">39</span>,<span class="hljs-number">509</span> - INFO - ended request [user/create] [req: <span class="hljs-number">3</span>] - simple_example
</code></pre><h3 id="heading-logs-analysis">Logs analysis</h3>
<p>Searching files for log entries is a long and frustrating process. It usually requires us to process very large files and sometimes even to use regular expressions.</p>
<p>Nowadays, we can <strong>take advantage of fast search engines</strong> such as Elasticsearch and index our log entries in them. Using the ELK stack will also give you the ability to analyze your logs and answer questions such as:</p>
<ol>
<li>Is the error localized to one machine, or does it occur across the whole environment?</li>
<li>When did the error start? What is its occurrence rate?</li>
</ol>
<p>Being able to perform aggregations over log entries can provide hints about possible causes of failure that would never be noticed by reading a few log entries alone.</p>
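<p>One common way to make log entries easy to index is to emit them as structured JSON rather than free text. Here is a minimal, stdlib-only sketch of such a formatter (the <code>endpoint</code> and <code>request_id</code> field names are my own illustrative choices; a log shipper would forward these lines to a search engine such as Elasticsearch):</p>

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, so an indexer can
    query fields (level, logger, message, ...) without regular expressions."""

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Include any filtering metadata passed via `extra`.
        for field in ("endpoint", "request_id"):
            if hasattr(record, field):
                payload[field] = getattr(record, field)
        return json.dumps(payload)
```

<p>Attach it with <code>handler.setFormatter(JsonFormatter())</code> on whichever handler ships your logs; aggregations like “errors per endpoint per minute” then become simple field queries.</p>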
<p>In conclusion, do not take logging for granted. For each new feature you develop, think about your future self: which log entries will help you, and which will just distract you?</p>
<p>Remember: your logs will help you solve production issues only if you let them.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
