<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ ai-agent - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ ai-agent - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Thu, 07 May 2026 09:27:20 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/ai-agent/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Deploy Your Own 24x7 AI Agent using OpenClaw ]]>
                </title>
                <description>
<![CDATA[ OpenClaw is a self-hosted AI assistant designed to run under your control instead of inside a hosted SaaS platform. It can connect to messaging interfaces, local tools, and model providers while keeping execution and data closer to your own infrastructure. ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-deploy-your-own-24x7-ai-agent-using-openclaw/</link>
                <guid isPermaLink="false">69b841aa2ad6ae5184d57f6d</guid>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ ai-agent ]]>
                    </category>
                
                    <category>
<![CDATA[ ai-tools ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Manish Shivanandhan ]]>
                </dc:creator>
                <pubDate>Mon, 16 Mar 2026 17:45:14 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/40d08032-5a22-434d-b27c-5dcb6eb9bf85.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p><a href="https://openclaw.ai/">OpenClaw</a> is a self-hosted AI assistant designed to run under your control instead of inside a hosted SaaS platform.</p>
<p>It can connect to messaging interfaces, local tools, and model providers while keeping execution and data closer to your own infrastructure.</p>
<p>The project is actively developed, and the current ecosystem revolves around a CLI-driven setup flow, onboarding wizard, and multiple deployment paths ranging from local installs to containerised or cloud-hosted setups.</p>
<p>This article explains how to deploy your own instance of OpenClaw from a practical systems perspective. We'll look at how to deploy it on your local machine as well as on a PaaS provider like Sevalla.</p>
<p>The goal is not just to “make it run,” but to understand deployment choices, architecture implications, and operational tradeoffs so you can run a stable instance long term.</p>
<blockquote>
<p><em>Note: It is dangerous to give an AI system full control of your system. Make sure you</em> <a href="https://www.microsoft.com/en-us/security/blog/2026/02/19/running-openclaw-safely-identity-isolation-runtime-risk/"><em>understand the risks</em></a> <em>before running it on your machine.</em></p>
</blockquote>
<h3 id="heading-what-well-cover">What we'll cover:</h3>
<ol>
<li><p><a href="#heading-understanding-what-you-are-deploying">Understanding What You Are Deploying</a></p>
</li>
<li><p><a href="#heading-deploying-on-a-local-machine">Deploying on a Local Machine</a></p>
</li>
<li><p><a href="#heading-deploying-on-the-cloud-using-sevalla">Deploying on the Cloud using Sevalla</a></p>
</li>
<li><p><a href="#heading-interacting-with-the-agent">Interacting with the Agent</a></p>
</li>
<li><p><a href="#heading-security-and-operational-considerations">Security and Operational Considerations</a></p>
</li>
<li><p><a href="#heading-updating-and-maintaining-your-instance">Updating and Maintaining Your Instance</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-understanding-what-you-are-deploying">Understanding What You Are Deploying</h2>
<p>Before touching installation commands, it helps to understand the runtime model.</p>
<p>OpenClaw is essentially a local-first AI assistant that runs as a service and exposes interaction through chat interfaces and a <a href="https://docs.openclaw.ai/concepts/architecture">gateway architecture</a>.</p>
<p>The gateway acts as the operational core, handling communication between messaging platforms, models, and local capabilities.</p>
<p>In practical terms, deploying OpenClaw means deploying three layers.</p>
<p>The first layer is the CLI and runtime, which launches and manages the assistant.</p>
<p>The second layer is configuration and onboarding, where you select model providers and integrations.</p>
<p>The third layer is persistence and execution context, which determines whether OpenClaw runs on your laptop, a VPS, or inside a container.</p>
<p>Because OpenClaw runs with access to local resources, deployment decisions are not only about convenience but also about security boundaries. Treat it as an administrative system, not just a chatbot.</p>
<h2 id="heading-deploying-on-a-local-machine">Deploying on a Local Machine</h2>
<p>OpenClaw supports multiple deployment approaches, and the right one depends on your goals.</p>
<p>The simplest route is to install it directly on a local machine. This is ideal for experimentation, private workflows, or development because onboarding is fast and maintenance is minimal.</p>
<p>The installer script handles environment detection, dependency setup, and launching the onboarding wizard.</p>
<p>The fastest way to install OpenClaw is via the official installer script. The installer downloads the CLI, installs it globally through npm, and launches onboarding automatically.</p>
<pre><code class="language-plaintext">curl -fsSL https://openclaw.ai/install.cmd -o install.cmd &amp;&amp; install.cmd &amp;&amp; del install.cmd
</code></pre>
<p>This method abstracts away most environmental complexity and is recommended for first-time deployments.</p>
<p>If you already maintain a Node environment, you can install it directly using npm.</p>
<pre><code class="language-plaintext">npm i -g openclaw
</code></pre>
<p>The CLI is then used to run onboarding and optionally install a daemon for persistent background execution. This approach gives you more control over versioning and update cadence.</p>
<pre><code class="language-plaintext">openclaw onboard
</code></pre>
<p>Regardless of installation path, verify that the CLI is discoverable in your shell. Environment path issues are common when global npm packages are installed under custom Node managers.</p>
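<p>A quick sanity check after installation might look like this (a sketch, assuming the CLI follows the conventional <code>--version</code> flag; check <code>openclaw --help</code> if it differs):</p>
<pre><code class="language-plaintext"># Confirm the binary resolves on your PATH and where npm placed it
which openclaw
npm ls -g openclaw

# Print the installed version
openclaw --version
</code></pre>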
<h3 id="heading-the-onboarding-process">The Onboarding Process</h3>
<p>Once installed, OpenClaw relies heavily on onboarding to bootstrap configuration.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66c6d8f04fa7fe6a6e337edd/de6b00c1-cf26-4c2b-8f1c-00c39b975e7c.png" alt="Openclaw CLI" style="display:block;margin:0 auto" width="1000" height="472" loading="lazy">

<p>During onboarding you will select an AI provider, configure authentication, and choose how you want to interact with the assistant. This process establishes the core runtime state and generates local configuration files used by the gateway.</p>
<p>Onboarding also allows you to connect messaging channels such as Telegram or Discord. These integrations transform OpenClaw from a local CLI tool into an always-accessible assistant.</p>
<p>From a deployment perspective, this is the moment where availability requirements change. If you connect external chat platforms, your instance must remain online consistently.</p>
<p>You can skip certain onboarding steps and configure integrations later, but for production deployments it's better to complete the initial configuration so you can validate end-to-end functionality immediately.</p>
<p>Once you add an OpenAI or Anthropic (Claude) API key, you can choose to open the web UI.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66c6d8f04fa7fe6a6e337edd/d70fb5cf-2572-4181-80ea-5d47ac6981f6.png" alt="Openclaw Options" style="display:block;margin:0 auto" width="1000" height="445" loading="lazy">

<p>Go to <code>localhost:18789</code> to interact with OpenClaw.</p>
<h2 id="heading-deploying-on-the-cloud-using-sevalla">Deploying on the Cloud using&nbsp;Sevalla</h2>
<p>A second approach is to deploy to a VPS or cloud instance. This model gives you always-on availability and makes it possible to interact with OpenClaw from anywhere.</p>
<p>A third approach is containerised deployment using Docker or similar tooling. This provides reproducibility and cleaner dependency isolation.</p>
<p>Docker setups are particularly useful if you want predictable upgrades or easy migration between machines. OpenClaw’s repository includes scripts and compose configurations that support container execution workflows.</p>
<p>I have set up a custom <a href="https://hub.docker.com/r/manishmshiva/openclaw">Docker image</a> to load OpenClaw into a PaaS platform like Sevalla.</p>
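<p>If you'd rather run that image directly on your own host, a minimal invocation might look like this (a sketch: the port matches the local web UI default shown earlier, and the exact environment wiring depends on how the image is built):</p>
<pre><code class="language-plaintext">docker run -d \
  --name openclaw \
  -p 18789:18789 \
  -e ANTHROPIC_API_KEY=your-key-here \
  manishmshiva/openclaw
</code></pre>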
<p><a href="https://sevalla.com/">Sevalla</a> is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.</p>
<p><a href="https://app.sevalla.com/">Log in</a> to Sevalla and click “Create application”. Choose “Docker image” as the application source instead of a GitHub repository. Use <code>manishmshiva/openclaw</code> as the Docker image, and it will be pulled automatically from DockerHub.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66c6d8f04fa7fe6a6e337edd/a9eb4892-35c5-4ffb-a4d5-ffd59fe6752f.png" alt="Sevalla New Application" style="display:block;margin:0 auto" width="1000" height="716" loading="lazy">

<p>Click “Create application” and go to the environment variables. Add an environment variable named <code>ANTHROPIC_API_KEY</code>. Then go to “Deployments” and click “Deploy now”.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66c6d8f04fa7fe6a6e337edd/64040349-06e9-4e96-b7c5-3b0d3fcfc9f9.png" alt="OpenClaw Deployment" style="display:block;margin:0 auto" width="1000" height="147" loading="lazy">

<p>Once the deployment is successful, you can click “Visit app” and interact with the UI via the Sevalla-provided URL.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66c6d8f04fa7fe6a6e337edd/5a5d69aa-df82-4bca-971b-3e4b301dcf97.png" alt="OpenClaw Dashboard" style="display:block;margin:0 auto" width="1000" height="474" loading="lazy">

<h2 id="heading-interacting-with-the-agent">Interacting with the&nbsp;Agent</h2>
<p>There are many ways to interact with the agent once you set up OpenClaw. You can configure a <a href="https://medium.com/chatfuel-blog/how-to-create-your-own-telegram-bot-who-answer-its-users-without-coding-996de337f019">Telegram bot</a> to talk to your agent. The agent will then (try to) carry out tasks much like a human assistant would; its capabilities depend on how much access you grant it.</p>
<p>You can ask it to clean your inbox, watch a website for new articles, and perform many other tasks. Please note that providing OpenClaw access to your critical apps or files is not ideal or secure. This is still a system in its early stages, and the risk of it making a mistake or exposing your private information is high.</p>
<p>Here are some of the ways <a href="https://openclaw.ai/showcase">people are using OpenClaw</a>.</p>
<h2 id="heading-security-and-operational-considerations">Security and Operational Considerations</h2>
<p>Because OpenClaw can execute tasks and access system resources, deployment security is not optional. The safest baseline is to bind services to localhost and access them through secure VPN tunnels when remote control is required. <a href="https://surfshark.com/blog/best-vpn-for-privacy">Learn more</a> about VPNs here.</p>
<p>When deploying on a VPS, harden the host like any administrative service. Use non-root users, keep packages updated, restrict inbound ports, and monitor logs. If you're integrating messaging channels, treat tokens and API keys as sensitive secrets and avoid storing them in plaintext configuration where possible.</p>
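<p>As a concrete starting point, a first-pass hardening on a fresh VPS might look like this (a sketch, assuming Ubuntu with <code>ufw</code>; adapt to your distribution and firewall):</p>
<pre><code class="language-plaintext"># Create a non-root user to run the service
adduser openclaw
usermod -aG sudo openclaw

# Apply pending security updates
apt update &amp;&amp; apt upgrade -y

# Allow only SSH inbound; keep the gateway bound to localhost
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw enable
</code></pre>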
<p>Containerization helps isolate dependencies but doesn't eliminate risk. The container still executes code on your host, so network and volume permissions should be carefully scoped.</p>
<h2 id="heading-updating-and-maintaining-your-instance">Updating and Maintaining Your&nbsp;Instance</h2>
<p>OpenClaw evolves quickly, with frequent releases and feature changes. Keeping your instance updated is important not only for features but also for stability and compatibility with integrations.</p>
<p>For npm-based installations, updates are straightforward, but you should test upgrades in a staging environment if your assistant handles important workflows. For source-based deployments, pull changes and rebuild consistently rather than mixing old build artifacts with new code.</p>
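<p>For an npm-based install, the upgrade itself is a one-liner (back up your configuration first and restart the process afterwards):</p>
<pre><code class="language-plaintext"># Check the installed version, then move to the latest release
npm ls -g openclaw
npm install -g openclaw@latest
</code></pre>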
<p>Monitoring is another overlooked aspect. Even simple log inspection can reveal integration failures early. If your deployment is mission-critical, consider external uptime checks or process supervisors.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Deploying your own OpenClaw agent is ultimately about taking control of how your AI assistant works, where it runs, and how it fits into your daily workflows. While the setup process is straightforward, the real value comes from understanding the choices you make along the way, whether you run it locally for privacy, host it in the cloud for constant availability, or use containers for consistency and portability.</p>
<p>As the ecosystem around self-hosted AI continues to evolve, tools like OpenClaw make it possible to move beyond relying entirely on third-party platforms. Running your own agent gives you flexibility, ownership, and the freedom to shape the experience around your needs.</p>
<p>Start small, experiment safely, and gradually build confidence in how your assistant operates. Over time, what begins as a simple deployment can become a dependable, personalized system that works the way you want, under your control.</p>
<p><em>Hope you enjoyed this article. Learn more about me by</em> <a href="https://manishmshiva.me/"><em><strong>visiting my website</strong></em></a><em>.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build and Deploy a LogAnalyzer Agent using LangChain ]]>
                </title>
                <description>
                    <![CDATA[ Modern systems generate huge volumes of logs. Application logs, server logs, and infrastructure logs often contain the first clues when something breaks. The problem is not a lack of data, but the effort required to read and understand it. Engineers ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-build-and-deploy-a-loganalyzer-agent-using-langchain/</link>
                <guid isPermaLink="false">69837cd8f119ce39fb6041f1</guid>
                
                    <category>
                        <![CDATA[ ai-agent ]]>
                    </category>
                
                    <category>
                        <![CDATA[ llm ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ logging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Manish Shivanandhan ]]>
                </dc:creator>
                <pubDate>Wed, 04 Feb 2026 17:07:36 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770224778776/7d5c3a27-adc2-4cde-94d5-4ac7db892673.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Modern systems generate huge volumes of logs.</p>
<p>Application logs, server logs, and infrastructure logs often contain the first clues when something breaks. The problem is not a lack of data, but the effort required to read and understand it.</p>
<p>Engineers usually scroll through thousands of lines, search for error codes, and try to connect events across time. This is slow and error-prone, especially during incidents.</p>
<p>A LogAnalyzer Agent solves this problem by acting like a calm, experienced engineer who reads logs for you and explains what is going on.</p>
<p>In this article, you’ll learn how to build such an agent using <a target="_blank" href="https://fastapi.tiangolo.com/">FastAPI</a>, <a target="_blank" href="https://github.com/langchain-ai/langchain">LangChain</a>, and an OpenAI model.</p>
<p>We’ll walk through the backend, the log analysis logic, and a simple web UI that lets you upload a log file and get insights in seconds. We’ll also upload this app to Sevalla so that you can share your project with the world.</p>
<p>You just need some basic knowledge of Python and HTML/CSS/JavaScript to finish this tutorial.</p>
<p><a target="_blank" href="https://github.com/manishmshiva/loganalyzer">Here is the full code</a> for reference.</p>
<h2 id="heading-what-well-cover">What We’ll Cover</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-what-a-loganalyzer-agent-actually-does">What a LogAnalyzer Agent Actually Does</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-high-level-architecture">High-Level Architecture</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-designing-a-prompt-that-works">Designing a Prompt That Works</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-handling-large-log-files-safely">Handling Large Log Files Safely</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-analyzing-logs-with-langchain-and-openai">Analyzing Logs with LangChain and OpenAI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-building-the-fastapi-backend">Building the FastAPI Backend</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-a-simple-and-clean-web-ui">Creating a Simple and Clean Web UI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-running-the-application-locally">Running the Application Locally</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deployment-to-sevalla">Deployment to Sevalla</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-what-a-loganalyzer-agent-actually-does">What a LogAnalyzer Agent Actually Does</h2>
<p>A LogAnalyzer Agent takes raw log text as input and produces human-friendly analysis as output.</p>
<p>Instead of returning a list of errors, it explains the main failures, the likely root cause, and what to do next. This is important because logs are written for machines, not for people under pressure.</p>
<p>In this project, the agent behaves like a senior site reliability engineer. It reads logs in chunks, identifies patterns, and summarises them in simple language. The intelligence comes from a language model, while the reliability comes from careful handling of input and chunking.</p>
<h2 id="heading-high-level-architecture">High-Level Architecture</h2>
<p>The system has three main parts.</p>
<p>The first part is a web UI built with plain HTML, CSS, and JavaScript. This UI allows a user to upload a text file and start analysis. </p>
<p>The second part is a FastAPI backend that receives the file, validates it, and coordinates the analysis. </p>
<p>The third part is the analysis engine itself, which uses LangChain and an OpenAI model to interpret the logs.</p>
<p>The flow is simple: the browser sends a log file to the backend. The backend reads the file, splits it into manageable pieces, and sends each piece to the language model with a clear prompt. The responses are combined and sent back to the browser as a single analysis.</p>
<h2 id="heading-designing-a-prompt-that-works">Designing a Prompt That Works</h2>
<p>The heart of any AI agent is the prompt. A weak prompt gives vague answers, while a strong prompt produces useful insights.</p>
<p>In this project, the prompt tells the model to act like a senior site reliability engineer. It asks for four things: main errors, likely root cause, practical next steps, and suspicious patterns.</p>
<p>Here is the prompt template used in the backend:</p>
<pre><code class="lang-python">log_analysis_prompt_text = <span class="hljs-string">"""
You are a senior site reliability engineer.
Analyze the following application logs.
1. Identify the main errors or failures.
2. Explain the likely root cause in simple terms.
3. Suggest practical next steps to fix or investigate.
4. Mention any suspicious patterns or repeated issues.
Logs:
{log_data}
Respond in clear paragraphs. Avoid jargon where possible.
"""</span>
</code></pre>
<p>This prompt is simple but effective. It gives the model a role, a clear task, and constraints on the output style. Asking for clear paragraphs helps ensure the response is readable and useful for non-experts as well.</p>
<h2 id="heading-handling-large-log-files-safely">Handling Large Log Files Safely</h2>
<p>Language models have input limits. You can’t send a large log file in one request and expect good results. To handle this, the backend splits the log text into smaller chunks. Each chunk overlaps slightly with the next to preserve context.</p>
<p>We’ll use the <code>RecursiveCharacterTextSplitter</code> from LangChain for this purpose. It ensures that chunks aren’t cut in awkward places and that important lines aren’t lost.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">split_logs</span>(<span class="hljs-params">log_text: str</span>):</span>
    <span class="hljs-string">"""Split log text into manageable chunks"""</span>
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=<span class="hljs-number">2000</span>,
        chunk_overlap=<span class="hljs-number">200</span>
    )
    <span class="hljs-keyword">return</span> splitter.split_text(log_text)
</code></pre>
<p>This approach allows the agent to scale to large files while staying within model limits. Each chunk is analyzed independently, and the results are later combined.</p>
<h2 id="heading-analyzing-logs-with-langchain-and-openai">Analyzing Logs with LangChain and OpenAI</h2>
<p>Once the logs are split, each chunk is passed through the language model using the prompt template. The model used here is a lightweight but capable option, configured with a low temperature to keep responses focused and consistent.</p>
<pre><code class="lang-python">llm = ChatOpenAI(
    temperature=<span class="hljs-number">0.2</span>,
    model=<span class="hljs-string">"gpt-4o-mini"</span>
)
</code></pre>
<p>The analysis function loops over all chunks, formats the prompt, invokes the model, and stores the result.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">analyze_logs</span>(<span class="hljs-params">log_text: str</span>):</span>
    <span class="hljs-string">"""Analyze logs by splitting and processing each chunk"""</span>
    chunks = split_logs(log_text)
    combined_analysis = []

  <span class="hljs-keyword">for</span> chunk <span class="hljs-keyword">in</span> chunks:
          formatted_prompt = log_analysis_prompt_text.format(log_data=chunk)
          result = llm.invoke(formatted_prompt)
          combined_analysis.append(result.content)
      <span class="hljs-keyword">return</span> <span class="hljs-string">"\n\n"</span>.join(combined_analysis)
</code></pre>
<p>This design keeps the logic easy to understand. Each chunk produces a small analysis, and the final output is a stitched together explanation of the whole log file.</p>
<h2 id="heading-building-the-fastapi-backend">Building the FastAPI Backend</h2>
<p>FastAPI is a good choice for this project because it’s fast, simple, and easy to read. The backend exposes three endpoints. The root endpoint serves the HTML UI. The <code>/analyze</code> endpoint accepts a log file and returns the analysis. And the <code>/health</code> endpoint is used to check if the service is running and properly configured.</p>
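<p>The root and health endpoints are small enough to sketch in a few lines (a sketch, assuming the UI lives in an <code>index.html</code> file next to the app; the project's actual layout may differ):</p>
<pre><code class="lang-python">import os

from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/")
async def root():
    # Serve the single-file web UI
    return FileResponse("index.html")

@app.get("/health")
async def health():
    # Report liveness and whether the model key is configured
    return {"status": "ok", "openai_key_set": bool(os.getenv("OPENAI_API_KEY"))}
</code></pre>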
<p>The analyze endpoint performs several important checks. It ensures that the file is a text file, verifies that it isn’t empty, and handles errors gracefully. This prevents unnecessary calls to the model and improves user experience.</p>
<pre><code class="lang-python"><span class="hljs-meta">@app.post("/analyze")</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">analyze_log_file</span>(<span class="hljs-params">file: UploadFile = File(<span class="hljs-params">...</span>)</span>):</span>
    <span class="hljs-string">"""Analyze uploaded log file"""</span>
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> file.filename.endswith(<span class="hljs-string">".txt"</span>):
        <span class="hljs-keyword">return</span> JSONResponse(
            status_code=<span class="hljs-number">400</span>,
            content={<span class="hljs-string">"error"</span>: <span class="hljs-string">"Only .txt log files are supported"</span>}
        )

     <span class="hljs-keyword">try</span>:
        content = <span class="hljs-keyword">await</span> file.read()
        log_text = content.decode(<span class="hljs-string">"utf-8"</span>, errors=<span class="hljs-string">"ignore"</span>)
        <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> log_text.strip():
            <span class="hljs-keyword">return</span> JSONResponse(
                status_code=<span class="hljs-number">400</span>,
                content={<span class="hljs-string">"error"</span>: <span class="hljs-string">"Log file is empty"</span>}
            )
        insights = analyze_logs(log_text)
        <span class="hljs-keyword">return</span> {<span class="hljs-string">"analysis"</span>: insights}
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">return</span> JSONResponse(
            status_code=<span class="hljs-number">500</span>,
            content={<span class="hljs-string">"error"</span>: <span class="hljs-string">f"Error analyzing logs: <span class="hljs-subst">{str(e)}</span>"</span>}
        )
</code></pre>
<p>This careful handling makes the agent more robust and production-friendly.</p>
<h2 id="heading-creating-a-simple-and-clean-web-ui">Creating a Simple and Clean Web UI</h2>
<p>A good agent isn’t useful if people can’t interact with it easily. The frontend in this project is a single HTML file with embedded CSS and JavaScript. It focuses on clarity and speed rather than complexity.</p>
<p>The UI allows users to choose a log file, see the file name, click an analyze button, and view results in a formatted area. A loading spinner provides feedback while the analysis is running. Errors are shown clearly, without technical noise.</p>
<p>The upload and analysis logic is handled by a small JavaScript function that sends the file to the backend using a fetch request.</p>
<pre><code class="lang-python"><span class="hljs-keyword">async</span> function uploadLog() {
    const fileInput = document.getElementById(<span class="hljs-string">"logFile"</span>);
    const file = fileInput.files[<span class="hljs-number">0</span>];

<span class="hljs-keyword">if</span> (!file) {
        alert(<span class="hljs-string">"Please select a log file first"</span>);
        <span class="hljs-keyword">return</span>;
    }
    const formData = new FormData();
    formData.append(<span class="hljs-string">"file"</span>, file);
    const response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"/analyze"</span>, {
        method: <span class="hljs-string">"POST"</span>,
        body: formData
    });
    const data = <span class="hljs-keyword">await</span> response.json();
    document.getElementById(<span class="hljs-string">"result"</span>).textContent = data.analysis;
}
</code></pre>
<p>This minimal approach keeps the frontend easy to maintain and adapt.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779422013/7bd95a67-66fb-44ee-a2d7-413ebb076676.png" alt="Log Analyzer UI" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-running-the-application-locally">Running the Application Locally</h2>
<p>To run this project, you need Python, a virtual environment, and an OpenAI API key. The API key is loaded from a <code>.env</code> file to keep secrets out of code. Once dependencies are installed, you can start the server using Uvicorn.</p>
<pre><code class="lang-python"><span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    <span class="hljs-keyword">import</span> uvicorn
    port = int(os.getenv(<span class="hljs-string">"PORT"</span>, <span class="hljs-number">8000</span>))
    uvicorn.run(app, host=<span class="hljs-string">"0.0.0.0"</span>, port=port)
</code></pre>
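<p>The steps leading up to that might look like this (a sketch: the package list is inferred from the imports used above, and the entry-point filename is an assumption):</p>
<pre><code class="lang-bash"># Create and activate a virtual environment
python -m venv venv &amp;&amp; source venv/bin/activate

# Install dependencies (python-multipart is needed for file uploads)
pip install fastapi uvicorn langchain langchain-openai python-multipart python-dotenv

# Keep the API key out of source control
echo "OPENAI_API_KEY=your-key-here" &gt; .env

python main.py
</code></pre>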
<p>After starting the server, you can open the browser, upload a log file, and see the agent in action.</p>
<h2 id="heading-deployment-to-sevalla">Deployment to Sevalla</h2>
<p>You can choose any cloud provider, like AWS, DigitalOcean, or others, to host your service. I’ll be using Sevalla for this example.</p>
<p><a target="_blank" href="https://sevalla.com/">Sevalla</a> is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.</p>
<p>Every platform will charge you for creating a cloud resource. Sevalla comes with a $20 credit for us to use, so we won’t incur any costs for this example.</p>
<p>Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new change to the repository is automatically deployed.</p>
<p><a target="_blank" href="https://app.sevalla.com/login">Log in</a> to Sevalla and click on Applications → Create new application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779432416/f9ae505d-505e-4378-9bdc-087cfa0cde78.png" alt="Create Application" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You can see the option to link your GitHub repository to create a new application. Use the default settings. Then click <strong>Create application</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779452276/793b0eab-f832-4e8f-9534-c9b79adca8e1.png" alt="Application Settings" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now we have to add our OpenAI API key to the environment variables. Click on the <strong>Environment variables</strong> section once the application is created, and save the <code>OPENAI_API_KEY</code> value as an environment variable.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779460335/d914c14e-96e7-4bb0-83aa-da3e5cdb0c22.png" alt="Environment Variables" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now we’re ready to deploy our application. Click on <strong>Deployments</strong> and click <strong>Deploy now</strong>. It will take 2–3 minutes for the deployment to complete.</p>
<p>Once done, click on <strong>Visit app</strong>. You’ll see the application served via a URL ending with <code>sevalla.app</code>. This is your new root URL. You can replace <code>localhost:8000</code> with this URL and start using it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769779473800/7f4b450f-95cb-4e45-8cb7-1fcedffa54ef.png" alt="Final UI" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Congrats! Your log analyzer service is now live. You can find a sample log in the GitHub repository which you can use to test the service.</p>
<p>You can extend this by adding other capabilities and pushing your code to GitHub. Sevalla will automatically deploy your application to production.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building a LogAnalyzer Agent is a practical way to apply language models to real engineering problems. Logs are everywhere, and understanding them quickly can save hours during incidents. By combining FastAPI, LangChain, and a clear prompt, you can turn raw text into actionable insight.</p>
<p>The key ideas are simple: split large inputs, guide the model with a strong role and task, and present results in a clean interface. With these principles, you can adapt this agent to many other analysis tasks beyond logs.</p>
<p><em>Hope you enjoyed this article. Learn more about me by</em> <a target="_blank" href="https://manishshivanandhan.com/"><strong><em>visiting my website</em></strong></a><em>.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build Autonomous Agents using Prompt Chaining with AI Primitives (No Frameworks) ]]>
                </title>
                <description>
                    <![CDATA[ Autonomous agents might sound complex, but they don’t have to be. These are AI systems that can make decisions and take actions on their own to achieve a goal – usually by using LLMs, various tools, and memory to reason through a task. You can build ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-autonomous-agents-using-prompt-chaining-with-ai-primitives/</link>
                <guid isPermaLink="false">680662c3d6b81962a8fc5351</guid>
                
                    <category>
                        <![CDATA[ llm ]]>
                    </category>
                
                    <category>
                        <![CDATA[ ai-agent ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Maham Codes ]]>
                </dc:creator>
                <pubDate>Mon, 21 Apr 2025 15:22:42 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745248868960/12efd5ab-3d9b-4c93-979f-45bde796639b.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Autonomous agents might sound complex, but they don’t have to be. These are AI systems that can make decisions and take actions on their own to achieve a goal – usually by using LLMs, various tools, and memory to reason through a task.</p>
<p>You can build powerful agentic systems without heavyweight frameworks or orchestration engines. One of the simplest and most effective ways to do that is to use Langbase agentic architectures, which are built from AI primitives and don’t require a framework to ship scalable agentic systems.</p>
<p>In this article, we’ll dive into one of Langbase's agentic architectures: prompt chaining. We’ll look at why it’s useful and how to implement it by building a prompt chaining agent.</p>
<h3 id="heading-table-of-contents">Table of Contents</h3>
<ol>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-ai-primitives-agentic-architecture">AI primitives (agentic architecture)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-prompt-chaining">What is prompt chaining?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-prompt-chaining-architecture">Prompt chaining architecture</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-langbase-sdk">Langbase SDK</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-building-a-prompt-chaining-agent-using-httpslangbasecomlangbase-pipes">Building a prompt chaining agent using Langbase Pipes</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-step-1-setup-your-project">Step 1: Setup your project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-get-langbase-api-key">Step 2: Get Langbase API Key</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-3-add-llm-api-keys">Step 3: Add LLM API keys</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-4-add-logic-in-prompt-chainingts-file">Step 4: Add logic in prompt-chaining.ts file</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-5-run-the-file">Step 5: Run the file</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-the-result">The result</a></p>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we begin creating a prompt chaining agent, you’ll need to have the following setup and tools ready to go.</p>
<p>In this tutorial, I’ll be using the following tech stack:</p>
<ul>
<li><p><a target="_blank" href="http://langbase.com/">Langbase</a> – the platform to build and deploy your serverless AI agents.</p>
</li>
<li><p><a target="_blank" href="https://langbase.com/docs/sdk">Langbase SDK</a> – a TypeScript AI SDK, designed to work with JavaScript, TypeScript, Node.js, Next.js, React, and the like.</p>
</li>
<li><p><a target="_blank" href="https://openai.com/">OpenAI</a> – to get the LLM key for the preferred model.</p>
</li>
</ul>
<p>You’ll also need to:</p>
<ul>
<li><p>Sign up on Langbase to get access to the API key.</p>
</li>
<li><p>Sign up on OpenAI to generate the LLM key for the model you want to use (for this demo, I’ll be using the <code>openai:gpt-4o-mini</code> model). You can generate the key <a target="_blank" href="https://platform.openai.com/api-keys">here</a>.</p>
</li>
</ul>
<h2 id="heading-ai-primitives-agentic-architecture">AI Primitives (Agentic Architecture)</h2>
<p>An AI primitive level approach means building AI systems using the most basic building blocks – without relying on heavy abstractions, orchestration engines, or full-blown frameworks.</p>
<p>Langbase Pipe and Memory agents serve as these building blocks.</p>
<p><a target="_blank" href="https://langbase.com/docs/pipe">Pipe agents</a> on Langbase are different from other agents. They are serverless AI agents with agentic tools that can work with any language or framework. Pipe agents are easily deployable, and with just one API they let you connect 250+ LLMs to any data to build any developer API workflow.</p>
<p><a target="_blank" href="https://langbase.com/docs/memory">Langbase memory agents</a> (long-term memory solution) are designed to acquire, process, retain, and retrieve information seamlessly. They dynamically attach private data to any LLM, enabling context-aware responses in real time and reducing hallucinations. Memory, when connected to a pipe agent, becomes a memory agent.</p>
<p>With these building blocks (AI primitives), you can build entire agentic workflows. Langbase's agentic architectures serve as boilerplates for building, deploying, and scaling autonomous agents.</p>
<p>Let’s look at one of the agentic architectures: prompt chaining.</p>
<h2 id="heading-what-is-prompt-chaining">What is Prompt Chaining?</h2>
<p>Prompt chaining is an agent architecture where a task is broken down into a sequence of prompts. Each step passes its output to the next, enabling the LLM to handle more complex workflows with higher accuracy.</p>
<p>This is particularly useful for structured tasks like:</p>
<ul>
<li><p>Document summarization and analysis</p>
</li>
<li><p>Multi-step content generation</p>
</li>
<li><p>Data transformation and cleanup</p>
</li>
<li><p>Content validation and refinement</p>
</li>
</ul>
<p>Rather than relying on a single prompt to do everything, you split the work into focused steps. This makes it easier to debug, improves output quality, and introduces natural "checkpoints" in your AI workflow.</p>
<h2 id="heading-prompt-chaining-architecture">Prompt Chaining Architecture</h2>
<p>Here’s a reference architecture explaining the workflow:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXchmBDXvU8DXnQu7EjqKoSUTdxQ__KsTZemZ9yaTGpeCAMUc1RX_Swby9NOtxXwONFdKGPrjFcjVZhQmQoKe1eu2nceFWGLaPA8bpu-JYB7rh4ChJmExLRRWJzjB4686HjUsP_t?key=l4b_IFG3ufUXGX7WLcs4Dknq" alt="AD_4nXchmBDXvU8DXnQu7EjqKoSUTdxQ__KsTZemZ9yaTGpeCAMUc1RX_Swby9NOtxXwONFdKGPrjFcjVZhQmQoKe1eu2nceFWGLaPA8bpu-JYB7rh4ChJmExLRRWJzjB4686HjUsP_t?key=l4b_IFG3ufUXGX7WLcs4Dknq" width="600" height="400" loading="lazy"></p>
<p>This diagram is a visual reference for how prompt chaining can be used to build a lightweight agentic system using just LLM calls and conditional logic – without any heavyweight frameworks.</p>
<p>Here’s a breakdown of what’s happening in the flow (a minimal code sketch follows the breakdown):</p>
<ol>
<li><strong>In → LLM Call</strong></li>
</ol>
<ul>
<li><p>Takes the initial input and runs the first LLM call.</p>
</li>
<li><p>Produces Output 1.</p>
</li>
</ul>
<ol start="2">
<li><strong>Gate</strong></li>
</ol>
<ul>
<li><p>Evaluates Output 1 to decide the next step.</p>
</li>
<li><p>Acts as a conditional checkpoint (for example, success/failure, intent validation, confidence threshold).</p>
</li>
</ul>
<ol start="3">
<li><strong>If Gate passes:</strong></li>
</ol>
<ul>
<li><p>Proceeds to LLM Call 2 with Output 1 as input.</p>
</li>
<li><p>LLM Call 2 produces Output 2.</p>
</li>
<li><p>Output 2 goes into LLM Call 3, which generates the final result.</p>
</li>
<li><p>Final output flows into the Out.</p>
</li>
</ul>
<ol start="4">
<li><strong>If Gate fails:</strong></li>
</ol>
<ul>
<li><p>The flow terminates early at Exit.</p>
</li>
<li><p>Skips further LLM calls, saving compute and avoiding invalid outputs.</p>
</li>
</ul>
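<p>Stripped of platform specifics, the pattern is just sequential calls with a conditional check in between. A minimal TypeScript sketch (the <code>callLLM</code> helper is hypothetical, standing in for whatever client you use):</p>
<pre><code class="lang-typescript">// Hypothetical helper: one prompt in, one completion out.
declare function callLLM(prompt: string): Promise&lt;string&gt;;

async function chain(input: string): Promise&lt;string | null&gt; {
    const output1 = await callLLM(`Step 1: ${input}`);

    // Gate: validate the intermediate output before spending more tokens.
    if (output1.trim().split(/\s+/).length &lt; 10) {
        return null; // exit early on a failed check
    }

    const output2 = await callLLM(`Step 2: ${output1}`);
    return callLLM(`Step 3: ${output2}`);
}
</code></pre>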
<h2 id="heading-langbase-sdk">Langbase SDK</h2>
<p>The Langbase SDK makes it easy to build powerful AI agents using TypeScript. It gives you everything you need to work with any LLM, connect your own embedding models, manage document memory, and build AI agents that can reason and respond.</p>
<p>The SDK <a target="_blank" href="http://langbase.com">i</a>s designed to work with Node.js, Next.js, React, or any modern JavaScript stack. You can use it to upload documents, create semantic memory, and run AI workflows (called Pipes agents) with just a few lines of code.</p>
<p>Langbase is an API-first AI platform, and its TypeScript SDK smooths out the experience – making it easy to get started without dealing with infrastructure. Just drop in your API key, write your logic, and you're good to go.</p>
<p>Now that you know about Langbase SDK, let’s start building the prompt chaining agent.</p>
<h2 id="heading-building-a-prompt-chaining-agent-using-langbase-pipes">Building a Prompt Chaining Agent using Langbase Pipes</h2>
<p>Let’s walk through a real prompt chaining agentic system built using Langbase Pipe agents (serverless AI agents with unified APIs for every LLM). For this, we’ll be setting up a basic Node.js project.</p>
<p>We’ll be implementing a sequential product marketing content pipeline that transforms a raw product description into polished marketing copy through three stages (that is, the creation of three Pipe agents):</p>
<h3 id="heading-first-stage-summary-agent">First Stage (Summary Agent):</h3>
<ul>
<li><p>Takes a raw product description</p>
</li>
<li><p>Condenses it into two concise sentences</p>
</li>
<li><p>Has a quality gate that checks if the summary is detailed enough (at least 10 words)</p>
</li>
</ul>
<h3 id="heading-second-stage-features-agent">Second Stage (Features Agent):</h3>
<ul>
<li><p>Takes the summary from stage 1</p>
</li>
<li><p>Extracts and formats key product features as bullet points</p>
</li>
</ul>
<h3 id="heading-final-stage-marketing-copy-agent">Final Stage (Marketing Copy Agent):</h3>
<ul>
<li><p>Takes the bullet points from stage 2</p>
</li>
<li><p>Generates refined marketing copy for the product</p>
</li>
</ul>
<p>All stages use the OpenAI GPT-4o-mini model through the Langbase SDK. You can also use a different LLM for each stage/Pipe agent.</p>
<p>What makes this interesting is its pipeline approach. Each stage builds upon the output of the previous stage, with a quality check after the summary stage to ensure the pipeline maintains high standards.</p>
<p>Let’s begin with the creation of this prompt chaining agentic system.</p>
<h3 id="heading-step-1-setup-your-project">Step 1: Setup Your Project</h3>
<p>I’ll be building a basic Node.js app in TypeScript that uses the Langbase SDK to create a scalable prompt chaining agentic system. It will work without any framework, following an AI primitive level approach.</p>
<p>To get started with that, create a new directory for your project and navigate to it:</p>
<pre><code class="lang-bash">mkdir agentic-architecture &amp;&amp; <span class="hljs-built_in">cd</span> agentic-architecture
</code></pre>
<p>Then initialize a Node.js project and create a TypeScript file by running this command in your terminal:</p>
<pre><code class="lang-bash">npm init -y &amp;&amp; touch prompt-chaining.ts
</code></pre>
<p>The <code>prompt-chaining.ts</code> file will contain the code for creating and running all three agents.</p>
<p>After this, we will be using the Langbase SDK to create the agents and <code>dotenv</code> to manage environment variables. So, let's install these dependencies.</p>
<pre><code class="lang-bash">npm i langbase dotenv
</code></pre>
<h3 id="heading-step-2-get-langbase-api-key">Step 2: Get Langbase API Key</h3>
<p>Every request you send to Langbase needs an API key. You can generate API keys from the <a target="_blank" href="https://studio.langbase.com/">Langbase studio</a> by following these steps:</p>
<ol>
<li><p>Switch to your user or org account.</p>
</li>
<li><p>From the sidebar, click on the <code>Settings</code> menu.</p>
</li>
<li><p>In the developer settings section, click on the <code>Langbase API keys</code> link.</p>
</li>
<li><p>From here you can create a new API key or manage existing ones.</p>
</li>
</ol>
<p>For more details, you can visit the Langbase API keys documentation.</p>
<p>After generating the API key, create an <code>.env</code> file in the root of your project and add your Langbase API key in it:</p>
<pre><code class="lang-bash">LANGBASE_API_KEY=xxxxxxxxx
</code></pre>
<p>Replace <code>xxxxxxxxx</code> with your Langbase API key.</p>
<h3 id="heading-step-3-add-llm-api-keys">Step 3: Add LLM API keys</h3>
<p>Once you have the Langbase API key, you’ll also need an LLM provider key to run the agents. If you have set up LLM API keys in your profile, the agent pipes will automatically use them. Otherwise, navigate to the LLM API keys page and add keys for providers like OpenAI, Anthropic, and so on.</p>
<p>Follow these steps to add the LLM keys:</p>
<ol>
<li><p>In <a target="_blank" href="https://studio.langbase.com/">Langbase studio</a>, switch to your user or org account.</p>
</li>
<li><p>From the sidebar, click on the <code>Settings</code> menu.</p>
</li>
<li><p>In the developer settings section, click on the <code>LLM API keys</code> link.</p>
</li>
<li><p>From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, and so on.</p>
</li>
</ol>
<h3 id="heading-step-4-add-logic-in-prompt-chainingts-file">Step 4: Add logic in <code>prompt-chaining.ts</code> file</h3>
<p>In the <code>prompt-chaining.ts</code> file you created in Step 1, add the following code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;
<span class="hljs-keyword">import</span> { Langbase } <span class="hljs-keyword">from</span> <span class="hljs-string">'langbase'</span>;


dotenv.config();


<span class="hljs-keyword">const</span> langbase = <span class="hljs-keyword">new</span> Langbase({
   apiKey: process.env.LANGBASE_API_KEY!
});


<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params">inputText: <span class="hljs-built_in">string</span></span>) </span>{
   <span class="hljs-comment">// Prompt chaining steps</span>
   <span class="hljs-keyword">const</span> steps = [
       {
           name: <span class="hljs-string">`summary-agent-<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
           model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
           description:
               <span class="hljs-string">'summarize the product description into two concise sentences'</span>,
           prompt: <span class="hljs-string">`Please summarize the following product description into two concise
           sentences:\n`</span>
       },
       {
           name: <span class="hljs-string">`features-agent-<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
           model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
           description: <span class="hljs-string">'extract key product features as bullet points'</span>,
           prompt: <span class="hljs-string">`Based on the following summary, list the key product features as
           bullet points:\n`</span>
       },
       {
           name: <span class="hljs-string">`marketing-copy-agent-<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
           model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
           description:
               <span class="hljs-string">'generate a polished marketing copy using the bullet points'</span>,
           prompt: <span class="hljs-string">`Using the following bullet points of product features, generate a
           compelling and refined marketing copy for the product, be precise:\n`</span>
       }
   ];


   <span class="hljs-comment">//  Create the pipe agents</span>
   <span class="hljs-keyword">await</span> <span class="hljs-built_in">Promise</span>.all(
       steps.map(<span class="hljs-function"><span class="hljs-params">step</span> =&gt;</span>
           langbase.pipes.create({
               name: step.name,
               model: step.model,
               messages: [
                   {
                       role: <span class="hljs-string">'system'</span>,
                       content: <span class="hljs-string">`You are a helpful assistant that can <span class="hljs-subst">${step.description}</span>.`</span>
                   }
               ]
           })
       )
   );


   <span class="hljs-comment">// Initialize the data with the raw input.</span>
   <span class="hljs-keyword">let</span> data = inputText;


   <span class="hljs-keyword">try</span> {
       <span class="hljs-comment">// Process each step in the workflow sequentially.</span>
       <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> step <span class="hljs-keyword">of</span> steps) {
           <span class="hljs-comment">// Call the LLM for the current step.</span>
           <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> langbase.pipes.run({
               stream: <span class="hljs-literal">false</span>,
               name: step.name,
               messages: [{ role: <span class="hljs-string">'user'</span>, content: <span class="hljs-string">`<span class="hljs-subst">${step.prompt}</span> <span class="hljs-subst">${data}</span>`</span> }]
           });


           data = response.completion;


           <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Step: <span class="hljs-subst">${step.name}</span> \n\n Response: <span class="hljs-subst">${data}</span>`</span>);


           <span class="hljs-comment">// Gate on summary agent output to ensure it is not too brief.</span>
           <span class="hljs-comment">// If summary is less than 10 words, throw an error to stop the workflow.</span>
           <span class="hljs-keyword">if</span> (step.name === <span class="hljs-string">'summary-agent'</span> &amp;&amp; data.split(<span class="hljs-string">' '</span>).length &lt; <span class="hljs-number">10</span>) {
               <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(
                   <span class="hljs-string">'Gate triggered for summary agent. Summary is too brief. Exiting workflow.'</span>
               );
               <span class="hljs-keyword">return</span>;
           }
       }
   } <span class="hljs-keyword">catch</span> (error) {
       <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error in main workflow:'</span>, error);
   }


   <span class="hljs-comment">// The final refined marketing copy</span>
   <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Final Refined Product Marketing Copy:'</span>, data);
}


<span class="hljs-keyword">const</span> inputText = <span class="hljs-string">`Our new smartwatch is a versatile device featuring a high-resolution display,
long-lasting battery life, fitness tracking, and smartphone connectivity. It's designed for
everyday use and is water-resistant. With cutting-edge sensors and a sleek design, it's
perfect for tech-savvy individuals.`</span>;


main(inputText);
</code></pre>
<p>Here’s a breakdown of the above code:</p>
<p>Setup and initialization:</p>
<ul>
<li><p><code>dotenv</code> loads <code>env</code> variables from the <code>.env</code> file for secure API key access.</p>
</li>
<li><p>Langbase is imported from the SDK to interact with the API.</p>
</li>
<li><p>A Langbase client instance is created using your API key.</p>
</li>
</ul>
<p>Define the AI steps (prompt chain):</p>
<ul>
<li><p>Three AI agents (steps) are defined for a pipeline:</p>
<ol>
<li><p><strong>Summarization Agent</strong>: Summarizes the input product description into 2 sentences.</p>
</li>
<li><p><strong>Feature Extraction Agent</strong>: Extracts key features from the summary as bullet points.</p>
</li>
<li><p><strong>Marketing Copy Agent</strong>: Turns bullet points into polished marketing copy.</p>
</li>
</ol>
</li>
<li><p>Each agent uses <code>openai:gpt-4o-mini</code> as the LLM.</p>
</li>
</ul>
<p>Create Langbase Pipes (agents):</p>
<ul>
<li><p>Langbase pipes are created for each step using <code>langbase.pipes.create(...)</code>.</p>
</li>
<li><p>Each pipe has a unique name (timestamped) and a system message guiding its purpose.</p>
</li>
</ul>
<p>Run the workflow (sequential processing):</p>
<ul>
<li><p>Input text flows through each step one by one:</p>
<ul>
<li><p>The output of one step becomes the input for the next.</p>
</li>
<li><p>Pipes are run using <code>langbase.pipes.run(...)</code>.</p>
</li>
</ul>
</li>
<li><p>Intermediate outputs are logged after each step.</p>
</li>
</ul>
<p>Validation check (gatekeeping):</p>
<ul>
<li>If the summary output is too short (less than 10 words), the workflow stops with an error.</li>
</ul>
<p>Final Output:</p>
<ul>
<li>After all steps, the final result is a refined marketing copy printed to the console.</li>
</ul>
<p>For this article, we’re passing a demo smartwatch product description via the <code>inputText</code> variable.</p>
<h3 id="heading-step-5-run-the-file">Step 5: Run the file</h3>
<p>To run the <code>prompt-chaining.ts</code> file to view the results, you need to:</p>
<ul>
<li><p>Add TypeScript as a dependency</p>
</li>
<li><p>Add a script to run TypeScript files</p>
</li>
<li><p>Add a TypeScript configuration file</p>
</li>
</ul>
<p>To set these up, we’ll use <code>pnpm</code> (if you don’t have it, install it globally with <code>npm i -g pnpm</code>). First, install the project’s existing dependencies:</p>
<pre><code class="lang-bash">pnpm install
</code></pre>
<p>Then, in your terminal, run this command to add TypeScript, <code>ts-node</code>, and the Node type definitions as dev dependencies:</p>
<pre><code class="lang-bash">pnpm add -D typescript ts-node @types/node
</code></pre>
<p>After that, create a TypeScript configuration file <code>tsconfig.json</code>:</p>
<pre><code class="lang-bash">pnpm <span class="hljs-built_in">exec</span> tsc --init
</code></pre>
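<p>The generated <code>tsconfig.json</code> works out of the box for this project. For reference, a minimal configuration that plays well with <code>ts-node</code> might look like this (an illustrative sketch, not the full generated file):</p>
<pre><code class="lang-json">{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true
  }
}
</code></pre>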
<p>And update the <code>package.json</code> to add the relevant script. This is what your <code>package.json</code> should look like after updating:</p>
<pre><code class="lang-json">{
 <span class="hljs-attr">"name"</span>: <span class="hljs-string">"agentic-architectures"</span>,
 <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0.0"</span>,
 <span class="hljs-attr">"main"</span>: <span class="hljs-string">"index.js"</span>,
 <span class="hljs-attr">"scripts"</span>: {
   <span class="hljs-attr">"test"</span>: <span class="hljs-string">"echo \"Error: no test specified\" &amp;&amp; exit 1"</span>,
   <span class="hljs-attr">"prompt-chaining"</span>: <span class="hljs-string">"ts-node prompt-chaining.ts"</span>
 },
 <span class="hljs-attr">"keywords"</span>: [],
 <span class="hljs-attr">"author"</span>: <span class="hljs-string">""</span>,
 <span class="hljs-attr">"license"</span>: <span class="hljs-string">"ISC"</span>,
 <span class="hljs-attr">"description"</span>: <span class="hljs-string">""</span>,
 <span class="hljs-attr">"dependencies"</span>: {
   <span class="hljs-attr">"dotenv"</span>: <span class="hljs-string">"^16.5.0"</span>,
   <span class="hljs-attr">"langbase"</span>: <span class="hljs-string">"^1.1.55"</span>
 },
 <span class="hljs-attr">"devDependencies"</span>: {
   <span class="hljs-attr">"@types/node"</span>: <span class="hljs-string">"^22.14.1"</span>,
   <span class="hljs-attr">"ts-node"</span>: <span class="hljs-string">"^10.9.2"</span>,
   <span class="hljs-attr">"typescript"</span>: <span class="hljs-string">"^5.8.3"</span>
 }
}
</code></pre>
<p>Now let’s run the project with <code>pnpm run prompt-chaining</code>.</p>
<h2 id="heading-the-result">The Result</h2>
<p>After running the project, you’ll see the result of the example smartwatch product description in your console as follows:</p>
<pre><code class="lang-bash">Step: summarize-description
Response: This smartwatch combines fitness tracking and smartphone connectivity with a high-resolution display and long-lasting battery. Designed for everyday use with a sleek, water-resistant build, it's ideal for tech enthusiasts.

Step: extract-features
Response: Okay, here are the key product features extracted from the summary:

Fitness Tracking
Smartphone Connectivity
High-Resolution Display
Long-Lasting Battery
Sleek Design
Water-Resistant Build
Designed for Everyday Use

Step: refine-marketing-copy
Response: ## Elevate Your Everyday with Seamless Connectivity and Unrivaled Performance.

Experience the perfect fusion of style and functionality with our revolutionary device, designed to seamlessly integrate into your active lifestyle. Stay motivated and informed with comprehensive Fitness Tracking, while effortlessly staying connected via Smartphone Connectivity.

Immerse yourself in vibrant clarity with the stunning High-Resolution Display, and power through your day without interruption thanks to the Long-Lasting Battery. Encased in a Sleek Design, this device is as stylish as it is practical.

Built to withstand the rigors of daily life, the Water-Resistant Build ensures worry-free wear, rain or shine. Engineered for comfort and performance, this device is Designed for Everyday Use, empowering you to live your best life, effortlessly.
</code></pre>
<p>This is how you can build a prompt chaining agentic system with AI primitives (no framework) using the Langbase SDK and Langbase agentic architectures.</p>
<p>Thank you for reading!</p>
<p>Connect with me by 🙌:</p>
<ul>
<li><p>Subscribing to my <a target="_blank" href="https://www.youtube.com/@AIwithMahamCodes">YouTube</a> channel if you’d like to learn about AI and agents.</p>
</li>
<li><p>Subscribing to my free newsletter <a target="_blank" href="https://mahamcodes.substack.com/">“The Agentic Engineer”</a>, where I share the latest AI and agent news, trends, jobs, and much more.</p>
</li>
<li><p>Following me on <a target="_blank" href="https://x.com/MahamDev">X (Twitter)</a>.</p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to build a Serverless AI Agent to Generate Cold emails for Your Dream Job ]]>
                </title>
                <description>
                    <![CDATA[ Cold emails can make a huge difference in your job search, but writing the perfect one takes time. You need to match your skills with the job description, find the right tone, and do it over and over again—it’s exhausting. This guide will walk you th... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-a-serverless-ai-agent-for-generating-cold-emails/</link>
                <guid isPermaLink="false">67b5df98970327b4e047537c</guid>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ llm ]]>
                    </category>
                
                    <category>
                        <![CDATA[ ai-agent ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Maham Codes ]]>
                </dc:creator>
                <pubDate>Wed, 19 Feb 2025 13:41:44 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739971173263/869c0c1c-9b45-48af-a1d1-0982436b8630.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Cold emails can make a huge difference in your job search, but writing the perfect one takes time. You need to match your skills with the job description, find the right tone, and do it over and over again—it’s exhausting.</p>
<p>This guide will walk you through building a cold email generator agent using serverless memory agents by Langbase to automate this entire process. We’ll integrate the memory agent into a Node.js project, enabling it to read your résumé, analyze the job description, and generate a personalized, high-impact cold email in seconds.</p>
<h3 id="heading-heres-what-ill-cover">Here’s what I’ll cover:</h3>
<ol>
<li><p><a class="post-section-overview" href="#heading-large-language-models-llms-are-stateless-by-nature">Large language models (LLMs) are stateless by nature</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-are-memory-agents">What are Memory agents?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-reference-architecture">Reference Architecture</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-1-create-a-directory-and-initialize-npm">Step 1: Create a directory and initialize npm</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-create-a-serverless-pipe-agent">Step 2: Create a serverless Pipe agent</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-3-add-a-env-file">Step 3: Add a .env file</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-4-create-a-serverless-memory-agent">Step 4: Create a serverless memory agent</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-5-add-documents-to-the-memory-agent">Step 5: Add documents to the memory agent</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-6-generate-memory-embeddings">Step 6: Generate memory embeddings</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-understanding-memory-embeddings">Understanding memory embeddings</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-generate-embeddings">How to generate embeddings?</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-step-7-integrate-memory-in-pipe-agent">Step 7: Integrate memory in Pipe agent</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-8-integrate-the-memory-agent-in-nodejs">Step 8: Integrate the memory agent in Node.js</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-9-start-the-baseai-server">Step 9: Start the BaseAI server</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-10-run-the-memory-agent">Step 10: Run the memory agent</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-the-result">The result</a></p>
</li>
</ol>
<h2 id="heading-large-language-models-llms-are-stateless-by-nature">Large Language Models (LLMs) Are Stateless by Nature</h2>
<p>LLMs (Large Language Models) are stateless because they don’t retain any memory of previous interactions or the context of past queries beyond the input they're given in a session. Each time an LLM processes a prompt, it operates on that specific prompt without any history from prior ones.</p>
<p>This stateless nature allows the model to treat each request as independent, which simplifies its architecture and training process. But this also means that without mechanisms like RAG (Retrieval-Augmented Generation) or memory (long-term), LLMs can't carry forward information from one interaction to the next.</p>
<p>To introduce continuity or context, developers can implement external systems to manage and inject context, but the model itself doesn't "remember" anything between requests.</p>
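<p>To make this concrete, here’s a minimal TypeScript sketch (an illustration, not Langbase code) of what any such external system has to do: keep the transcript itself and replay it on every request, because the model only ever sees the messages sent in the current call. The <code>runLLM</code> callback here is a hypothetical stand-in for whatever model API you use:</p>
<pre><code class="lang-typescript">type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// The "memory" lives outside the model: we keep the transcript ourselves.
const history: Message[] = [];

async function ask(
    question: string,
    runLLM: (messages: Message[]) =&gt; Promise&lt;string&gt;
): Promise&lt;string&gt; {
    history.push({ role: 'user', content: question });
    // Every call must carry the full history; the LLM sees only what we send.
    const answer = await runLLM(history);
    history.push({ role: 'assistant', content: answer });
    return answer;
}
</code></pre>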
<h3 id="heading-how-do-we-solve-this">How do we solve this?</h3>
<p>By integrating <strong>Memory Agents</strong> by Langbase, we can give LLMs long-term memory—allowing them to store, retrieve, and use information dynamically, making them much more useful for real-world applications.</p>
<h2 id="heading-what-are-memory-agents">What Are Memory Agents?</h2>
<p><a target="_blank" href="https://langbase.com/docs/memory">Langbase serverless memory agents</a> (long-term memory solution) are designed to acquire, process, retain, and retrieve information seamlessly. They dynamically attach private data to any LLM, enabling context-aware responses in real time and reducing hallucinations.</p>
<p>These agents combine vector storage, Retrieval-Augmented Generation (RAG), and internet access to create a powerful managed context search API. Developers can use them to build smarter, more capable AI applications.</p>
<p>In a RAG setup, memory – when connected directly to a Langbase Pipe Agent – becomes a memory agent. This pairing gives the LLM the ability to fetch relevant data and deliver precise, contextually accurate answers—addressing the limitations of LLMs when it comes to handling private data.</p>
<p>Memory agents ensure secure local memory storage. Data used to create memory embeddings stays protected, processed within secure environments, and only sent externally if explicitly configured. Access is strictly controlled via API keys, ensuring sensitive information remains safe.</p>
<p>Note that a Pipe is a serverless AI agent with agentic memory and tools.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we begin creating a cold email generator agent, you’ll need to have the following setup and tools ready to go.</p>
<p>In this tutorial, I’ll be using this tech stack:</p>
<ul>
<li><p><a target="_blank" href="http://baseai.dev/">BaseAI</a> — the web framework for building AI agents locally.</p>
</li>
<li><p><a target="_blank" href="http://langbase.com/">Langbase</a> — the platform to build and deploy your serverless AI agents.</p>
</li>
<li><p><a target="_blank" href="https://openai.com/">OpenAI</a> — to get the LLM key for the preferred model.</p>
</li>
</ul>
<p>You’ll also need to:</p>
<ul>
<li><p>Sign up on Langbase to get access to the API key.</p>
</li>
<li><p>Sign up on OpenAI to generate the LLM key for the model you want to use (for this demo, I’ll be using GPT-4o mini). You can generate the key <a target="_blank" href="https://platform.openai.com/api-keys">here</a>.</p>
</li>
</ul>
<h2 id="heading-reference-architecture">Reference Architecture</h2>
<p>Here’s a diagrammatic representation of the entire process of building a serverless AI agent to generate cold emails for job applications:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739900463621/e2b6753e-287f-4d69-b453-36d50f316fb8.png" alt="Reference architecture of memory agents working" class="image--center mx-auto" width="3142" height="1476" loading="lazy"></p>
<p>Let’s start building the agent!</p>
<h2 id="heading-step-1-create-a-directory-and-initialize-npm">Step 1: Create a Directory and Initialize npm</h2>
<p>To start creating a serverless AI agent that generates cold emails for a job opening, you need to create a directory on your local machine, navigate into it, and install the relevant dependencies. You can do this by running the following commands in your terminal:</p>
<pre><code class="lang-bash">mkdir my-project
npm init -y
npm install dotenv
</code></pre>
<p>These commands will create a <code>package.json</code> file in your project directory with default values. They will also install the <code>dotenv</code> package to read environment variables from the <code>.env</code> file.</p>
<h2 id="heading-step-2-create-a-serverless-pipe-agent">Step 2: Create a Serverless Pipe Agent</h2>
<p>Next, we’ll be creating a <a target="_blank" href="https://langbase.com/docs/pipe/quickstart">pipe agent</a>. Pipes are different from other agents, as they are serverless AI agents with agentic tools that can work with any language or framework. They are easily deployable, and with just one API they let you connect more than 250 LLMs to any data to build any developer API workflow.</p>
<p>To create your AI agent pipe, navigate to your project directory. Run the following command:</p>
<pre><code class="lang-bash">npx baseai@latest pipe
</code></pre>
<p>Upon running, you’ll see the following prompts:</p>
<pre><code class="lang-bash">BaseAI is not installed but required to run. Would you like to install it? Yes/No
Name of the pipe? email-generator-agent
Description of the pipe? Generates emails for your dream job in seconds
Status of the pipe? Public/Private
System prompt? You are a helpful AI assistant
</code></pre>
<p>Once you are done with the name, description, and status of the AI agent pipe, everything will be set up automatically for you. Your pipe will be created successfully at <code>/baseai/pipes/email-generator-agent.ts</code>.</p>
<h2 id="heading-step-3-add-a-env-file">Step 3: Add a .env File</h2>
<p>Create a <code>.env</code> file in the root directory of your project and add the OpenAI and Langbase API keys in it. You can access your Langbase API key from <a target="_blank" href="https://langbase.com/docs/api-reference/api-keys">here</a>.</p>
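<p>For reference, the file might look like the sketch below. <code>LANGBASE_API_KEY</code> is the variable name the pipe config reads later in this tutorial; <code>OPENAI_API_KEY</code> is the conventional name for the OpenAI key, so adjust it if your setup expects something different:</p>
<pre><code class="lang-bash"># .env (never commit this file to version control)
LANGBASE_API_KEY="your-langbase-api-key"
OPENAI_API_KEY="your-openai-api-key"
</code></pre>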
<h2 id="heading-step-4-create-a-serverless-memory-agent">Step 4: Create a Serverless Memory Agent</h2>
<p>Next, we’ll be creating a memory and then attaching it with the Pipe to make it a memory agent. To do this, run this command in your terminal:</p>
<pre><code class="lang-bash">npx baseai@latest memory
</code></pre>
<p>Upon running this command, you’ll see the following prompts:</p>
<pre><code class="lang-bash">Name of the memory? email-generator-memory
Description of the memory? Contains my resume
Do you want to create memory from the current project git repository? Yes/No
</code></pre>
<p>After this, everything will be set up automatically for you, and you’ll find your newly created memory at <code>/baseai/memory/email-generator-memory.ts</code>.</p>
<h2 id="heading-step-5-add-documents-to-the-memory-agent">Step 5: Add Documents to the Memory Agent</h2>
<p>Inside <code>/baseai/memory/email-generator-memory</code> you’ll see another folder called <code>documents</code>. This is where you’ll store the files you want your AI agent to access. Save your résumé as either a <code>.pdf</code> or <code>.txt</code> file, convert it to a markdown file, and place it in the <code>/baseai/memory/email-generator-memory/documents</code> directory.</p>
<p>This step ensures that the memory agent can process and retrieve information from your documents, making the AI agent capable of generating accurate cold emails based on the experiences and skills provided in the résumé attached.</p>
<h2 id="heading-step-6-generate-memory-embeddings">Step 6: Generate Memory Embeddings</h2>
<p>With your documents added to memory, the next step is generating memory embeddings. But before that, let me quickly explain what embeddings are and why they matter.</p>
<h3 id="heading-understanding-memory-embeddings">Understanding memory embeddings</h3>
<p>Memory embeddings are numerical representations of your documents that enable an AI to grasp context, relationships, and meaning within text. They act as a bridge, converting raw data into a structured format AI can process for semantic search and retrieval.</p>
<p>Without embeddings, AI agents wouldn’t effectively connect user queries with relevant content. Generating embeddings creates a searchable index, allowing the memory agent to deliver accurate, context-aware responses efficiently.</p>
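<p>To see why this works, here’s a small illustrative TypeScript sketch (not Langbase’s internal implementation) of the core retrieval idea: documents and queries become vectors, and similarity between vectors stands in for similarity in meaning:</p>
<pre><code class="lang-typescript">// Cosine similarity: close to 1 means similar meaning, close to 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i &lt; a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Retrieval: rank stored document chunks by similarity to the query's embedding.
function topMatches(
    queryEmbedding: number[],
    chunks: { text: string; embedding: number[] }[],
    k = 3
) {
    return chunks
        .map(c =&gt; ({ text: c.text, score: cosineSimilarity(queryEmbedding, c.embedding) }))
        .sort((x, y) =&gt; y.score - x.score)
        .slice(0, k);
}
</code></pre>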
<h3 id="heading-how-to-generate-embeddings">How to generate embeddings</h3>
<p>To generate embeddings for your documents, run the following command in your terminal:</p>
<pre><code class="lang-bash">npx baseai@latest embed -m email-generator-memory
</code></pre>
<p>Your memory is now ready to be connected with a Pipe (memory agent), enabling your AI agent to fetch precise, context-aware responses from your documents.</p>
<h2 id="heading-step-7-integrate-memory-in-pipe-agent">Step 7: Integrate Memory in Pipe Agent</h2>
<p>Next, you have to attach the memory you created to your Pipe agent to make it a memory agent. For that, go to <code>/baseai/pipes/email-generator-agent.ts</code>. This is what it will look like at the moment:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { PipeI } <span class="hljs-keyword">from</span> <span class="hljs-string">'@baseai/core'</span>;

<span class="hljs-keyword">const</span> pipePipeWithMemory = (): <span class="hljs-function"><span class="hljs-params">PipeI</span> =&gt;</span> ({
    apiKey: process.env.LANGBASE_API_KEY!, <span class="hljs-comment">// Replace with your API key https://langbase.com/docs/api-reference/api-keys</span>
    name: <span class="hljs-string">'email-generator-agent'</span>,
    description: <span class="hljs-string">'Generates emails for your dream job in seconds'</span>,
    status: <span class="hljs-string">'public'</span>,
    model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
    stream: <span class="hljs-literal">true</span>,
    json: <span class="hljs-literal">false</span>,
    store: <span class="hljs-literal">true</span>,
    moderate: <span class="hljs-literal">true</span>,
    top_p: <span class="hljs-number">1</span>,
    max_tokens: <span class="hljs-number">1000</span>,
    temperature: <span class="hljs-number">0.7</span>,
    presence_penalty: <span class="hljs-number">1</span>,
    frequency_penalty: <span class="hljs-number">1</span>,
    stop: [],
    tool_choice: <span class="hljs-string">'auto'</span>,
    parallel_tool_calls: <span class="hljs-literal">false</span>,
    messages: [
        { role: <span class="hljs-string">'system'</span>, content: You are a helpful AI assistant. }],
    variables: [],
    memory: [],
    tools: []
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> pipePipeWithMemory;
</code></pre>
<p>Now integrate the memory in the pipe by importing it at the top and calling it as a function in the <code>memory</code> array. Also, add the following as the system message content:</p>
<pre><code class="lang-bash">Based on the job description and my resume attached, write a compelling cold email tailored to the job, highlighting my most relevant skills, achievements, and experiences. Ensure the tone is professional yet approachable, and include a strong call to action <span class="hljs-keyword">for</span> a follow-up or interview.
</code></pre>
<p>This is what the code will look like after doing all of this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { PipeI } <span class="hljs-keyword">from</span> <span class="hljs-string">'@baseai/core'</span>;
<span class="hljs-keyword">import</span> emailGeneratorMemoryMemory <span class="hljs-keyword">from</span> <span class="hljs-string">'../memory/email-generator-memory'</span>;

<span class="hljs-keyword">const</span> pipeEmailGeneratorAgent = (): <span class="hljs-function"><span class="hljs-params">PipeI</span> =&gt;</span> ({
 <span class="hljs-comment">// Replace with your API key https://langbase.com/docs/api-reference/api-keys</span>
 apiKey: process.env.LANGBASE_API_KEY!,
 name: <span class="hljs-string">'email-generator-agent'</span>,
 description: <span class="hljs-string">'Generates emails for your dream job in seconds'</span>,
 status: <span class="hljs-string">'private'</span>,
 model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
 stream: <span class="hljs-literal">true</span>,
 json: <span class="hljs-literal">false</span>,
 store: <span class="hljs-literal">true</span>,
 moderate: <span class="hljs-literal">true</span>,
 top_p: <span class="hljs-number">1</span>,
 max_tokens: <span class="hljs-number">1000</span>,
 temperature: <span class="hljs-number">0.7</span>,
 presence_penalty: <span class="hljs-number">1</span>,
 frequency_penalty: <span class="hljs-number">1</span>,
 stop: [],
 tool_choice: <span class="hljs-string">'auto'</span>,
 parallel_tool_calls: <span class="hljs-literal">true</span>,
 messages: [{ role: <span class="hljs-string">'system'</span>, content: Based on the job description and my resume attached, write a compelling cold email tailored to the job, highlighting my most relevant skills, achievements, and experiences. Ensure the tone is professional yet approachable, and include a strong call to action <span class="hljs-keyword">for</span> a follow-up or interview. }],
 variables: [],
 memory: [emailGeneratorMemoryMemory()],
 tools: []
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> pipeEmailGeneratorAgent;
</code></pre>
<h2 id="heading-step-8-integrate-the-memory-agent-in-nodejs">Step 8: Integrate the Memory Agent in Node.js</h2>
<p>Now we’ll integrate the memory agent you created into the Node.js project to build an interactive command-line interface (CLI) for querying the attached document. This Node.js project will serve as the base for testing and interacting with the memory agent (at the beginning of the tutorial, we set up a Node.js project by initializing npm).</p>
<p>Now, create an <code>index.ts</code> file:</p>
<pre><code class="lang-bash">touch index.ts
</code></pre>
<p>In this TypeScript file, import the pipe agent you created. We will use the pipe primitive from <code>@baseai/core</code> to run the pipe.</p>
<p>Add the following code to the <code>index.ts</code> file:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> <span class="hljs-string">'dotenv/config'</span>;
<span class="hljs-keyword">import</span> { Pipe } <span class="hljs-keyword">from</span> <span class="hljs-string">'@baseai/core'</span>;
<span class="hljs-keyword">import</span> inquirer <span class="hljs-keyword">from</span> <span class="hljs-string">'inquirer'</span>;
<span class="hljs-keyword">import</span> ora <span class="hljs-keyword">from</span> <span class="hljs-string">'ora'</span>;
<span class="hljs-keyword">import</span> chalk <span class="hljs-keyword">from</span> <span class="hljs-string">'chalk'</span>;
<span class="hljs-keyword">import</span> pipeEmailGeneratorAgent <span class="hljs-keyword">from</span> <span class="hljs-string">'./baseai/pipes/email-generator-agent'</span>;

<span class="hljs-keyword">const</span> pipe = <span class="hljs-keyword">new</span> Pipe(pipeEmailGeneratorAgent());

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params"></span>) </span>{

   <span class="hljs-keyword">const</span> initialSpinner = ora(<span class="hljs-string">'Conversation with Memory agent...'</span>).start();
   <span class="hljs-keyword">try</span> {
       <span class="hljs-keyword">const</span> { completion: calculatorTool} = <span class="hljs-keyword">await</span> pipe.run({
           messages: [{ role: <span class="hljs-string">'user'</span>, content: <span class="hljs-string">'Hello'</span> }],
       });
       initialSpinner.stop();
       <span class="hljs-built_in">console</span>.log(chalk.cyan(<span class="hljs-string">'Report Generator Agent response...'</span>));
       <span class="hljs-built_in">console</span>.log(calculatorTool);
   } <span class="hljs-keyword">catch</span> (error) {
       initialSpinner.stop();
       <span class="hljs-built_in">console</span>.error(chalk.red(<span class="hljs-string">'Error processing initial request:'</span>), error);
   }

   <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) {
       <span class="hljs-keyword">const</span> { userMsg } = <span class="hljs-keyword">await</span> inquirer.prompt([
           {
               <span class="hljs-keyword">type</span>: <span class="hljs-string">'input'</span>,
               name: <span class="hljs-string">'userMsg'</span>,
               message: chalk.blue(<span class="hljs-string">'Enter your query (or type "exit" to quit):'</span>),
           },
       ]);


       <span class="hljs-keyword">if</span> (userMsg.toLowerCase() === <span class="hljs-string">'exit'</span>) {
           <span class="hljs-built_in">console</span>.log(chalk.green(<span class="hljs-string">'Goodbye!'</span>));
           <span class="hljs-keyword">break</span>;
       }


       <span class="hljs-keyword">const</span> spinner = ora(<span class="hljs-string">'Processing your request...'</span>).start();


       <span class="hljs-keyword">try</span> {
           <span class="hljs-keyword">const</span> { completion: reportAgentResponse } = <span class="hljs-keyword">await</span> pipe.run({
               messages: [{ role: <span class="hljs-string">'user'</span>, content: userMsg }],
           });


           spinner.stop();
           <span class="hljs-built_in">console</span>.log(chalk.cyan(<span class="hljs-string">'Agent:'</span>));
           <span class="hljs-built_in">console</span>.log(reportAgentResponse);
       } <span class="hljs-keyword">catch</span> (error) {
           spinner.stop();
           <span class="hljs-built_in">console</span>.error(chalk.red(<span class="hljs-string">'Error processing your request:'</span>), error);
       }
   }
}

main();
</code></pre>
<p>This code creates an interactive CLI for chatting with an AI agent, using a pipe from the <code>@baseai/core</code> library to process user input. Here's what happens:</p>
<ul>
<li><p>It imports necessary libraries such as <code>dotenv</code> for environment configuration, <code>inquirer</code> for user input, <code>ora</code> for loading spinners, and <code>chalk</code> for colored output. Make sure you install these libraries first using this command in your terminal: <code>npm install ora inquirer chalk</code>.</p>
</li>
<li><p>A pipe object is created from the BaseAI library using the predefined pipe <code>email-generator-agent</code>.</p>
</li>
</ul>
<p>In the <code>main()</code> function:</p>
<ul>
<li><p>A spinner starts while an initial conversation with the AI agent is initiated with the message 'Hello'.</p>
</li>
<li><p>The response from the AI is displayed.</p>
</li>
<li><p>A loop runs to continually ask the user for input and send queries to the AI agent.</p>
</li>
<li><p>The AI's responses are shown, and the process continues until the user types "exit".</p>
</li>
</ul>
<h2 id="heading-step-9-start-the-baseai-server">Step 9: Start the BaseAI Server</h2>
<p>To run the memory agent locally, you need to start the BaseAI server first. Run the following command in your terminal:</p>
<pre><code class="lang-bash">npx baseai@latest dev
</code></pre>
<h2 id="heading-step-10-run-the-memory-agent">Step 10: Run the Memory Agent</h2>
<p>Run the <code>index.ts</code> file using the following command:</p>
<pre><code class="lang-bash">npx tsx index.ts
</code></pre>
<h2 id="heading-the-result">The Result</h2>
<p>In your terminal, you’ll be prompted to “Enter your query.” For example, paste a job description and ask the agent to generate an email expressing your interest. It will respond with the generated email, along with the correct sources/citations.</p>
<p>With this setup, we’ve built a Cold Email Generator agent that uses the power of LLMs and Langbase memory agents to overcome LLMs' limitations, ensuring accurate responses without hallucinating on private data.</p>
<p>Here’s a demo of the end result:</p>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/ns7UqX6Ycs8" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
<p>Thank you for reading!</p>
<p>Connect with me by 🙌:</p>
<ul>
<li><p>Subscribing to my <a target="_blank" href="https://www.youtube.com/@AIwithMahamCodes">YouTube</a> channel if you’d like to learn about AI and agents.</p>
</li>
<li><p>Subscribing to my free newsletter <a target="_blank" href="https://mahamcodes.substack.com/">“The Agentic Engineer”</a>, where I share the latest AI and agent news, trends, jobs, and much more.</p>
</li>
<li><p>Following me on <a target="_blank" href="https://x.com/MahamDev">X (Twitter)</a>.</p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
