<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ n8n - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ n8n - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Sun, 17 May 2026 04:37:37 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/n8n/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Build a Self-Hosted WhatsApp Bot with n8n and WAHA ]]>
                </title>
                <description>
<![CDATA[ WhatsApp is where many of your customers likely already are. For support tickets, order updates, booking reminders, and lead qualification, a WhatsApp channel often converts several times better  ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-build-a-self-hosted-whatsapp-bot-with-n8n-and-waha/</link>
                <guid isPermaLink="false">6a01e032fca21b0d4b2bb4c1</guid>
                
                    <category>
                        <![CDATA[ whatsapp ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ self-hosted ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ אחיה כהן ]]>
                </dc:creator>
                <pubDate>Mon, 11 May 2026 13:57:06 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/28affe4d-9359-4cbb-a311-a2ee9d0829c0.png" medium="image" />
                <content:encoded>
<![CDATA[ <p>WhatsApp is where many of your customers likely already are. For support tickets, order updates, booking reminders, and lead qualification, a WhatsApp channel often converts several times better than email.</p>
<p>But the official WhatsApp Business Cloud API can be slow to onboard, template-restricted for proactive messages, and priced per conversation — which adds up fast at scale.</p>
<p>There's another path: you can run your own WhatsApp HTTP gateway on a small server, connect it to a workflow engine, and keep every message — inbound and outbound — inside infrastructure you control. No monthly conversation fees, no template approvals for routine replies, no third-party middleman holding your customer data.</p>
<p>In this tutorial, you'll build exactly that. By the end, you'll have a WhatsApp bot that:</p>
<ul>
<li><p>Receives every incoming message through a webhook</p>
</li>
<li><p>Routes messages through an n8n workflow</p>
</li>
<li><p>Replies automatically based on keywords, AI, or any API call you want</p>
</li>
<li><p>Runs entirely on your own server, using two open-source tools</p>
</li>
</ul>
<p>You'll use <strong>WAHA</strong> (WhatsApp HTTP API) as the gateway, and <strong>n8n</strong> as the workflow engine. Both run in Docker, both are free for self-hosting, and together they cover everything from a simple auto-reply to a full CRM integration.</p>
<h2 id="heading-table-of-contents">Table of contents</h2>
<ul>
<li><p><a href="#heading-what-youll-learn">What You'll Learn</a></p>
</li>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-a-note-on-which-whatsapp-account-to-use">A Note on Which WhatsApp Account to Use</a></p>
</li>
<li><p><a href="#heading-waha-vs-the-official-whatsapp-business-cloud-api">WAHA vs the official WhatsApp Business Cloud API</a></p>
</li>
<li><p><a href="#heading-part-1-understanding-waha">Part 1: Understanding WAHA</a></p>
</li>
<li><p><a href="#heading-part-2-running-waha-with-docker">Part 2: Running WAHA with Docker</a></p>
</li>
<li><p><a href="#heading-part-3-starting-a-whatsapp-session">Part 3: Starting a WhatsApp session</a></p>
</li>
<li><p><a href="#heading-part-4-running-n8n">Part 4: Running n8n</a></p>
</li>
<li><p><a href="#heading-part-5-creating-the-webhook-trigger-in-n8n">Part 5: Creating the Webhook Trigger in n8n</a></p>
</li>
<li><p><a href="#heading-part-6-wiring-waha-to-n8n">Part 6: Wiring WAHA to n8n</a></p>
</li>
<li><p><a href="#heading-part-7-building-the-first-auto-reply">Part 7: Building the first auto-reply</a></p>
</li>
<li><p><a href="#heading-part-8-a-second-example-proactive-booking-confirmations">Part 8: A Second Example — Proactive Booking Confirmations</a></p>
</li>
<li><p><a href="#heading-part-9-going-to-production">Part 9: Going to Production</a></p>
</li>
<li><p><a href="#heading-common-pitfalls">Common Pitfalls</a></p>
</li>
<li><p><a href="#heading-where-to-go-next">Where to Go Next</a></p>
</li>
</ul>
<h2 id="heading-what-youll-learn">What You'll Learn</h2>
<ul>
<li><p>How WAHA works under the hood and when to use it instead of the official Cloud API</p>
</li>
<li><p>How to run WAHA and n8n side by side with Docker Compose</p>
</li>
<li><p>How to scan the QR code and bind a WhatsApp account to your gateway</p>
</li>
<li><p>How to connect WAHA's webhook to an n8n workflow</p>
</li>
<li><p>How to build a keyword-based auto-reply bot</p>
</li>
<li><p>How to send proactive confirmations from a separate workflow</p>
</li>
<li><p>How to harden the setup for production (HTTPS, API keys, rate limits, Queue Mode)</p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>A Linux server (any VPS works — 2 GB of RAM is enough for a small bot)</p>
</li>
<li><p>Docker and Docker Compose installed</p>
</li>
<li><p>A public hostname with DNS pointing at the server, or an ngrok tunnel for local testing</p>
</li>
<li><p>A WhatsApp account you're willing to dedicate to the bot (more on that below)</p>
</li>
<li><p>Basic familiarity with JSON and HTTP requests</p>
</li>
</ul>
<p>You don't need prior n8n experience. If you can drag a box and wire it to another box, you can build the flow.</p>
<h2 id="heading-a-note-on-which-whatsapp-account-to-use">A Note on Which WhatsApp Account to Use</h2>
<p>WAHA works by running an actual WhatsApp Web session inside a headless Chromium process. It logs in as a real account — the same way you would open web.whatsapp.com in your browser. Meta doesn't officially endorse this approach for commercial use at scale, and heavy volume from a single number can lead to a ban.</p>
<p>For that reason, use a dedicated number for the bot. Don't use your personal WhatsApp. Get a second SIM, eSIM, or a VoIP number that supports WhatsApp activation. Keep outbound volume reasonable, and you'll be fine for most small-business use cases.</p>
<p>If you plan to send thousands of marketing messages per day, switch to the official WhatsApp Business Cloud API — that's what it exists for. This tutorial is aimed at the middle ground: support bots, order updates, booking confirmations, and similar conversational flows where you need real-time control without enterprise pricing.</p>
<h2 id="heading-waha-vs-the-official-whatsapp-business-cloud-api">WAHA vs the official WhatsApp Business Cloud API</h2>
<p>Before writing any code, it helps to understand when each option is the right fit.</p>
<table>
<thead>
<tr>
<th>Dimension</th>
<th>WAHA (self-hosted)</th>
<th>WhatsApp Cloud API (Meta)</th>
</tr>
</thead>
<tbody><tr>
<td>Onboarding</td>
<td>Scan a QR code — ready in minutes</td>
<td>Business verification, app review — days to weeks</td>
</tr>
<tr>
<td>Cost</td>
<td>Server cost only</td>
<td>Per-conversation pricing</td>
</tr>
<tr>
<td>Template approval</td>
<td>Not needed</td>
<td>Required for proactive messages outside the 24-hour window</td>
</tr>
<tr>
<td>Session model</td>
<td>One WhatsApp Web session per Core container</td>
<td>Native API, no web session</td>
</tr>
<tr>
<td>Risk</td>
<td>Account ban possible at high unsolicited volume</td>
<td>Rate limits but no ban for normal use</td>
</tr>
<tr>
<td>Vendor lock-in</td>
<td>None — pure open source</td>
<td>Tied to Meta's API and pricing</td>
</tr>
<tr>
<td>Best for</td>
<td>Support bots, small-team workflows, internal tools</td>
<td>High-volume marketing, regulated industries, &gt;100k monthly messages</td>
</tr>
</tbody></table>
<p>Neither is strictly better. If you run a support team for a small business, WAHA is often the pragmatic choice. If you're a bank sending millions of transactional messages, you want the Cloud API. Many teams run both — WAHA for conversational support, Cloud API for bulk transactional traffic.</p>
<h2 id="heading-part-1-understanding-waha">Part 1: Understanding WAHA</h2>
<p>WAHA is an open-source project that wraps WhatsApp Web behind a clean REST API. You <code>POST /api/sendText</code> with a chat ID and a message, and WAHA sends it. You configure a webhook URL, and WAHA <code>POST</code>s to that URL every time a message arrives.</p>
<p>Under the hood, WAHA spawns a Chromium instance, opens WhatsApp Web, and uses an engine (<code>whatsapp-web.js</code>, <code>NOWEB</code>, or <code>GOWS</code>) to automate the session. Your code doesn't see any of that complexity — you just see an HTTP API.</p>
<p>The project ships in two flavors:</p>
<ul>
<li><p><strong>WAHA Core</strong> — free, MIT licensed, one active session per container, community support.</p>
</li>
<li><p><strong>WAHA Plus</strong> — commercial license, multi-session support, priority support, and access to advanced endpoints.</p>
</li>
</ul>
<p>For most developers building a single bot, Core is enough. You can always upgrade later.</p>
<p>Official docs live at <a href="https://waha.devlike.pro/">waha.devlike.pro</a>. Keep that open in another tab — we'll reference specific endpoints as we go.</p>
<h2 id="heading-part-2-running-waha-with-docker">Part 2: Running WAHA with Docker</h2>
<p>Create a fresh directory for the project:</p>
<pre><code class="language-bash">mkdir whatsapp-bot &amp;&amp; cd whatsapp-bot
</code></pre>
<p>Create a <code>docker-compose.yml</code> file:</p>
<pre><code class="language-yaml">services:
  waha:
    image: devlikeapro/waha:latest
    container_name: waha
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - WAHA_DASHBOARD_ENABLED=true
      - WAHA_DASHBOARD_USERNAME=admin
      - WAHA_DASHBOARD_PASSWORD=change-me-now
      - WHATSAPP_API_KEY=super-secret-key-change-me
      - WHATSAPP_DEFAULT_ENGINE=WEBJS
    volumes:
      - ./waha-sessions:/app/.sessions
</code></pre>
<p>A few things to notice:</p>
<ul>
<li><p>The dashboard username and password protect the web UI at <code>http://your-server:3000</code>. Always change the defaults before you expose the port publicly.</p>
</li>
<li><p><code>WHATSAPP_API_KEY</code> is the key every HTTP request to WAHA must include in the <code>X-Api-Key</code> header. Treat it like a database password.</p>
</li>
<li><p><code>WHATSAPP_DEFAULT_ENGINE=WEBJS</code> uses the mature <code>whatsapp-web.js</code> engine. WAHA also supports <code>NOWEB</code> and <code>GOWS</code> engines with different trade-offs — WEBJS is the safest default for a first deployment.</p>
</li>
<li><p>The volume mount persists the session across restarts. Without it, every container rebuild forces you to scan the QR code again.</p>
</li>
</ul>
<p>Start the container:</p>
<pre><code class="language-bash">docker compose up -d
docker compose logs -f waha
</code></pre>
<p>Within about 20 seconds WAHA finishes booting. Visit <code>http://your-server:3000</code> and log in with the dashboard credentials.</p>
<h2 id="heading-part-3-starting-a-whatsapp-session">Part 3: Starting a WhatsApp session</h2>
<p>WAHA calls each WhatsApp account a "session." You can have one session at a time on WAHA Core.</p>
<p>From the dashboard, click <strong>Start New Session</strong> and name it <code>default</code>. WAHA displays a QR code.</p>
<p>On your phone:</p>
<ol>
<li><p>Open WhatsApp.</p>
</li>
<li><p>Tap the three-dot menu (Android) or Settings (iOS).</p>
</li>
<li><p>Tap Linked Devices → Link a Device.</p>
</li>
<li><p>Point the camera at the QR code on your screen.</p>
</li>
</ol>
<p>Within a few seconds the dashboard shows <code>WORKING</code> status. Your session is live.</p>
<p>You can also do this over the API. Start the session (<code>default</code> is the session name, encoded in the URL path):</p>
<pre><code class="language-bash">curl -X POST http://your-server:3000/api/sessions/default/start \
  -H "X-Api-Key: super-secret-key-change-me"
</code></pre>
<p>The call is idempotent — if the session is already running, nothing happens.</p>
<p>Fetch the QR as a PNG:</p>
<pre><code class="language-bash">curl http://your-server:3000/api/default/auth/qr \
  -H "X-Api-Key: super-secret-key-change-me" \
  -H "Accept: image/png" \
  --output qr.png
</code></pre>
<p>Scan and you're in.</p>
<p>Test that the session works by sending a message to yourself:</p>
<pre><code class="language-bash">curl -X POST http://your-server:3000/api/sendText \
  -H "X-Api-Key: super-secret-key-change-me" \
  -H "Content-Type: application/json" \
  -d '{
    "session": "default",
    "chatId": "15555550123@c.us",
    "text": "Hello from WAHA!"
  }'
</code></pre>
<p>Replace <code>15555550123</code> with your own number (country code plus number, no <code>+</code>, no spaces, no dashes). The <code>@c.us</code> suffix marks it as an individual chat. Groups use <code>@g.us</code>.</p>
<p>If the message lands on your phone — congratulations. The gateway works.</p>
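<p>Chat ID formatting is the most common failure point when numbers come from a form or a CRM export. Here's a minimal shell helper (the name <code>to_chat_id</code> is hypothetical, not part of WAHA) that strips the characters WhatsApp rejects and appends the individual-chat suffix:</p>
<pre><code class="language-bash"># Hypothetical helper: normalize a human-entered phone number to a WAHA chat ID.
# Strips '+', spaces, parentheses, and dashes, then appends '@c.us'.
to_chat_id() {
  printf '%s@c.us\n' "$(printf '%s' "$1" | tr -d '+ ()-')"
}

to_chat_id '+1 (555) 555-0123'
</code></pre>
<p>The same cleanup applies wherever a chat ID is built, whether in a shell script or an n8n expression.</p>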
<h2 id="heading-part-4-running-n8n">Part 4: Running n8n</h2>
<p>Add an <code>n8n</code> service to your <code>docker-compose.yml</code> alongside WAHA:</p>
<pre><code class="language-yaml">services:
  waha:
    # ... existing config

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=n8n.example.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.example.com/
      - GENERIC_TIMEZONE=UTC
    volumes:
      - ./n8n-data:/home/node/.n8n
</code></pre>
<p>Replace <code>n8n.example.com</code> with your real domain. For purely local testing, set:</p>
<pre><code class="language-yaml">- N8N_HOST=localhost
- N8N_PROTOCOL=http
- WEBHOOK_URL=http://localhost:5678/
</code></pre>
<p>If you want to test webhooks from your laptop without a server, run <code>ngrok http 5678</code> in another terminal and use the ngrok HTTPS URL as <code>WEBHOOK_URL</code>. n8n uses <code>WEBHOOK_URL</code> to tell external services where to POST — get this wrong and your webhooks will 404.</p>
<p>Start the stack:</p>
<pre><code class="language-bash">docker compose up -d
</code></pre>
<p>Visit <code>http://your-server:5678</code>. On the first visit, n8n walks you through creating an owner account (email and password). Every subsequent visit requires that login. For extra safety in production, put n8n behind a reverse proxy with an allow-list or an additional auth layer — we'll set that up later.</p>
<h2 id="heading-part-5-creating-the-webhook-trigger-in-n8n">Part 5: Creating the Webhook Trigger in n8n</h2>
<p>Click Create Workflow. You'll see an empty canvas.</p>
<p>Add a Webhook node and configure it:</p>
<ul>
<li><p><strong>HTTP Method</strong>: POST</p>
</li>
<li><p><strong>Path</strong>: <code>whatsapp</code> (this becomes part of the URL)</p>
</li>
<li><p><strong>Response Mode</strong>: Respond Immediately</p>
</li>
<li><p><strong>Response Data</strong>: First Entry JSON</p>
</li>
</ul>
<p>Click Listen for Test Event. n8n shows you two URLs: a test URL and a production URL. Copy the production URL. It looks like this:</p>
<pre><code class="language-plaintext">https://n8n.example.com/webhook/whatsapp
</code></pre>
<p>Not <code>webhook-test</code> — that one only fires while the editor is open. You want <code>webhook</code>.</p>
<h2 id="heading-part-6-wiring-waha-to-n8n">Part 6: Wiring WAHA to n8n</h2>
<p>WAHA can POST to a webhook on every WhatsApp event. Tell it where to send those events.</p>
<p>In the WAHA dashboard, open your session and set the webhook URL. Or do it over the API:</p>
<pre><code class="language-bash">curl -X PUT http://your-server:3000/api/sessions/default \
  -H "X-Api-Key: super-secret-key-change-me" \
  -H "Content-Type: application/json" \
  -d '{
    "config": {
      "webhooks": [
        {
          "url": "https://n8n.example.com/webhook/whatsapp",
          "events": ["message", "session.status"]
        }
      ]
    }
  }'
</code></pre>
<p>The <code>message</code> event fires on every inbound message. <code>session.status</code> fires when the session connects, disconnects, or reconnects — which is useful for alerting when your bot goes down.</p>
<p>Test it. From another phone, send a WhatsApp message to your bot's number. Head back to the n8n editor. Within a second or two the webhook node lights up with the event data.</p>
<p>The payload looks roughly like this:</p>
<pre><code class="language-json">{
  "event": "message",
  "session": "default",
  "payload": {
    "id": "false_15555550123@c.us_3EB0...",
    "from": "15555550123@c.us",
    "body": "Hello",
    "timestamp": 1713801234,
    "fromMe": false
  }
}
</code></pre>
<p>Everything you need is in <code>payload</code>: who sent it (<code>from</code>), what they said (<code>body</code>), and when (<code>timestamp</code>).</p>
<h2 id="heading-part-7-building-the-first-auto-reply">Part 7: Building the first auto-reply</h2>
<p>A bot that only listens is boring. Let's make it answer.</p>
<p>You'll build a tiny keyword router: if the user sends <code>hi</code> or <code>hello</code>, the bot greets them. If they send <code>price</code>, it sends a pricing message. Anything else gets a fallback.</p>
<p>After the Webhook node, add a Switch node.</p>
<p>Configure the Switch node:</p>
<ul>
<li><p><strong>Mode</strong>: Expression</p>
</li>
<li><p><strong>Value</strong>: <code>{{ $json.payload.body.toLowerCase().trim() }}</code></p>
</li>
<li><p>Add routing rules:</p>
<ul>
<li><p>Rule 1: equals <code>hi</code> — output 0</p>
</li>
<li><p>Rule 2: equals <code>hello</code> — output 0</p>
</li>
<li><p>Rule 3: equals <code>price</code> — output 1</p>
</li>
<li><p>Fallback output: 2</p>
</li>
</ul>
</li>
</ul>
<p>After the Switch, add three HTTP Request nodes, one per output.</p>
<p>Configure each HTTP Request node identically, except for the body text:</p>
<ul>
<li><p><strong>Method</strong>: POST</p>
</li>
<li><p><strong>URL</strong>: <code>http://waha:3000/api/sendText</code> (inside the Docker network, n8n can reach WAHA by its Compose service name; from outside the network, use the full public URL)</p>
</li>
<li><p><strong>Send Headers</strong>: on</p>
<ul>
<li><p><code>X-Api-Key</code>: <code>super-secret-key-change-me</code></p>
</li>
<li><p><code>Content-Type</code>: <code>application/json</code></p>
</li>
</ul>
</li>
<li><p><strong>Send Body</strong>: on</p>
<ul>
<li><p><strong>Body Content Type</strong>: JSON</p>
</li>
<li><p><strong>Specify Body</strong>: Using JSON</p>
</li>
</ul>
</li>
</ul>
<p>For the greeting node, the JSON body is:</p>
<pre><code class="language-json">{
  "session": "default",
  "chatId": "={{ $('Webhook').item.json.payload.from }}",
  "text": "Hi! I'm the bot. Send 'price' to see pricing, or anything else for help."
}
</code></pre>
<p>For the pricing node:</p>
<pre><code class="language-json">{
  "session": "default",
  "chatId": "={{ $('Webhook').item.json.payload.from }}",
  "text": "Our plans start at $49/month. Reply 'sales' to talk to a human."
}
</code></pre>
<p>For the fallback:</p>
<pre><code class="language-json">{
  "session": "default",
  "chatId": "={{ $('Webhook').item.json.payload.from }}",
  "text": "I didn't catch that. Try 'hi' or 'price'."
}
</code></pre>
<p>The <code>={{ ... }}</code> syntax is an n8n expression — at runtime it pulls values from earlier nodes.</p>
<p>Connect the Switch outputs to their matching HTTP Request nodes. Save the workflow. Click Activate in the top-right.</p>
<p>Send <code>hi</code> to your bot from any phone. It should reply within a second.</p>
<p>Congratulations — you have a WhatsApp bot running entirely on your own infrastructure.</p>
<h2 id="heading-part-8-a-second-example-proactive-booking-confirmations">Part 8: A Second Example — Proactive Booking Confirmations</h2>
<p>Auto-reply is useful. Proactive outbound is where the value really compounds. Here's a second workflow that sends a booking confirmation whenever a new row lands in a database.</p>
<p>Create a second workflow in n8n. Use one of these triggers:</p>
<ul>
<li><p><strong>Schedule Trigger</strong> — poll a database every minute for new rows</p>
</li>
<li><p><strong>Webhook Trigger</strong> — listen for a notification from your booking system</p>
</li>
<li><p><strong>Database Trigger</strong> (Postgres, MySQL, Supabase) — react to inserts in real time</p>
</li>
</ul>
<p>For this example, use a Schedule Trigger set to every minute, followed by a Postgres <strong>Execute Query</strong> node that reads pending confirmations:</p>
<pre><code class="language-sql">SELECT id, customer_phone, service_name, booking_time
FROM bookings
WHERE confirmation_sent = false
LIMIT 20;
</code></pre>
<p>After the Postgres node, add an HTTP Request node pointing to the same WAHA <code>sendText</code> endpoint you used earlier. The body:</p>
<pre><code class="language-json">{
  "session": "default",
  "chatId": "={{ $json.customer_phone }}@c.us",
  "text": "=Hi! Your booking for {{ $json.service_name }} on {{ $json.booking_time }} is confirmed. Reply 'change' to reschedule."
}
</code></pre>
<p>Finally, add a second Postgres node that marks the booking as sent:</p>
<pre><code class="language-sql">UPDATE bookings
SET confirmation_sent = true, confirmation_sent_at = NOW()
WHERE id = {{ $json.id }};
</code></pre>
<p>Activate the workflow. Every minute, n8n pulls pending bookings, sends a WhatsApp confirmation, and marks them done.</p>
<p>This pattern generalizes. Replace the SQL with a call to Shopify for order confirmations, Stripe for receipt messages, or Calendly for appointment reminders. The WhatsApp layer stays the same — only the source of truth changes.</p>
<h2 id="heading-part-9-going-to-production">Part 9: Going to Production</h2>
<p>The setup above works, but it's not yet production-ready. Here's what to harden before you point real customers at it.</p>
<h3 id="heading-1-put-everything-behind-https">1. Put Everything Behind HTTPS</h3>
<p>Never expose n8n or WAHA directly on plain HTTP. Put a reverse proxy in front. Caddy is the easiest choice because it handles Let's Encrypt automatically.</p>
<p>A minimal <code>Caddyfile</code>:</p>
<pre><code class="language-plaintext">n8n.example.com {
    reverse_proxy n8n:5678
}

waha.example.com {
    reverse_proxy waha:3000
}
</code></pre>
<p>Run Caddy as another service in the same Docker Compose. TLS certificates are issued and renewed automatically.</p>
<h3 id="heading-2-rotate-the-api-keys">2. Rotate the API Keys</h3>
<p>Don't ship <code>super-secret-key-change-me</code> to production. Generate a real key:</p>
<pre><code class="language-bash">openssl rand -hex 32
</code></pre>
<p>Put it in a <code>.env</code> file, reference it as <code>${WHATSAPP_API_KEY}</code> in <code>docker-compose.yml</code>, and add <code>.env</code> to your <code>.gitignore</code>.</p>
<h3 id="heading-3-rate-limit-outbound-messages">3. Rate-limit Outbound Messages</h3>
<p>WhatsApp bans accounts that send too many messages too fast. A safe outbound rate for a fresh number is well under 20 messages per minute. For bursty replies, add an n8n Wait node between sends, or queue outgoing messages through a small custom function node that sleeps between requests.</p>
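<p>The queue-and-sleep idea can be sketched as a plain shell wrapper, too. In this sketch, <code>send_one</code>, <code>WAHA_URL</code>, and <code>WAHA_API_KEY</code> are assumed names, not WAHA or n8n functionality:</p>
<pre><code class="language-bash"># Hypothetical sketch: send one message per chat ID via WAHA.
# Assumes WAHA_URL and WAHA_API_KEY are set in the environment.
send_one() {
  curl -s -X POST "$WAHA_URL/api/sendText" \
    -H "X-Api-Key: $WAHA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"session\": \"default\", \"chatId\": \"$1\", \"text\": \"Your update is ready.\"}"
}

# Run the given command once per stdin line, sleeping between calls
# to cap the outbound rate.
throttle() {
  delay="$1"; shift
  while IFS= read -r line; do
    "$@" "$line"
    sleep "$delay"
  done
}

# Usage: roughly 10 messages per minute (6 s between sends)
# throttle 6 send_one &lt; chat_ids.txt
</code></pre>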
<h3 id="heading-4-scale-n8n-with-queue-mode">4. Scale n8n with Queue Mode</h3>
<p>By default, n8n runs everything in a single process. That's fine for low volume. For higher throughput, switch to Queue Mode:</p>
<ul>
<li><p>Add a Redis container.</p>
</li>
<li><p>Run one <code>n8n</code> main container (the web UI and webhook receiver).</p>
</li>
<li><p>Run one or more <code>n8n-worker</code> containers that pull jobs from the queue.</p>
</li>
</ul>
<p>Queue Mode is documented at <a href="https://docs.n8n.io/hosting/scaling/queue-mode/">docs.n8n.io/hosting/scaling/queue-mode/</a>. Setup adds two environment variables (<code>EXECUTIONS_MODE=queue</code>, <code>QUEUE_BULL_REDIS_HOST=redis</code>) and decouples incoming webhooks from workflow execution. The webhook responds in milliseconds while workers chew through the queue in the background.</p>
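<p>The Compose additions might look roughly like this (service names are illustrative; in practice the workers also need the same <code>N8N_ENCRYPTION_KEY</code> as the main instance so they can decrypt stored credentials):</p>
<pre><code class="language-yaml">services:
  redis:
    image: redis:7
    restart: unless-stopped

  n8n:
    # ... existing config, plus:
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis

  n8n-worker:
    image: n8nio/n8n:latest
    restart: unless-stopped
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    volumes:
      - ./n8n-data:/home/node/.n8n
</code></pre>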
<h3 id="heading-5-monitor-the-session">5. Monitor the Session</h3>
<p>WhatsApp Web sessions drop. The phone loses connection, WhatsApp rotates security tokens, or your server reboots. Catch those drops early.</p>
<p>Subscribe to the <code>session.status</code> webhook event in WAHA. When status becomes <code>FAILED</code> or <code>STOPPED</code>, route it to an n8n workflow that posts to Slack, sends an email, or pages you. The faster you know, the faster you recover.</p>
<p>For overall uptime, point something like Uptime Kuma at <code>GET /api/sessions/default</code> on WAHA. If WAHA reports <code>WORKING</code>, you're fine. Anything else triggers an alert.</p>
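<p>If you'd rather script the probe yourself, the check reduces to matching the status string in the session response. A sketch, matching loosely on <code>WORKING</code> so it survives minor differences in the JSON layout:</p>
<pre><code class="language-bash"># Hypothetical sketch: return 0 if the session JSON reports a WORKING status.
# Fetch the JSON first, e.g.:
#   status_json=$(curl -s -H "X-Api-Key: $WAHA_API_KEY" http://waha:3000/api/sessions/default)
session_ok() {
  case "$1" in
    *WORKING*) return 0 ;;
    *) return 1 ;;
  esac
}

# if ! session_ok "$status_json"; then alert_on_call; fi
</code></pre>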
<h3 id="heading-6-back-up-the-sessions-volume">6. Back Up the Sessions Volume</h3>
<p>The <code>waha-sessions</code> directory contains the logged-in state. If you lose it, you have to scan the QR code again — possibly from a phone that's no longer handy. Back it up nightly. A simple cron job with <code>tar</code> and <code>rclone</code> to S3-compatible storage is plenty.</p>
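<p>A minimal cron-friendly sketch of that backup (the rclone remote name <code>backups</code> is an assumption and stays commented out until you've configured one):</p>
<pre><code class="language-bash"># Hypothetical sketch: archive the sessions volume, then push it off-box.
backup_sessions() {
  src="$1"   # e.g. ./waha-sessions
  out="$2"   # e.g. /var/backups/waha-sessions-$(date +%F).tar.gz
  tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
  # rclone copy "$out" backups:waha/   # uncomment once the remote exists
}
</code></pre>
<p>Wire it into cron with a dated filename and a retention sweep, and a lost volume costs you a restore instead of a rescan.</p>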
<h3 id="heading-7-add-a-live-agent-handoff">7. Add a Live-Agent Handoff</h3>
<p>Not every conversation should stay with the bot. When a user types <code>human</code> — or when your intent classifier can't answer confidently — hand off to a real agent.</p>
<p>Chatwoot is a solid open-source option: it has a dedicated WhatsApp channel, agent inbox, team assignment, and conversation history. The handoff is an n8n branch that stops processing bot replies and forwards the message stream to Chatwoot's API.</p>
<h2 id="heading-common-pitfalls">Common Pitfalls</h2>
<p>A few issues catch almost everyone on their first production deploy.</p>
<h3 id="heading-webhooks-timing-out">Webhooks Timing Out</h3>
<p>WAHA gives your webhook a few seconds to respond. If your n8n workflow is slow (calling an LLM, hitting a remote API), the webhook times out and WAHA retries, potentially causing duplicate replies.</p>
<p>Fix: make the webhook return <code>200</code> immediately and offload the slow work. In n8n, set the Webhook node's Response Mode to <em>Using Respond to Webhook Node</em>, add a Respond to Webhook node as the first step with a <code>200</code> and empty body, then do the heavy lifting after that.</p>
<h3 id="heading-duplicate-messages">Duplicate Messages</h3>
<p>WAHA delivers the same <code>message</code> event more than once in edge cases (phone comes back online, session reconnects). Store the <code>payload.id</code> somewhere — Redis, a database, or n8n's static data store — and drop any ID you've already processed.</p>
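<p>A minimal sketch of the idea in shell, using one marker file per message ID (the store directory is an assumption; for anything beyond a single box, swap in Redis or a database unique index):</p>
<pre><code class="language-bash"># Hypothetical sketch: return 0 if this message ID was already processed,
# otherwise record it and return 1. The store is a directory of marker files.
seen_before() {
  if [ -e "$2/$1" ]; then
    return 0   # duplicate: caller should drop the event
  fi
  touch "$2/$1"
  return 1     # first time this ID appears
}

# if seen_before "$msg_id" /var/lib/bot/seen-ids; then exit 0; fi
</code></pre>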
<h3 id="heading-messages-arriving-out-of-order">Messages Arriving Out of Order</h3>
<p>The webhook is async, and n8n may parallelize executions. If ordering matters — for example, in a multi-step conversation — key a queue by the sender's <code>chatId</code> and process each sender serially.</p>
<h3 id="heading-sessions-disconnecting-after-a-phone-restart">Sessions Disconnecting After a Phone Restart</h3>
<p>Normal WhatsApp Web behavior. WAHA auto-reconnects, but occasionally the linked-devices list needs a manual refresh. If a session refuses to come back, stop the WAHA container, delete that session's folder under <code>waha-sessions/</code>, start the container again, and rescan the QR.</p>
<h3 id="heading-your-number-gets-banned">Your Number Gets Banned</h3>
<p>The single biggest cause is rate: a new number blasting hundreds of messages an hour gets flagged fast. Warm up a number slowly — send a normal, human-like volume for the first week. Don't send to strangers unsolicited. Prefer inbound-driven replies over outbound pushes wherever you can.</p>
<h3 id="heading-the-wrong-chat-id-format">The Wrong Chat ID Format</h3>
<p>WhatsApp individual chats use <code>&lt;number&gt;@c.us</code> and groups use <code>&lt;groupId&gt;@g.us</code>. Don't include the <code>+</code> or spaces in the number. If WAHA returns a 404 when sending, the chat ID is almost always the problem.</p>
<h2 id="heading-where-to-go-next">Where to Go Next</h2>
<p>You now have the foundation. The same two-service stack supports almost any bot you can imagine — you're only limited by what you can build in an n8n workflow.</p>
<p>Some natural next steps:</p>
<ul>
<li><p><strong>Plug in AI replies:</strong> Add an OpenAI or Anthropic node after the Webhook, pass the user's message through it with a short system prompt, and send the response back through WAHA. Cap conversation length to prevent runaway token usage.</p>
</li>
<li><p><strong>Integrate a CRM:</strong> Look up the caller's <code>chatId</code> in HubSpot, Pipedrive, or your own database before deciding how to reply. Segment responses by customer tier.</p>
</li>
<li><p><strong>Send proactive notifications:</strong> Appointment reminders, shipping updates, payment receipts, abandoned-cart nudges. Keep the content transactional and expected — unsolicited marketing blasts are the fastest way to a ban.</p>
</li>
<li><p><strong>Log every conversation:</strong> Add a Postgres or Supabase node after the Webhook to persist messages for analytics and customer history. Your future self (and your support team) will thank you.</p>
</li>
<li><p><strong>Add media handling:</strong> WAHA exposes <code>sendImage</code>, <code>sendFile</code>, and <code>sendVoice</code> endpoints. Teach the bot to accept photos for support tickets, or send invoices as PDFs directly inside the chat.</p>
</li>
</ul>
<p>The WhatsApp layer stays the same. Everything interesting happens upstream in the workflow.</p>
<p><em>If you want to see production examples of n8n and WAHA running at scale — or you need a similar automation built for your business — I'm the founder of Achiya Automation, where we ship WhatsApp, n8n, and Chatwoot integrations. You can find more at</em> <a href="https://achiya-automation.com"><em>achiya-automation.com</em></a><em>.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build an AI-Powered Research Automation System with n8n, Groq, and Academic APIs ]]>
                </title>
                <description>
                    <![CDATA[ As a researcher and developer, I found myself spending hours manually searching academic databases, reading abstracts, and trying to synthesize findings across multiple sources. For my work on circula ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-an-ai-powered-research-automation-system-with-n8n-groq-and-academic-apis/</link>
                <guid isPermaLink="false">69b849372ad6ae5184dbb6b8</guid>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                    <category>
                        <![CDATA[ freeCodeCamp.org ]]>
                    </category>
                
                    <category>
                        <![CDATA[ General Programming ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Chidozie Managwu ]]>
                </dc:creator>
                <pubDate>Mon, 16 Mar 2026 18:17:27 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/d4660bc7-3f3c-4325-bee7-57770e821204.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>As a researcher and developer, I found myself spending hours manually searching academic databases, reading abstracts, and trying to synthesize findings across multiple sources.</p>
<p>For my work on circular economy and battery recycling, I needed a way to query multiple databases at once without the manual fatigue.</p>
<p>In this tutorial, you'll build an automated research pipeline using n8n that turns roughly six hours of manual literature review into a five-minute automated process.</p>
<p>This isn’t a “cool demo workflow.” It’s a production-minded pipeline with parallel collection, normalisation, deduplication, structured AI extraction, scoring, and practical error handling.</p>
<h3 id="heading-table-of-contents">Table of Contents</h3>
<ol>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-the-problem-research-takes-too-long">The Problem: Research Takes Too Long</a></p>
</li>
<li><p><a href="#heading-the-tech-stack">The Tech Stack</a></p>
</li>
<li><p><a href="#heading-the-project-structure-how-to-think-about-an-n8n-workflow-like-software">The Project Structure: How to Think About an n8n Workflow Like Software</a></p>
</li>
<li><p><a href="#heading-stage-1-centralized-configuration">Stage 1: Centralised Configuration</a></p>
</li>
<li><p><a href="#heading-stage-2-parallel-api-collection=with-failure-isolation">Stage 2: Parallel API Collection (With Failure Isolation)</a></p>
</li>
<li><p><a href="#heading-stage-3-normalisation-and-deduplication-doifirst-title-fallback">Stage 3: Normalisation and Deduplication (DOI-first, Title fallback)</a></p>
</li>
<li><p><a href="#heading-stage-4-aipowered-content-extraction-strict-json">Stage 4: AI-Powered Content Extraction (Strict JSON)</a></p>
</li>
<li><p><a href="#heading-stage-5-scoring-and-synthesis">Stage 5: Scoring and Synthesis</a></p>
</li>
<li><p><a href="#heading-beginner-friendly-evals-retrieval-and-extraction-qa">Beginner-Friendly Evals (Retrieval and Extraction QA)</a></p>
</li>
<li><p><a href="#heading-key-learnings-and-error-handling">Key Learnings and Error Handling</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>You don’t need to be a DevOps engineer to follow this, but you should have:</p>
<ul>
<li><p>Basic comfort with APIs and JSON (request/response payloads)</p>
</li>
<li><p>Familiarity with spreadsheets (Google Sheets basics)</p>
</li>
<li><p>Willingness to use a small amount of JavaScript inside n8n Function/Code nodes</p>
</li>
</ul>
<p>Access to:</p>
<ul>
<li><p>An n8n instance (self-hosted or cloud)</p>
</li>
<li><p>A Groq API key (or a compatible LLM provider)</p>
</li>
<li><p>Optional API keys, depending on the databases you use</p>
</li>
</ul>
<p>What you’ll build assumes:</p>
<ul>
<li><p>You’re extracting from metadata + abstracts (not downloading full PDFs).</p>
</li>
<li><p>You can accept that some sources will occasionally rate-limit or return partial results (and your workflow will be designed to survive this).</p>
</li>
</ul>
<h2 id="heading-the-problem-research-takes-too-long">The Problem: Research Takes Too Long</h2>
<p>Manual research is often a bottleneck for innovation. Before building this automation, my workflow involved searching multiple academic databases, scanning abstracts, and manually extracting key findings. This process was not only slow but also prone to human error and inconsistent note-taking.</p>
<p>The goal of this automation is to provide a “full-stack research assistant” that handles the heavy lifting of collecting candidate papers, removing duplicates, extracting consistent fields, scoring relevance and quality, and delivering a curated daily or weekly report, so you can spend your time on high-level synthesis rather than repetitive collection.</p>
<h2 id="heading-the-tech-stack">The Tech Stack</h2>
<p>This workflow leverages a combination of automation tooling, high-speed LLM inference, and academic metadata providers.</p>
<table>
<thead>
<tr>
<th>Tool</th>
<th>Purpose</th>
</tr>
</thead>
<tbody><tr>
<td>n8n</td>
<td>The workflow engine that orchestrates all steps</td>
</tr>
<tr>
<td>Groq</td>
<td>Runs a fast LLM (for example, Llama 3.3 70B) for structured extraction/synthesis</td>
</tr>
<tr>
<td>Semantic Scholar / OpenAlex</td>
<td>Broad academic coverage for metadata, abstracts, citations</td>
</tr>
<tr>
<td>arXiv / PubMed</td>
<td>Strong specialised coverage (preprints, life sciences)</td>
</tr>
<tr>
<td>Google Sheets</td>
<td>A lightweight “research database” for storage + history</td>
</tr>
</tbody></table>
<p>Note: coverage varies by provider. Some APIs return abstracts reliably, while others may omit them. Your pipeline should treat missing abstracts as a normal case, not a failure.</p>
<h2 id="heading-the-project-structure-how-to-think-about-an-n8n-workflow-like-software">The Project Structure: How to Think About an n8n Workflow Like Software</h2>
<p>While n8n is a visual tool, it helps to design your workflow as modular stages to avoid the “spaghetti workflow” problem.</p>
<pre><code class="language-text">.
├── configuration/         # Keywords, thresholds, limits, date filters
├── collectors/            # Parallel HTTP request nodes (multiple sources)
├── processing/            # Normalization + deduplication code nodes
├── extraction/            # LLM extraction nodes (strict JSON)
├── scoring/               # Relevance + quality scoring + filtering
└── delivery/              # Google Sheets + email/HTML report
</code></pre>
<p>Design principle: each stage should produce a clean, predictable output shape that the next stage can rely on.</p>
<h2 id="heading-stage-1-centralised-configuration">Stage 1: Centralised Configuration</h2>
<p>Instead of hardcoding search parameters (keywords, min year, citation thresholds) across multiple nodes, use one configuration node to define workflow variables.</p>
<p>This matters for maintainability (change a value once, not in ten nodes), reusability (repurpose the entire pipeline by swapping one config object), and debuggability (log the config at the start of each run so you can reproduce results).</p>
<p>Use a Set node, or a Code node returning JSON like this:</p>
<pre><code class="language-json">{
  "keywords": "circular economy battery recycling remanufacturing",
  "min_year": 2020,
  "max_results_per_source": 10,
  "min_citations": 2,
  "relevance_threshold": 15,
  "batch_size": 10
}
</code></pre>
<p>Tip: keep numeric fields as numbers (not strings) to avoid scoring bugs later.</p>
<h2 id="heading-stage-2-parallel-api-collection-with-failure-isolation">Stage 2: Parallel API Collection (With Failure Isolation)</h2>
<p>Your workflow should query multiple sources simultaneously. In n8n, you can branch from your configuration node into multiple HTTP Request nodes, and then merge results later.</p>
<p>The production mindset here is simple: APIs fail. Rate limits happen. Providers return partial data. The key is to prevent one failing collector from crashing the whole run.</p>
<p>To implement this, on each HTTP Request node, enable <strong>Continue On Fail</strong> (or the equivalent “don’t stop workflow” behaviour). Then, in the normalisation stage, treat missing or failed outputs as empty arrays so downstream stages still run.</p>
<p>In practice, it also helps to set explicit timeouts and add a small retry policy (one to two retries) for transient failures. “Good” looks like this: if two out of five sources fail, you still produce a useful report from the remaining three, and you log which sources failed so you can investigate later.</p>
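<p>Here’s a minimal sketch of that defensive merge, assuming each collector branch tags its items with a <code>source</code> field (via a Set node) and that a node with Continue On Fail enabled emits an <code>error</code> field instead of <code>data</code>; adjust the field names to whatever your HTTP nodes actually return:</p>
<pre><code class="language-javascript">// Flatten collector outputs, skipping sources that failed or returned nothing.
// `items` is the merged output of the parallel HTTP Request nodes.
function safeCollect(items) {
  const papers = [];
  const failedSources = [];
  for (const item of items) {
    if (item.json.error) {
      // Continue On Fail leaves an error payload instead of results:
      // record the source so the final report can say what was missing.
      failedSources.push(item.json.source || "unknown");
      continue;
    }
    // A missing or empty payload is treated as an empty result set.
    papers.push(...(item.json.data || []));
  }
  return { papers, failedSources };
}
</code></pre>
<p>Downstream stages then run on <code>papers</code> even when some sources failed, and <code>failedSources</code> goes into the run log.</p>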
<h2 id="heading-stage-3-normalisation-and-deduplication-doi-first-title-fallback">Stage 3: Normalisation and Deduplication (DOI-first, Title fallback)</h2>
<p>Each academic API returns different field names and shapes. One might use <code>title</code>, another <code>display_name</code>, another <code>paper_title</code>. Your next stage should normalise all inputs into one schema.</p>
<h3 id="heading-target-normalised-schema">Target normalised schema</h3>
<p>Here’s a simple baseline schema (expand later as needed):</p>
<pre><code class="language-json">{
  "title": "string",
  "abstract": "string|null",
  "doi": "string|null",
  "year": 2024,
  "citations": 12,
  "url": "string|null",
  "source": "Semantic Scholar|OpenAlex|arXiv|PubMed"
}
</code></pre>
<h3 id="heading-what-deduping-by-doi-means-and-what-a-doi-is">What deduping by DOI means (and what a DOI is)</h3>
<p>A <strong>DOI</strong> (Digital Object Identifier) is a unique, persistent identifier assigned to many scholarly publications. If a paper has a DOI, that DOI functions like a stable ID: the same paper may appear in multiple databases with slightly different metadata, but the DOI should remain consistent.</p>
<p>So, <strong>deduping by DOI</strong> means: if two records share the same DOI, treat them as the same paper and keep only one.</p>
<p>When a DOI is missing (which is common for some preprints and some API responses), the fallback is to dedupe using a normalised title key, lowercased, trimmed, punctuation stripped, and whitespace collapsed. It’s not as perfect as DOI-based matching, but it’s a strong pragmatic backup.</p>
<h3 id="heading-what-normalise-into-a-unified-object-means-whats-happening-in-the-code">What “normalise into a unified object” means (what’s happening in the code)</h3>
<p>“Normalise into a unified object” simply means converting every provider’s raw response into the same predictable shape (the schema above). Once everything looks the same, downstream steps, such as deduplication, scoring, AI extraction, and storage, become straightforward because they don’t need provider-specific logic.</p>
<p>In the code below, that’s what the <code>normalized</code> object is: it maps Semantic Scholar’s fields (<code>paper.title</code>, <code>paper.externalIds.DOI</code>, <code>paper.citationCount</code>) into your standard fields (<code>title</code>, <code>doi</code>, <code>citations</code>, etc.). After that, the workflow generates a dedupe key (<code>doi:...</code> if DOI exists, otherwise <code>title:...</code>) and uses a <code>Set</code> to keep only the first occurrence.</p>
<h4 id="heading-example-n8n-code-node-normalisation-dedupe-pattern">Example n8n Code Node (Normalisation + Dedupe Pattern)</h4>
<pre><code class="language-javascript">const itemsIn = $input.all();

const seen = new Set();
const results = [];

function titleKey(t) {
  return (t || "")
    .toLowerCase()
    .replace(/[\W_]+/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

for (const item of itemsIn) {
  // Example: Semantic Scholar response shape
  const papers = item.json?.data || [];

  for (const paper of papers) {
    // "Normalize into a unified object":
    // take the provider-specific fields and map them into our standard schema.
    const normalized = {
      title: paper.title || null,
      abstract: paper.abstract || null,
      doi: paper.externalIds?.DOI || null,
      year: paper.year || null,
      citations: paper.citationCount || 0,
      url: paper.url || null,
      source: "Semantic Scholar",
    };

    if (!normalized.title) continue;

    // Dedupe key: DOI is strongest; title is fallback
    const key = normalized.doi
      ? `doi:${normalized.doi.toLowerCase()}`
      : `title:${titleKey(normalized.title)}`;

    if (seen.has(key)) continue;
    seen.add(key);

    results.push(normalized);
  }
}

return results.map(r =&gt; ({ json: r }));
</code></pre>
<p>Production-minded note: keep a field like <code>source</code> so you can debug where bad metadata is coming from later.</p>
<h2 id="heading-stage-4-ai-powered-content-extraction-strict-json">Stage 4: AI-Powered Content Extraction (Strict JSON)</h2>
<p>Once you have a deduplicated list of papers, you can send each paper (or a small batch) to Groq for structured extraction.</p>
<h3 id="heading-why-structured-output-matters">Why structured output matters</h3>
<p>If your LLM returns narrative text instead of JSON, misses fields, or emits malformed JSON, your workflow breaks downstream. In a production workflow, that’s not a rare edge case; it’s something you should expect and design around.</p>
<p>That’s why you’ll use strict schema prompting <em>and</em> validate responses downstream.</p>
<h3 id="heading-system-prompt-vs-user-prompt-and-how-to-compose-them">System prompt vs user prompt (and how to compose them)</h3>
<p>A helpful way to think about prompts in production is:</p>
<ul>
<li><p>The <strong>system prompt</strong> defines the <em>non-negotiable contract</em>: output format, allowed keys, no commentary, and what to do in uncertain cases. This is where you say “return ONLY valid JSON” and “no extra keys.”</p>
</li>
<li><p>The <strong>user prompt</strong> provides the <em>variable data</em> for this specific request: title, year, citations, abstract, and the exact schema you want filled.</p>
</li>
</ul>
<p>Composing them this way keeps your workflow stable. The system prompt stays mostly constant (your formatting contract), while the user prompt changes per paper (your payload). It also makes debugging easier: if outputs start failing, you can adjust the system constraints without rewriting every payload template.</p>
<h3 id="heading-suggested-extraction-schema">Suggested extraction schema</h3>
<p>Extract only what you can support from abstract-level data:</p>
<ul>
<li><p><code>research_question</code></p>
</li>
<li><p><code>methodology</code></p>
</li>
<li><p><code>key_findings</code></p>
</li>
<li><p><code>limitations</code></p>
</li>
<li><p><code>notes</code> (for missing abstract / ambiguity)</p>
</li>
</ul>
<h3 id="heading-example-prompt-system-user">Example prompt (system + user)</h3>
<p><strong>System:</strong></p>
<p>You are a research extraction engine. You must return ONLY valid JSON.<br>No markdown. No extra keys. No commentary.<br>If the abstract is missing or too vague, set fields to null and include a reason in "notes".</p>
<p><strong>User:</strong></p>
<p>Extract structured fields from this paper.</p>
<p>TITLE: {{title}}<br>YEAR: {{year}}<br>CITATIONS: {{citations}}<br>ABSTRACT: {{abstract}}</p>
<p>Return JSON with keys:<br>research_question (string|null)<br>methodology (string|null)<br>key_findings (array of strings)<br>limitations (array of strings)<br>notes (string)</p>
<p>Model settings: keep temperature low (around 0.2–0.3) and keep responses short and structured.</p>
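<p>Put together, the request body sent from an n8n HTTP Request node might look like the sketch below. It targets Groq’s OpenAI-compatible chat completions endpoint; the model id and <code>response_format</code> support are assumptions to verify against your provider’s current docs.</p>
<pre><code class="language-json">{
  "model": "llama-3.3-70b-versatile",
  "temperature": 0.2,
  "response_format": { "type": "json_object" },
  "messages": [
    {
      "role": "system",
      "content": "You are a research extraction engine. You must return ONLY valid JSON. No markdown. No extra keys. No commentary."
    },
    {
      "role": "user",
      "content": "Extract structured fields from this paper. TITLE: {{title}} YEAR: {{year}} CITATIONS: {{citations}} ABSTRACT: {{abstract}}"
    }
  ]
}
</code></pre>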
<h3 id="heading-batch-processing-to-avoid-timeouts">Batch processing to avoid timeouts</h3>
<p>Instead of sending 50 papers at once, process them in batches (for example, 10). This reduces latency spikes, failure blast radius, and cost surprises. Smaller batches also make it easier to retry only the failing chunk rather than re-running everything.</p>
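<p>If you’d rather batch inside a Code node than use the built-in Split in Batches node, the logic is tiny. A minimal sketch:</p>
<pre><code class="language-javascript">// Split a list of papers into fixed-size batches so each LLM call stays small.
// Recursion keeps this free of index bookkeeping; for very large lists, a loop
// over slices does the same job.
function chunk(items, size) {
  if (items.length === 0) return [];
  return [items.slice(0, size)].concat(chunk(items.slice(size), size));
}
</code></pre>
<p>With <code>batch_size: 10</code> from the Stage 1 config, 25 papers become three batches of 10, 10, and 5, and a failed batch can be retried on its own.</p>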
<h2 id="heading-stage-5-scoring-and-synthesis">Stage 5: Scoring and Synthesis</h2>
<p>Not every retrieved paper is worth your time. Without scoring, your pipeline becomes a firehose: you’ve automated collection, but you still have to manually decide what to read. Scoring is what turns “a big list of results” into a shortlist you can trust.</p>
<p>I recommend computing two signals:</p>
<ul>
<li><p><strong>Relevance</strong>: Is this actually about your research question?</p>
</li>
<li><p><strong>Quality/priority</strong>: If it’s relevant, is it worth reading first?</p>
</li>
</ul>
<p>For <strong>relevance</strong>, keep it simple and explainable. Count keyword hits in the title and abstract (and optionally in extracted <code>key_findings</code>). Title matches should be weighted higher because titles are deliberately compact summaries. Abstract hits are useful too, but cap them so long abstracts don’t dominate the score.</p>
<p>For <strong>quality/priority</strong>, use lightweight metadata you already have. Recency is a strong signal in fast-moving areas, and citations can help, but they should be treated as a weak signal (and capped) so newer high-value papers aren’t unfairly penalised.</p>
<p>A solid first scoring model is: add a title bonus, add a capped abstract bonus, add a capped citations bonus, and add a small recency bonus for papers from the last two years. Then filter using the <code>relevance_threshold</code> value from the Stage 1 config. The advantage of this approach is that it’s easy to debug and tune: you can always explain why a paper passed or failed.</p>
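<p>That model is small enough to live in one Code node. The weights below are illustrative assumptions, not tuned values; adjust them against your own results:</p>
<pre><code class="language-javascript">// Illustrative scoring: title hits weigh 3x, abstract hits 1x (capped),
// citations are a weak capped signal, plus a small recency bonus.
function scorePaper(paper, cfg) {
  const keywords = cfg.keywords.toLowerCase().split(/\s+/);
  const title = (paper.title || "").toLowerCase();
  const abstract = (paper.abstract || "").toLowerCase();

  const titleHits = keywords.filter(function (k) { return title.includes(k); }).length;
  const abstractHits = keywords.filter(function (k) { return abstract.includes(k); }).length;

  // Recency: flat bonus for papers from the current or previous year.
  const thisYear = new Date().getFullYear();
  const recencyBonus = [thisYear, thisYear - 1].includes(paper.year) ? 2 : 0;

  return (
    titleHits * 3 +
    Math.min(abstractHits, 5) +                  // cap so long abstracts don't dominate
    Math.min(paper.citations || 0, 10) * 0.5 +   // citations: weak, capped signal
    recencyBonus
  );
}
</code></pre>
<p>Because every term is additive and capped, you can log the individual components per paper to see exactly why it passed or failed the threshold.</p>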
<p>Once you’ve filtered down to your “gold” set, synthesis becomes safer and more useful. Write one row per accepted paper to Google Sheets, then generate a daily/weekly HTML summary (for example, top 5 papers with 1–2 key findings each) and include links so you can verify quickly.</p>
<h2 id="heading-beginner-friendly-evals-retrieval-and-extraction-qa">Beginner-Friendly Evals: Retrieval and Extraction QA</h2>
<p>AI workflows regress silently. A prompt tweak, a model update, or an API schema change can break extraction without throwing an obvious error. Adding lightweight evals is the difference between “it worked last week” and “it’s reliable.”</p>
<p>The goal here isn’t to build a full evaluation framework. It’s to add small, cheap checks that catch the most common failure modes:</p>
<ul>
<li><p>Are collectors still returning results?</p>
</li>
<li><p>Are we actually removing duplicates?</p>
</li>
<li><p>Is the LLM returning valid JSON with the keys we require?</p>
</li>
</ul>
<h3 id="heading-what-it-looks-like-in-n8n-a-concrete-example">What it looks like in n8n (a concrete example)</h3>
<p>A simple implementation is to add an <strong>“Assertions” Code node</strong> immediately after your extraction step, plus (optionally) another one after normalisation/deduplication.</p>
<p>At a high level, the workflow section looks like:</p>
<ol>
<li><p>Collectors (parallel HTTP Request nodes)</p>
</li>
<li><p>Merge results</p>
</li>
<li><p>Normalise + dedupe (Code node)</p>
</li>
<li><p>Split in Batches (optional)</p>
</li>
<li><p>LLM extraction (Groq/OpenAI-compatible node)</p>
</li>
<li><p><strong>Assertions (Code node)</strong></p>
</li>
<li><p>If node (pass/fail)</p>
</li>
<li><p>Delivery (Sheets + email)</p>
</li>
</ol>
<h3 id="heading-example-assertions-code-node-after-extraction">Example: Assertions code node after extraction</h3>
<p>This code node assumes each item is a paper with:</p>
<ul>
<li><p><code>title</code>, <code>abstract</code> in the normalised fields, and</p>
</li>
<li><p>an <code>extraction</code> field (or whatever you name it) containing the LLM response as an object or JSON string.</p>
</li>
</ul>
<p>Adapt the field name to match your actual node output, but the pattern is the same: parse, validate required keys, compute percentages, then decide whether to fail or warn.</p>
<pre><code class="language-javascript">const items = $input.all();

let total = items.length;
let withTitle = 0;
let withAbstract = 0;

let parseOk = 0;
let schemaOk = 0;

const requiredKeys = [
  "research_question",
  "methodology",
  "key_findings",
  "limitations",
  "notes",
];

const failures = [];

for (let i = 0; i &lt; items.length; i++) {
  const p = items[i].json;

  if (p.title &amp;&amp; String(p.title).trim().length &gt; 0) withTitle++;
  if (p.abstract &amp;&amp; String(p.abstract).trim().length &gt; 0) withAbstract++;

  // Adjust this depending on where you store the model output:
  const raw = p.extraction ?? p.llm ?? p.model_output;

  let obj = null;
  try {
    obj = typeof raw === "string" ? JSON.parse(raw) : raw;
    parseOk++;
  } catch (e) {
    failures.push({ index: i, title: p.title || null, reason: "JSON parse failed" });
    continue;
  }

  const hasAllKeys = requiredKeys.every(k =&gt; Object.prototype.hasOwnProperty.call(obj, k));
  if (!hasAllKeys) {
    failures.push({ index: i, title: p.title || null, reason: "Missing required keys" });
    continue;
  }

  // Optional: ensure arrays are arrays
  const arraysOk = Array.isArray(obj.key_findings) &amp;&amp; Array.isArray(obj.limitations);
  if (!arraysOk) {
    failures.push({ index: i, title: p.title || null, reason: "key_findings/limitations not arrays" });
    continue;
  }

  schemaOk++;
}

const pct = (n) =&gt; (total === 0 ? 0 : Math.round((n / total) * 100));

const report = {
  total_papers: total,
  pct_with_title: pct(withTitle),
  pct_with_abstract: pct(withAbstract),
  pct_extraction_json_parse_ok: pct(parseOk),
  pct_extraction_schema_ok: pct(schemaOk),
  failures_sample: failures.slice(0, 5),
};

// Decide pass/fail thresholds
const HARD_FAIL_PARSE_BELOW = 90;
const HARD_FAIL_SCHEMA_BELOW = 85;

const shouldFail =
  report.pct_extraction_json_parse_ok &lt; HARD_FAIL_PARSE_BELOW ||
  report.pct_extraction_schema_ok &lt; HARD_FAIL_SCHEMA_BELOW;

return [
  {
    json: {
      eval_report: report,
      shouldFail,
    },
  },
];
</code></pre>
<p>Then add an <strong>If node</strong>:</p>
<ul>
<li><p>If <code>shouldFail</code> is true, then route to an “Alert/Stop” branch (Slack/email/log) and optionally stop the workflow.</p>
</li>
<li><p>If false, then continue to the delivery stage.</p>
</li>
</ul>
<p>This is the automation equivalent of unit tests: small, cheap, and extremely effective. It also gives you a concrete paper trail when something changes upstream.</p>
<h2 id="heading-key-learnings-and-error-handling">Key Learnings and Error Handling</h2>
<p>Building this automation taught me that the best workflows are designed for failure.</p>
<p>First, error resilience is not optional. Never let one failing API crash the workflow. Use “Continue On Fail” on your HTTP nodes, merge partial results, and log which sources failed in your final report so you can debug without losing an entire run.</p>
<p>Second, batching is your friend. Process papers in batches (often 5–15) to reduce timeouts and cost spikes. Keep LLM payloads small and focused on what you actually need (metadata + abstract), and retry transient failures once rather than repeatedly hammering the model or API.</p>
<p>Third, structured prompting is what makes AI reliable in automation. A strict JSON schema is the difference between a workflow that runs unattended and one that breaks randomly. Keep temperature low, enforce the schema in the system prompt, and validate everything downstream with simple parse-and-assert checks.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>A good research pipeline doesn’t just retrieve papers – it turns scattered results into a consistent, deduplicated, scored, and review-ready shortlist you can trust.</p>
<p>By treating your n8n workflow like software (modular stages, strict contracts between steps, and lightweight eval checks), you can turn hours of manual literature review into a fast, repeatable process that survives real-world API failures and model quirks.</p>
<p>If you build this with good defaults (failure isolation, batching, normalisation, strict JSON extraction, and simple scoring), you end up with something you can run daily or weekly and actually rely on without the manual fatigue.</p>
<h3 id="heading-about-me">About Me</h3>
<p>I am Chidozie Managwu, an award-winning AI Product Architect and founder focused on helping global tech talent build real, production-ready skills. I contribute to global AI initiatives as a GAFAI Delegate and lead the AI Titans Network, a community for developers learning how to ship AI products.</p>
<p>My work has been recognised with the Global Tech Hero award and featured on platforms like HackerNoon.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build an Autonomous AI Agent with n8n and Decapod ]]>
                </title>
                <description>
                    <![CDATA[ I tried out Open Claw two weeks ago. I loved the potential, but did not enjoy the tool itself. I, like many others, struggled with the installation process. And working from Linux, the Mac specific or ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-build-an-autonomous-ai-agent-with-n8n-and-decapod/</link>
                <guid isPermaLink="false">69b1ce1f6c896b0519c1c8f5</guid>
                
                    <category>
                        <![CDATA[ agentic AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Lee Nathan ]]>
                </dc:creator>
                <pubDate>Wed, 11 Mar 2026 20:18:39 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/d27ea304-5db6-4172-823d-3f6aa0612d38.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>I tried out Open Claw two weeks ago. I loved the potential, but did not enjoy the tool itself.</p>
<p>I, like many others, struggled with the installation process. And working from Linux, the Mac-specific orientation added extra pitfalls. It wasn't always clear whether configuration and management should happen through the docs, the CLI, or the interface.</p>
<p>I found the UI unintuitive, and it left me wondering whether it was just a dev placeholder. The color choice in particular was harsh: all the red tricked the eye and made white text appear green. It also made everything look like an error message.</p>
<p>I couldn't make heads or tails of the organization and structure. Workspaces, agents, and sessions are all terms I'm familiar with and understand. But the way Open Claw implements them made no sense to me.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/0816135a-a80f-4f56-819a-9c82920f0245.png" alt="A simple n8n workflow that clearly shows how telegram can be connected to an AI agent." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>Open Claw started as a way to connect a chat tool to an AI. I did that eight months ago with n8n. It's literally only a few nodes. It was so easy that I didn't think anything of it. In my opinion, Open Claw isn’t actually all that special. There’s no part of it that stands out as unique, except for the approach. It’s the Flappy Bird of the agentic AI world.</p>
<p>So I set out to make my own. And within a few hours, I'd whipped up a simple working prototype vibe-coded with Python and connected to Open WebUI (OWUI).</p>
<p>But I wanted to see what prompt OWUI was sending the agent, exactly. Now, if I was actually a Python guy, I would have done some console output. But instead, I went for my favorite tool: n8n (a powerful low-code automation system). And that's where things got interesting.</p>
<h2 id="heading-about-this-handbook">About This Handbook</h2>
<p>This handbook will introduce you to agentic AI creation using a hands-on approach and a starter project I created called Decapod.</p>
<p>Decapod is not a self-contained SaaS offering. There is no part of it that is black-boxed and unavailable to hack on. Decapod is a collection of <code>docker-compose.yml</code> files, scripts, AI agent prompts, and n8n workflows that work together to give you a leg up on your path to building your own agentic AI empire.</p>
<p>Concepts and technologies you'll be introduced to and using:</p>
<ul>
<li><p>Agentic AI with tools and skills</p>
</li>
<li><p>Docker containers with Docker Compose</p>
</li>
<li><p>Open WebUI</p>
</li>
<li><p>n8n</p>
</li>
<li><p>S3 and MinIO</p>
</li>
<li><p>Caddy</p>
</li>
<li><p>Postgres</p>
</li>
</ul>
<p>For a list of required skills, services, and tools, please check out the "Requirements and Processes" section.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-decapod-the-diyers-dream-agent">Decapod - The DIYer's Dream Agent</a></p>
</li>
<li><p><a href="#heading-how-decapod-works">How Decapod Works</a></p>
<ul>
<li><p><a href="#heading-core-engine">Core Engine</a></p>
</li>
<li><p><a href="#heading-supakitchen-supabase-on-a-budget">Supakitchen - Supabase on a Budget</a></p>
</li>
<li><p><a href="#heading-open-webui-ai-chat-with-all-the-bells-and-whistles">Open WebUI - AI Chat With All the Bells and Whistles</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-requirements-and-processes-tools-i-use-and-recommend">Requirements and Processes - Tools I Use and Recommend</a></p>
<ul>
<li><a href="#heading-the-checklist">The Checklist</a></li>
</ul>
</li>
<li><p><a href="#heading-assembling-the-dream-team-ikea-style">Assembling the Dream Team - Ikea Style</a></p>
<ul>
<li><p><a href="#heading-accessing-your-vps-with-cursor-and-ssh">Accessing Your VPS With Cursor and SSH</a></p>
</li>
<li><p><a href="#heading-installing-and-configuring-the-docker-containers">Installing and Configuring the Docker Containers</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-configuration-and-wiring">Configuration and Wiring</a></p>
<ul>
<li><p><a href="#heading-initiate-the-database">Initiate the Database</a></p>
</li>
<li><p><a href="#heading-a-little-minio">A Little MinIO</a></p>
</li>
<li><p><a href="#heading-adding-the-workflows">Adding the Workflows</a></p>
</li>
<li><p><a href="#heading-getting-started-with-n8n">Getting Started With n8n</a></p>
</li>
<li><p><a href="#heading-now-get-owui-to-talk-to-decapod">Now, Get OWUI to Talk to Decapod</a></p>
</li>
<li><p><a href="#heading-there-was-supposed-to-be-an-earth-shattering-kaboom">There Was Supposed to Be an Earth Shattering Kaboom</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-the-ever-present-hello-world">The Ever-Present "Hello World"</a></p>
</li>
<li><p><a href="#heading-into-the-future">Into the Future!</a></p>
<ul>
<li><p><a href="#heading-a-work-in-progress">A Work in Progress</a></p>
</li>
<li><p><a href="#heading-adding-your-own-skills-limitless-potential">Adding Your Own Skills - Limitless Potential</a></p>
</li>
<li><p><a href="#heading-future-plans">Future Plans</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-got-questions-meet-captain-finn">Got Questions? Meet Captain Finn!</a></p>
</li>
</ul>
<h2 id="heading-decapod-the-diyers-dream-agent">Decapod – The DIYer's Dream Agent</h2>
<p>I'll be honest. I'd never even considered the security issues with Open Claw at first. But they're enormous! Let's open a giant hole in our server and give a fledgling alien intelligence root access and all of our API keys. What could possibly go wrong?</p>
<p>Decapod isn't a monolithic app. It's a collection of tools and n8n workflows that give you complete control over your agent and its tools. It's a framework to give <a href="https://monday.com/appdeveloper/blog/citizen-developer/">citizen developers</a> a leg up.</p>
<p>By switching to n8n, I accidentally solved a ton of issues and made a far superior (in my opinion) project:</p>
<ul>
<li><p>Double (or triple if you choose to host on a VPS) sandboxed security. My agent lives inside of n8n, inside of a Docker container, inside of a VPS.</p>
</li>
<li><p>The agent never sees a single API key and never needs to know exactly how you're connecting services. Credentials are handled by n8n.</p>
</li>
<li><p>Universal access – I prefer OWUI. But literally anything that can connect to a standard OpenAI API endpoint can connect to Decapod.</p>
</li>
<li><p>Over 1,000 integrations – What n8n does best is connecting any API to any other API via drag-and-drop nodes. And there are more than <a href="https://community.n8n.io/t/master-list-of-every-n8n-node/155146">1,000 of them</a>.</p>
</li>
<li><p>No more sketchy skills – Decapod uses skills, but they have to actually be connected to n8n workflows and nodes to work.</p>
</li>
</ul>
<p>More problems Decapod solves:</p>
<ul>
<li><p>Fewer tokens burned – Decapod maintains a clean boundary between what's best handled with code/logic and what's best handled by AI.</p>
</li>
<li><p>No endless loops and hung jobs – Decapod uses a jobs and tasks system that the AI can manage. So if it sees that a task has failed, it can change tasks or suspend the job.</p>
</li>
<li><p>HITL (Human In The Loop) – You can add a HITL sub-workflow before any AI skill to grant or deny it permission to proceed.</p>
</li>
<li><p>An MVP you can trust – The core Decapod system is just an MVP. But it's built exclusively on mature, open source, enterprise-ready solutions: n8n, Open WebUI, Docker, Caddy, Postgres, and MinIO.</p>
</li>
</ul>
<h2 id="heading-how-decapod-works">How Decapod Works</h2>
<p>Decapod is middleware that presents itself as an OpenAI-compatible API. It intercepts each API call, does the agent work, and talks to the real model API behind the scenes.</p>
<p>The OpenAI API standard is the most widely used in the industry. Almost every tool, like Open WebUI, Zed, and Obsidian, has a way to connect to the OpenAI standard. So those tools can also connect to Decapod.</p>
<p>Decapod itself can connect to any API and pass available models through to other tools. I strongly prefer and recommend OpenRouter. OpenRouter also uses the OpenAI standard, but lets you connect to hundreds of mainstream and indie models under the same pricing system. Decapod is configured to work with OR out of the box.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/da54b254-62b5-4e4a-b5d3-b1de7dd5f0fe.png" alt="An n8n workflow with advanced routing." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>This is an image of the Decapod agent tool router – one of the key n8n workflows in Decapod.</p>
<h3 id="heading-core-engine">Core Engine</h3>
<p>Decapod consists of an agent with tools and skills. By tools, I mean the agentic tools that an AI can access to perform tasks as part of the API. And by skills, I'm referring to <a href="https://agentskills.io/home">Anthropic's Agent Skills standard</a>. It's the same skills standard used by Open Claw.</p>
<p>The Decapod agent has a limited, immutable set of tools for managing Decapod's state and job queue. One tool is used to call skills. Skills are dynamic and you can add as many as you like mid-flight.</p>
<p>Each skill consists of core instructions, followed by JSON specs. The agent builds a skill request based on the JSON and calls the use_skill tool to have it executed. Then Decapod calls a sub-workflow with a name that matches the skill and sends it the JSON.</p>
<p>One skill = one sub-workflow. JSON specs = sub-workflow's expected input.</p>
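<p>To make that concrete, here's a hypothetical skill request of the kind the agent might assemble and hand to the use_skill tool. The field names here are purely illustrative – the real spec lives in each skill's JSON definition:</p>
<pre><code class="language-json">{
  "skill": "research-recipes",
  "input": {
    "query": "pizza",
    "max_results": 3
  }
}
</code></pre>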
<p>When Decapod receives a user message, it passes it to the agent. If it's just a message, the agent responds. If it's a call to action, the agent picks a tool and gets to work.</p>
<p>Decapod loops through each job in the queue, handling the agent's tool calls and passing it back the results. When the agent is done, it concludes the job and stops sending tool calls. The final message is passed back to the user.</p>
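<p>Conceptually, the whole loop looks something like this (pseudocode, not the actual workflow JSON):</p>
<pre><code>for each active job (by priority, then First In First Out):
    send the chat history to the agent
    while the agent responds with a tool call:
        run the tool (use_skill calls the matching sub-workflow)
        append the result to the chat history
    # no more tool calls – the agent concludes the job
return the agent's final message to the user
</code></pre>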
<h3 id="heading-supakitchen-supabase-on-a-budget">Supakitchen – Supabase on a Budget</h3>
<p>I'm a huge fan of Supabase. It's all the fun of Firebase, except with data normalization. But I'm self-hosting Decapod because paying $20 per month for each of five or more services doesn't sit right with me.</p>
<p>As a mad scientist, I like to be able to try different tools without dealing with the freemium hoops. So I'm running Decapod on a Hetzner VPS with 8 gigs of RAM for about $18 per month. Those 8 gigs go really far in the self-hosted world, but Supabase is heavy.</p>
<p>What I really wanted was to give my agent file access and a database. I accomplished that with MinIO and Postgres. No real-time data, but my agent is async anyway. And agent authentication is done through n8n. So it's good enough.</p>
<p>But you do you! Decapod can work with any S3 compatible file store and any Postgres database. So if you want to use Supabase instead, go for it!</p>
<h3 id="heading-open-webui-ai-chat-with-all-the-bells-and-whistles">Open WebUI – AI Chat With All the Bells and Whistles</h3>
<p>You can use chat tools, like Discord, Telegram, Slack, and others, to chat with your AI easily enough. But if you want multiple sessions or to use different models, it can be tricky.</p>
<p>The easiest tool to set up and work with, by far, is Telegram. You get chat, UI elements, and even embedded apps without having to host your own server, like you do with Discord. I once used it to create a HITL lead qualification tool in a few hours.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/2f6501bc-b72e-4b69-bc7d-662d91d8746f.jpg" alt="A Telegram session showing buttons and commands for a lead gen system." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>BUT! While Telegram and friends do get the job done, if you want a new session you have to create a new bot for each and every one. If you want to switch models, you need to add /slash commands. If you want context management, you have to handle that server side.</p>
<p>That's why I prefer Open WebUI. OWUI gives you everything you expect from all of the best mainstream AI offerings, but with a direct tap to the API.</p>
<ul>
<li><p>It works great on browser and mobile as a progressive web app (PWA).</p>
</li>
<li><p>You can mod it with Python.</p>
</li>
<li><p>It has many ways to manage and supply context, including nested projects/folders and RAG support.</p>
</li>
<li><p>You can collaboratively work on notes with AI.</p>
</li>
</ul>
<p>Those are a few of my favorite features, but there are <a href="https://docs.openwebui.com/features/">so many more</a>. Why reinvent the wheel when the absolute best solution already exists?</p>
<h2 id="heading-requirements-and-processes-tools-i-use-and-recommend">Requirements and Processes – Tools I Use and Recommend</h2>
<p>Welcome to my lab-or-a-tory. We're out there on the fringes of agentic AI now. Doing weird experiments by stitching together pieces and parts. Let me show you how I work and tell you where you can and can't stray from my process.</p>
<p>Decapod is a finished MVP and should work right out of the box with minimal headache. But it doesn't have more than a few skills yet. So you'll need to build your own until it takes off. Fortunately, your Decapod agent can help.</p>
<h3 id="heading-the-checklist">The Checklist</h3>
<p><strong>Skills:</strong></p>
<ul>
<li><p>✅ A generalist's mindset, problem-solving skills, and a sense of adventure.</p>
<ul>
<li><p>You don't have to be an expert at anything to install Decapod. I'm not, and I built it.</p>
</li>
<li><p>But you do have to be comfortable with many different technologies.</p>
</li>
</ul>
</li>
<li><p>✅ The command line, Docker, and probably Node. Decapod is self-hosted. So you'll need to get your hands a bit dirty.</p>
</li>
<li><p>✅ The ability to read and write a little JavaScript. This helps a lot with n8n code nodes to give it more utility.</p>
</li>
<li><p>✅ Familiarity with JSON and APIs. Everything in n8n is about passing JSON from node to node. And n8n is nothing if not a universal API connector.</p>
</li>
</ul>
<p><strong>Services:</strong></p>
<ul>
<li><p>✅ A domain name with DNS access.</p>
<ul>
<li><p>This is critical for n8n to work properly due to CORS and security issues.</p>
</li>
<li><p>Also, the OWUI PWA doesn't work when hosted through an IP. It's just a web page at that point.</p>
</li>
<li><p>Plus, it's just better for security overall with https support.</p>
</li>
<li><p>If cost is an issue, you can get an <a href="https://gen.xyz/">all-digit domain name from gen.xyz</a> for $0.99. Seems legit, but I haven't tried it myself.</p>
</li>
</ul>
</li>
<li><p>✅ A dedicated VPS with SSH access. (SSH access should be standard for any VPS.)</p>
<ul>
<li><p>You can technically host this on your own PC if you know it will be running 24/7. But using a VPS will give you peace of mind and avoid complicating your PC.</p>
</li>
<li><p>Big-name solutions like AWS and Google Cloud can wind up going off the rails and costing you big bucks if you don't know exactly what you're doing. Better to stick with less enterprise-oriented offerings. I've used the following:</p>
<ul>
<li><p><a href="https://www.hetzner.com/">Hetzner</a> – My current personal favorite. Germany based. High quality and affordable pricing with a few American servers. Even more affordable with European servers.</p>
</li>
<li><p><a href="https://www.digitalocean.com/">DigitalOcean</a> – US based. Can't go wrong. Decent prices. Many offerings. Almost exclusively American servers.</p>
</li>
<li><p><a href="https://webdock.io/en">Webdock</a> – Denmark based. The most affordable of the bunch.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>✅ An OpenRouter account. OR provides a universal interface for hundreds of AI models. There's no freemium upsell, like with Hugging Face, but there is a percentage add-on when you buy credits/tokens. I feel it's worth the extra fee to be able to easily swap from Claude to Kimi to GPT to DeepSeek as I please without more keys, more accounts, and more wiring. But this is optional. You can plug Decapod right into Kimi or Gemini and just leave it there if you like.</p>
</li>
</ul>
<p><strong>Tools:</strong></p>
<ul>
<li><p>✅ Cursor, or similar. I love Cursor. It matches my hands-on style. If you're freestyling and dreaming something into creation as you build it, AI will <strong>always</strong> take the wrong path if you take your hands off the wheel. Cursor lets me be in charge and play director while the AI does the heavy lifting and saves me from hours of Googling and digging through 10-year-old questions on Stack Overflow. Especially with the command line stuff. I could not have knocked out Decapod in two weeks without it. But it couldn't have built Decapod at all without me.</p>
</li>
<li><p>✅ Another AI bestie to help you dream, plot, and plan. Cursor is great, but very utilitarian. I always have a session open with a running commentary about my work. I'm constantly feeding it context and leaning on it to get a fresh perspective and solve more esoteric issues, like debugging n8n flow problems, for example. I use Claude for absolutely everything. It has the most natural conversational flow, it's good at taking meta instructions regarding its behavior, and it always has an eye on accuracy – very reliable.</p>
</li>
</ul>
<h2 id="heading-assembling-the-dream-team-ikea-style">Assembling the Dream Team – Ikea Style</h2>
<p>Here are the pieces and parts you'll find in your Dekkaplonkën Ikea flat pack (the GitHub repo).</p>
<ol>
<li><p>Four Docker containers containing five services with docker-compose files. Just heat and serve.</p>
<ul>
<li><p>Infrastructure: Caddy for routing and SSL certificates for https security.</p>
</li>
<li><p>Infrastructure: Postgres for all your data needs.</p>
</li>
<li><p>MinIO: An S3 compatible file storage system.</p>
</li>
<li><p>n8n: The ultimate automation tool.</p>
</li>
<li><p>Open WebUI: The ultimate AI chat interface.</p>
</li>
</ul>
</li>
<li><p>SQL tables</p>
<ul>
<li><p>A table for the decapod state.</p>
</li>
<li><p>A table for jobs, tasks, and tool chat history.</p>
</li>
</ul>
</li>
<li><p>S3 Files and Folders – Agent Templates</p>
<ul>
<li><p>Four starter skills (two actually implemented in n8n).</p>
</li>
<li><p>Two instructional files, including the persona and skill definitions.</p>
</li>
</ul>
</li>
<li><p>n8n Workflows (6,889 lines of pure JSON)</p>
<ul>
<li><p>API Middleware: The entry and exit point that manages the session and loops.</p>
</li>
<li><p>AI Tool Router: Executes your agent's tool requests.</p>
</li>
<li><p>Construct Message History: Injects instructions into your agent's chat history.</p>
</li>
<li><p>Get Job Queue: A one-off database call that gets active jobs ordered by priority and creation date (First In, First Out).</p>
</li>
<li><p>Utility Workbench: A place for testing and managing your flows. Currently contains a skill assembly jig.</p>
</li>
<li><p>Worker: Loops over job queues, talking to the agent and calling the tool router with its responses.</p>
</li>
<li><p>A write-file skill and a research-recipes skill.</p>
</li>
<li><p>A couple more placeholders. (Decapod is an MVP.)</p>
</li>
</ul>
</li>
<li><p>Also:</p>
<ul>
<li><p>A Docker cheatsheet.</p>
</li>
<li><p>A script to generate agents from the template.</p>
</li>
<li><p>A destructive script that uploads local agent files to your S3 account by overwriting existing files. Good for dev. Bad if you let your agent start modding its own instructions.</p>
</li>
<li><p>Scripts to start and stop all Docker containers at once.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-accessing-your-vps-with-cursor-and-ssh">Accessing Your VPS With Cursor and SSH</h3>
<p>SSH is the standard way to access any server and has been forever. But working through a terminal can be slow and plodding. Fortunately, there's a better way.</p>
<p>Connect to the server with Cursor, VS Code, Antigravity, or whatever you use. This gives you:</p>
<ul>
<li><p>Multiple terminals to access the remote server.</p>
</li>
<li><p>The ability to view localhost servers as if they were on your own machine via port forwarding.</p>
</li>
<li><p>Drag and drop folder and file management.</p>
</li>
<li><p>No more Nano, Vim, or Emacs (unless you want to).</p>
</li>
<li><p>And the best part! Cursor can do all the remote file system work for you, including troubleshooting servers and containers, writing scripts for automating common tasks, and helping you hash out actionable plans.</p>
</li>
<li><p>(Cursor can also connect to your Decapod!)</p>
</li>
</ul>
<p>Every VPS provider has its own way of managing SSH access. They usually make adding a key part of the sign-up process.</p>
<p>Generating and managing keys is a pretty well-paved path and I won't go over it. It's a good job for Cursor, if you need help.</p>
<p>However! I use Bitwarden for SSH key generation and management. They still need to be stored locally for tools on your computer to access. But it's nice to have them in a single secure location.</p>
<p>VS Code requires an extra plugin to access a remote server. Cursor comes with it preinstalled. Just click <code>Connect via SSH</code>, set up your connection, and you're good to go.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/36c686e7-2d7b-43a9-9078-98a5dd2af5be.png" alt="The cursor launch screen with a button to &quot;Connect via SSH.&quot;" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">
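<p>For reference, the connection Cursor sets up is just an entry in your local <code>~/.ssh/config</code>. A minimal sketch, with a placeholder host name, IP, and key path:</p>
<pre><code class="language-shell"># ~/.ssh/config
Host decapod-vps
    HostName your.server.ip.address
    User root
    IdentityFile ~/.ssh/id_ed25519
</code></pre>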

<p>📝 Side note: I was on the paid plan when I started, I swear. I tend to switch services a lot as new models are released and I discover different tools and options. But I only ever pay for 2 or 3 at a time.</p>
<p>I got about halfway through this article when my Cursor plan expired. But I'm trying the new Gemini 3 models, so I switched to Antigravity mid-flight rather than re-up Cursor.</p>
<h3 id="heading-installing-and-configuring-the-docker-containers">Installing and Configuring the Docker Containers</h3>
<p>Finally! After a novella's worth of lead-up, we, at long last, get to the actual installation. That will be shared in the next article – have a good night! Just kidding, please put down the brick.</p>
<p>Once you've SSHed into a VPS, a Raspberry Pi with Ubuntu, or a virtual machine, you're ready to get started. I'm going to assume you know how to install tools like Docker and Node on your system and won't go into a lot of detail. Ask your friendly neighborhood AI for help if you get stuck.</p>
<p>💡 Important! If you haven't already, get your domain name and open up the DNS page. You'll want to point an "A" record at your server's IP for each relevant service.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/c381b917-a731-41bc-a62c-923646c87ae3.png" alt="DNS records for four subdomains." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">
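<p>If your DNS provider's UI looks different from mine, the records boil down to something like this. The subdomain names and IP are just examples – use whatever you like, as long as each one points at your VPS:</p>
<pre><code># Example "A" records – all pointing at the same VPS IP
n8n      A    203.0.113.10
owui     A    203.0.113.10
s3       A    203.0.113.10
minio    A    203.0.113.10
</code></pre>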

<p>Start by cloning the Decapod repo.</p>
<pre><code class="language-shell">git clone https://github.com/leetheguy/decapod.git
</code></pre>
<p><code>cd decapod</code> and create your Docker network.</p>
<pre><code class="language-shell">docker network create web
</code></pre>
<p>Now we're going to go into each of the four Docker folders, configure them, and fire them up, starting with infrastructure.</p>
<pre><code class="language-shell">cd infrastructure
cp .env.example .env
</code></pre>
<p>Alternatively, you can move the files to rename them or just click on the file in the UI and <code>F2</code> to rename it. Whatever floats your goat 🐐.</p>
<p>Now edit the new <code>.env</code> file. You can get the data folder path by clicking on the infrastructure folder and <code>Ctrl/Cmd+Alt+C</code>. The rest is up to you. I used Bitwarden to generate a password here.</p>
<p>Next, copy the Caddyfile template into its own file.</p>
<p><code>cp caddy_config/Caddyfile.template caddy_config/Caddyfile</code></p>
<p>And start the Docker container with <code>docker compose up -d</code>.</p>
<p>Back out of infrastructure and into <code>minio</code>. Same again with the <code>.env</code> – copy and configure. Make sure the URLs match your domain.</p>
<p>Once more for <code>n8n</code> and then again for <code>openwebui</code>.</p>
<p>OWUI config comes from the <code>infrastructure</code> and <code>minio</code> <code>.env</code> files:</p>
<pre><code class="language-shell">S3_ACCESS_KEY_ID=minio_admin
S3_SECRET_ACCESS_KEY=minio_password
S3_BUCKET_NAME=decapod
MINIO_ROOT_USER=minio_admin
MINIO_ROOT_PASSWORD=minio_password
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres_password
</code></pre>
<p>📝 Note! OWUI may take a moment or two to start. Go grab some water and it should be up by the time you get back.</p>
<h2 id="heading-configuration-and-wiring">Configuration and Wiring</h2>
<p>Roll up your sleeves! This is where we get up to our elbows in pieces and parts.</p>
<p>If everything went to plan, you should now have all five services up and running. You can confirm the containers are live with <code>docker ps</code>. You can check that they're actually properly connected by visiting the s3, OWUI, and n8n subdomains of your domain.</p>
<p>Create accounts for all three and sign in to each.</p>
<p>⚡️ Important! Get your n8n license key! It's free and gives you access to all community features. You'll be severely limited without it. Activate it under Usage and plan in the settings.</p>
<h3 id="heading-initiate-the-database">Initiate the Database</h3>
<p>Decapod only needs two data tables. You can add them from the command line. But I like pgAdmin.</p>
<p>Connect to your Postgres database in the usual way. But you'll need your server's IP for the host name instead of postgres (which you use to connect services inside of the Docker network) since pgAdmin isn't in your Docker network.</p>
<p>You'll find your SQL files in <code>components/pgsql_tables</code>. Create a decapod database and add both of the SQL files to it. A default <code>decapod_state</code> table record will be automatically generated when running the SQL.</p>
<p>In pgAdmin:</p>
<ul>
<li><p>Open the decapod server.</p>
</li>
<li><p>Create a decapod database by right-clicking on databases.</p>
</li>
<li><p>Select the new database.</p>
</li>
<li><p>Click the query tool button at the top of the explorer.</p>
</li>
<li><p>Copy and paste the decapod_state table into the query and run it with F5.</p>
</li>
<li><p>Clear the query, paste in job_queue, run it.</p>
</li>
</ul>
<p>Or ask Cursor or an AI bestie for help if you want to go pure command line.</p>
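<p>If you do want the pure command-line route, it looks roughly like this – assuming your Postgres container is named <code>postgres</code>, and note that the SQL file names here are guesses based on the table names (check <code>components/pgsql_tables</code> for the real ones):</p>
<pre><code class="language-shell"># Create the decapod database inside the Postgres container
docker exec -it postgres createdb -U postgres decapod

# Run each SQL file against the new database
docker exec -i postgres psql -U postgres -d decapod &lt; components/pgsql_tables/decapod_state.sql
docker exec -i postgres psql -U postgres -d decapod &lt; components/pgsql_tables/job_queue.sql
</code></pre>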
<h3 id="heading-a-little-minio">A Little MinIO</h3>
<p>Next up, you'll be adding your agent's instructions and persona files to your private S3 service. Start by visiting your MinIO server and adding a decapod bucket.</p>
<p>In <code>components/S3_structure/agents/</code>, you'll find a template for your agents. (I intend to make Decapod a multi-agent tool in a future release.) The template is meant to be copied to a new agent of your choice. But if you choose a name other than Decapod, you'll need to update the state table.</p>
<p>You can do it manually if you wish. Copy the folder to match the new agent's name and update the <code>definitions/skills.yaml</code> file to include all the skills you want your agent to have. The name and description should exactly match what's found at the top of each skill file.</p>
<p>Alternatively, I vibe coded a script to make it a little easier. It's in the scripts folder, and you'll need to install the <code>inquirer</code> Node module to use it. Run <code>cd scripts</code> and <code>node create-agent.mjs</code> to use it.</p>
<p>You also need to make sure that the files and folder structure in your MinIO match those in <code>S3_structure</code>. Upload the files from <code>S3_structure</code> into the decapod bucket you created earlier.</p>
<p>But that's easier said than done because they're on a remote server. And if you used the visual interface, you'd have to download them to your local machine first. So I made another script – <code>upload_S3_structure.sh</code>.</p>
<p>That script is strictly meant for dev purposes. It's absolute and destructive. Just a heavy mallet. So if you want to surgically alter your MinIO, do not use it! Remember kids: mallets and brain surgery don't mix.</p>
<p>Once your agent files are in place, you can let your agents edit them, Open Claw style, or you can edit them yourself. But MinIO doesn't give you much of anything in the way of editing features in its UI.</p>
<p>For a better experience, I'd recommend <a href="https://web.s3drive.app/">S3Drive</a>. When you go to sign up, look for the connect button towards the bottom to connect to your own MinIO endpoint.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/d3b5faa7-8e5d-4a35-84c9-0d97ea73d96c.png" alt="The S3Drive setup interface." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>S3Drive will let you edit your files in place after you've uploaded them. This is good for quick fixes or copying and pasting sections without a complete wipe.</p>
<h3 id="heading-adding-the-workflows">Adding the Workflows</h3>
<p>You'll find most of what makes Decapod Decapod in the components folder. And the heart of that is in n8n_workflows.</p>
<p>You can manually import those workflows one at a time and go over each one to make sure they're safe and sound. Or you can use the n8n CLI inside of the Docker container and save yourself some tedium.</p>
<p>These commands move the workflows to the Docker container, import them with the n8n CLI, and then remove them from the tmp directory.</p>
<pre><code class="language-shell">docker cp ./components/n8n_workflows n8n:/tmp/workflows

docker exec -u node n8n n8n import:workflow --input=/tmp/workflows --separate
docker exec -u node n8n n8n import:workflow --input=/tmp/workflows/skills --separate

docker exec -u root n8n rm -rf /tmp/workflows
</code></pre>
<p>Now, you should see the 10 workflows in n8n. I'd recommend drag-and-dropping the main workflows to a dedicated decapod folder and the two skills to decapod/skills, just to keep things tidy. But they reference each other by ID, so do what you want.</p>
<h3 id="heading-getting-started-with-n8n">Getting Started With n8n</h3>
<p>Now would be a good time to start exploring the workflows in your n8n UI Personal tab. If you sort them by name, the main file will be on top. Crack it open and you'll see it's not too intense, and it's self-documented: blue for notes, green for sub-workflows, and red for nodes that require your credentials.</p>
<p>I'd recommend reading the notes and thoroughly exploring the sub-workflows to help you understand Decapod. It's your tool now! Create credentials as you go.</p>
<p>Because we're using a Docker network, creating credentials and connecting your services to each other couldn't be easier.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/de946994-8b01-436e-9a3b-aa79a46a0073.png" alt="The credentials page for an n8n Postgres connection." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>The standard way to connect all of your services is to reference them by <code>name:port</code>. Because the Postgres credential has its own port field, you can just set the host to <code>postgres</code> and the port to <code>5432</code>.</p>
<p>📝 Note! All credential details, like your container names, ports, and passwords, can be found in your docker-compose and .env files.</p>
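<p>As an example, my Postgres credential ended up looking like this. The password comes from the infrastructure <code>.env</code>, and the database name assumes you created <code>decapod</code> earlier:</p>
<pre><code>Host:     postgres
Port:     5432
Database: decapod
User:     postgres
Password: postgres_password
</code></pre>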
<p>For MinIO:</p>
<ul>
<li><p>Endpoint: <code>http://minio:9000</code></p>
</li>
<li><p>Force Path Style: Enabled! Important for MinIO.</p>
</li>
</ul>
<p>API Connections to OpenRouter:</p>
<ul>
<li><p>Choose Authentication -&gt; Predefined Credential Type.</p>
</li>
<li><p>Then Credential Type -&gt; OpenRouter.</p>
</li>
<li><p>Now just paste your API key from <a href="https://openrouter.ai/settings/keys">OpenRouter</a>.</p>
</li>
</ul>
<p>n8n – (meta access to your workflow):</p>
<ul>
<li><p>In a new tab, go to n8n Settings -&gt; n8n API.</p>
</li>
<li><p>Turn off expiration if you like.</p>
</li>
<li><p>Copy your key.</p>
</li>
<li><p>Paste it in the field.</p>
</li>
<li><p>Base URL: <code>http://n8n:5678/api/v1</code></p>
</li>
</ul>
<p>Once you've created credentials, you can reuse them for every relevant node that uses the same credential. Just select it from the dropdown.</p>
<p>💡 Tip! It may help to remove the red sticky notes as you add credentials. And don't forget the skills! I didn't sticky note them at all.</p>
<p>As a final step, make sure your n8n workflows are published in the following order:</p>
<ul>
<li><p>construct message history</p>
</li>
<li><p>get job queue</p>
</li>
<li><p>hitl yes/no</p>
</li>
<li><p>tool router</p>
</li>
<li><p>worker</p>
</li>
<li><p>middleware</p>
</li>
<li><p>and the two skills</p>
</li>
</ul>
<p>💡 Tip! Always make sure your n8n workflows are in a published state with a green dot before calling them. Otherwise, you'll be calling an outdated version.</p>
<h3 id="heading-now-get-owui-to-talk-to-decapod">Now, Get OWUI to Talk to Decapod</h3>
<p>OWUI is built for teams, so you have admin settings and personal settings. You'll want to edit the admin settings by clicking on the profile circle in the lower-left-hand corner, then Admin Panel -&gt; Settings -&gt; Connections.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/10840f19-1cbb-41e9-b066-e4c0033c0244.png" alt="Open WebUI's connections config page." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>From there:</p>
<ul>
<li><p>Ollama API: Disabled. Just keeping things tidy.</p>
</li>
<li><p>Open the OpenAI connection's settings by clicking on the gear and delete that connection too.</p>
</li>
<li><p>Direct Connections: Enabled</p>
</li>
<li><p>Cache Base Model List: Enabled.</p>
</li>
<li><p>Now add your Decapod connector with the plus button.</p>
</li>
<li><p>URL: <a href="http://n8n:5678/webhook/v1/decapod">http://n8n:5678/webhook/v1/decapod</a> (Click the cycle icon to confirm your connection.)</p>
</li>
<li><p>Auth: none (it's all in the same Docker network, so it's fine for now. You can add a password for production.)</p>
</li>
<li><p>Prefix ID: decapod (If you do decide to use OpenAI, Hugging Face, or whatever else, this will help distinguish the model hosts.)</p>
</li>
</ul>
<p>That's it. Save and go to the Models tab. Decapod passes OpenRouter models straight through. So if you see hundreds of models, take a victory lap! That means that Decapod is working, live, accepting requests, and you've even properly configured your credentials (at least for OpenRouter).</p>
<p>Now create a new chat session and pick a model. I like Claude Haiku 4.5. Fast, cheap, and good. Pick three. I did all of my Decapod dev with it in the saddle, so I know it works. And 3.5 million tokens towards testing iterations cost me $4, so I know it's reasonable. Alternatively, Kimi K2.5 will likely work and would be even a little bit cheaper. I burned through 4.7 million tokens installing a Docker container in Open Claw with Kimi for about $3.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/8773faab-b7bd-47fe-90c7-0ed4aa0cbbed.png" alt="A successful communication between Open WebUI and Decapod." style="display:block;margin-left:auto" width="600" height="400" loading="lazy">

<p>Time to say hello to your little friend! Haiku is fast. So if it takes more than a few seconds to respond, something could be borked in your n8n flow. It happened to me as I was writing this article. I had some issues with both Postgres and MinIO.</p>
<p>💡 Tip: If the agent does get hung, it's easier to resend the message than stop and try again.</p>
<h3 id="heading-there-was-supposed-to-be-an-earth-shattering-kaboom">There Was Supposed to Be an Earth Shattering Kaboom</h3>
<p>So, your agent really wants to talk to you, but all you have is a pulsating dot. It's likely that something got misconfigured in n8n.</p>
<p>You can debug n8n by going to the middleware workflow and selecting <code>executions</code> from the top tab bar. If there's an error on the left list, look for a message in the lower right.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/af5f6ea5-ad99-45e0-88c6-49ccc479fac1.png" alt="An example n8n error message." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>This was when I had some database config issues and it couldn't find the state table.</p>
<p>Some sub-workflows may fail quietly. You can trace the flow from the webhook entry point to the error. All successful nodes will light up green. The bad node will be red. Drill down, check executions, and repeat for each sub-workflow.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/935b1bcc-f03d-452e-a67d-da00e2265d39.png" alt="A portion of an n8n workflow showing a node that threw an error." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>When you find the culprit – the actual bad node in the bad execution – select "copy to editor" in the upper-right-hand corner. That will freeze the workflow to that state. Open the node, fix the credential or whatever, and click <code>Execute Step</code> to see if it's fixed.</p>
<p>Remember: after every change, always always always publish your update. Otherwise, n8n won't actually use the latest fixed version of your workflow.</p>
<p>Once you've successfully debugged your Decapod, make sure you clean out any loose unfinished jobs in the <code>job_queue</code> table with pgAdmin or whatever. Otherwise, your agent will try to complete each of them before starting your next request.</p>
<h2 id="heading-the-ever-present-hello-world">The Ever-Present "Hello World"</h2>
<p>OK! Now for the moment of truth. You got your agent to say hello back. That was the easy part because it didn't need to do any work or use any tools.</p>
<p>I set you up with two skills to put it to the test: write-file and research-recipes. The recipes skill connects your bot to a free recipe API (no key needed). It's not just pulling recipes out of training data.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/84eefbe8-ad4d-44fb-ae19-3291d85fe0e9.png" alt="A successful request to Decapod requiring tool use." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>Try this prompt: “Would you please look up pizza recipes and save them to a file?”</p>
<p>If all of your credentials are properly configured, you should get what you asked for. Open up MinIO or S3Drive and look in <code>/agents/decapod/documents</code> for the file.</p>
<h2 id="heading-into-the-future">Into the Future!</h2>
<p>I know that was a lot! (At least it felt like a lot from my end.) I hope it wasn't too painful. And look at the bright side: you just got a crash course on some really powerful technology. And if you made it through, that's a major accomplishment! The hard part is behind you. Now comes the fun.</p>
<h3 id="heading-a-work-in-progress">A Work in Progress</h3>
<p>I'll be honest. I just wanted to get Decapod out fast to prove how doable a personal agent is while Open Claw is still hot. Anyone can build their own Agentic AI with little or no code. And you don't have to settle for painful UI and poor security. You can have it all.</p>
<p>But, as I've said, Decapod is still an MVP. Complete and functional, but feature-light. And I was stressing about that a little bit. I wanted multiple agents and more skills for the early adopters.</p>
<p>Then it hit me. Duh! You already have everything you need with n8n.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/466cc6ab-f038-4728-9fcf-06d9f631f75c.png" alt="An example of chatting with an n8n agent that has internet access." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>You can add an n8n agent node, connect it to a model and an MCP server, and have a sub-agent ready to go in minutes. Then have your agent produce a skill sheet to contact the sub-agent.</p>
<h3 id="heading-adding-your-own-skills-limitless-potential">Adding Your Own Skills – Limitless Potential</h3>
<p>Let's create a dead simple n8n agent to search the web. Then we'll add that to Decapod as a new skill.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/95ffe965-3180-44fb-8ac2-7835b3931224.png" alt="A request for Decapod to create a new skill sheet." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In this image I used the prompt:</p>
<blockquote>
<p>Thank you so much! Next up, I want to give you web search access via a sub-agent. So your web search skill wouldn't directly search the web, but would instead call a simple agent to do the search for you.</p>
<p>Would you please create a web-search.md skill for your future self to use? The only required field should be prompt.</p>
</blockquote>
<p>The agent's file access is sandboxed by default, so its <code>skills/web-search.md</code> actually lands in its private <code>documents</code> storage. I moved it to the actual skills folder and updated my agent's skills.yaml file with the new skill.</p>
<p>Now I'll create a new n8n skill workflow in <code>decapod/skills/</code>.</p>
<p>⚡️ Important! Your n8n skill workflow name must match the skill name exactly. So, <code>web-search.md</code> would be a workflow called <code>web-search</code>. Decapod uses the name to look up the skill so it can be hot-loaded without a secondary router.</p>
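<p>As a sketch (a hypothetical helper, not part of Decapod's actual code), the naming rule is just "strip the <code>.md</code> extension":</p>
<pre><code class="lang-javascript">// Hypothetical illustration of the skill-to-workflow naming rule described
// above: the workflow name is the skill file name minus its ".md" extension.
function workflowNameForSkill(skillFile) {
  return skillFile.replace(/\.md$/, "");
}

console.log(workflowNameForSkill("web-search.md")); // "web-search"
</code></pre>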
<p>The n8n screenshot above is pretty much the whole thing. Try rebuilding it yourself. I used chat input to make sure it was working with n8n's chat interface, and I used the <a href="https://www.pulsemcp.com/servers/exa">Exa Web Search MCP</a> as the search tool. I used Haiku as the model, but an even simpler model would likely have been just fine. OpenRouter has a number of free models with tool abilities that would probably do the trick.</p>
<p>Once you have the workflow operating properly, replace the chat node with a "When Executed by Another Workflow" node with a <code>parameters</code> object as input.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/ddb03570-36d2-4a50-9acb-1a80cf02c11d.png" alt="The configuration of an n8n &quot;When Executed by Another Workflow&quot; node." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>Next, open up the utility/workbench workflow. This tool will help you turn your web-search workflow into a skill. Work through each node in order, testing it with the "Execute step" button as you go. Doing so creates output data that the next node can use as input.</p>
<ol>
<li><p>get workflow id from name: Set name to "web-search".</p>
</li>
<li><p>deliver JSON arguments to skill: Set the parameters object to <code>{ "prompt": "Can I please get a list of a variety of pizza recipes complete with links to their sources?" }</code> (or whatever matches your skill sheet).</p>
</li>
<li><p>call skill based on workflow id: Should be ready to execute.</p>
</li>
</ol>
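<p>For step 2, the parameters object is plain JSON. Assuming the skill sheet from this article (where <code>prompt</code> is the only required field), the payload might look like:</p>
<pre><code class="lang-javascript">// The JSON arguments handed to the skill sub-workflow. Field names come
// from your own skill sheet; here "prompt" is the only required field.
const parameters = {
  prompt: "Can I please get a list of a variety of pizza recipes " +
    "complete with links to their sources?"
};
</code></pre>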
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/0752c8a8-ab15-4c49-90ee-29e822b90f57.png" alt="an example of a successful n8n call to a sub-workflow." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>If your output looks like that, your skill should be ready to go.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/3b13ac1e-1c61-4fce-9896-14d569593ca3.png" alt="Decapod returning search results for dessert pizza recipes." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In this image I used the prompt: Alright! I think you're all set. Try doing a search for dessert pizza recipes.</p>
<p>If your agent gives you the following error, make sure that it knows it MUST create a job before it can call the <code>use_skill</code> tool. It should know that from the instructions, but pobody's nerfect. (I'll need to tighten that up.)</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/675ce1d600925897ba44d754/6e63dc70-3bae-4389-aa68-80a22f6553b6.png" alt="An example response from a Decapod error." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>Hopefully that was also pretty painless and now your mind is exploding with possibilities like mine is. If you're unconcerned with safety or actively want to invoke Skynet, you can even give your agent a skill to create its own n8n skills with the <code>Create a workflow</code> node. But don't do that.</p>
<h2 id="heading-future-plans">Future Plans</h2>
<p>Here are a few more features I'd like to add:</p>
<ul>
<li><p>/slash commands – You shouldn't have to go into n8n or pgAdmin to see what your agent is doing and manage its job queue.</p>
</li>
<li><p>Streaming responses – I'd like to see what my agent is doing as it's doing it, but streaming is a bit tricky and was beyond the MVP.</p>
</li>
<li><p>Multiple states – With multiple states, you can run multiple agents simultaneously. Or you can have different agents/models for different sessions. For example, you can have a health and fitness session with one agent with its own context window, job queue, and skill set. And you can have another one to help you keep track of your coding education.</p>
</li>
<li><p>It's a bug, not a feature – The state and model are hard-coded in many places throughout the app. I also started working on features that didn't pan out and left some dangling nodes. I'd like to clean up the app and actually implement those features.</p>
</li>
</ul>
<p>If you've read this far and are totally all in, I'd love to hear feedback and suggestions for more features. I'd be fascinated to hear about how Decapod is being used. And I'm also happy to answer any questions.</p>
<h2 id="heading-got-questions-meet-captain-finn">Got Questions? Meet Captain Finn!</h2>
<p>Decapod is the culmination of a year spent studying and learning all things AI and automation. It's also the result of 20 years in the world of coding and app development.</p>
<p>I'm currently starting a community for AI Enthusiasts, Automation Inventors, and Systems Thinkers. It will be led by Captain Finn, a retro-futuristic space captain who got stranded without his crew in our time and space. He used AI, automation, and systems thinking to keep the ship working, give himself someone to talk to, and to wake up to the smell of fresh coffee every morning.</p>
<p>And yes, Finn himself is an AI persona, operating from AI-automated systems, like Decapod, that he will be teaching people about.</p>
<p>My goal is to create a welcoming environment for my fellow mad scientists, dreamers, and citizen developers to learn and grow with help from the community and Captain Finn Feldspar himself. I plan to release weekly articles, more tutorials like this, and other tips and tricks.</p>
<p>Whether you want help with Decapod, learning automation, or just want to geek out about the power and future of AI — Captain Finn's Fleet has a place for you.&nbsp;<a href="https://discord.gg/HJtTpBAjQ5">Join here for free.</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Learn n8n to Design, Develop, and Deploy Production-Grade AI Agents ]]>
                </title>
                <description>
                    <![CDATA[ n8n is an open-source, visual workflow automation tool that lets you connect applications, APIs, and AI models to build complex, intelligent automations. We just posted a course on the freeCodeCamp.org YouTube channel that will be your guide to maste... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/learn-n8n-to-design-develop-and-deploy-production-grade-ai-agents/</link>
                <guid isPermaLink="false">693b0c06b189322615083d9f</guid>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                    <category>
                        <![CDATA[ youtube ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Beau Carnes ]]>
                </dc:creator>
                <pubDate>Thu, 11 Dec 2025 18:23:02 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765477338398/5c72b733-c04f-4722-8c30-67e3bb9043e5.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>n8n is an open-source, visual workflow automation tool that lets you connect applications, APIs, and AI models to build complex, intelligent automations.</p>
<p>We just posted a course on the freeCodeCamp.org YouTube channel that will be your guide to mastering n8n. This beginner-level training covers integrating APIs, automating processes, and orchestrating intelligent agents. The goal is to provide a future-proof skillset, essential for roles in DevOps, AI, and data engineering. Marconi Darmawan from KodeKloud developed this course.</p>
<p>You will build practical AI agents, including an Email AI Agent and Research Workflows using services like OpenAI. A key focus is on intelligent systems, where you will set up a vector database (Pinecone) to build a Customer Support RAG Agent using Retrieval-Augmented Generation (RAG).</p>
<p>For advanced skill development, the course covers scalable design and deployment. You will learn the Model Context Protocol (MCP) and practice Multi-Workflow Advanced Builds to coordinate complex tasks with a "team of agents." And you'll explore flexible hosting options, including cloud, Docker, and self-hosting with local LLMs like Ollama.</p>
<p>Here are the projects you will build:</p>
<ul>
<li><p>Build an Email AI Agent to automate communications</p>
</li>
<li><p>Design a Research Workflow with Perplexity &amp; OpenAI</p>
</li>
<li><p>Create a Customer Support RAG Agent using vector databases</p>
</li>
<li><p>Develop Slack and multimedia AI agents (text-to-image, text-to-video, image-to-video)</p>
</li>
<li><p>Replace manual processes with a team of agents through multi-workflow builds</p>
</li>
</ul>
<p>Watch the full course on <a target="_blank" href="https://youtu.be/UIf-SlmMays">the freeCodeCamp.org YouTube channel</a> (4-hour watch).</p>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/UIf-SlmMays" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ Build Complex Workflows with n8n & Master AI Integration ]]>
                </title>
                <description>
                    <![CDATA[ n8n is an open-source workflow automation platform that lets you connect different apps, APIs, and services to easily automate tasks without needing to implement extensive code. We just posted a course on the freeCodeCamp.org YouTube channel that is ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-complex-workflows-with-n8n-and-master-ai-integration/</link>
                <guid isPermaLink="false">6915ea7e997289c5d5272773</guid>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                    <category>
                        <![CDATA[ youtube ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Beau Carnes ]]>
                </dc:creator>
                <pubDate>Thu, 13 Nov 2025 14:26:06 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762950806004/02d41988-6e9f-4962-8872-c41110175882.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>n8n is an open-source workflow automation platform that lets you connect different apps, APIs, and services to easily automate tasks without needing to implement extensive code.</p>
<p>We just posted a course on the freeCodeCamp.org YouTube channel that is designed to take you from a beginner to a capable developer, harnessing n8n's power to build sophisticated real-world solutions. Gavin Lon developed this course.</p>
<p>Gavin starts by discussing essential internet standards like REST and OAuth2 that make secure integrations possible, setting the foundation for your automation journey. You'll learn the practical steps of setting up n8n and then dive into four comprehensive workflow examples, including an AI-powered chatbot and an emergency WhatsApp notification system. Gavin will guide you step-by-step through creating these solutions, demonstrating how to seamlessly integrate diverse services like WhatsApp, Google Sheets, and AI agents.</p>
<p>Here are all the sections in the course:</p>
<ul>
<li><p>Introduction</p>
</li>
<li><p>Discussing Internet Standards, REST and OAuth2</p>
</li>
<li><p>Getting Started - Setup n8n on Hostinger (self-hosted)</p>
</li>
<li><p>Create First Workflow Example (AI - Chatbot)</p>
</li>
<li><p>Add an AIAgent Node to Workflow</p>
</li>
<li><p>Add OpenAI Chat Model sub node to AiAgent Node</p>
</li>
<li><p>Add Credentials to OpenAI Chat Model sub node</p>
</li>
<li><p>Add a System Prompt to AIAgent Node</p>
</li>
<li><p>Add Memory Sub Node to AiAgent (to Add Context)</p>
</li>
<li><p>Test ChatBot WorkFlow</p>
</li>
<li><p>Logging on to Hostinger and n8n</p>
</li>
<li><p>Create Workflow Example 2 (WhatsApp Panic App Registration)</p>
</li>
<li><p>Create Google Spreadsheet</p>
</li>
<li><p>Add and Setup OnSubmission Form Trigger Node</p>
</li>
<li><p>Add and Setup Google Sheets Node</p>
</li>
<li><p>Setup Credentials for Google Sheets Node (using OAuth2)</p>
</li>
<li><p>Create Workflow Example 3 (WhatsApp Panic Notifications)</p>
</li>
<li><p>Add WhatsApp Trigger Node and Setup Credentials</p>
</li>
<li><p>Setup other Workflow Nodes (WhatsApp Panic Notifications)</p>
</li>
<li><p>Test WhatsApp Panic App Notifications Workflow</p>
</li>
<li><p>Create Workflow Example 4 (CV Approval / Interview Scheduler)</p>
</li>
<li><p>Setup Google Drive Node and Configure Credentials</p>
</li>
<li><p>Setup Loop Node</p>
</li>
<li><p>Setup Human in the Loop Node</p>
</li>
<li><p>Setup AIAgent Node for Scheduling Interview on Google Calendar</p>
</li>
<li><p>Test CV Approval / Interview Scheduler Workflow - Human in the Loop</p>
</li>
<li><p>Abstract Sub-WorkFlow to Create Separation of Concerns</p>
</li>
<li><p>Extra Information</p>
</li>
<li><p>Use WhatsApp to Converse with ChatBot in Example 1</p>
</li>
<li><p>Use A WebHook Trigger Node to Trigger your Workflow</p>
</li>
<li><p>Use the HttpRequest node to make Requests to External Web Apis From Workflow</p>
</li>
<li><p>Integrate JavaScript Code into Workflow using Code Node</p>
</li>
<li><p>Outro</p>
</li>
</ul>
<p>Watch the full course <a target="_blank" href="https://www.youtube.com/watch?v=GIZzRGYpCbM">on the freeCodeCamp.org YouTube channel</a>.</p>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/GIZzRGYpCbM" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build AI Workflows with n8n ]]>
                </title>
                <description>
                    <![CDATA[ n8n is a visual, node-based automation platform that lets you automate tasks with drag-and-drop nodes. It’s popular for multi-step automations and AI chains thanks to built-in nodes for agents and app integrations. In this tutorial, you’ll build a sm... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-build-ai-workflows-with-n8n/</link>
                <guid isPermaLink="false">68ed22c02001907a6cb6d50a</guid>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Workflow Automation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                    <category>
                        <![CDATA[ n8n workflows ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Soham Mehta ]]>
                </dc:creator>
                <pubDate>Mon, 13 Oct 2025 16:03:12 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760371342462/f8874220-238b-4819-a6f1-c35756b355bc.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>n8n is a visual, node-based automation platform that lets you automate tasks with drag-and-drop nodes. It’s popular for multi-step automations and AI chains thanks to built-in nodes for agents and app integrations.</p>
<p>In this tutorial, you’ll build a small personal calendar agent that listens to a chat message, extracts event details, and creates a Google Calendar entry. Along the way, you’ll learn how to set up n8n, add an AI Agent node, and pass structured data between nodes.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
<ul>
<li><a class="post-section-overview" href="#heading-how-to-set-up-your-n8n-account">How to Set Up Your n8n Account</a></li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-build-a-personal-calendar-agent">How to Build a Personal Calendar Agent</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-step-1-set-up-the-chat-trigger">Step 1: Set Up the Chat Trigger</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-configure-the-ai-agent">Step 2: Configure the AI Agent</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-3-add-google-calendar-node">Step 3: Add Google Calendar Node</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-4-time-to-test">Step 4: Time to Test!</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>n8n account – setup steps below.</p>
</li>
<li><p><a target="_blank" href="https://support.google.com/accounts/answer/27441">Google account</a> – you’ll create events in Google Calendar.</p>
</li>
</ul>
<h3 id="heading-how-to-set-up-your-n8n-account">How to Set Up Your n8n Account</h3>
<p>You can set up n8n either in the cloud or locally.</p>
<p>To set it up on the cloud (the easiest option), you can create a free trial account on the <a target="_blank" href="https://n8n.io/">n8n website</a>.</p>
<p>If you’d rather self-host via npm, you can install the free <a target="_blank" href="https://www.npmjs.com/package/n8n">n8n npm package</a> and run it on your localhost (here are the <a target="_blank" href="https://docs.n8n.io/hosting/installation/npm/">steps</a> for that).</p>
<p>You can also self-host via <a target="_blank" href="https://www.docker.com/">Docker</a> and run the n8n image on your machine. I’ll walk you through how to do that now.</p>
<p>First, download and install the <a target="_blank" href="https://www.docker.com/">Docker Desktop</a> application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760074532772/e3987b3e-403c-4a96-b7a7-ed3347acbca0.png" alt="Screenshot of the Docker website showing navigation links and buttons for &quot;Download Docker Desktop&quot; and &quot;Learn about Docker for AI&quot; under the heading &quot;Develop Agents&quot;." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Then click “Search Images” and select the <code>n8nio/n8n</code> image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760074368306/eff04dd0-dcae-4c34-b5a1-f28c0db72373.png" alt="Docker Desktop interface showing a search for &quot;n8nio&quot; under Images. Several container images are listed with download and star counts. Options to Pull and Run the selected image are visible." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Click <code>run</code> on the image and set your localhost port in the options.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760076583465/5f0bea8c-8d9d-4703-9561-1ede643c852e.png" alt="Screenshot of a browser window showing a &quot;Set up owner account&quot; page for n8n, a workflow automation tool. The form requires email, first name, last name, and password." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>You should now be able to access n8n on your localhost.</p>
<h2 id="heading-how-to-build-a-personal-calendar-agent">How to Build a Personal Calendar Agent</h2>
<p>Now for the fun part! We’re going to build a workflow that listens for a chat message, uses an AI Agent to understand the user's request, and automatically creates a Google Calendar event. This simple workflow highlights n8n’s new AI capabilities.</p>
<p>Here’s a breakdown of the steps we’ll go through below:</p>
<ol>
<li><p>Add a Chat node to send a message to the agent.</p>
</li>
<li><p>Let the AI Agent parse the message and extract key details (title, location, times).</p>
</li>
<li><p>Create a Google Calendar event with those details.</p>
</li>
</ol>
<h3 id="heading-step-1-set-up-the-chat-trigger">Step 1: Set Up the Chat Trigger</h3>
<p>Every workflow starts with a trigger. This is the event that kicks everything off. Use a chat trigger that listens for new messages.</p>
<ol>
<li><p>Visit the dashboard at <code>https://&lt;YOUR_USERNAME&gt;.app.n8n.cloud/home/workflows</code> and click <code>Create Workflow</code>.</p>
</li>
<li><p>Click “Add first step..” and add <code>On chat message</code> as the trigger.</p>
</li>
<li><p>In the node's properties panel, enable <code>Make Chat Publicly Available</code> (this provides a URL you can share with friends to book events on your calendar).</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759649236134/c9ffc656-26b9-45a1-9c17-a6aa97a9e997.gif" alt="n8n workflow where you click &quot;On chat message&quot; as the starting node" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-2-configure-the-ai-agent">Step 2: Configure the AI Agent</h3>
<p>This node is the “brain” of the workflow. The AI Agent node can understand natural language, make decisions, and extract structured data. Each agent has four main modules: model, prompt, tools, and output.</p>
<h4 id="heading-1-setup-the-model">1. Setup the Model</h4>
<p>Click the <strong>+</strong> icon after the trigger node and add the <code>AI Agent</code> node. The AI Agent needs a model to power its reasoning. Click the + below <code>Chat Model</code> and select the <code>OpenAI Chat Model</code> node.</p>
<p>Then select <code>n8n free OpenAI API credits</code> as your credential for now. In the future, you can sign up on the <a target="_blank" href="https://platform.openai.com/">OpenAI Platform</a> website and navigate to the "API keys" section to create a new secret key.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759692503994/ac37a0bf-1d39-4fbe-9393-94fda82ea7a8.gif" alt="Click &quot;AI agent&quot; node and select the OpenAI chat model " class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h4 id="heading-2-enable-date-time-tool">2. Enable date time tool</h4>
<p>A tool is a connected node the agent can call during execution to perform actions (like fetching data, formatting dates, or running code) rather than only reasoning in text. We will be using the “Date &amp; Time” tool to convert a user-readable date into a <a target="_blank" href="https://help.clicksend.com/en/articles/44235-what-is-a-unix-timestamp">Unix timestamp</a> before calling the Google Calendar API.</p>
<p>Here are the steps to enable this tool:</p>
<ol>
<li><p>Click the + button below the AI Agent Tool</p>
</li>
<li><p>Find the Date &amp; Time tool</p>
</li>
<li><p>Set Operation as <code>Format a Date</code></p>
</li>
<li><p>Select Date as <code>Defined automatically by the model</code> (this lets the agent pass the date itself)</p>
</li>
<li><p>Select Format as <code>Unix Timestamp</code></p>
</li>
<li><p>Rename Output field name to <code>unixTime</code></p>
</li>
</ol>
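<p>If you're curious what the tool is doing under the hood: a Unix timestamp is just whole seconds since the epoch. A rough plain-JavaScript equivalent (not n8n's actual implementation) would be:</p>
<pre><code class="lang-javascript">// Rough equivalent of the Date &amp; Time tool's "Unix Timestamp" format:
// parse an ISO date string and convert milliseconds to whole seconds.
function toUnixTimestamp(isoDate) {
  return Math.floor(Date.parse(isoDate) / 1000);
}

console.log(toUnixTimestamp("1970-01-01T00:01:00Z")); // 60
</code></pre>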
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759692990713/e59e5f3e-4898-4aeb-af3a-e7dffbb44615.gif" alt="n8n workflow where we click the Tool under AI agent and select the Date time tool and select &quot;Unix timestamp&quot; for format" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h4 id="heading-3-add-agent-prompt">3. Add agent prompt</h4>
<p>An agent prompt is the set of instructions and context you give an AI Agent that defines its behavior, goals, and how it should interpret or respond to user inputs.</p>
<ol>
<li><p>Double-click the AI Agent to edit the prompt.</p>
</li>
<li><p>Select Source for Prompt (User Message) as <code>Define below</code></p>
</li>
<li><p>Copy the following prompt into the Prompt (User Message) field</p>
</li>
</ol>
<pre><code class="lang-plaintext">## Overview
You are an agent which helps parse the user message to identify the following details:
1. The title for the meeting
2. The location of the meeting
3. The meeting start and end Unix times.

Here is the User Message: {{ $json.chatInput }}

## Rules for event time identification:
- The current date time now is: {{ $now }}
- Resolve relative phrases like "tomorrow", "next Friday", "in 2 hours" relative to now.
- If duration given (e.g., "30 min" or "2 hours"), compute end_time from start_time.
- If only a start time given, default duration = 60 minutes.

## Getting event_start and event_end unix
- Use the "Date &amp; Time" tool to convert the computed event start and end time to unixtime.
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759693520456/df1a9a71-f790-42c3-9128-7289aaa35b22.gif" alt="User interface of the n8n workflow editor displaying a workflow named &quot;My workflow 2.&quot; The screen shows nodes including &quot;When chat message received,&quot; &quot;AI Agent,&quot; and connections to OpenAI Chat Model and Date &amp; Time. The user is interacting with the interface, and various menu options are visible on the left sidebar." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
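<p>The prompt's timing rules can be sketched in plain JavaScript (a hypothetical helper, just to make the logic concrete): if a duration is given, the end time is the start plus that duration; otherwise the default is 60 minutes.</p>
<pre><code class="lang-javascript">// Sketch of the prompt's rules for event times. Inputs and outputs are
// Unix seconds, matching the structured output the agent must produce.
function eventTimes(startUnix, durationMinutes) {
  const minutes = durationMinutes ?? 60; // default when only a start time is given
  return { event_start: startUnix, event_end: startUnix + minutes * 60 };
}
</code></pre>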
<h4 id="heading-4-set-up-structured-output">4. Set up structured output</h4>
<ol>
<li><p>Enable the <code>Require Specific Output Format</code> switch in AI Agent</p>
</li>
<li><p>Click + below Output Parser and select <code>Structured Output Parser</code></p>
</li>
<li><p>Copy the following example JSON which we want to extract from user message</p>
</li>
</ol>
<pre><code class="lang-json">{
    <span class="hljs-attr">"meeting_title"</span>: <span class="hljs-string">"Learn Geometry"</span>,
    <span class="hljs-attr">"meeting_location"</span>: <span class="hljs-string">"Library"</span>,
    <span class="hljs-attr">"event_start"</span>: <span class="hljs-number">1759644763</span>,
    <span class="hljs-attr">"event_end"</span>: <span class="hljs-number">1759644764</span>
}
</code></pre>
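<p>Before wiring the output into the calendar node, it can help to picture the check the parser is enforcing. A minimal sketch (hypothetical code, using the field names from the example above):</p>
<pre><code class="lang-javascript">// Checks that the parsed output has all four expected fields and that the
// event ends after it starts.
function isValidEvent(out) {
  const hasKeys = ["meeting_title", "meeting_location", "event_start", "event_end"]
    .every((key) => key in out);
  return hasKeys ? out.event_end > out.event_start : false;
}
</code></pre>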
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759693798967/bc8ba49e-09f3-43ed-9278-1cd4ada54560.gif" alt="Screenshot of an AI agent interface with a dialogue box showing parameters such as source for the user message and options for enabling specific output format. The workspace also displays input and output sections with no data." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-3-add-google-calendar-node">Step 3: Add Google Calendar Node</h3>
<p>The final step is to take the structured data from the AI Agent and create the calendar event.</p>
<ol>
<li><p>Click the <strong>+</strong> icon after the AI Agent node and search for the <strong>Google Calendar</strong> node.</p>
</li>
<li><p>Select Resource as <code>Event</code> and Operation as <code>Create</code></p>
</li>
<li><p>Create new OAuth2 credentials. You'll be prompted to sign in to your Google account and grant n8n permission.</p>
</li>
</ol>
<p>Now, you’re going to map the data from the AI Agent to the fields in the Google Calendar node. This is where the magic happens.</p>
<ol>
<li><p>Select Start as <code>{{ DateTime.fromSeconds($json.output.event_start).toFormat("yyyy-MM-dd HH:mm:ss") }}</code></p>
</li>
<li><p>Select End as <code>{{ DateTime.fromSeconds($json.output.event_end).toFormat("yyyy-MM-dd HH:mm:ss") }}</code></p>
</li>
<li><p>Select Location as <code>{{ $json.output.meeting_location }}</code></p>
</li>
<li><p>Select Summary as <code>{{ $json.output.meeting_title }}</code></p>
</li>
</ol>
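<p>The <code>{{ … }}</code> expressions above use Luxon's <code>DateTime</code>, which n8n exposes inside expressions. If the round trip from Unix seconds to a formatted date is unclear, here is a plain-JavaScript sketch of what <code>DateTime.fromSeconds(ts).toFormat("yyyy-MM-dd HH:mm:ss")</code> produces. Note that this sketch formats in UTC, whereas Luxon inside n8n applies your workflow's timezone:</p>

```javascript
// Plain-JavaScript stand-in for n8n's Luxon expression
//   DateTime.fromSeconds(ts).toFormat("yyyy-MM-dd HH:mm:ss")
// This sketch formats in UTC; inside n8n, Luxon uses the workflow's timezone.
function formatUnixSeconds(ts) {
  const d = new Date(ts * 1000); // Unix seconds -> milliseconds
  const pad = (n) => String(n).padStart(2, "0");
  return (
    `${d.getUTCFullYear()}-${pad(d.getUTCMonth() + 1)}-${pad(d.getUTCDate())} ` +
    `${pad(d.getUTCHours())}:${pad(d.getUTCMinutes())}:${pad(d.getUTCSeconds())}`
  );
}

// The sample event_start from the example JSON earlier:
console.log(formatUnixSeconds(1759644763)); // 2025-10-05 06:12:43 (UTC)
```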
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759696195078/a35054a9-5c3d-44e8-9a30-5539f3675525.gif" alt="Workflow automation interface displaying a chat message integration with an AI agent, using nodes for OpenAI Chat Model, Date &amp; Time, and Structured Output Parser. Sidebar shows options for AI, app actions, data transformation, and more. A save button and trial information are visible at the top. Add &quot;Google Calender&quot; create event node" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-step-4-time-to-test">Step 4: Time to Test!</h3>
<p>That’s it! You now have an AI-powered workflow that creates events on your calendar. Activate the workflow using the toggle at the top of the screen, then click “Open Chat” to start a conversation and send a message. You’ll see the entire workflow run, along with the input and output of each node.</p>
<p>You can also click on the Google Calendar node to find the <code>htmlLink</code> field, which contains a URL where you can view the created event.</p>
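<p>The node's output mirrors the Google Calendar API's event resource. Here is an abridged sketch of what a returned item might look like; the field names follow the API, but all values here are hypothetical:</p>

```javascript
// Abridged sketch of a created event as the Google Calendar node returns it.
// Field names follow the Google Calendar API "events" resource; values are hypothetical.
const createdEvent = {
  kind: "calendar#event",
  status: "confirmed",
  summary: "Learn Geometry",
  location: "Library",
  htmlLink: "https://www.google.com/calendar/event?eid=...", // open this URL to view the event
  start: { dateTime: "2025-10-05T06:12:43Z" },
  end: { dateTime: "2025-10-05T06:12:44Z" },
};

console.log(createdEvent.htmlLink);
```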
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759696671635/a98acab0-2fa6-4c86-976e-996d5af00c59.gif" alt="A workflow automation setup in n8n showing a process where a chat message triggers an AI agent, which interacts with the OpenAI Chat Model and a Structured Output Parser to create an event. The interface includes various options like adding projects, templates, and accessing the admin panel." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this tutorial, you’ve learned how to build a simple, AI-driven automation workflow in n8n’s visual interface. The true power lies in creating personalized AI workflows by easily customizing your own agent, prompt, and tools to fit your exact needs.</p>
<p>n8n's ecosystem thrives on <a target="_blank" href="https://n8n.io/workflows/">community templates</a>, allowing you to utilize thousands of pre-built solutions or share your own creations with the community. If this guide helped you, try extending the workflow on your own and explore the n8n docs for more nodes. Happy coding!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Vibe Code With Help From n8n ]]>
                </title>
                <description>
                    <![CDATA[ Learn the power of vibe coding and how it pairs perfectly with n8n to build full-stack AI-driven apps. We just published a crash course on the freeCodeCamp.org YouTube channel that will teach you the power of VibeCoding and how to automate real-world... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-vibe-code-with-help-from-n8n/</link>
                <guid isPermaLink="false">686e78b53e3a3fb3bf0c79eb</guid>
                
                    <category>
                        <![CDATA[ vibe coding ]]>
                    </category>
                
                    <category>
                        <![CDATA[ youtube ]]>
                    </category>
                
                    <category>
                        <![CDATA[ n8n ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Beau Carnes ]]>
                </dc:creator>
                <pubDate>Wed, 09 Jul 2025 14:12:05 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751989831875/d04a4ccc-1159-4ff1-944b-d75399a55a2e.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Learn the power of vibe coding and how it pairs perfectly with n8n to build full-stack AI-driven apps.</p>
<p>We just published a crash course on the freeCodeCamp.org YouTube channel that will teach you the power of vibe coding and how to automate real-world workflows using n8n. The course starts by demystifying what software engineers actually do and introduces you to the vibe coding movement, an approach that blends human creativity with AI-driven automation. Paulo Dichone from Vinci bits created this course.</p>
<p>You’ll learn about the origins of vibe coding, why it’s accessible to everyone, and how it can transform your approach to building software. The course walks through the entire vibe coding workflow, connecting it to core software engineering principles so you can vibe code like a pro.</p>
<p>A key part of this course is n8n, an open-source workflow automation tool that lets you connect different apps, APIs, and services without writing tons of code. Think of n8n as a visual platform where you can drag, drop, and link together building blocks to automate tasks, process data, and build complex backend systems.</p>
<p>You will learn how to set up n8n, create your first workflows and webhooks, and use AI coding agents to build powerful frontends that interact with your automated backend. Along the way, you’ll tackle real challenges like processing files, handling binary data, and troubleshooting workflow issues.</p>
<p>Here are the different sections covered in this course:</p>
<ul>
<li><p>What do Software Engineers do?</p>
</li>
<li><p>Vibe Coding - Who Started This Movement?</p>
</li>
<li><p>Fear Not Vibe Coding - Here’s Why</p>
</li>
<li><p>Vibe Coding - the Full Workflow</p>
</li>
<li><p>Bring it All Together - Vibecoding and Software Engineering Principles - Vibecoding Like an Engineer</p>
</li>
<li><p>LLM and Context - Why Does it Matter?</p>
</li>
<li><p>The Path to Follow when Vibe Coding</p>
</li>
<li><p>How To Think About Vibe Coding to Build Production Applications</p>
</li>
<li><p>Setting up N8N</p>
</li>
<li><p>Basics on N8N - Create your First Workflow and Webhooks</p>
</li>
<li><p>Leveraging AI Coding Agents to Build the Frontend - Intro to <a target="_blank" href="http://Bolt.new">Bolt.new</a> - Upload Files to our N8N Backend</p>
</li>
<li><p>Adding an N8N Switch Node to our Backend Workflow - Processing PDF, TXT, and CSV Files</p>
</li>
<li><p>Understanding Binary Files and Troubleshooting N8N Issues</p>
</li>
<li><p>An Overview of the Workflow Architecture - Bulletproofing the Workflow for Further Processing Downstream</p>
</li>
<li><p>Adding a Code Node to our N8N Workflow for Processing Data</p>
</li>
<li><p>Deconstructing the Field Extraction Node Code</p>
</li>
<li><p>Combining Extracted Data through the Code Node</p>
</li>
<li><p>Testing the Workflow - Full File Upload and File Processing Workflow</p>
</li>
<li><p>Final Thoughts and Where to Go From Here</p>
</li>
</ul>
<p>By the end of this course, you’ll have the vibe coding mindset and a production-ready AI automation system built with n8n and AI tools. Watch the full course <a target="_blank" href="https://youtu.be/qDtVzumlb8M">on the freeCodeCamp.org YouTube channel</a> (2-hour watch).</p>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/qDtVzumlb8M" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
