<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Node.js - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Node.js - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Fri, 15 May 2026 09:48:08 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/nodejs/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Build a Complete SaaS Payment Flow with Stripe, Webhooks, and Email Notifications ]]>
                </title>
                <description>
                    <![CDATA[ Most Stripe tutorials end at the checkout page. The customer clicks "Pay," Stripe processes the charge, and the tutorial congratulates you on integrating payments. But that's only the first 10% of a real payment system. ]]>
                </description>
                <link>https://www.freecodecamp.org/news/saas-payment-flow-stripe-webhooks-email/</link>
                <guid isPermaLink="false">69fe0830f239332df4de5722</guid>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Software Engineering ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Web Development ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Magnus Rødseth ]]>
                </dc:creator>
                <pubDate>Fri, 08 May 2026 15:58:40 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/de7d5c4d-062c-4879-892c-4486c7c461af.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Most Stripe tutorials end at the checkout page. The customer clicks "Pay," Stripe processes the charge, and the tutorial congratulates you on integrating payments.</p>
<p>But that's only the first 10% of a real payment system.</p>
<p>What happens after the customer pays? You need to record the purchase in your database, send a confirmation email, and grant product access (a GitHub repo invitation, an API key, a license file). You need to notify yourself as the admin. You need to handle refunds two weeks later and send recovery emails when someone abandons checkout.</p>
<p>This is the complete payment lifecycle, and it's where most SaaS applications break.</p>
<p>This article walks you through building the entire flow, from the "Buy" button to the "Welcome" email and everything in between. Every code example comes from a production application processing real payments. You'll see how to design the database schema, create Stripe products, build the checkout flow, process purchases reliably, handle refunds, recover abandoned carts, and send transactional emails.</p>
<p>Here is what you'll learn:</p>
<ul>
<li><p>How to design a database schema that tracks every stage of a purchase</p>
</li>
<li><p>How to create Stripe products and prices programmatically</p>
</li>
<li><p>How to build a checkout flow with success/cancel handling</p>
</li>
<li><p>How to process webhooks securely with signature verification</p>
</li>
<li><p>How to split post-payment processing into durable, independently retried steps</p>
</li>
<li><p>How to handle full and partial refunds with automatic access revocation</p>
</li>
<li><p>How to recover revenue from abandoned checkouts</p>
</li>
<li><p>How to build transactional email templates with React Email and Resend</p>
</li>
<li><p>How to test the entire flow locally with Stripe CLI and Inngest</p>
</li>
</ul>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-how-to-design-the-payment-database-schema">How to Design the Payment Database Schema</a></p>
</li>
<li><p><a href="#heading-how-to-create-stripe-products-and-prices">How to Create Stripe Products and Prices</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-checkout-flow">How to Build the Checkout Flow</a></p>
</li>
<li><p><a href="#heading-how-to-handle-webhooks-securely">How to Handle Webhooks Securely</a></p>
</li>
<li><p><a href="#heading-how-to-process-purchases-with-durable-background-jobs">How to Process Purchases with Durable Background Jobs</a></p>
</li>
<li><p><a href="#heading-how-to-handle-refunds">How to Handle Refunds</a></p>
</li>
<li><p><a href="#heading-how-to-recover-abandoned-checkouts">How to Recover Abandoned Checkouts</a></p>
</li>
<li><p><a href="#heading-how-to-send-transactional-emails-with-react-email">How to Send Transactional Emails with React Email</a></p>
</li>
<li><p><a href="#heading-how-to-test-the-complete-flow-locally">How to Test the Complete Flow Locally</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow along, you should be familiar with:</p>
<ul>
<li><p>TypeScript and Node.js</p>
</li>
<li><p>SQL databases (the examples use PostgreSQL)</p>
</li>
<li><p>React (for email templates)</p>
</li>
<li><p>Basic understanding of webhooks</p>
</li>
</ul>
<p>You don't need prior experience with any of the specific libraries. This article explains each one as it appears.</p>
<h3 id="heading-what-you-need-installed">What You Need Installed</h3>
<p>Install these packages to run the code examples:</p>
<pre><code class="language-bash">bun add stripe drizzle-orm @neondatabase/serverless inngest resend @react-email/components
</code></pre>
<p>You'll also need:</p>
<ul>
<li><p>A <a href="https://dashboard.stripe.com/register">Stripe account</a> (test mode is fine)</p>
</li>
<li><p>A <a href="https://neon.tech">Neon</a> PostgreSQL database (or any PostgreSQL instance)</p>
</li>
<li><p>A <a href="https://resend.com">Resend</a> account for sending emails</p>
</li>
<li><p>The <a href="https://stripe.com/docs/stripe-cli">Stripe CLI</a> for local webhook testing</p>
</li>
</ul>
<h3 id="heading-environment-variables">Environment Variables</h3>
<p>Set up these environment variables in your <code>.env</code> file:</p>
<pre><code class="language-bash"># Database
DATABASE_URL=postgresql://...

# Stripe
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_PRO_PRICE_ID=price_...

# Email
RESEND_API_KEY=re_...
EMAIL_FROM="Your App &lt;noreply@mail.yourapp.com&gt;"
ADMIN_EMAIL=you@yourapp.com

# App
BETTER_AUTH_URL=http://localhost:3000
</code></pre>
<h2 id="heading-how-to-design-the-payment-database-schema">How to Design the Payment Database Schema</h2>
<p>Before writing any Stripe code, you need a database schema that can track a purchase through every stage of its lifecycle: creation, completion, partial refund, and full refund.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69a694d8d4dc9b42434c218f/6d0650fa-a568-4cb5-8560-8a2414635476.png" alt="Purchase status state machine showing transitions from pending to completed via Stripe webhook, then to refunded or partially refunded" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>A purchase starts as <code>pending</code> when the user clicks "Buy." After Stripe confirms payment, it transitions to <code>completed</code>. From there, it can move to <code>refunded</code> or <code>partially_refunded</code>. Pending purchases that are never completed expire after 24 hours (abandoned carts).</p>
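<p>That lifecycle can also be read as a small state machine. Here's an illustrative sketch (not code from the production application): note that <code>pending</code> exists only conceptually as the open Stripe session, while the schema below stores just the post-payment states.</p>
<pre><code class="language-typescript">// Illustrative transition map for the purchase lifecycle described above.
type PurchaseLifecycleStatus =
  | "pending"
  | "completed"
  | "partially_refunded"
  | "refunded";

const TRANSITIONS: { [status in PurchaseLifecycleStatus]: PurchaseLifecycleStatus[] } = {
  pending: ["completed"], // or the session expires after 24 hours
  completed: ["partially_refunded", "refunded"],
  partially_refunded: ["refunded"], // a follow-up refund can complete it
  refunded: [], // terminal
};

export function canTransition(
  from: PurchaseLifecycleStatus,
  to: PurchaseLifecycleStatus
): boolean {
  return TRANSITIONS[from].includes(to);
}
</code></pre>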
<p>Here is the schema I use in production, defined with <a href="https://orm.drizzle.team">Drizzle ORM</a>. The examples throughout this article grant access to a private GitHub repository because that's what this particular product sells.</p>
<p>Your "grant access" step will be different: upgrading a user to a Pro plan, provisioning API credits, unlocking course content, or activating a subscription. The schema fields and step logic change, but the durable execution pattern is the same.</p>
<pre><code class="language-typescript">// src/lib/db/schema.ts
import {
  boolean,
  integer,
  pgEnum,
  pgTable,
  text,
  timestamp,
  varchar,
} from "drizzle-orm/pg-core";

export const purchaseTierEnum = pgEnum("purchase_tier", ["pro"]);
export const purchaseStatusEnum = pgEnum("purchase_status", [
  "completed",
  "partially_refunded",
  "refunded",
]);

export const users = pgTable("users", {
  id: text("id").primaryKey(),
  email: varchar("email", { length: 255 }).notNull().unique(),
  emailVerified: boolean("email_verified").notNull().default(false),
  name: text("name"),
  image: text("image"),
  githubUsername: text("github_username"),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});

export const purchases = pgTable("purchases", {
  id: text("id")
    .primaryKey()
    .$defaultFn(() =&gt; crypto.randomUUID()),
  userId: text("user_id")
    .notNull()
    .references(() =&gt; users.id, { onDelete: "cascade" }),
  stripeCheckoutSessionId: text("stripe_checkout_session_id")
    .notNull()
    .unique(),
  stripeCustomerId: text("stripe_customer_id"),
  stripePaymentIntentId: text("stripe_payment_intent_id"),
  tier: purchaseTierEnum("tier").notNull(),
  status: purchaseStatusEnum("status").notNull().default("completed"),
  githubAccessGranted: boolean("github_access_granted")
    .notNull()
    .default(false),
  githubInvitationId: text("github_invitation_id"),
  amount: integer("amount").notNull(),
  currency: text("currency").notNull().default("usd"),
  purchasedAt: timestamp("purchased_at").notNull().defaultNow(),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});

export type Purchase = typeof purchases.$inferSelect;
export type NewPurchase = typeof purchases.$inferInsert;
</code></pre>
<p>Let me walk through the design decisions behind this schema.</p>
<h3 id="heading-why-three-stripe-id-columns">Why Three Stripe ID Columns?</h3>
<p>The <code>purchases</code> table stores three separate Stripe identifiers: <code>stripeCheckoutSessionId</code>, <code>stripeCustomerId</code>, and <code>stripePaymentIntentId</code>.</p>
<p>Each one serves a different purpose.</p>
<p>The <strong>checkout session ID</strong> is what you receive first. When a customer starts checkout, Stripe creates a session and gives you this ID. You use it to claim the purchase after the customer returns from Stripe's hosted checkout page.</p>
<p>The <code>unique()</code> constraint on this column is your idempotency guard. If someone tries to claim the same session twice, the database rejects the second insert.</p>
<p>The <strong>customer ID</strong> is Stripe's internal identifier for the buyer. You need this to look up the customer's payment history in Stripe's dashboard and to create future checkout sessions pre-filled with their billing info.</p>
<p>The <strong>payment intent ID</strong> is what Stripe sends in refund webhook events. When a <code>charge.refunded</code> event fires, it includes the payment intent ID but not the checkout session ID. Without storing this field, you would have no way to match a refund back to a purchase in your database.</p>
<h3 id="heading-why-track-access-state-in-your-database">Why Track Access State in Your Database?</h3>
<p>The <code>githubAccessGranted</code> and <code>githubInvitationId</code> fields might look unnecessary. You could check GitHub's API to see if a user has access. But querying an external API every time you need to check a user's access state is slow, rate-limited, and unreliable.</p>
<p>By tracking access state in your own database, you can answer "does this user have access?" with a single indexed query. You also know whether access was ever granted, which is critical for refund processing. If <code>githubAccessGranted</code> is <code>false</code>, you don't need to revoke anything on refund.</p>
<h3 id="heading-why-a-status-enum-with-three-values">Why a Status Enum with Three Values?</h3>
<p>The <code>purchaseStatusEnum</code> has three values: <code>completed</code>, <code>partially_refunded</code>, and <code>refunded</code>.</p>
<p>This matters for downstream logic. Your dashboard, analytics, support tools, and email sequences all need to know the exact state of a purchase. A partially refunded customer still has access, but a fully refunded customer doesn't.</p>
<p>If you only tracked "refunded" as a boolean, you would lose the distinction between partial and full refunds. That distinction affects whether you revoke product access.</p>
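<p>That access decision can be captured in one small helper. This is an illustrative sketch, not code from the article's codebase; it combines the status enum with the <code>githubAccessGranted</code> flag discussed above.</p>
<pre><code class="language-typescript">// Illustrative: decide whether a refund should revoke product access.
type PurchaseStatus = "completed" | "partially_refunded" | "refunded";

export function shouldRevokeAccess(purchase: {
  status: PurchaseStatus;
  githubAccessGranted: boolean;
}): boolean {
  // A partially refunded customer keeps access; only a full refund revokes,
  // and only if access was actually granted in the first place.
  if (purchase.status !== "refunded") {
    return false;
  }
  return purchase.githubAccessGranted;
}
</code></pre>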
<h3 id="heading-how-to-generate-and-run-migrations">How to Generate and Run Migrations</h3>
<p>After defining your schema, generate a migration file and apply it to your database:</p>
<pre><code class="language-bash"># Generate migration SQL from schema changes
bun run drizzle-kit generate

# Push schema directly (development only)
bun run drizzle-kit push

# Run migrations (production)
bun run drizzle-kit migrate
</code></pre>
<p>Drizzle Kit compares your TypeScript schema to the database and generates the SQL needed to bring them in sync. Review the generated migration file before running it in production. Schema changes are one of the few things you can't easily undo.</p>
<p>For development, <code>drizzle-kit push</code> is faster because it applies changes directly without creating migration files. For production, always use <code>drizzle-kit generate</code> followed by <code>drizzle-kit migrate</code> so you have a versioned record of every schema change.</p>
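<p>These commands read a <code>drizzle.config.ts</code> at the project root. The article doesn't show one, so here's a minimal sketch; the file paths are assumptions, and the <code>dialect</code> field follows recent drizzle-kit versions:</p>
<pre><code class="language-typescript">// drizzle.config.ts (illustrative)
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  schema: "./src/lib/db/schema.ts", // where the tables above are defined
  out: "./drizzle", // generated migration files land here
  dialect: "postgresql",
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
</code></pre>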
<h2 id="heading-how-to-create-stripe-products-and-prices">How to Create Stripe Products and Prices</h2>
<p>You can create products and prices through the Stripe dashboard, but managing them programmatically is better for reproducibility. Here's a seed script that creates everything you need:</p>
<pre><code class="language-typescript">// src/lib/payments/seed.ts
import { stripe } from "./index";

const PRODUCTS = [
  {
    name: "My SaaS Product",
    description: "Full access, one-time purchase",
    features: [
      "Full source code access",
      "Production-ready infrastructure",
      "Lifetime updates",
    ],
    metadata: { tier: "pro" },
    prices: [
      {
        lookupKey: "pro_one_time",
        unitAmount: 19900, // $199.00 in cents
        currency: "usd",
        nickname: "Pro One-Time",
      },
    ],
  },
];

async function main() {
  console.log("Seeding Stripe products and prices...\n");

  for (const config of PRODUCTS) {
    // Create or find product
    const products = await stripe.products.list({ active: true, limit: 100 });
    let product = products.data.find((p) =&gt; p.name === config.name);

    if (!product) {
      product = await stripe.products.create({
        name: config.name,
        description: config.description,
        marketing_features: config.features.map((f) =&gt; ({ name: f })),
        metadata: config.metadata,
      });
      console.log(`Created product "${config.name}" (${product.id})`);
    }

    // Create prices
    for (const priceConfig of config.prices) {
      const existing = await stripe.prices.list({
        lookup_keys: [priceConfig.lookupKey],
        active: true,
        limit: 1,
      });

      if (existing.data[0]) {
        console.log(`Price "${priceConfig.lookupKey}" already exists`);
        continue;
      }

      const price = await stripe.prices.create({
        product: product.id,
        unit_amount: priceConfig.unitAmount,
        currency: priceConfig.currency,
        nickname: priceConfig.nickname,
        lookup_key: priceConfig.lookupKey,
        transfer_lookup_key: true,
      });

      console.log(`Created price "${priceConfig.lookupKey}" (${price.id})`);
    }
  }

  console.log("\nDone! Add the price ID to your .env as STRIPE_PRO_PRICE_ID");
}

main().catch(console.error);
</code></pre>
<p>Run this with <code>bun run src/lib/payments/seed.ts</code>.</p>
<p>A few things worth noting.</p>
<ul>
<li><p><strong>Use</strong> <code>lookup_key</code> <strong>instead of hardcoding price IDs:</strong> Price IDs are different between test and live mode. Lookup keys let you reference prices by name (<code>pro_one_time</code>) rather than by Stripe's generated ID (<code>price_1P...</code>).  </p>
<p>The <code>transfer_lookup_key: true</code> option ensures that if you create a new price with the same lookup key, it replaces the old one automatically.</p>
</li>
<li><p><strong>Prices are in cents:</strong> Stripe's API expects amounts in the smallest currency unit. For USD, that means <code>19900</code> represents $199.00.  </p>
<p>This is a common source of bugs. Always store amounts in cents in your database and convert to dollars only at the display layer.</p>
</li>
<li><p><strong>The seed script is idempotent:</strong> You can run it multiple times safely. It checks for existing products and prices before creating new ones.</p>
</li>
</ul>
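<p>To keep the cents convention honest, convert only at the display layer. A small illustrative helper (not from the article's codebase):</p>
<pre><code class="language-typescript">// Illustrative: amounts stay in cents everywhere except here.
export function formatAmount(amountInCents: number, currency = "usd"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: currency.toUpperCase(),
  }).format(amountInCents / 100);
}
</code></pre>
<p>With this in place, <code>formatAmount(19900)</code> renders as <code>$199.00</code>.</p>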
<h3 id="heading-how-to-set-up-the-stripe-client">How to Set Up the Stripe Client</h3>
<p>The Stripe client uses lazy initialization so that importing it doesn't throw if the API key is missing at module load time. This matters in build environments where environment variables aren't set.</p>
<pre><code class="language-typescript">// src/lib/payments/index.ts
import Stripe from "stripe";

let stripeClient: Stripe | null = null;

function getStripe(): Stripe {
  if (!stripeClient) {
    const secretKey = process.env.STRIPE_SECRET_KEY;
    if (!secretKey) {
      throw new Error("STRIPE_SECRET_KEY is not set");
    }
    stripeClient = new Stripe(secretKey);
  }
  return stripeClient;
}

export const stripe = new Proxy({} as Stripe, {
  get(_, prop) {
    return Reflect.get(getStripe(), prop);
  },
});
</code></pre>
<p>The <code>Proxy</code> wrapper is the key pattern here. Code across your application imports <code>stripe</code> and calls methods like <code>stripe.checkout.sessions.create(...)</code>. The proxy intercepts every property access and forwards it to the lazily initialized client.</p>
<p>This means the Stripe SDK only initializes when you actually use it, not when the module is imported.</p>
<h2 id="heading-how-to-build-the-checkout-flow">How to Build the Checkout Flow</h2>
<p>The checkout flow has three parts: creating the session, redirecting the customer, and handling the return.</p>
<h3 id="heading-how-to-create-a-checkout-session">How to Create a Checkout Session</h3>
<p>Here's the function that creates a Stripe Checkout session for a one-time payment:</p>
<pre><code class="language-typescript">// src/lib/payments/index.ts
export async function createOneTimeCheckoutSession(params: {
  priceId: string;
  successUrl: string;
  cancelUrl: string;
  metadata: Record&lt;string, string&gt;;
  customerEmail?: string;
  couponId?: string;
}) {
  const client = getStripe();

  const session = await client.checkout.sessions.create({
    mode: "payment",
    line_items: [{ price: params.priceId, quantity: 1 }],
    success_url: params.successUrl,
    cancel_url: params.cancelUrl,
    metadata: params.metadata,
    ...(params.customerEmail &amp;&amp; {
      customer_email: params.customerEmail,
    }),
    ...(params.couponId
      ? { discounts: [{ coupon: params.couponId }] }
      : { allow_promotion_codes: true }),
  });

  return session;
}
</code></pre>
<p>Three details matter here.</p>
<ul>
<li><p><strong>The</strong> <code>mode: "payment"</code> <strong>setting tells Stripe this is a one-time charge</strong>, not a subscription. For subscriptions, you would use <code>mode: "subscription"</code>. The mode affects which webhook events Stripe sends after payment.</p>
</li>
<li><p><strong>The</strong> <code>metadata</code> <strong>field is how you link the Stripe session back to your application.</strong> Pass your internal product tier, user ID, or any other data you need after payment. Stripe stores this metadata and includes it in webhook events and API responses.</p>
</li>
<li><p><strong>The</strong> <code>allow_promotion_codes: true</code> <strong>option shows a promo code field on the checkout page.</strong> If you have a specific coupon to apply (from a landing page URL parameter, for example), pass it via <code>discounts</code> instead. You can't use both at the same time.</p>
</li>
</ul>
<h3 id="heading-how-to-create-the-checkout-api-endpoint">How to Create the Checkout API Endpoint</h3>
<p>Here's the API endpoint that creates a checkout session and returns the URL:</p>
<pre><code class="language-typescript">// src/server/api.ts
app.post("/api/payments/checkout", async ({ set }) =&gt; {
  const priceId = process.env.STRIPE_PRO_PRICE_ID;

  if (!priceId) {
    set.status = 500;
    return { error: "Price not configured" };
  }

  const baseUrl = process.env.BETTER_AUTH_URL ?? "http://localhost:3000";
  const tier = "pro";

  const checkoutSession = await createOneTimeCheckoutSession({
    priceId,
    successUrl: `${baseUrl}/dashboard?purchase=success&amp;session_id={CHECKOUT_SESSION_ID}`,
    cancelUrl: `${baseUrl}/pricing`,
    metadata: { tier },
  });

  return { url: checkoutSession.url };
});
</code></pre>
<p>The <code>{CHECKOUT_SESSION_ID}</code> placeholder in the success URL is a Stripe template variable. Stripe replaces it with the actual session ID when redirecting the customer. This lets your frontend know which session just completed.</p>
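<p>On the frontend, that means reading <code>session_id</code> from the URL when the customer lands back on your success page. A hypothetical client-side sketch (the endpoint path matches the claim route covered next):</p>
<pre><code class="language-typescript">// Illustrative client-side code: claim the purchase after Stripe redirects back.
export function extractSessionId(url: string) {
  return new URL(url).searchParams.get("session_id");
}

export async function claimPurchase(sessionId: string) {
  const res = await fetch("/api/purchases/claim", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId }),
  });
  if (!res.ok) {
    throw new Error("Failed to claim purchase");
  }
  return res.json();
}
</code></pre>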
<h3 id="heading-how-to-claim-the-purchase-after-checkout">How to Claim the Purchase After Checkout</h3>
<p>When the customer returns to your success URL, your frontend reads the <code>session_id</code> from the URL and sends it to a "claim" endpoint. This endpoint verifies the payment and creates the purchase record.</p>
<pre><code class="language-typescript">// src/server/api.ts
app.post(
  "/api/purchases/claim",
  async ({ body, request, set }) =&gt; {
    const session = await auth.api.getSession({
      headers: request.headers,
    });

    if (!session) {
      set.status = 401;
      return { error: "Unauthorized" };
    }

    const { sessionId } = body;

    // Check if this session was already claimed
    const existing = await db
      .select()
      .from(purchases)
      .where(eq(purchases.stripeCheckoutSessionId, sessionId))
      .limit(1);

    if (existing[0]) {
      return { success: true, alreadyClaimed: true, tier: existing[0].tier };
    }

    // Retrieve the Stripe checkout session to verify payment
    const stripeSession = await retrieveCheckoutSession(sessionId);

    if (stripeSession.payment_status !== "paid") {
      set.status = 400;
      return { error: "Payment not completed" };
    }

    const tier = (stripeSession.metadata?.tier ?? "pro") as PaymentTier;

    // Create purchase record
    await db.insert(purchases).values({
      userId: session.user.id,
      stripeCheckoutSessionId: sessionId,
      stripeCustomerId:
        typeof stripeSession.customer === "string"
          ? stripeSession.customer
          : stripeSession.customer?.id ?? null,
      stripePaymentIntentId:
        typeof stripeSession.payment_intent === "string"
          ? stripeSession.payment_intent
          : stripeSession.payment_intent?.id ?? null,
      tier,
      status: "completed",
      amount: stripeSession.amount_total ?? 0,
      currency: stripeSession.currency ?? "usd",
    });

    // Trigger background processing
    await inngest.send({
      name: "purchase/completed",
      data: {
        userId: session.user.id,
        tier,
        sessionId,
      },
    });

    return { success: true, tier };
  },
  {
    body: t.Object({
      sessionId: t.String(),
    }),
  }
);
</code></pre>
<p>This endpoint does four things, in order.</p>
<ol>
<li><p><strong>First, it checks if the session was already claimed.</strong> The <code>unique()</code> constraint on <code>stripeCheckoutSessionId</code> in the schema prevents duplicate records, but checking first lets you return a clean response without catching a database error.</p>
</li>
<li><p><strong>Second, it verifies payment with Stripe.</strong> Never trust data from the client. The frontend passes the session ID, but you must call Stripe's API to confirm that <code>payment_status</code> is <code>"paid"</code>.</p>
</li>
<li><p><strong>Third, it creates the purchase record.</strong> Notice how it extracts the <code>customer</code> and <code>payment_intent</code> from the Stripe session. Both fields are returned as either strings or expanded objects depending on your Stripe API settings, so the ternary handles both cases.</p>
</li>
<li><p><strong>Fourth, it sends a</strong> <code>purchase/completed</code> <strong>event to Inngest.</strong> This triggers the background processing flow that handles emails, access grants, analytics, and follow-up scheduling. The API endpoint doesn't do any of that work and returns <code>{ success: true }</code> immediately.</p>
</li>
</ol>
<p>This separation between recording the purchase and processing it is fundamental. The database insert is fast and reliable. The downstream processing (emails, API calls, analytics) is slow and unreliable.</p>
<p>By splitting them, you ensure the customer sees a success response instantly while the background work happens durably.</p>
<h2 id="heading-how-to-handle-webhooks-securely">How to Handle Webhooks Securely</h2>
<p>Your webhook endpoint is the entry point for Stripe events that happen outside your checkout flow: refunds, expired sessions, and disputes.</p>
<h3 id="heading-how-to-verify-webhook-signatures">How to Verify Webhook Signatures</h3>
<p>Every webhook from Stripe includes a signature header. You must verify this signature before processing the event. Without verification, anyone could send fake events to your webhook URL.</p>
<pre><code class="language-typescript">// src/lib/payments/index.ts
export async function constructWebhookEvent(
  payload: string | Buffer,
  signature: string
) {
  const webhookSecret = process.env.STRIPE_WEBHOOK_SECRET;
  if (!webhookSecret) {
    throw new Error("STRIPE_WEBHOOK_SECRET is not set");
  }
  const client = getStripe();
  return client.webhooks.constructEventAsync(payload, signature, webhookSecret);
}
</code></pre>
<p>One critical detail: <strong>use</strong> <code>constructEventAsync</code> <strong>instead of</strong> <code>constructEvent</code><strong>.</strong> The async version uses the Web Crypto API, which is compatible with modern runtimes like Bun and Cloudflare Workers. The synchronous version depends on Node.js's <code>crypto</code> module, which isn't available everywhere.</p>
<p>Another critical detail: <strong>pass the raw request body to signature verification.</strong> If your framework parses the body as JSON before you access it, the signature check fails. The signature is computed over the raw bytes of the request, not the parsed JSON.</p>
<h3 id="heading-how-to-build-the-webhook-endpoint">How to Build the Webhook Endpoint</h3>
<p>Here is the production webhook handler. Its only job is to validate the event and route it to the background job system.</p>
<pre><code class="language-typescript">// src/server/api.ts
app.post("/api/payments/webhook", async ({ request, set }) =&gt; {
  const body = await request.text();
  const sig = request.headers.get("stripe-signature");

  if (!sig) {
    set.status = 400;
    return { error: "Missing signature" };
  }

  try {
    const event = await constructWebhookEvent(body, sig);
    console.log(`[Webhook] Received ${event.type}`);

    if (event.type === "charge.refunded") {
      const charge = event.data.object as {
        id: string;
        payment_intent: string;
        amount: number;
        amount_refunded: number;
        currency: string;
      };
      await inngest.send({
        name: "stripe/charge.refunded",
        data: {
          chargeId: charge.id,
          paymentIntentId: charge.payment_intent,
          amountRefunded: charge.amount_refunded,
          originalAmount: charge.amount,
          currency: charge.currency,
        },
      });
    }

    if (event.type === "checkout.session.expired") {
      const session = event.data.object as {
        id: string;
        customer_email: string | null;
      };
      await inngest.send({
        name: "stripe/checkout.session.expired",
        data: {
          sessionId: session.id,
          customerEmail: session.customer_email,
        },
      });
    }

    return { received: true };
  } catch (error) {
    console.error("[Webhook] Stripe verification failed:", error);
    set.status = 400;
    return { error: "Webhook verification failed" };
  }
});
</code></pre>
<p>This is the "thin webhook handler" pattern. Notice what it does <strong>not</strong> do: it does not query the database, send emails, grant access, or call any external service. It validates the signature, extracts the fields it needs, and sends a typed event to Inngest.</p>
<p>The entire handler completes in milliseconds.</p>
<p>Why does this matter? Stripe expects your webhook to return a 2xx response within about 20 seconds. If your handler tries to do too much work (database queries, email sends, API calls), it risks timing out.</p>
<p>Stripe marks it as failed and retries the entire event. Now you have partial completion and duplicate processing.</p>
<p>The thin handler avoids this entirely. Validate, enqueue, return. All the real work happens asynchronously in durable background functions.</p>
<h3 id="heading-why-extract-fields-before-enqueueing">Why Extract Fields Before Enqueueing?</h3>
<p>You might notice that the webhook handler extracts specific fields from the Stripe event before sending them to Inngest:</p>
<pre><code class="language-typescript">await inngest.send({
  name: "stripe/charge.refunded",
  data: {
    chargeId: charge.id,
    paymentIntentId: charge.payment_intent,
    amountRefunded: charge.amount_refunded,
    originalAmount: charge.amount,
    currency: charge.currency,
  },
});
</code></pre>
<p>Why not forward the entire Stripe event? Two reasons.</p>
<p>First, Stripe event objects are large and deeply nested. Your background function only needs five fields. Sending the entire object means your durable function stores a large payload at every checkpoint, and over thousands of runs, this adds up.</p>
<p>Second, extracting fields at the boundary creates a clean contract between your webhook handler and your background functions. If Stripe changes the shape of their event objects in a future API version, you only need to update the extraction logic in the webhook handler. Your background functions keep working because they depend on your own typed data shape, not Stripe's.</p>
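<p>One way to make that contract explicit is to own the mapping in a single function at the webhook boundary. An illustrative sketch:</p>
<pre><code class="language-typescript">// Illustrative: the webhook layer owns the Stripe-to-internal mapping.
export type ChargeRefundedPayload = {
  chargeId: string;
  paymentIntentId: string;
  amountRefunded: number;
  originalAmount: number;
  currency: string;
};

export function toChargeRefundedPayload(charge: {
  id: string;
  payment_intent: string;
  amount: number;
  amount_refunded: number;
  currency: string;
}): ChargeRefundedPayload {
  return {
    chargeId: charge.id,
    paymentIntentId: charge.payment_intent,
    amountRefunded: charge.amount_refunded,
    originalAmount: charge.amount,
    currency: charge.currency,
  };
}
</code></pre>
<p>If a future Stripe API version renames a field, this function is the only place that changes.</p>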
<h3 id="heading-how-to-set-up-webhooks-in-production">How to Set Up Webhooks in Production</h3>
<p>For production, you configure webhooks in the Stripe Dashboard:</p>
<ol>
<li><p>Go to Stripe Dashboard, then Developers, then Webhooks.</p>
</li>
<li><p>Add an endpoint pointing to your production URL: <code>https://yourapp.com/api/payments/webhook</code>.</p>
</li>
<li><p>Select the events you want to receive: <code>charge.refunded</code> and <code>checkout.session.expired</code>.</p>
</li>
<li><p>Copy the signing secret and add it to your production environment variables as <code>STRIPE_WEBHOOK_SECRET</code>.</p>
</li>
</ol>
<p>The production signing secret is different from the one the Stripe CLI generates for local testing. Make sure your environment variables are set correctly for each environment.</p>
<h3 id="heading-which-webhook-events-to-listen-for">Which Webhook Events to Listen For</h3>
<p>For a complete payment flow, you need these webhook events configured in Stripe:</p>
<table>
<thead>
<tr>
<th>Event</th>
<th>When It Fires</th>
<th>What You Do</th>
</tr>
</thead>
<tbody><tr>
<td><code>charge.refunded</code></td>
<td>Customer receives a refund</td>
<td>Revoke access (full refund) or update status (partial)</td>
</tr>
<tr>
<td><code>checkout.session.expired</code></td>
<td>Checkout session times out (after 24 hours by default)</td>
<td>Send abandoned cart recovery email</td>
</tr>
</tbody></table>
<p>For subscription-based billing, you would also listen for <code>customer.subscription.updated</code>, <code>customer.subscription.deleted</code>, and <code>invoice.payment_failed</code>. This article covers one-time payments, so the examples focus on the two events above.</p>
<p>The <code>checkout.session.completed</code> event is notably absent. For one-time payments, you typically process the purchase in the "claim" endpoint (shown in the previous section) rather than in a webhook, because you need the authenticated user's session to link the purchase to their account.</p>
<h2 id="heading-how-to-process-purchases-with-durable-background-jobs">How to Process Purchases with Durable Background Jobs</h2>
<p>This is the heart of the payment flow. After the purchase record is created and the <code>purchase/completed</code> event is sent, a durable function takes over and runs the entire post-payment workflow.</p>
<p>Each step in this function is individually checkpointed. If step 5 fails, steps 1 through 4 don't re-run. Step 5 retries on its own, and once it succeeds, steps 6 through 9 continue.</p>
<p>This is what "durable execution" means. It's the difference between a payment system that works in development and one that works in production.</p>
<p>I use <a href="https://www.inngest.com/">Inngest</a> for this. It is an event-driven durable execution platform that provides step-level checkpointing out of the box. You define functions with <code>step.run()</code> blocks, and Inngest handles retry logic, state persistence, and observability.</p>
<p>The Inngest client setup is minimal:</p>
<pre><code class="language-typescript">// src/lib/jobs/client.ts
import { Inngest } from "inngest";

export const inngest = new Inngest({
  id: "my-app",
});
</code></pre>
<p>Register your functions with the Inngest serve handler so the dev server (and production) can discover them:</p>
<pre><code class="language-typescript">import { serve } from "inngest/bun";
import { inngest } from "@/lib/jobs/client";
import { stripeFunctions } from "@/lib/jobs/functions/stripe";

const inngestHandler = serve({
  client: inngest,
  functions: [...stripeFunctions],
});

// Mount on your API
app.all("/api/inngest", async (ctx) =&gt; {
  return inngestHandler(ctx.request);
});
</code></pre>
<p>Here's the complete purchase function:</p>
<pre><code class="language-typescript">// src/lib/jobs/functions/stripe.ts
import { eq } from "drizzle-orm";
import { createElement } from "react";

import { inngest } from "../client";
import { trackServerEvent } from "@/lib/analytics/server";
import { brand } from "@/lib/brand";
import { db, purchases, users } from "@/lib/db";
import {
  sendEmail,
  PurchaseConfirmationEmail,
  AdminPurchaseNotificationEmail,
  RepoAccessGrantedEmail,
} from "@/lib/email";
import { addCollaborator } from "@/lib/github";

export const handlePurchaseCompleted = inngest.createFunction(
  { id: "purchase-completed", triggers: [{ event: "purchase/completed" }] },
  async ({ event, step }) =&gt; {
    const { userId, tier, sessionId } = event.data as {
      userId: string;
      tier: string;
      sessionId: string;
    };

    // Step 1: Look up user and purchase details
    const { user, purchase } = await step.run(
      "lookup-user-and-purchase",
      async () =&gt; {
        const userResult = await db
          .select({
            id: users.id,
            email: users.email,
            name: users.name,
            githubUsername: users.githubUsername,
          })
          .from(users)
          .where(eq(users.id, userId))
          .limit(1);

        const foundUser = userResult[0];
        if (!foundUser) {
          throw new Error(`User not found: ${userId}`);
        }

        const purchaseResult = await db
          .select({
            amount: purchases.amount,
            currency: purchases.currency,
            stripePaymentIntentId: purchases.stripePaymentIntentId,
          })
          .from(purchases)
          .where(eq(purchases.stripeCheckoutSessionId, sessionId))
          .limit(1);

        const foundPurchase = purchaseResult[0];

        return {
          user: foundUser,
          purchase: foundPurchase ?? {
            amount: 0,
            currency: "usd",
            stripePaymentIntentId: null,
          },
        };
      }
    );

    // Step 2: Track purchase in analytics
    await step.run("track-purchase-to-posthog", async () =&gt; {
      try {
        await trackServerEvent(userId, "purchase_completed_server", {
          tier,
          amount_cents: purchase.amount,
          currency: purchase.currency,
          stripe_session_id: sessionId,
          stripe_payment_intent_id: purchase.stripePaymentIntentId,
        });
      } catch (error) {
        console.error(`Failed to track to PostHog:`, error);
      }
    });

    // Step 3: Send purchase confirmation to customer
    await step.run("send-purchase-confirmation", async () =&gt; {
      await sendEmail({
        to: user.email,
        subject: `Your ${brand.name} purchase is confirmed!`,
        template: createElement(PurchaseConfirmationEmail, {
          amount: purchase.amount,
          currency: purchase.currency,
          customerEmail: user.email,
        }),
      });
    });

    // Step 4: Send admin notification
    await step.run("send-admin-notification", async () =&gt; {
      const adminEmail = process.env.ADMIN_EMAIL;
      if (!adminEmail) return;

      await sendEmail({
        to: adminEmail,
        subject: `New template sale: ${user.email}`,
        template: createElement(AdminPurchaseNotificationEmail, {
          amount: purchase.amount,
          currency: purchase.currency,
          customerEmail: user.email,
          customerName: user.name,
          stripeSessionId: purchase.stripePaymentIntentId ?? sessionId,
        }),
      });
    });

    // Early return if user has no GitHub username
    if (!user.githubUsername) {
      return { success: true, userId, tier, githubAccessGranted: false };
    }

    // Step 5: Grant GitHub repository access
    const collaboratorResult = await step.run(
      "add-github-collaborator",
      async () =&gt; {
        return addCollaborator(user.githubUsername!);
      }
    );

    // Step 6: Track GitHub access granted
    await step.run("track-github-access", async () =&gt; {
      await trackServerEvent(userId, "github_access_granted", {
        tier,
        github_username: user.githubUsername,
        invitation_status: collaboratorResult.status,
      });
    });

    // Step 7: Update purchase record
    await step.run("update-purchase-record", async () =&gt; {
      await db
        .update(purchases)
        .set({
          githubAccessGranted: true,
          githubInvitationId: collaboratorResult.status,
          updatedAt: new Date(),
        })
        .where(eq(purchases.stripeCheckoutSessionId, sessionId));
    });

    // Step 8: Send repo access email
    await step.run("send-repo-access-email", async () =&gt; {
      const repoUrl = brand.social.github;
      await sendEmail({
        to: user.email,
        subject: `Your ${brand.name} repository access is ready!`,
        template: createElement(RepoAccessGrantedEmail, { repoUrl }),
      });
    });

    // Step 9: Schedule follow-up email sequence
    await step.run("schedule-follow-up", async () =&gt; {
      const purchaseRecord = await db
        .select({ id: purchases.id })
        .from(purchases)
        .where(eq(purchases.stripeCheckoutSessionId, sessionId))
        .limit(1);

      if (purchaseRecord[0]) {
        await inngest.send({
          name: "purchase/follow-up.scheduled",
          data: {
            userId,
            purchaseId: purchaseRecord[0].id,
            tier,
          },
        });
      }
    });

    return { success: true, userId, tier, githubAccessGranted: true };
  }
);
</code></pre>
<p>That's a lot of code. Let me break down why each step exists and why it must be separate.</p>
<h3 id="heading-step-1-look-up-user-and-purchase">Step 1: Look Up User and Purchase</h3>
<pre><code class="language-typescript">const { user, purchase } = await step.run(
  "lookup-user-and-purchase",
  async () =&gt; {
    // Database queries for user and purchase records
    return { user: foundUser, purchase: foundPurchase };
  }
);
</code></pre>
<p>This step queries the database for the user and purchase details. Every subsequent step depends on these values (the user's email, the purchase amount, the user's GitHub username).</p>
<p>Because this is wrapped in <code>step.run()</code>, the return value is cached by Inngest. If a later step fails and the function retries, this step doesn't re-run. The cached values are replayed instead.</p>
<p>If the user doesn't exist in the database, this step throws an error that halts the entire function. There's no point continuing if the user can't be found.</p>
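<p>If the caching behavior feels abstract, here is the replay semantics in miniature. This is a toy illustration, not Inngest's implementation (Inngest persists step results durably on its servers, not in an in-memory Map):</p>
<pre><code class="language-typescript">// Toy sketch of step replay: a completed step returns its checkpointed
// result instead of re-running its body.
const checkpoints = new Map&lt;string, unknown&gt;();

export async function runStep&lt;T&gt;(id: string, fn: () =&gt; Promise&lt;T&gt;): Promise&lt;T&gt; {
  if (checkpoints.has(id)) {
    return checkpoints.get(id) as T; // replay: skip execution entirely
  }
  const result = await fn();
  checkpoints.set(id, result); // checkpoint before the next step runs
  return result;
}
</code></pre>
<p>On a retry, every step that already succeeded resolves instantly from its checkpoint, and execution resumes at the first step that has no recorded result.</p>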
<h3 id="heading-step-2-track-analytics">Step 2: Track Analytics</h3>
<pre><code class="language-typescript">await step.run("track-purchase-to-posthog", async () =&gt; {
  try {
    await trackServerEvent(userId, "purchase_completed_server", {
      tier,
      amount_cents: purchase.amount,
      currency: purchase.currency,
    });
  } catch (error) {
    console.error(`Failed to track to PostHog:`, error);
  }
});
</code></pre>
<p>Analytics tracking gets its own step because analytics services have their own failure modes. PostHog could be rate-limited or temporarily unreachable. If that happens, you don't want it to block the confirmation email.</p>
<p>Notice the try-catch. A tracking failure logs the error but doesn't halt the function. Analytics data is valuable but not critical to the purchase flow.</p>
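<p>If you find yourself writing this try-catch in several steps, you can extract it. The helper below is a suggestion of mine, not part of the codebase above:</p>
<pre><code class="language-typescript">// Hypothetical wrapper for non-critical work: log failures, never throw
export async function bestEffort(
  label: string,
  fn: () =&gt; Promise&lt;void&gt;
): Promise&lt;void&gt; {
  try {
    await fn();
  } catch (error) {
    console.error(`${label} failed (non-critical):`, error);
  }
}
</code></pre>
<p>Critical steps, like granting repository access, should still throw on failure so Inngest retries them. Reserve this wrapper for work you are genuinely willing to lose.</p>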
<h3 id="heading-steps-3-and-4-email-notifications">Steps 3 and 4: Email Notifications</h3>
<p>The customer confirmation and admin notification are separate steps because they are independent operations. If Resend returns a 500 when sending the admin email, the customer should still get their confirmation.</p>
<pre><code class="language-typescript">// Step 3: Customer confirmation
await step.run("send-purchase-confirmation", async () =&gt; {
  await sendEmail({
    to: user.email,
    subject: `Your ${brand.name} purchase is confirmed!`,
    template: createElement(PurchaseConfirmationEmail, {
      amount: purchase.amount,
      currency: purchase.currency,
      customerEmail: user.email,
    }),
  });
});

// Step 4: Admin notification
await step.run("send-admin-notification", async () =&gt; {
  const adminEmail = process.env.ADMIN_EMAIL;
  if (!adminEmail) return;

  await sendEmail({
    to: adminEmail,
    subject: `New template sale: ${user.email}`,
    template: createElement(AdminPurchaseNotificationEmail, {
      // ... admin-specific fields
    }),
  });
});
</code></pre>
<p>The admin notification step includes a guard: if <code>ADMIN_EMAIL</code> isn't set, it returns early. This makes the function work in development environments where you haven't configured all environment variables.</p>
<h3 id="heading-step-5-grant-product-access">Step 5: Grant Product Access</h3>
<pre><code class="language-typescript">if (!user.githubUsername) {
  return { success: true, userId, tier, githubAccessGranted: false };
}

const collaboratorResult = await step.run(
  "add-github-collaborator",
  async () =&gt; {
    return addCollaborator(user.githubUsername!);
  }
);
</code></pre>
<p>This is the step most likely to fail. GitHub's API has rate limits, can time out, and the user's GitHub username might be invalid.</p>
<p>By making it its own step, a GitHub API failure doesn't re-trigger the confirmation email (step 3) or the admin notification (step 4). Those are already checkpointed.</p>
<p>Notice the early return before step 5. If the user has no GitHub username linked, the function returns after step 4. The remaining steps only run when there's a GitHub account to grant access to.</p>
<h3 id="heading-steps-6-7-track-and-update">Steps 6-7: Track and Update</h3>
<p>After granting GitHub access, the function tracks the event in analytics (step 6) and updates the purchase record in the database (step 7).</p>
<p>The database update is intentionally ordered after the GitHub API call. You only set <code>githubAccessGranted: true</code> after the invitation actually succeeded. If you updated the record first and the GitHub step failed, your database would say access was granted when it was not.</p>
<h3 id="heading-step-8-send-access-email">Step 8: Send Access Email</h3>
<pre><code class="language-typescript">await step.run("send-repo-access-email", async () =&gt; {
  const repoUrl = brand.social.github;
  await sendEmail({
    to: user.email,
    subject: `Your ${brand.name} repository access is ready!`,
    template: createElement(RepoAccessGrantedEmail, { repoUrl }),
  });
});
</code></pre>
<p>This email only sends after the GitHub invitation is confirmed. The ordering is deliberate. You don't tell the customer "your access is ready" if the invitation hasn't been sent.</p>
<h3 id="heading-step-9-schedule-follow-up-sequence">Step 9: Schedule Follow-Up Sequence</h3>
<pre><code class="language-typescript">await step.run("schedule-follow-up", async () =&gt; {
  const purchaseRecord = await db
    .select({ id: purchases.id })
    .from(purchases)
    .where(eq(purchases.stripeCheckoutSessionId, sessionId))
    .limit(1);

  if (purchaseRecord[0]) {
    await inngest.send({
      name: "purchase/follow-up.scheduled",
      data: {
        userId,
        purchaseId: purchaseRecord[0].id,
        tier,
      },
    });
  }
});
</code></pre>
<p>The final step triggers a separate function that handles the follow-up email sequence: day 7 onboarding tips, day 14 feedback request, day 30 testimonial request. This is an event-driven chain: one function completes and triggers another.</p>
<p>The follow-up function uses <code>step.sleep()</code> to wait between emails without consuming compute resources:</p>
<pre><code class="language-typescript">export const handlePurchaseFollowUp = inngest.createFunction(
  {
    id: "purchase-follow-up",
    triggers: [{ event: "purchase/follow-up.scheduled" }],
    cancelOn: [
      {
        event: "purchase/follow-up.cancelled",
        match: "data.purchaseId",
      },
    ],
  },
  async ({ event, step }) =&gt; {
    await step.sleep("wait-7-days", "7d");
    await step.run("send-day-7-email", async () =&gt; {
      // Send onboarding tips
    });

    await step.sleep("wait-14-days", "7d"); // 7 more days, day 14 overall
    await step.run("send-day-14-email", async () =&gt; {
      // Send feedback request
    });
  }
);
</code></pre>
<p>The <code>cancelOn</code> option is worth noting. If the purchase is refunded, you send a <code>purchase/follow-up.cancelled</code> event, and the entire follow-up sequence stops. No stale emails to customers who refunded.</p>
<h3 id="heading-the-rule-for-step-separation">The Rule for Step Separation</h3>
<p>Any operation that calls an external service or could fail independently should be its own step. A database query is a step because the database can be temporarily unreachable. An email send or API call is a step because those services can return errors or hit rate limits.</p>
<p>If two operations always succeed or fail together, they can share a step. But when in doubt, make it separate. The overhead is negligible, and the reliability gain is significant.</p>
<h2 id="heading-how-to-handle-refunds">How to Handle Refunds</h2>
<p>Refund processing is the most commonly overlooked part of a payment system. You need to handle two cases: full refunds (revoke access) and partial refunds (keep access, update status).</p>
<p>Here's the complete refund handler:</p>
<pre><code class="language-typescript">// src/lib/jobs/functions/stripe.ts
export const handleRefund = inngest.createFunction(
  { id: "refund-processed", triggers: [{ event: "stripe/charge.refunded" }] },
  async ({ event, step }) =&gt; {
    const data = event.data as {
      chargeId: string;
      paymentIntentId: string;
      amountRefunded: number;
      originalAmount: number;
      currency: string;
    };

    const chargeId = data.chargeId;
    const paymentIntentId = data.paymentIntentId;
    const currency = data.currency;
    const amountRefunded = data.amountRefunded;
    const originalAmount = data.originalAmount;
    const isFullRefund = amountRefunded &gt;= originalAmount;

    // Step 1: Look up the purchase and user
    const { user, purchase } = await step.run(
      "lookup-purchase-by-payment-intent",
      async () =&gt; {
        const purchaseResult = await db
          .select({
            id: purchases.id,
            userId: purchases.userId,
            stripePaymentIntentId: purchases.stripePaymentIntentId,
            githubAccessGranted: purchases.githubAccessGranted,
          })
          .from(purchases)
          .where(eq(purchases.stripePaymentIntentId, paymentIntentId))
          .limit(1);

        const foundPurchase = purchaseResult[0];
        if (!foundPurchase) {
          return { user: null, purchase: null };
        }

        const userResult = await db
          .select({
            id: users.id,
            email: users.email,
            name: users.name,
            githubUsername: users.githubUsername,
          })
          .from(users)
          .where(eq(users.id, foundPurchase.userId))
          .limit(1);

        return { user: userResult[0] ?? null, purchase: foundPurchase };
      }
    );

    if (!purchase || !user) {
      return { success: false, reason: "no_matching_purchase" };
    }

    let accessRevoked = false;

    // Step 2: Revoke GitHub access (only for full refunds)
    if (isFullRefund &amp;&amp; user.githubUsername &amp;&amp; purchase.githubAccessGranted) {
      const revokeResult = await step.run(
        "revoke-github-access",
        async () =&gt; {
          return removeCollaborator(user.githubUsername!);
        }
      );
      accessRevoked = revokeResult.success;
    }

    // Step 3: Update purchase status
    await step.run("update-purchase-status", async () =&gt; {
      if (isFullRefund) {
        await db
          .update(purchases)
          .set({
            status: "refunded",
            githubAccessGranted: false,
            updatedAt: new Date(),
          })
          .where(eq(purchases.id, purchase.id));
      } else {
        await db
          .update(purchases)
          .set({
            status: "partially_refunded",
            updatedAt: new Date(),
          })
          .where(eq(purchases.id, purchase.id));
      }
    });

    // Step 4: Track refund in analytics
    await step.run("track-refund-event", async () =&gt; {
      try {
        await trackServerEvent(user.id, "refund_processed", {
          charge_id: chargeId,
          payment_intent_id: paymentIntentId,
          amount_cents: amountRefunded,
          original_amount_cents: originalAmount,
          currency,
          is_full_refund: isFullRefund,
          github_access_revoked: accessRevoked,
        });
      } catch (error) {
        console.error(`Failed to track to PostHog:`, error);
      }
    });

    // Step 5: Notify customer
    await step.run("send-customer-notification", async () =&gt; {
      if (isFullRefund) {
        await sendEmail({
          to: user.email,
          subject: `Your ${brand.name} refund has been processed`,
          template: createElement(AccessRevokedEmail, {
            customerEmail: user.email,
            refundAmount: amountRefunded,
            currency,
          }),
        });
      } else {
        await sendEmail({
          to: user.email,
          subject: `Your ${brand.name} partial refund has been processed`,
          template: createElement(PartialRefundEmail, {
            customerEmail: user.email,
            refundAmount: amountRefunded,
            originalAmount,
            currency,
          }),
        });
      }
    });

    // Step 6: Notify admin
    await step.run("send-admin-notification", async () =&gt; {
      const adminEmail = process.env.ADMIN_EMAIL;
      if (!adminEmail) return;

      await sendEmail({
        to: adminEmail,
        subject: `${isFullRefund ? "Full" : "Partial"} refund processed: ${user.email}`,
        template: createElement(AdminRefundNotificationEmail, {
          customerEmail: user.email,
          customerName: user.name,
          githubUsername: user.githubUsername,
          refundAmount: amountRefunded,
          originalAmount,
          currency,
          stripeChargeId: chargeId,
          accessRevoked,
          isPartialRefund: !isFullRefund,
        }),
      });
    });

    return { success: true, accessRevoked, isFullRefund, userId: user.id };
  }
);
</code></pre>
<h3 id="heading-how-full-refunds-differ-from-partial-refunds">How Full Refunds Differ from Partial Refunds</h3>
<p>The function distinguishes between the two with a simple comparison:</p>
<pre><code class="language-typescript">const isFullRefund = amountRefunded &gt;= originalAmount;
</code></pre>
<p>For a <strong>full refund</strong>, three things happen:</p>
<ol>
<li><p>GitHub access is revoked (the <code>removeCollaborator</code> call).</p>
</li>
<li><p>The purchase status is set to <code>"refunded"</code>.</p>
</li>
<li><p>The customer receives an <code>AccessRevokedEmail</code> explaining that their access has been removed.</p>
</li>
</ol>
<p>For a <strong>partial refund</strong>, the customer keeps access:</p>
<ol>
<li><p>GitHub access is <strong>not</strong> revoked.</p>
</li>
<li><p>The purchase status is set to <code>"partially_refunded"</code>.</p>
</li>
<li><p>The customer receives a <code>PartialRefundEmail</code> showing the refunded amount and the original amount.</p>
</li>
</ol>
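<p>Because Stripe reports amounts as integers in cents, the whole branch hangs on one comparison. A small helper (hypothetical; the handler above inlines this logic) makes the mapping to stored status values explicit:</p>
<pre><code class="language-typescript">// Hypothetical helper mapping Stripe's cent amounts to purchase statuses
export function refundStatus(
  amountRefunded: number,
  originalAmount: number
): "refunded" | "partially_refunded" {
  return amountRefunded &gt;= originalAmount ? "refunded" : "partially_refunded";
}
</code></pre>
<p>A $99.00 purchase refunded in full (<code>refundStatus(9900, 9900)</code>) maps to <code>"refunded"</code>, while a $25.00 refund against it (<code>refundStatus(2500, 9900)</code>) maps to <code>"partially_refunded"</code>.</p>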
<p>This distinction matters for your database integrity. Downstream systems (your dashboard, analytics, support tools) need accurate status values. A <code>partially_refunded</code> purchase still represents an active customer.</p>
<h3 id="heading-how-conditional-steps-work">How Conditional Steps Work</h3>
<p>The "revoke GitHub access" step only runs when three conditions are all true: it's a full refund, the user has a GitHub username, and access was previously granted.</p>
<pre><code class="language-typescript">if (isFullRefund &amp;&amp; user.githubUsername &amp;&amp; purchase.githubAccessGranted) {
  const revokeResult = await step.run("revoke-github-access", async () =&gt; {
    return removeCollaborator(user.githubUsername!);
  });
  accessRevoked = revokeResult.success;
}
</code></pre>
<p>If any of those conditions is false, the step is skipped entirely. Inngest handles this cleanly. The function continues to step 3 (update purchase status) with <code>accessRevoked</code> still set to <code>false</code>.</p>
<h2 id="heading-how-to-recover-abandoned-checkouts">How to Recover Abandoned Checkouts</h2>
<p>When a customer starts checkout but doesn't complete it, Stripe eventually expires the session (after 24 hours by default). You can listen for this event and send a recovery email.</p>
<p>The key insight is that you don't want to send the email immediately. Give the customer an hour to come back on their own.</p>
<pre><code class="language-typescript">// src/lib/jobs/functions/stripe.ts
export const handleCheckoutExpired = inngest.createFunction(
  {
    id: "checkout-expired",
    triggers: [{ event: "stripe/checkout.session.expired" }],
  },
  async ({ event, step }) =&gt; {
    const { customerEmail, sessionId } = event.data as {
      customerEmail: string | null;
      sessionId: string;
    };

    if (!customerEmail) {
      return { success: false, reason: "no_email" };
    }

    // Wait 1 hour before sending recovery email
    await step.sleep("wait-before-recovery-email", "1h");

    // Send abandoned cart email
    await step.run("send-abandoned-cart-email", async () =&gt; {
      const baseUrl =
        process.env.BETTER_AUTH_URL ?? "https://your-app.com";
      const checkoutUrl = `${baseUrl}/pricing`;

      await sendEmail({
        to: customerEmail,
        subject: `Your ${brand.name} checkout is waiting`,
        template: createElement(AbandonedCartEmail, {
          customerEmail,
          checkoutUrl,
        }),
      });
    });

    // Track the recovery attempt
    await step.run("track-abandoned-cart", async () =&gt; {
      try {
        await trackServerEvent("anonymous", "abandoned_cart_email_sent", {
          customer_email: customerEmail,
          session_id: sessionId,
        });
      } catch (error) {
        console.error(`Failed to track to PostHog:`, error);
      }
    });

    return { success: true, customerEmail };
  }
);
</code></pre>
<p>The <code>step.sleep("wait-before-recovery-email", "1h")</code> line pauses the function for one hour without consuming compute resources. Inngest schedules the function to resume after the delay. No cron jobs, no Redis queues, no <code>setTimeout</code> that gets lost when your server restarts.</p>
<p>There is a guard at the top of the function. If the checkout session has no customer email (the customer closed the page before entering their email), the function returns early. You can't send a recovery email without an address.</p>
<p>You could extend this pattern with a second sleep and follow-up email three days later. You could also check if the customer has since completed a purchase (by querying the database in a <code>step.run()</code>) and skip the email if they have.</p>
<h3 id="heading-why-one-hour-is-the-right-delay">Why One Hour Is the Right Delay</h3>
<p>Sending the recovery email immediately after checkout expiration feels aggressive. The customer might still be comparing options, waiting for payday, or just distracted. An immediate email says "we noticed you left," which feels surveillance-like.</p>
<p>Waiting 24 hours is too long. The customer has moved on. They have forgotten your product or found an alternative.</p>
<p>One hour is the sweet spot I found through testing. The customer's intent is still fresh, and the email feels helpful rather than pushy.</p>
<p>Your mileage may vary. The delay is configurable: change <code>"1h"</code> to <code>"30m"</code> or <code>"3h"</code> and redeploy.</p>
<h3 id="heading-why-this-is-better-than-a-cron-job">Why This Is Better Than a Cron Job</h3>
<p>Without durable execution, abandoned cart recovery typically works like this: a cron job runs every hour, queries the database for expired sessions that haven't been recovered yet, sends emails to each one, and marks them as recovered.</p>
<p>This approach has several problems. You need a <code>recovered_at</code> column to avoid sending duplicate emails. You need to handle the case where the cron job crashes halfway through the batch, and you need to tune the cron interval carefully.</p>
<p>The <code>step.sleep()</code> approach eliminates all of this. Each expired session gets its own function instance with its own timer. There's no batch processing, no database flag, and no duplicate risk.</p>
<h2 id="heading-how-to-send-transactional-emails-with-react-email">How to Send Transactional Emails with React Email</h2>
<p>Every email in the payment flow is a React component rendered to HTML and sent via Resend. This gives you type-safe templates with props, component reuse, and the ability to preview emails in your browser during development.</p>
<h3 id="heading-how-to-set-up-the-email-client">How to Set Up the Email Client</h3>
<p>The email client wraps Resend with a simple <code>sendEmail</code> function:</p>
<pre><code class="language-typescript">// src/lib/email/index.ts
import { render } from "@react-email/components";
import type { ReactElement } from "react";
import { Resend } from "resend";

import { brand } from "@/lib/brand";

let resendClient: Resend | null = null;

function getResend(): Resend {
  if (!resendClient) {
    const apiKey = process.env.RESEND_API_KEY;
    if (!apiKey) {
      throw new Error("RESEND_API_KEY is not set");
    }
    resendClient = new Resend(apiKey);
  }
  return resendClient;
}

interface SendEmailOptions {
  to: string | string[];
  subject: string;
  template: ReactElement;
  from?: string;
  replyTo?: string;
}

export async function sendEmail({
  to,
  subject,
  template,
  from = process.env.EMAIL_FROM ?? brand.emails.from,
  replyTo,
}: SendEmailOptions) {
  const resend = getResend();
  const html = await render(template);

  return resend.emails.send({
    from,
    to,
    subject,
    html,
    replyTo,
  });
}
</code></pre>
<p>The <code>render()</code> function from <code>@react-email/components</code> converts a React element into an HTML string. This HTML is what Resend delivers to the customer's inbox.</p>
<p>The <code>from</code> address defaults to your brand's email configuration. You need a verified domain in Resend for this to work. During development, Resend's free tier lets you send to your own email address without domain verification.</p>
<h3 id="heading-how-to-build-a-purchase-confirmation-template">How to Build a Purchase Confirmation Template</h3>
<p>Here's the real purchase confirmation email template:</p>
<pre><code class="language-tsx">// src/lib/email/emails/purchase-confirmation.tsx
import {
  Body,
  Container,
  Head,
  Heading,
  Hr,
  Html,
  Link,
  Preview,
  Section,
  Text,
} from "@react-email/components";

import { brand } from "@/lib/brand";

interface PurchaseConfirmationEmailProps {
  amount: number;
  currency: string;
  customerEmail: string;
}

const colors = {
  primary: "#d97757",
  background: "#faf9f5",
  foreground: "#30302e",
  muted: "#6b6860",
  border: "#e5e4df",
  card: "#ffffff",
  success: "#16a34a",
  successLight: "#f0fdf4",
};

export default function PurchaseConfirmationEmail({
  amount,
  currency,
  customerEmail,
}: PurchaseConfirmationEmailProps) {
  const formattedAmount = new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: currency.toUpperCase(),
  }).format(amount / 100);

  return (
    &lt;Html&gt;
      &lt;Head /&gt;
      &lt;Preview&gt;Your {brand.name} purchase is confirmed!&lt;/Preview&gt;
      &lt;Body style={main}&gt;
        &lt;Container style={container}&gt;
          &lt;Section style={header}&gt;
            &lt;Text style={logoText}&gt;{brand.name}&lt;/Text&gt;
          &lt;/Section&gt;

          &lt;Hr style={divider} /&gt;

          &lt;Section style={successBadge}&gt;
            &lt;Text style={successText}&gt;Payment Successful&lt;/Text&gt;
          &lt;/Section&gt;

          &lt;Heading style={h1}&gt;Thank you for your purchase!&lt;/Heading&gt;

          &lt;Text style={text}&gt;
            Your payment has been processed successfully. We are now setting
            up your GitHub repository access. You will receive another email
            shortly with your access link.
          &lt;/Text&gt;

          &lt;Section style={detailsBox}&gt;
            &lt;Text style={detailsTitle}&gt;Order Details&lt;/Text&gt;

            &lt;Section style={detailRow}&gt;
              &lt;Text style={detailLabel}&gt;Product&lt;/Text&gt;
              &lt;Text style={detailValue}&gt;{brand.name}&lt;/Text&gt;
            &lt;/Section&gt;

            &lt;Section style={detailRow}&gt;
              &lt;Text style={detailLabel}&gt;Amount&lt;/Text&gt;
              &lt;Text style={detailValue}&gt;{formattedAmount}&lt;/Text&gt;
            &lt;/Section&gt;

            &lt;Section style={detailRow}&gt;
              &lt;Text style={detailLabel}&gt;Email&lt;/Text&gt;
              &lt;Text style={detailValue}&gt;{customerEmail}&lt;/Text&gt;
            &lt;/Section&gt;
          &lt;/Section&gt;

          &lt;Text style={text}&gt;
            This is a one-time purchase. No recurring charges will be made.
          &lt;/Text&gt;

          &lt;Hr style={divider} /&gt;

          &lt;Text style={footer}&gt;
            Questions about your purchase? Reply to this email or reach
            out at{" "}
            &lt;Link
              href={`mailto:${brand.emails.support}`}
              style={link}
            &gt;
              {brand.emails.support}
            &lt;/Link&gt;
          &lt;/Text&gt;
        &lt;/Container&gt;
      &lt;/Body&gt;
    &lt;/Html&gt;
  );
}

PurchaseConfirmationEmail.PreviewProps = {
  amount: 9900,
  currency: "usd",
  customerEmail: "customer@example.com",
} satisfies PurchaseConfirmationEmailProps;
</code></pre>
<p>A few things to note about this template:</p>
<ul>
<li><p><strong>Currency formatting happens in the template:</strong> The <code>amount</code> prop is in cents (the same format stored in your database and returned by Stripe). The <code>Intl.NumberFormat</code> call converts it to a human-readable string like "$99.00" and keeps currency formatting logic in one place.</p>
</li>
<li><p><strong>The</strong> <code>PreviewProps</code> <strong>object is for development.</strong> React Email uses these props to render a preview in the browser. The <code>satisfies</code> keyword ensures the preview props match the component's interface.</p>
</li>
<li><p><strong>All styles are inline objects.</strong> Email clients strip <code>&lt;style&gt;</code> tags and ignore most CSS. Inline styles are the only reliable way to style emails across Gmail, Outlook, Apple Mail, and every other client.</p>
</li>
</ul>
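<p>The currency formatting described in the first bullet can be pulled into a standalone helper, sketched here for clarity (the <code>formatAmount</code> name is mine, not the article's):</p>

```typescript
// Mirrors the template's Intl.NumberFormat logic: cents in, display string out
function formatAmount(amountInCents: number, currency: string): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: currency.toUpperCase(),
  }).format(amountInCents / 100);
}

console.log(formatAmount(9900, "usd")); // "$99.00"
```

<p>Keeping this in one place means the database, Stripe, and your emails all agree that amounts are integers in cents, and only the display layer ever divides by 100.</p>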
<h3 id="heading-how-to-build-a-repo-access-template">How to Build a Repo Access Template</h3>
<p>The repo access email is sent after the GitHub invitation succeeds:</p>
<pre><code class="language-tsx">// src/lib/email/emails/repo-access-granted.tsx
import {
  Body,
  Button,
  Container,
  Head,
  Heading,
  Hr,
  Html,
  Link,
  Preview,
  Section,
  Text,
} from "@react-email/components";

import { brand } from "@/lib/brand";

interface RepoAccessGrantedEmailProps {
  repoUrl: string;
}

// The inline style objects referenced below (main, container, button,
// infoBox, codeStyle, and so on) match the purchase confirmation
// template; their definitions are omitted here for brevity.

export default function RepoAccessGrantedEmail({
  repoUrl,
}: RepoAccessGrantedEmailProps) {
  return (
    &lt;Html&gt;
      &lt;Head /&gt;
      &lt;Preview&gt;Your {brand.name} repository access is ready!&lt;/Preview&gt;
      &lt;Body style={main}&gt;
        &lt;Container style={container}&gt;
          &lt;Section style={header}&gt;
            &lt;Text style={logoText}&gt;{brand.name}&lt;/Text&gt;
          &lt;/Section&gt;

          &lt;Hr style={divider} /&gt;

          &lt;Heading style={h1}&gt;You are in!&lt;/Heading&gt;

          &lt;Text style={text}&gt;
            Your GitHub repository access has been granted. You now have
            full access to the {brand.name} codebase.
          &lt;/Text&gt;

          &lt;Section style={buttonContainer}&gt;
            &lt;Button style={button} href={repoUrl}&gt;
              Open Repository
            &lt;/Button&gt;
          &lt;/Section&gt;

          &lt;Section style={infoBox}&gt;
            &lt;Text style={infoTitle}&gt;Quick Start&lt;/Text&gt;
            &lt;Text style={infoText}&gt;
              &lt;strong&gt;1.&lt;/strong&gt; Clone the repository to your machine
            &lt;/Text&gt;
            &lt;Text style={infoText}&gt;
              &lt;strong&gt;2.&lt;/strong&gt; Run{" "}
              &lt;code style={codeStyle}&gt;bun install&lt;/code&gt; to install
              dependencies
            &lt;/Text&gt;
            &lt;Text style={infoText}&gt;
              &lt;strong&gt;3.&lt;/strong&gt; Follow the README for environment setup
            &lt;/Text&gt;
            &lt;Text style={infoText}&gt;
              &lt;strong&gt;4.&lt;/strong&gt; Run{" "}
              &lt;code style={codeStyle}&gt;bun dev&lt;/code&gt; to start building
            &lt;/Text&gt;
          &lt;/Section&gt;

          &lt;Hr style={divider} /&gt;

          &lt;Text style={footer}&gt;
            Need help? Reply to this email or reach out at{" "}
            &lt;Link
              href={`mailto:${brand.emails.support}`}
              style={link}
            &gt;
              {brand.emails.support}
            &lt;/Link&gt;
          &lt;/Text&gt;
        &lt;/Container&gt;
      &lt;/Body&gt;
    &lt;/Html&gt;
  );
}
</code></pre>
<p>This template includes a <code>&lt;Button&gt;</code> component that links directly to the GitHub repository. The quick start section gives the customer immediate next steps so they aren't left wondering what to do after gaining access.</p>
<h3 id="heading-how-to-build-an-abandoned-cart-template">How to Build an Abandoned Cart Template</h3>
<p>The abandoned cart email brings the customer back to your pricing page:</p>
<pre><code class="language-tsx">// src/lib/email/emails/abandoned-cart.tsx
import {
  Body,
  Button,
  Container,
  Head,
  Heading,
  Hr,
  Html,
  Preview,
  Section,
  Text,
} from "@react-email/components";

import { brand } from "@/lib/brand";

interface AbandonedCartEmailProps {
  customerEmail: string;
  checkoutUrl: string;
}

// The inline style objects referenced below (main, container, button,
// textSmall, and so on) match the other templates; their definitions
// are omitted here for brevity.

export default function AbandonedCartEmail({
  customerEmail,
  checkoutUrl,
}: AbandonedCartEmailProps) {
  return (
    &lt;Html&gt;
      &lt;Head /&gt;
      &lt;Preview&gt;Your {brand.name} checkout is waiting for you&lt;/Preview&gt;
      &lt;Body style={main}&gt;
        &lt;Container style={container}&gt;
          &lt;Section style={header}&gt;
            &lt;Text style={logoText}&gt;{brand.name}&lt;/Text&gt;
          &lt;/Section&gt;

          &lt;Hr style={divider} /&gt;

          &lt;Heading style={h1}&gt;You left something behind&lt;/Heading&gt;

          &lt;Text style={text}&gt;
            We noticed you started a checkout but did not complete your
            purchase. No worries. Your cart is still waiting for you.
          &lt;/Text&gt;

          &lt;Text style={text}&gt;
            {brand.name} gives you everything you need to ship your
            startup this weekend: authentication, payments, email,
            background jobs, and more. All wired together and ready
            to go.
          &lt;/Text&gt;

          &lt;Section style={buttonContainer}&gt;
            &lt;Button style={button} href={checkoutUrl}&gt;
              Complete Your Purchase
            &lt;/Button&gt;
          &lt;/Section&gt;

          &lt;Text style={textSmall}&gt;
            If you ran into any issues during checkout or have questions
            about {brand.name}, just reply to this email. I read every
            message personally.
          &lt;/Text&gt;

          &lt;Hr style={divider} /&gt;

          &lt;Text style={footer}&gt;
            This email was sent to {customerEmail} because you started
            a checkout on {brand.name}. If this was not you, you can
            safely ignore this email.
          &lt;/Text&gt;
        &lt;/Container&gt;
      &lt;/Body&gt;
    &lt;/Html&gt;
  );
}
</code></pre>
<p>The tone matters here. "You left something behind" is friendly, not pushy. The email explains the product's value briefly, includes a single clear call to action, and the footer explains why they received the email.</p>
<h3 id="heading-how-templates-integrate-with-durable-steps">How Templates Integrate with Durable Steps</h3>
<p>Every email template is invoked via <code>createElement</code> inside a <code>step.run()</code> block:</p>
<pre><code class="language-typescript">await step.run("send-purchase-confirmation", async () =&gt; {
  await sendEmail({
    to: user.email,
    subject: `Your ${brand.name} purchase is confirmed!`,
    template: createElement(PurchaseConfirmationEmail, {
      amount: purchase.amount,
      currency: purchase.currency,
      customerEmail: user.email,
    }),
  });
});
</code></pre>
<p>The <code>createElement</code> call creates a React element from the template component with the given props. The <code>sendEmail</code> function renders it to HTML via React Email's <code>render()</code> and sends it through Resend.</p>
<p>Because this is inside a <code>step.run()</code>, the email send is checkpointed. If Resend is down and the step fails, it retries on its own without re-running previous steps. The customer never gets a duplicate email.</p>
<h2 id="heading-how-to-test-the-complete-flow-locally">How to Test the Complete Flow Locally</h2>
<p>Testing the complete payment lifecycle locally requires three things running simultaneously: your application, the Stripe CLI forwarding webhook events, and the Inngest dev server processing background jobs.</p>
<h3 id="heading-step-1-start-the-stripe-cli">Step 1: Start the Stripe CLI</h3>
<p>Install the Stripe CLI and log in:</p>
<pre><code class="language-bash"># macOS
brew install stripe/stripe-cli/stripe

# Authenticate
stripe login
</code></pre>
<p>Forward webhook events to your local server:</p>
<pre><code class="language-bash">stripe listen --forward-to localhost:3000/api/payments/webhook
</code></pre>
<p>The CLI prints a webhook signing secret starting with <code>whsec_</code>. Copy this to your <code>.env</code> as <code>STRIPE_WEBHOOK_SECRET</code>.</p>
<h3 id="heading-step-2-start-the-inngest-dev-server">Step 2: Start the Inngest Dev Server</h3>
<p>The Inngest dev server gives you real-time visibility into every function execution, every step, and every retry:</p>
<pre><code class="language-bash">npx inngest-cli@latest dev -u http://localhost:3000/api/inngest
</code></pre>
<p>Open <code>http://localhost:8288</code> in your browser. This is the Inngest dashboard where you'll watch your durable functions execute step by step.</p>
<h3 id="heading-step-3-start-your-application">Step 3: Start Your Application</h3>
<pre><code class="language-bash">bun run dev
</code></pre>
<p>Your application should now be running on <code>http://localhost:3000</code>.</p>
<h3 id="heading-step-4-test-the-purchase-flow">Step 4: Test the Purchase Flow</h3>
<ol>
<li><p>Go to your pricing page and click the checkout button.</p>
</li>
<li><p>Use Stripe's test card number <code>4242 4242 4242 4242</code> with any future expiration date and any CVC.</p>
</li>
<li><p>Complete the checkout. Stripe redirects you to your success URL.</p>
</li>
<li><p>Your frontend calls the <code>/api/purchases/claim</code> endpoint with the session ID.</p>
</li>
<li><p>Watch the Inngest dashboard. You should see the <code>purchase-completed</code> function trigger and each step execute in sequence.</p>
</li>
</ol>
<p>In the Inngest dashboard, you will see:</p>
<ul>
<li><p><strong>Step 1:</strong> "lookup-user-and-purchase" completes with the user and purchase data.</p>
</li>
<li><p><strong>Step 2:</strong> "track-purchase-to-posthog" completes (or logs a warning if PostHog isn't configured).</p>
</li>
<li><p><strong>Step 3:</strong> "send-purchase-confirmation" completes. Check your email.</p>
</li>
<li><p><strong>Step 4:</strong> "send-admin-notification" completes (if <code>ADMIN_EMAIL</code> is set).</p>
</li>
<li><p><strong>Steps 5 through 9:</strong> These run only if the user has a GitHub username linked.</p>
</li>
</ul>
<h3 id="heading-step-5-test-a-refund">Step 5: Test a Refund</h3>
<p>Trigger a refund through the Stripe CLI:</p>
<pre><code class="language-bash">stripe trigger charge.refunded
</code></pre>
<p>Or go to the Stripe dashboard, find the test payment, and issue a refund manually. The Stripe CLI will forward the <code>charge.refunded</code> webhook to your local server.</p>
<p>In the Inngest dashboard, you'll see the <code>refund-processed</code> function trigger with its own set of steps: lookup, conditional access revocation, status update, analytics tracking, and email notifications.</p>
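<p>The conditional revocation at the heart of that function comes down to comparing the refunded amount against the original charge. A minimal sketch of the branching (the helper name and return shape are hypothetical, not the article's exact code):</p>

```typescript
// Hypothetical helper: revoke access only when the refund covers
// the full original amount; otherwise mark it partially refunded.
function refundOutcome(amountRefunded: number, originalAmount: number) {
  const isFullRefund = amountRefunded >= originalAmount;
  return {
    status: isFullRefund ? "refunded" : "partially_refunded",
    revokeAccess: isFullRefund,
  } as const;
}
```

<p>A $50 refund on a $99 purchase keeps access intact; refunding the full amount flips both the status and the revocation flag.</p>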
<h3 id="heading-step-6-test-abandoned-cart-recovery">Step 6: Test Abandoned Cart Recovery</h3>
<p>Trigger a checkout expiration:</p>
<pre><code class="language-bash">stripe trigger checkout.session.expired
</code></pre>
<p>The <code>checkout-expired</code> function will appear in the Inngest dashboard, including its 1-hour sleep step. The dev server lets you fast-forward through sleeps: click the "Skip" button in the dashboard to test the delayed email without actually waiting an hour.</p>
<h3 id="heading-how-to-simulate-step-failures">How to Simulate Step Failures</h3>
<p>To test the retry behavior, temporarily throw an error in one of your steps:</p>
<pre><code class="language-typescript">const collaboratorResult = await step.run(
  "add-github-collaborator",
  async () =&gt; {
    throw new Error("Simulated GitHub API failure");
  }
);
</code></pre>
<p>In the Inngest dashboard, you'll see:</p>
<ul>
<li><p>Steps 1 through 4 succeed and their results are cached.</p>
</li>
<li><p>Step 5 fails and is retried with exponential backoff.</p>
</li>
<li><p>Steps 6 through 9 remain pending.</p>
</li>
</ul>
<p>Remove the thrown error, and on the next retry, step 5 succeeds. Steps 6 through 9 execute, while steps 1 through 4 aren't re-executed. This is the checkpointing behavior that makes durable execution reliable.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building a complete SaaS payment flow is more than integrating Stripe Checkout. It's the entire lifecycle from "Buy" button to "Welcome" email, including the parts that happen when things go wrong.</p>
<p>Here's what you built in this tutorial:</p>
<ul>
<li><p>A <strong>database schema</strong> that tracks purchases through every state: completed, partially refunded, and fully refunded.</p>
</li>
<li><p>A <strong>Stripe product and price seed script</strong> that creates your catalog programmatically.</p>
</li>
<li><p>A <strong>checkout flow</strong> with session creation, payment verification, and idempotent purchase claiming.</p>
</li>
<li><p>A <strong>thin webhook handler</strong> that validates signatures and routes events to background jobs.</p>
</li>
<li><p>A <strong>9-step durable purchase function</strong> where each step is independently checkpointed and retried.</p>
</li>
<li><p>A <strong>refund handler</strong> that distinguishes between full and partial refunds, revoking access only when appropriate.</p>
</li>
<li><p>An <strong>abandoned cart recovery flow</strong> that waits an hour before sending a friendly recovery email.</p>
</li>
<li><p><strong>Three transactional email templates</strong> built with React Email: purchase confirmation, repo access granted, and abandoned cart.</p>
</li>
<li><p>A <strong>local testing setup</strong> with Stripe CLI, Inngest dev server, and step-by-step observability.</p>
</li>
</ul>
<p>The most important pattern is the separation between receiving and processing. Your API endpoints and webhook handlers should be thin: validate, record, enqueue, return. All the complex multi-step work happens in durable background functions where failures are isolated and retried at the step level.</p>
<p>This pattern scales. Add a new step to the purchase flow, and it gets the same checkpointing and retry behavior. Add a new webhook event, and you route it to a new durable function.</p>
<p>Your requirements may differ. You might sell subscriptions instead of one-time purchases, or provision API keys instead of GitHub access. The specific steps change, but the architecture stays the same.</p>
<p>If you want to start with all of these patterns already wired together in a production-ready codebase, <a href="https://eden-stack.com?utm_source=freecodecamp&amp;utm_medium=article&amp;utm_campaign=saas-payment-flow-stripe-webhooks-email">Eden Stack</a> includes the complete payment flow described in this article, along with 30+ additional production-tested patterns for authentication, email, analytics, background jobs, and more.</p>
<p><em>Magnus Rødseth builds AI-native applications and is the creator of</em> <a href="https://eden-stack.com?utm_source=freecodecamp&amp;utm_medium=article&amp;utm_campaign=saas-payment-flow-stripe-webhooks-email"><em>Eden Stack</em></a><em>, a production-ready starter kit with 30+ Claude skills encoding production patterns for AI-native SaaS development.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Handle Stripe Webhooks Reliably with Background Jobs ]]>
                </title>
                <description>
                    <![CDATA[ You've set up Stripe. Checkout works. Customers can pay. But what happens after payment? The webhook handler is where most payment integrations silently break. Your server crashes halfway through gran ]]>
                </description>
                <link>https://www.freecodecamp.org/news/stripe-webhooks-background-jobs/</link>
                <guid isPermaLink="false">69e8f14f5d1c10710571b1ae</guid>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Software Engineering ]]>
                    </category>
                
                    <category>
                        <![CDATA[ api ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Magnus Rødseth ]]>
                </dc:creator>
                <pubDate>Wed, 22 Apr 2026 16:03:27 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/460d0b4c-c95d-4356-a6df-a0c0c52b78b6.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>You've set up Stripe. Checkout works. Customers can pay. But what happens <em>after</em> payment?</p>
<p>The webhook handler is where most payment integrations silently break. Your server crashes halfway through granting access. Your email service is down when you try to send the confirmation. Your database times out during a write.</p>
<p>Stripe retries the entire webhook, but your handler already sent the confirmation email before it crashed. Now the customer gets two emails and no access.</p>
<p>This article shows you how to fix this. You'll learn how to build webhook handlers that survive failures by splitting your post-payment logic into durable, independently retried steps. The pattern works for any multi-step webhook processing, not just Stripe.</p>
<p>Here's what you'll learn:</p>
<ul>
<li><p>Why Stripe webhooks fail silently in production</p>
</li>
<li><p>How a naïve inline handler breaks under real-world conditions</p>
</li>
<li><p>The pattern: webhook receives, validates, and enqueues (nothing more)</p>
</li>
<li><p>How to build a durable purchase flow with individually checkpointed steps</p>
</li>
<li><p>How to handle refunds and abandoned checkouts with the same pattern</p>
</li>
<li><p>How to test webhook handlers locally</p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow along, you should be familiar with:</p>
<ul>
<li><p>Node.js and TypeScript</p>
</li>
<li><p>Basic Stripe integration (checkout sessions, webhooks)</p>
</li>
<li><p>SQL databases (the examples use PostgreSQL with Drizzle ORM)</p>
</li>
<li><p>npm or any Node.js package manager</p>
</li>
</ul>
<p>You don't need prior experience with Inngest or durable execution. This article explains both from scratch.</p>
<h3 id="heading-what-you-need-to-install">What You Need to Install</h3>
<p>If you want to run the code examples, install these packages:</p>
<pre><code class="language-bash">npm install inngest stripe drizzle-orm @react-email/components resend
</code></pre>
<p>You'll also need the <a href="https://stripe.com/docs/stripe-cli">Stripe CLI</a> for local webhook testing. Install it via Homebrew on macOS (<code>brew install stripe/stripe-cli/stripe</code>) or follow the instructions in Stripe's documentation for other platforms.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-why-stripe-webhooks-fail-silently">Why Stripe Webhooks Fail Silently</a></p>
</li>
<li><p><a href="#heading-the-naive-approach-and-why-it-breaks">The Naïve Approach (and Why It Breaks)</a></p>
</li>
<li><p><a href="#heading-the-pattern-webhook-to-event-to-durable-function">The Pattern: Webhook to Event to Durable Function</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-the-webhook-endpoint">How to Set Up the Webhook Endpoint</a></p>
</li>
<li><p><a href="#heading-how-to-build-a-durable-purchase-flow">How to Build a Durable Purchase Flow</a></p>
</li>
<li><p><a href="#heading-how-to-handle-refunds-with-the-same-pattern">How to Handle Refunds with the Same Pattern</a></p>
</li>
<li><p><a href="#heading-how-to-recover-abandoned-checkouts">How to Recover Abandoned Checkouts</a></p>
</li>
<li><p><a href="#heading-how-to-test-webhook-handlers-locally">How to Test Webhook Handlers Locally</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-why-stripe-webhooks-fail-silently">Why Stripe Webhooks Fail Silently</h2>
<p>The happy path is easy. A customer pays, Stripe sends a <code>checkout.session.completed</code> event to your server, and your handler processes it. In development, this works every time.</p>
<p>Production is different. Your webhook handler typically needs to do several things after a successful payment: look up the user in the database, record the purchase, send a confirmation email, notify the admin, grant access to the product (maybe via a GitHub invitation or an API key), and schedule follow-up emails. That's five or six operations involving three or four external services.</p>
<p>Here are the failure modes that will eventually hit your webhook handler:</p>
<h4 id="heading-1-your-server-crashes-mid-processing">1. Your server crashes mid-processing</h4>
<p>The database write succeeded, but the email never sent. Stripe retries the webhook, and your handler runs again.</p>
<p>Now you have a duplicate database entry or a unique constraint error that kills the retry.</p>
<h4 id="heading-2-an-external-service-is-temporarily-down">2. An external service is temporarily down</h4>
<p>Your email provider returns a 500. Your GitHub API call gets rate-limited. Your analytics service times out.</p>
<p>The webhook handler throws, and Stripe retries the entire thing. But the steps that already succeeded (the database write, the first email) run again.</p>
<h4 id="heading-3-the-handler-times-out">3. The handler times out</h4>
<p>Stripe expects a 2xx response within about 20 seconds. If your handler does too much work, Stripe marks it as failed and retries. Your handler may have partially completed before the timeout.</p>
<h4 id="heading-4-partial-completion-with-no-rollback">4. Partial completion with no rollback</h4>
<p>This is the worst failure mode. Steps 1 through 3 succeed. Step 4 fails. Stripe retries, and steps 1 through 3 run again.</p>
<p>The customer gets two confirmation emails. The database gets a duplicate record. But step 4 still fails because the underlying issue (a rate limit, a service outage) hasn't been resolved.</p>
<h4 id="heading-5-race-conditions-on-retry">5. Race conditions on retry</h4>
<p>Stripe can deliver the same event more than once even without a failure on your end. Network glitches, load balancer timeouts, and Stripe's own retry logic mean your handler must be prepared for duplicate deliveries. If your handler isn't idempotent at every step, duplicates compound the partial-completion problem.</p>
<p>Stripe's retry behavior is well-designed. It uses exponential backoff and keeps retrying for up to several days. But Stripe retries the <em>entire webhook delivery</em>.</p>
<p>It has no way to know that your handler completed steps 1 through 3 and only needs to retry step 4. That distinction is your responsibility.</p>
<p>The core problem is that your webhook handler does too many things in a single request. Every external call is a potential failure point, and you have no checkpointing between them. When one fails, you lose track of which ones already succeeded.</p>
<h2 id="heading-the-naive-approach-and-why-it-breaks">The Naïve Approach (and Why It Breaks)</h2>
<p>Here's what a typical webhook handler looks like. I've seen hundreds of variations of this pattern across codebases, tutorials, and Stack Overflow answers:</p>
<pre><code class="language-typescript">app.post("/api/payments/webhook", async (req, res) =&gt; {
  const event = stripe.webhooks.constructEvent(
    req.body,
    req.headers["stripe-signature"],
    process.env.STRIPE_WEBHOOK_SECRET
  );

  if (event.type === "checkout.session.completed") {
    const session = event.data.object;

    // Step 1: Look up the user
    const user = await db.users.findOne({ id: session.metadata.userId });

    // Step 2: Record the purchase
    await db.purchases.insert({
      userId: user.id,
      stripeSessionId: session.id,
      amount: session.amount_total,
      status: "completed",
    });

    // Step 3: Send confirmation email
    await sendEmail({
      to: user.email,
      subject: "Purchase confirmed!",
      template: "purchase-confirmation",
    });

    // Step 4: Grant product access (GitHub repo invitation)
    await addCollaborator(user.githubUsername);

    // Step 5: Send access email
    await sendEmail({
      to: user.email,
      subject: "Your repository access is ready!",
      template: "repo-access",
    });

    // Step 6: Track analytics
    await analytics.track(user.id, "purchase_completed", {
      amount: session.amount_total,
    });
  }

  res.json({ received: true });
});
</code></pre>
<p>This looks clean. It reads top-to-bottom. Every tutorial teaches it this way.</p>
<p>Now walk through what happens when step 4 fails. Maybe GitHub's API is rate-limited and the <code>addCollaborator</code> call throws an error. Your handler returns a 500 to Stripe.</p>
<p>Here is the state after the failure:</p>
<ul>
<li><p>The user exists in the database (step 1 was just a lookup, no problem).</p>
</li>
<li><p>A purchase record was created (step 2 succeeded).</p>
</li>
<li><p>The confirmation email was sent (step 3 succeeded).</p>
</li>
<li><p>GitHub access was <strong>not</strong> granted (step 4 failed).</p>
</li>
<li><p>The access email was <strong>not</strong> sent (step 5 never ran).</p>
</li>
<li><p>Analytics were <strong>not</strong> tracked (step 6 never ran).</p>
</li>
</ul>
<p>Stripe retries the webhook. Your handler runs again from the top:</p>
<ul>
<li><p>Step 1: Looks up the user again. Fine.</p>
</li>
<li><p>Step 2: Tries to insert another purchase record. If you have a unique constraint on <code>stripeSessionId</code>, this throws. If you don't, you now have a duplicate.</p>
</li>
<li><p>Step 3: Sends the confirmation email again. The customer gets a second "Purchase confirmed!" email.</p>
</li>
<li><p>Step 4: Tries GitHub access again. Maybe it works this time, maybe not.</p>
</li>
<li><p>Steps 5 and 6: May or may not run, depending on whether step 4 succeeds.</p>
</li>
</ul>
<p>You can patch this with idempotency checks: "if purchase already exists, skip step 2." But now your handler is full of conditional logic for every step. And you still have the duplicate email problem, because there's no way to check "did I already send this email?" without building your own tracking system.</p>
<p>This approach doesn't scale. Every new step adds another failure mode, another idempotency check, and another edge case.</p>
<h2 id="heading-the-pattern-webhook-to-event-to-durable-function">The Pattern: Webhook to Event to Durable Function</h2>
<p>The fix is a separation of concerns. Your webhook handler should do exactly one thing: validate the incoming event and enqueue it for processing. Nothing else.</p>
<p>All the actual work (database writes, emails, API calls, analytics) moves into a durable background function where each step is individually checkpointed, retried, and tracked.</p>
<p>Here's the flow:</p>
<pre><code class="language-text">Stripe webhook
    |
    v
Webhook endpoint (validate signature, extract event, enqueue)
    |
    v
Background job system (receives event)
    |
    v
Durable function
    |-- Step 1: Look up user and purchase (checkpointed)
    |-- Step 2: Track analytics (checkpointed)
    |-- Step 3: Send confirmation email (checkpointed)
    |-- Step 4: Send admin notification (checkpointed)
    |-- Step 5: Grant GitHub access (checkpointed)
    |-- Step 6: Track GitHub access (checkpointed)
    |-- Step 7: Update purchase record (checkpointed)
    |-- Step 8: Send repo access email (checkpointed)
    |-- Step 9: Schedule follow-up sequence (checkpointed)
</code></pre>
<p>Each step wrapped in <code>step.run()</code> is a durable checkpoint. If step 5 fails:</p>
<ul>
<li><p>Steps 1 through 4 do <strong>not</strong> re-run. Their results are cached.</p>
</li>
<li><p>Step 5 retries independently, with its own retry counter.</p>
</li>
<li><p>Once step 5 succeeds, steps 6 through 9 continue.</p>
</li>
</ul>
<p>This is what "durable execution" means. The function's progress survives failures. You get step-level retries instead of function-level retries. No duplicate emails. No duplicate database writes. No partial completion.</p>
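<p>The idea is easy to see in miniature. Here's a toy sketch of step-level checkpointing (nothing like a production implementation, which must persist state durably): completed step results are memoized by step id, so a retry re-runs only the steps that never finished.</p>

```typescript
// Toy model of durable step execution: results are cached by step id,
// so a retried flow skips every step that already completed.
function createStepRunner(cache: Map<string, unknown>) {
  return async function run<T>(id: string, fn: () => Promise<T>): Promise<T> {
    if (cache.has(id)) return cache.get(id) as T; // checkpoint hit: no re-execution
    const result = await fn();
    cache.set(id, result); // checkpoint the result before moving on
    return result;
  };
}

// Usage: step "a" executes once even though the whole flow runs twice,
// because "b" fails on the first attempt and forces a retry.
async function demo() {
  const cache = new Map<string, unknown>();
  const counts = { a: 0, b: 0 };
  let bShouldFail = true;

  const flow = async () => {
    const run = createStepRunner(cache);
    await run("a", async () => { counts.a++; return "done"; });
    await run("b", async () => {
      counts.b++;
      if (bShouldFail) throw new Error("transient failure");
      return "done";
    });
  };

  try { await flow(); } catch { bShouldFail = false; }
  await flow(); // retry: "a" is served from the checkpoint cache
  return counts; // { a: 1, b: 2 }
}
```

<p>A real system persists that cache outside your process and adds per-step retry policies, but the control flow is the same shape.</p>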
<p>I use <a href="https://www.inngest.com/">Inngest</a> for this. It's an event-driven durable execution platform that provides step-level checkpointing out of the box. You define functions with <code>step.run()</code> blocks, and Inngest handles retry logic, state persistence, and observability. No Redis, no worker processes, no custom retry code.</p>
<p>Other tools can achieve similar results (Temporal, for example), but Inngest's developer experience with TypeScript is what sold me. You write normal async functions. The <code>step.run()</code> wrapper is the only addition.</p>
<h2 id="heading-how-to-set-up-the-webhook-endpoint">How to Set Up the Webhook Endpoint</h2>
<p>Your webhook endpoint should be minimal. Validate the signature, extract the event data, send it to your background job system, and return a 200 immediately.</p>
<p>Here's the real webhook endpoint from my production codebase:</p>
<pre><code class="language-typescript">import { constructWebhookEvent } from "@/lib/payments";
import { inngest } from "@/lib/jobs";

app.post("/api/payments/webhook", async ({ request, set }) =&gt; {
  const body = await request.text();
  const sig = request.headers.get("stripe-signature");

  if (!sig) {
    set.status = 400;
    return { error: "Missing signature" };
  }

  try {
    const event = await constructWebhookEvent(body, sig);
    console.log(`[Webhook] Received ${event.type}`);

    if (event.type === "charge.refunded") {
      const charge = event.data.object;
      await inngest.send({
        name: "stripe/charge.refunded",
        data: {
          chargeId: charge.id,
          paymentIntentId: charge.payment_intent,
          amountRefunded: charge.amount_refunded,
          originalAmount: charge.amount,
          currency: charge.currency,
        },
      });
    }

    if (event.type === "checkout.session.expired") {
      const session = event.data.object;
      await inngest.send({
        name: "stripe/checkout.session.expired",
        data: {
          sessionId: session.id,
          customerEmail: session.customer_email,
        },
      });
    }

    return { received: true };
  } catch (error) {
    console.error("[Webhook] Stripe verification failed:", error);
    set.status = 400;
    return { error: "Webhook verification failed" };
  }
});
</code></pre>
<p>Notice what this handler does <strong>not</strong> do: it does not look up users, write to the database, send emails, or call external APIs. It validates the Stripe signature, extracts the relevant fields, and sends a typed event to Inngest. The entire handler completes in milliseconds.</p>
<p>The <code>constructWebhookEvent</code> function wraps Stripe's signature verification:</p>
<pre><code class="language-typescript">import Stripe from "stripe";

export async function constructWebhookEvent(
  payload: string | Buffer,
  signature: string
) {
  const webhookSecret = process.env.STRIPE_WEBHOOK_SECRET;
  if (!webhookSecret) {
    throw new Error("STRIPE_WEBHOOK_SECRET is not set");
  }
  const secretKey = process.env.STRIPE_SECRET_KEY;
  if (!secretKey) {
    throw new Error("STRIPE_SECRET_KEY is not set");
  }
  const client = new Stripe(secretKey);
  return client.webhooks.constructEventAsync(payload, signature, webhookSecret);
}
</code></pre>
<p>One critical detail: you must pass the <strong>raw request body</strong> (as a string or buffer) to Stripe's signature verification. If your framework parses the body as JSON before you can access the raw string, the signature check will fail. This is the number one cause of "webhook signature verification failed" errors.</p>
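<p>To see why the raw bytes matter, here is a self-contained sketch of the scheme Stripe uses: the <code>v1</code> signature is an HMAC-SHA256 over <code>timestamp.payload</code>, keyed by the webhook secret. The secret and payload below are placeholders. Re-serializing the parsed JSON changes the bytes, and the signature no longer matches:</p>

```typescript
import { createHmac } from "node:crypto";

// Stripe's v1 signature is HMAC-SHA256 over `${timestamp}.${rawBody}`,
// keyed by your webhook secret. Any change to the body bytes changes it.
function signPayload(rawBody: string, timestamp: number, secret: string): string {
  return createHmac("sha256", secret).update(`${timestamp}.${rawBody}`).digest("hex");
}

const secret = "whsec_placeholder"; // hypothetical secret for illustration
const rawBody = '{"id": "evt_123", "type": "charge.refunded"}';
const timestamp = 1700000000;

const fromRawBody = signPayload(rawBody, timestamp, secret);

// A framework that parses and re-serializes the body produces different
// bytes (whitespace, key order, escapes), so verification fails.
const reSerialized = JSON.stringify(JSON.parse(rawBody));
const fromParsedBody = signPayload(reSerialized, timestamp, secret);

console.log(fromRawBody === fromParsedBody); // false: the signatures differ
```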
<p>The Inngest client setup is minimal:</p>
<pre><code class="language-typescript">import { Inngest } from "inngest";

export const inngest = new Inngest({
  id: "my-app",
});
</code></pre>
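<p>Optionally, you can make the event payloads type-checked at the call site with Inngest's <code>EventSchemas</code> helper. A sketch with two of the event shapes used in this article (field types assumed from the payloads shown above):</p>

```typescript
import { EventSchemas, Inngest } from "inngest";

// Event shapes inferred from the webhook handler above.
type Events = {
  "purchase/completed": {
    data: { userId: string; tier: string; sessionId: string };
  };
  "stripe/charge.refunded": {
    data: {
      chargeId: string;
      paymentIntentId: string;
      amountRefunded: number;
      originalAmount: number;
      currency: string;
    };
  };
};

export const inngest = new Inngest({
  id: "my-app",
  // With schemas attached, inngest.send() rejects misspelled event
  // names and mistyped data fields at compile time.
  schemas: new EventSchemas().fromRecord<Events>(),
});
```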
<p>For the purchase flow specifically, a different endpoint sends the event (the "claim" route that the frontend calls after the customer returns from Stripe checkout). But the principle is identical: validate, enqueue, return.</p>
<pre><code class="language-typescript">// After verifying payment status with Stripe
await inngest.send({
  name: "purchase/completed",
  data: {
    userId: session.user.id,
    tier,
    sessionId,
  },
});
</code></pre>
<h2 id="heading-how-to-build-a-durable-purchase-flow">How to Build a Durable Purchase Flow</h2>
<p>This is the core of the article. The <code>handlePurchaseCompleted</code> function processes a purchase after payment using 9 individually checkpointed steps. Every step is real production code.</p>
<p>The example below grants access to a private GitHub repository because that's what this particular product sells.</p>
<p>Your product's "grant access" step will be different: upgrading a user to a Pro membership, provisioning API credits, unlocking a course, or activating a subscription. The durable step pattern is the same regardless of what you're delivering.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69a694d8d4dc9b42434c218f/935ca377-52ff-4fc2-8e97-98fb7712c896.png" alt="Durable purchase flow with 9 numbered steps, showing step 5 failing and retrying while steps 1 through 4 remain checkpointed" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>If step 5 fails (for example, the email provider is down), Inngest retries only step 5. Steps 1 through 4 are already checkpointed and don't re-execute. Steps 6 through 9 wait until step 5 succeeds.</p>
<pre><code class="language-typescript">import { eq } from "drizzle-orm";
import { createElement } from "react";

import { inngest } from "@/lib/jobs/client";
import { trackServerEvent } from "@/lib/analytics/server";
import { brand } from "@/lib/brand";
import { db, purchases, users } from "@/lib/db";
import {
  sendEmail,
  PurchaseConfirmationEmail,
  AdminPurchaseNotificationEmail,
  RepoAccessGrantedEmail,
} from "@/lib/email";
import { addCollaborator } from "@/lib/github";

export const handlePurchaseCompleted = inngest.createFunction(
  { id: "purchase-completed", triggers: [{ event: "purchase/completed" }] },
  async ({ event, step }) =&gt; {
    const { userId, tier, sessionId } = event.data;

    // Step 1: Look up user and purchase details
    const { user, purchase } = await step.run(
      "lookup-user-and-purchase",
      async () =&gt; {
        const userResult = await db
          .select({
            id: users.id,
            email: users.email,
            name: users.name,
            githubUsername: users.githubUsername,
          })
          .from(users)
          .where(eq(users.id, userId))
          .limit(1);

        const foundUser = userResult[0];
        if (!foundUser) {
          throw new Error(`User not found: ${userId}`);
        }

        const purchaseResult = await db
          .select({
            amount: purchases.amount,
            currency: purchases.currency,
            stripePaymentIntentId: purchases.stripePaymentIntentId,
          })
          .from(purchases)
          .where(eq(purchases.stripeCheckoutSessionId, sessionId))
          .limit(1);

        const foundPurchase = purchaseResult[0];

        return {
          user: foundUser,
          purchase: foundPurchase ?? {
            amount: 0,
            currency: "usd",
            stripePaymentIntentId: null,
          },
        };
      }
    );

    // Step 2: Track purchase completion in analytics
    await step.run("track-purchase-to-posthog", async () =&gt; {
      await trackServerEvent(userId, "purchase_completed_server", {
        tier,
        amount_cents: purchase.amount,
        currency: purchase.currency,
        stripe_session_id: sessionId,
      });
    });

    // Step 3: Send purchase confirmation to customer
    await step.run("send-purchase-confirmation", async () =&gt; {
      await sendEmail({
        to: user.email,
        subject: `Your purchase is confirmed!`,
        template: createElement(PurchaseConfirmationEmail, {
          amount: purchase.amount,
          currency: purchase.currency,
          customerEmail: user.email,
        }),
      });
    });

    // Step 4: Send admin notification
    await step.run("send-admin-notification", async () =&gt; {
      const adminEmail = process.env.ADMIN_EMAIL;
      if (!adminEmail) return;

      await sendEmail({
        to: adminEmail,
        subject: `New sale: ${user.email}`,
        template: createElement(AdminPurchaseNotificationEmail, {
          amount: purchase.amount,
          currency: purchase.currency,
          customerEmail: user.email,
          customerName: user.name,
          stripeSessionId: purchase.stripePaymentIntentId ?? sessionId,
        }),
      });
    });

    // Early return if user has no GitHub username
    if (!user.githubUsername) {
      return { success: true, userId, tier, githubAccessGranted: false };
    }

    // Step 5: Grant GitHub repository access
    const collaboratorResult = await step.run(
      "add-github-collaborator",
      async () =&gt; {
        return addCollaborator(user.githubUsername!);
      }
    );

    // Step 6: Track GitHub access granted
    await step.run("track-github-access", async () =&gt; {
      await trackServerEvent(userId, "github_access_granted", {
        tier,
        github_username: user.githubUsername,
        invitation_status: collaboratorResult.status,
      });
    });

    // Step 7: Update purchase record
    await step.run("update-purchase-record", async () =&gt; {
      await db
        .update(purchases)
        .set({
          githubAccessGranted: true,
          githubInvitationId: collaboratorResult.status,
          updatedAt: new Date(),
        })
        .where(eq(purchases.stripeCheckoutSessionId, sessionId));
    });

    // Step 8: Send repo access email
    await step.run("send-repo-access-email", async () =&gt; {
      await sendEmail({
        to: user.email,
        subject: `Your repository access is ready!`,
        template: createElement(RepoAccessGrantedEmail, {
          repoUrl: "https://github.com/your-org/your-repo",
        }),
      });
    });

    // Step 9: Schedule follow-up email sequence
    await step.run("schedule-follow-up", async () =&gt; {
      const purchaseRecord = await db
        .select({ id: purchases.id })
        .from(purchases)
        .where(eq(purchases.stripeCheckoutSessionId, sessionId))
        .limit(1);

      if (purchaseRecord[0]) {
        await inngest.send({
          name: "purchase/follow-up.scheduled",
          data: {
            userId,
            purchaseId: purchaseRecord[0].id,
            tier,
          },
        });
      }
    });

    return { success: true, userId, tier, githubAccessGranted: true };
  }
);
</code></pre>
<p>That's a lot of code. Let me walk through each step and explain why it's a separate checkpoint.</p>
<h3 id="heading-step-1-look-up-user-and-purchase">Step 1: Look Up User and Purchase</h3>
<pre><code class="language-typescript">const { user, purchase } = await step.run(
  "lookup-user-and-purchase",
  async () =&gt; {
    // ... database queries ...
    return { user: foundUser, purchase: foundPurchase };
  }
);
</code></pre>
<p>This step queries the database for the user and purchase records. If the database is temporarily unreachable, this step retries on its own.</p>
<p>The return value (<code>user</code> and <code>purchase</code>) is cached by Inngest. Every subsequent step can use <code>user.email</code>, <code>user.githubUsername</code>, and <code>purchase.amount</code> without re-querying the database.</p>
<p>If this step fails permanently (the user doesn't exist), it throws, and the rest of the function never runs. This is intentional: there's no point continuing if you can't find the user. For failures like this that no retry can fix, Inngest also provides a <code>NonRetriableError</code> that fails the run immediately instead of exhausting the retry policy first.</p>
<h3 id="heading-step-2-track-analytics">Step 2: Track Analytics</h3>
<pre><code class="language-typescript">await step.run("track-purchase-to-posthog", async () =&gt; {
  await trackServerEvent(userId, "purchase_completed_server", {
    tier,
    amount_cents: purchase.amount,
  });
});
</code></pre>
<p>Analytics tracking is a separate step because analytics services have their own failure modes (rate limits, outages, network timeouts). If PostHog is down, you don't want it to block the confirmation email.</p>
<p>In the production code, this step wraps the call in a try-catch so that a tracking failure doesn't halt the entire function. The analytics event is "nice to have," not critical.</p>
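<p>A sketch of that best-effort wrapping, assuming a <code>trackServerEvent</code>-style helper (the wrapper function and names here are illustrative):</p>

```typescript
// Best-effort analytics: failures are logged and swallowed so a PostHog
// outage can never fail this step or block the steps after it.
type TrackFn = (
  userId: string,
  event: string,
  props: Record<string, unknown>
) => Promise<void>;

async function trackBestEffort(
  track: TrackFn,
  userId: string,
  event: string,
  props: Record<string, unknown>
): Promise<boolean> {
  try {
    await track(userId, event, props);
    return true;
  } catch (error) {
    console.error(`[Analytics] Failed to track ${event}:`, error);
    return false; // nice-to-have: swallow instead of throwing
  }
}
```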
<h3 id="heading-step-3-send-purchase-confirmation-email">Step 3: Send Purchase Confirmation Email</h3>
<pre><code class="language-typescript">await step.run("send-purchase-confirmation", async () =&gt; {
  await sendEmail({
    to: user.email,
    subject: `Your purchase is confirmed!`,
    template: createElement(PurchaseConfirmationEmail, {
      amount: purchase.amount,
      currency: purchase.currency,
      customerEmail: user.email,
    }),
  });
});
</code></pre>
<p>This is the customer-facing confirmation. It's a separate step from the admin notification (step 4) because they're independent operations. If the admin email fails, the customer should still get their confirmation.</p>
<p>The <code>sendEmail</code> function uses Resend under the hood. If Resend returns a 500, this step retries. Because step 2 (analytics) already completed and is checkpointed, it won't re-run.</p>
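<p>One subtlety worth noting: Resend's SDK reports failures by returning an <code>error</code> field rather than throwing, while Inngest only retries a step whose body throws. A sketch of the conversion, with <code>SendResult</code> as a simplified stand-in for the provider's response type:</p>

```typescript
// Inngest retries a step only if its body throws, so a { data, error }
// style client response must be converted into an exception on failure.
type SendResult = {
  data: { id: string } | null;
  error: { message: string } | null;
};

async function sendOrThrow(send: () => Promise<SendResult>): Promise<string> {
  const { data, error } = await send();
  if (error || !data) {
    // Throwing marks the step as failed, which triggers the retry policy.
    throw new Error(`Email send failed: ${error?.message ?? "no data returned"}`);
  }
  return data.id; // provider message id, useful for logging
}
```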
<h3 id="heading-step-4-send-admin-notification">Step 4: Send Admin Notification</h3>
<pre><code class="language-typescript">await step.run("send-admin-notification", async () =&gt; {
  const adminEmail = process.env.ADMIN_EMAIL;
  if (!adminEmail) return;

  await sendEmail({
    to: adminEmail,
    subject: `New sale: ${user.email}`,
    template: createElement(AdminPurchaseNotificationEmail, { /* ... */ }),
  });
});
</code></pre>
<p>Admin notifications are completely independent from customer-facing operations. Separating them means a failure in one doesn't affect the other.</p>
<h3 id="heading-step-5-grant-github-access">Step 5: Grant GitHub Access</h3>
<pre><code class="language-typescript">const collaboratorResult = await step.run(
  "add-github-collaborator",
  async () =&gt; {
    return addCollaborator(user.githubUsername!);
  }
);
</code></pre>
<p>This is the step most likely to fail. GitHub's API enforces rate limits, requests can time out, and the user's GitHub username might be invalid.</p>
<p>By making this its own step, a GitHub API failure doesn't trigger re-sends of the confirmation email (step 3) or the admin notification (step 4). Those steps are already checkpointed.</p>
<p>Notice the early return before this step: if the user has no GitHub username, the function returns early after step 4. The remaining steps only run when there's a GitHub account to grant access to.</p>
<h3 id="heading-step-6-track-github-access">Step 6: Track GitHub Access</h3>
<pre><code class="language-typescript">await step.run("track-github-access", async () =&gt; {
  await trackServerEvent(userId, "github_access_granted", {
    tier,
    github_username: user.githubUsername,
    invitation_status: collaboratorResult.status,
  });
});
</code></pre>
<p>This uses the <code>collaboratorResult</code> from step 5. Because <code>step.run()</code> caches return values, <code>collaboratorResult.status</code> is available here even if the function was interrupted and resumed between steps 5 and 6.</p>
<h3 id="heading-step-7-update-purchase-record">Step 7: Update Purchase Record</h3>
<pre><code class="language-typescript">await step.run("update-purchase-record", async () =&gt; {
  await db
    .update(purchases)
    .set({
      githubAccessGranted: true,
      githubInvitationId: collaboratorResult.status,
      updatedAt: new Date(),
    })
    .where(eq(purchases.stripeCheckoutSessionId, sessionId));
});
</code></pre>
<p>The database update happens after GitHub access is confirmed. You only mark <code>githubAccessGranted: true</code> after the collaborator invitation actually succeeded.</p>
<p>If you updated the record before granting access and the GitHub step failed, your database would say access was granted when it was not.</p>
<h3 id="heading-step-8-send-repo-access-email">Step 8: Send Repo Access Email</h3>
<pre><code class="language-typescript">await step.run("send-repo-access-email", async () =&gt; {
  await sendEmail({
    to: user.email,
    subject: `Your repository access is ready!`,
    template: createElement(RepoAccessGrantedEmail, {
      repoUrl: "https://github.com/your-org/your-repo",
    }),
  });
});
</code></pre>
<p>This email only sends after the GitHub invitation is confirmed (step 5) and the database is updated (step 7). The ordering matters. You don't want to tell the customer "your access is ready" if the invitation hasn't been sent.</p>
<h3 id="heading-step-9-schedule-follow-up-sequence">Step 9: Schedule Follow-Up Sequence</h3>
<pre><code class="language-typescript">await step.run("schedule-follow-up", async () =&gt; {
  const purchaseRecord = await db
    .select({ id: purchases.id })
    .from(purchases)
    .where(eq(purchases.stripeCheckoutSessionId, sessionId))
    .limit(1);

  if (purchaseRecord[0]) {
    await inngest.send({
      name: "purchase/follow-up.scheduled",
      data: {
        userId,
        purchaseId: purchaseRecord[0].id,
        tier,
      },
    });
  }
});
</code></pre>
<p>The final step triggers a separate Inngest function that handles the follow-up email sequence (day 7 onboarding tips, day 14 feedback request, day 30 testimonial request). This is an event-driven chain: one function completes and triggers another.</p>
<p>The follow-up function uses <code>step.sleep()</code> to wait between emails:</p>
<pre><code class="language-typescript">export const handlePurchaseFollowUp = inngest.createFunction(
  {
    id: "purchase-follow-up",
    triggers: [{ event: "purchase/follow-up.scheduled" }],
    cancelOn: [
      {
        event: "purchase/follow-up.cancelled",
        match: "data.purchaseId",
      },
    ],
  },
  async ({ event, step }) =&gt; {
    const { userId, purchaseId } = event.data;

    await step.sleep("wait-7-days", "7d");

    await step.run("send-day-7-email", async () =&gt; {
      // Check eligibility (user exists, not unsubscribed, not refunded)
      // Send onboarding tips email
    });

    await step.sleep("wait-14-days", "7d");

    await step.run("send-day-14-email", async () =&gt; {
      // Send feedback request email
    });

    await step.sleep("wait-30-days", "16d");

    await step.run("send-day-30-email", async () =&gt; {
      // Send testimonial request email
    });
  }
);
</code></pre>
<p>Notice the <code>cancelOn</code> option. If the purchase is refunded, you can send a <code>purchase/follow-up.cancelled</code> event, and the entire follow-up sequence stops. No stale emails sent to customers who asked for a refund.</p>
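<p>The sending side is a single <code>inngest.send()</code> call. As a hypothetical extension to <code>handleRefund</code> (it's not in the code above), the full-refund branch could emit the cancellation. The only requirement is that <code>data.purchaseId</code> carries the same value as the event that started the follow-up, since that's what <code>match: "data.purchaseId"</code> compares:</p>

```typescript
// The cancellation event's data.purchaseId must equal the purchaseId in
// the event that started the follow-up, because cancelOn matches on it.
type FollowUpCancelledEvent = {
  name: "purchase/follow-up.cancelled";
  data: { purchaseId: string };
};

function buildFollowUpCancellation(purchaseId: string): FollowUpCancelledEvent {
  return {
    name: "purchase/follow-up.cancelled",
    data: { purchaseId },
  };
}

// Inside a refund step you would send it with:
//   await inngest.send(buildFollowUpCancellation(purchase.id));
```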
<h3 id="heading-why-each-step-must-be-separate">Why Each Step Must Be Separate</h3>
<p>The rule is simple: <strong>any operation that calls an external service or could fail independently should be its own step.</strong></p>
<p>A database query is a step because the database can be temporarily unreachable. An email send is a step because the email provider can return a 500. A GitHub API call is a step because it can be rate-limited.</p>
<p>If two operations always succeed or fail together (they share a single external call), they can be in the same step. But when in doubt, make it a separate step. The overhead is negligible, and the reliability gain is significant.</p>
<h2 id="heading-how-to-handle-refunds-with-the-same-pattern">How to Handle Refunds with the Same Pattern</h2>
<p>The refund flow follows the exact same durable step pattern. This function lives in the same file as <code>handlePurchaseCompleted</code>, so it shares the same imports (plus <code>removeCollaborator</code> from <code>@/lib/github</code> and the refund-specific email templates). Here's the <code>handleRefund</code> function:</p>
<pre><code class="language-typescript">export const handleRefund = inngest.createFunction(
  { id: "refund-processed", triggers: [{ event: "stripe/charge.refunded" }] },
  async ({ event, step }) =&gt; {
    const {
      chargeId,
      paymentIntentId,
      amountRefunded,
      originalAmount,
      currency,
    } = event.data;

    const isFullRefund = amountRefunded &gt;= originalAmount;

    // Step 1: Look up the purchase and user
    const { user, purchase } = await step.run(
      "lookup-purchase-by-payment-intent",
      async () =&gt; {
        const purchaseResult = await db
          .select({
            id: purchases.id,
            userId: purchases.userId,
            stripePaymentIntentId: purchases.stripePaymentIntentId,
            githubAccessGranted: purchases.githubAccessGranted,
          })
          .from(purchases)
          .where(eq(purchases.stripePaymentIntentId, paymentIntentId))
          .limit(1);

        const foundPurchase = purchaseResult[0];
        if (!foundPurchase) {
          return { user: null, purchase: null };
        }

        const userResult = await db
          .select({
            id: users.id,
            email: users.email,
            name: users.name,
            githubUsername: users.githubUsername,
          })
          .from(users)
          .where(eq(users.id, foundPurchase.userId))
          .limit(1);

        return { user: userResult[0] ?? null, purchase: foundPurchase };
      }
    );

    if (!purchase || !user) {
      return { success: false, reason: "no_matching_purchase" };
    }

    let accessRevoked = false;

    // Step 2: Revoke GitHub access (only for full refunds)
    if (isFullRefund &amp;&amp; user.githubUsername &amp;&amp; purchase.githubAccessGranted) {
      const revokeResult = await step.run(
        "revoke-github-access",
        async () =&gt; {
          return removeCollaborator(user.githubUsername!);
        }
      );
      accessRevoked = revokeResult.success;
    }

    // Step 3: Update purchase status
    await step.run("update-purchase-status", async () =&gt; {
      if (isFullRefund) {
        await db
          .update(purchases)
          .set({
            status: "refunded",
            githubAccessGranted: false,
            updatedAt: new Date(),
          })
          .where(eq(purchases.id, purchase.id));
      } else {
        await db
          .update(purchases)
          .set({
            status: "partially_refunded",
            updatedAt: new Date(),
          })
          .where(eq(purchases.id, purchase.id));
      }
    });

    // Step 4: Track refund in analytics
    await step.run("track-refund-event", async () =&gt; {
      await trackServerEvent(user.id, "refund_processed", {
        charge_id: chargeId,
        amount_cents: amountRefunded,
        original_amount_cents: originalAmount,
        currency,
        is_full_refund: isFullRefund,
        github_access_revoked: accessRevoked,
      });
    });

    // Step 5: Notify customer
    await step.run("send-customer-notification", async () =&gt; {
      if (isFullRefund) {
        await sendEmail({
          to: user.email,
          subject: "Your refund has been processed",
          template: createElement(AccessRevokedEmail, {
            customerEmail: user.email,
            refundAmount: amountRefunded,
            currency,
          }),
        });
      } else {
        await sendEmail({
          to: user.email,
          subject: "Your partial refund has been processed",
          template: createElement(PartialRefundEmail, {
            customerEmail: user.email,
            refundAmount: amountRefunded,
            originalAmount,
            currency,
          }),
        });
      }
    });

    // Step 6: Notify admin
    await step.run("send-admin-notification", async () =&gt; {
      const adminEmail = process.env.ADMIN_EMAIL;
      if (!adminEmail) return;

      await sendEmail({
        to: adminEmail,
        subject: `${isFullRefund ? "Full" : "Partial"} refund: ${user.email}`,
        template: createElement(AdminRefundNotificationEmail, {
          customerEmail: user.email,
          customerName: user.name,
          githubUsername: user.githubUsername,
          refundAmount: amountRefunded,
          originalAmount,
          currency,
          stripeChargeId: chargeId,
          accessRevoked,
          isPartialRefund: !isFullRefund,
        }),
      });
    });

    return { success: true, accessRevoked, isFullRefund, userId: user.id };
  }
);
</code></pre>
<p>Three things are worth calling out in the refund flow.</p>
<ol>
<li><p><strong>Partial versus full refunds:</strong> The function distinguishes between the two using a simple comparison: <code>amountRefunded &gt;= originalAmount</code>. For a partial refund, the customer keeps access but the purchase status changes to <code>partially_refunded</code>. For a full refund, GitHub access is revoked and the status becomes <code>refunded</code>.  </p>
<p>This matters for your database integrity. Downstream systems (your dashboard, your analytics, your support tools) need accurate status values.</p>
</li>
<li><p><strong>Conditional step execution:</strong> The "revoke GitHub access" step only runs if three conditions are true: it's a full refund, the user has a GitHub username, and access was previously granted. Inngest handles this cleanly by skipping steps that don't need to run.  </p>
<p>This is more readable than deeply nested if-else blocks in a monolithic handler.</p>
</li>
<li><p><strong>Separate notifications for customers and admins:</strong> The customer gets a different email depending on whether the refund is full or partial. The admin always gets a detailed notification including the charge ID, the customer's GitHub username, and whether access was revoked.</p>
</li>
</ol>
<p>These are separate steps because a failure in the admin notification shouldn't block the customer notification. The customer's email is the higher priority.</p>
<h2 id="heading-how-to-recover-abandoned-checkouts">How to Recover Abandoned Checkouts</h2>
<p>Abandoned cart recovery is where the <code>step.sleep()</code> method shines. When a Stripe checkout session expires, you want to send a recovery email. But not immediately.</p>
<p>You want to wait an hour or so, giving the customer time to return on their own.</p>
<pre><code class="language-typescript">export const handleCheckoutExpired = inngest.createFunction(
  {
    id: "checkout-expired",
    triggers: [{ event: "stripe/checkout.session.expired" }],
  },
  async ({ event, step }) =&gt; {
    const { customerEmail, sessionId } = event.data;

    if (!customerEmail) {
      return { success: false, reason: "no_email" };
    }

    // Wait 1 hour before sending recovery email
    await step.sleep("wait-before-recovery-email", "1h");

    // Send abandoned cart email
    await step.run("send-abandoned-cart-email", async () =&gt; {
      const checkoutUrl = `https://yoursite.com/pricing`;

      await sendEmail({
        to: customerEmail,
        subject: "Your checkout is waiting",
        template: createElement(AbandonedCartEmail, {
          customerEmail,
          checkoutUrl,
        }),
      });
    });

    // Track the event
    await step.run("track-abandoned-cart", async () =&gt; {
      await trackServerEvent("anonymous", "abandoned_cart_email_sent", {
        customer_email: customerEmail,
        session_id: sessionId,
      });
    });

    return { success: true, customerEmail };
  }
);
</code></pre>
<p>The <code>step.sleep("wait-before-recovery-email", "1h")</code> line is the key. This pauses the function for one hour without consuming any compute resources.</p>
<p>Inngest handles the scheduling internally. After one hour, the function resumes and sends the email.</p>
<p>Without durable execution, you would need a cron job that queries a database for expired sessions, or a delayed job queue with Redis, or a <code>setTimeout</code> that gets lost when your server restarts. The <code>step.sleep()</code> approach is simpler, more readable, and more reliable.</p>
<p>There's also a guard at the top of the function. If Stripe doesn't have a customer email for the session (the customer closed the checkout before entering their email), the function returns early. There's no point scheduling a recovery email with no address to send it to.</p>
<p>This pattern scales to more complex recovery flows. You could add a second <code>step.sleep()</code> and send a follow-up recovery email three days later if the customer still hasn't purchased. You could check if the customer has since completed a purchase (by querying the database in a <code>step.run()</code>) and skip the email if they have.</p>
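<p>That "skip if they bought" check boils down to a database query plus a pure decision. A sketch with a hypothetical purchase record shape:</p>

```typescript
// Decide whether a later recovery email should still go out: skip it if
// the customer completed any purchase after the checkout session expired.
type PurchaseRecord = { customerEmail: string; createdAt: Date };

function shouldSendRecoveryFollowUp(
  purchasesSince: PurchaseRecord[],
  customerEmail: string,
  expiredAt: Date
): boolean {
  return !purchasesSince.some(
    (p) => p.customerEmail === customerEmail && p.createdAt > expiredAt
  );
}
```

<p>In the durable function, the query and this check would live inside a <code>step.run()</code> placed after the second <code>step.sleep()</code>.</p>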
<p>Each additional step is one more <code>step.run()</code> or <code>step.sleep()</code> call. The function reads like a script describing your business logic, not a tangle of cron jobs and database flags.</p>
<h2 id="heading-how-to-test-webhook-handlers-locally">How to Test Webhook Handlers Locally</h2>
<p>Local testing is one of the biggest pain points with Stripe webhooks. You need Stripe to send events to your local machine, and you need your background job system running to process them. Here's the setup.</p>
<h3 id="heading-how-to-forward-stripe-events-locally">How to Forward Stripe Events Locally</h3>
<p>Install the <a href="https://stripe.com/docs/stripe-cli">Stripe CLI</a> and forward webhook events to your local server:</p>
<pre><code class="language-bash">stripe listen --forward-to localhost:3000/api/payments/webhook
</code></pre>
<p>The CLI prints a webhook signing secret (starting with <code>whsec_</code>). Set this as your <code>STRIPE_WEBHOOK_SECRET</code> environment variable for local development.</p>
<p>You can trigger test events directly:</p>
<pre><code class="language-bash">stripe trigger checkout.session.completed
stripe trigger charge.refunded
stripe trigger checkout.session.expired
</code></pre>
<h3 id="heading-how-to-run-the-inngest-dev-server">How to Run the Inngest Dev Server</h3>
<p>Inngest provides a local dev server that shows you every function execution, every step, and every retry in real time:</p>
<pre><code class="language-bash">npx inngest-cli@latest dev -u http://localhost:3000/api/inngest
</code></pre>
<p>The <code>-u</code> flag tells the Inngest dev server where your application is running so it can discover your functions. Open <code>http://localhost:8288</code> in your browser to see the Inngest dashboard.</p>
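<p>The <code>/api/inngest</code> route itself is the glue that exposes your functions for discovery. In a Next.js App Router project it is a few lines with Inngest's <code>serve</code> handler (Inngest ships adapters for other frameworks too, such as <code>inngest/express</code>). The import path for the functions below is hypothetical; use wherever yours live:</p>

```typescript
// app/api/inngest/route.ts — registers your durable functions with Inngest.
import { serve } from "inngest/next";

import { inngest } from "@/lib/jobs";
import {
  handlePurchaseCompleted,
  handlePurchaseFollowUp,
  handleRefund,
  handleCheckoutExpired,
} from "@/lib/jobs/functions"; // hypothetical path

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [
    handlePurchaseCompleted,
    handlePurchaseFollowUp,
    handleRefund,
    handleCheckoutExpired,
  ],
});
```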
<h3 id="heading-how-to-watch-step-execution">How to Watch Step Execution</h3>
<p>The Inngest dev dashboard is where the durable execution pattern really clicks. When you trigger a Stripe event, you can see:</p>
<ol>
<li><p>The event arriving in the "Events" tab.</p>
</li>
<li><p>The function triggering in the "Runs" tab.</p>
</li>
<li><p>Each step executing one by one, with its input, output, and duration.</p>
</li>
<li><p>Any step that fails, along with its error message and each retry attempt.</p>
</li>
</ol>
<p>This visibility is something you don't get with inline webhook handlers. When a customer reports "I paid but didn't get access," you can look up the function run in the Inngest dashboard and see exactly which step failed and why. That kind of observability is invaluable in production.</p>
<h3 id="heading-how-to-simulate-failures">How to Simulate Failures</h3>
<p>To test the retry behavior, you can intentionally make a step fail. For example, temporarily throw an error in the "add-github-collaborator" step:</p>
<pre><code class="language-typescript">const collaboratorResult = await step.run(
  "add-github-collaborator",
  async () =&gt; {
    throw new Error("Simulated GitHub API failure");
  }
);
</code></pre>
<p>In the Inngest dashboard, you'll see:</p>
<ul>
<li><p>Steps 1 through 4 succeed and their results are cached.</p>
</li>
<li><p>Step 5 fails and is retried according to the retry policy.</p>
</li>
<li><p>Steps 6 through 9 remain pending until step 5 succeeds.</p>
</li>
</ul>
<p>Remove the thrown error, and on the next retry, step 5 succeeds. Steps 6 through 9 then execute in sequence, while steps 1 through 4 aren't re-executed. This is the checkpoint behavior in action.</p>
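<p>You can reproduce that behavior in miniature without any infrastructure. Here is a toy retry driver (Inngest's real policy also adds backoff between attempts) run against a step body that fails exactly once:</p>

```typescript
// Toy retry driver: re-invoke the step body until it succeeds or attempts
// run out. Inngest does this for you, with backoff, for each step.
async function withRetries<T>(
  body: (attempt: number) => Promise<T>,
  maxAttempts: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await body(attempt);
    } catch (error) {
      lastError = error; // a real scheduler would back off before retrying
    }
  }
  throw lastError;
}

// A step that simulates one transient GitHub failure, then succeeds.
let failures = 0;
const result = await withRetries(async (attempt) => {
  if (attempt === 0) {
    failures++;
    throw new Error("Simulated GitHub API failure");
  }
  return "invitation-sent";
}, 3);

console.log(result); // logs: invitation-sent
```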
<h2 id="heading-conclusion">Conclusion</h2>
<p>The pattern for reliable Stripe webhooks comes down to one principle: <strong>separate receiving from processing.</strong></p>
<p>Your webhook endpoint validates the Stripe signature and sends a typed event to a background job system. That's all it does. The processing happens in a durable function where each step is individually checkpointed and retried.</p>
<p>Here's what this gives you:</p>
<ul>
<li><p><strong>No duplicate emails:</strong> A step that already succeeded doesn't re-run.</p>
</li>
<li><p><strong>No partial state:</strong> If step 5 fails, steps 1 through 4 are preserved and step 5 retries independently.</p>
</li>
<li><p><strong>Full observability:</strong> You can see exactly which step failed and why, for every function run.</p>
</li>
<li><p><strong>Built-in delayed execution:</strong> <code>step.sleep()</code> handles recovery emails and follow-up sequences without cron jobs.</p>
</li>
<li><p><strong>Composable workflows:</strong> One function can trigger another via events, creating chains like purchase completion leading to a 30-day follow-up sequence.</p>
</li>
</ul>
<p>This pattern isn't limited to Stripe. Any multi-step webhook processing benefits from durable execution: GitHub webhooks that trigger CI pipelines, Resend webhooks that track email delivery, or calendar webhooks that sync across services.</p>
<p>The principle is the same: Validate. Enqueue. Process durably.</p>
<p>I've used this pattern in production for <a href="https://eden-stack.com?utm_source=freecodecamp&amp;utm_medium=article&amp;utm_campaign=stripe-webhooks-background-jobs">Eden Stack</a>, where the purchase flow handles everything from payment confirmation to GitHub repository access grants to multi-week email sequences. The 9-step purchase function has processed every payment without a single missed step or duplicate email.</p>
<p>If you're building a SaaS with Stripe, start with the webhook endpoint pattern from this article. Keep the endpoint thin and move the processing into durable steps. You'll save yourself from the 3 AM debugging session when a customer says "I paid but nothing happened."</p>
<p>If you want the complete Stripe webhook and Inngest integration pre-built with purchase flows, refund handling, and follow-up email sequences ready to go, <a href="https://eden-stack.com?utm_source=freecodecamp&amp;utm_medium=article&amp;utm_campaign=stripe-webhooks-background-jobs">Eden Stack</a> includes everything from this article alongside 30+ additional production-tested patterns.</p>
<p><em>Magnus Rodseth builds AI-native applications and is the creator of</em> <a href="https://eden-stack.com?utm_source=freecodecamp&amp;utm_medium=article&amp;utm_campaign=stripe-webhooks-background-jobs"><em>Eden Stack</em></a><em>, a production-ready starter kit with 30+ Claude skills encoding production patterns for AI-native SaaS development.</em></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Streamline Search in Web Applications with Elasticsearch  ]]>
                </title>
                <description>
                    <![CDATA[ They say data is the new gold. But navigating through a large dataset to meet the demands of consumers in record time still gives backend devs a headache. Conventional database queries often aren't to ]]>
                </description>
                <link>https://www.freecodecamp.org/news/streamline-search-functionality-in-web-apps-with-elasticsearch/</link>
                <guid isPermaLink="false">69e10d82b67a275a9d505023</guid>
                
                    <category>
                        <![CDATA[ elasticsearch ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ indexing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Search Engines ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Oluwatobi ]]>
                </dc:creator>
                <pubDate>Thu, 16 Apr 2026 16:25:38 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/e6563d07-a253-4fd9-b1f6-54dc98a48319.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>They say data is the new gold. But navigating through a large dataset to meet the demands of consumers in record time still gives backend devs a headache.</p>
<p>Conventional database queries often aren't totally reliable in getting accurate search results fast. But fortunately, Elasticsearch comes to the rescue.</p>
<p>In this article, I'll walk you through how to use Elasticsearch to enhance database searches and analytics while still maintaining efficiency.</p>
<p>Here are the prerequisites for this tutorial:</p>
<ul>
<li><p>A Node.js environment</p>
</li>
<li><p>Basic backend knowledge</p>
</li>
</ul>
<p>With that, let's get started. But first of all, what is Elasticsearch?</p>
<h3 id="heading-table-of-content">Table of Content</h3>
<ul>
<li><p><a href="#heading-what-is-elasticsearch">What is Elasticsearch?</a></p>
</li>
<li><p><a href="#heading-elasticsearch-key-terms">Elasticsearch Key Terms</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-elasticsearch">How to Set Up Elasticsearch</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-the-demo-project">How to Set Up the Demo Project</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-elasticsearch-in-your-project">How to Set Up Elasticsearch in Your Project</a></p>
</li>
<li><p><a href="#heading-how-to-work-with-indexes-in-elasticsearch">How to Work with Indexes in Elasticsearch</a></p>
</li>
<li><p><a href="#heading-search-implementation">Search Implementation</a></p>
</li>
<li><p><a href="#heading-full-code">Full Code</a></p>
</li>
<li><p><a href="#heading-wrapping-up">Wrapping Up</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-what-is-elasticsearch">What is Elasticsearch?</h2>
<p>Elasticsearch is a search and analytics engine built by Elastic on top of the Apache Lucene library. It can index words and phrases, providing advanced full-text and vector search capabilities, and it offers other useful features such as search analytics and autocomplete.</p>
<p>Note that Elasticsearch isn't a database, even though it does provide indexing features (which popular databases also do).</p>
<p>Other popular alternatives to this tool used in production environments include <a href="https://www.algolia.com/">Algolia</a>, <a href="https://opensearch.org/">OpenSearch</a> and <a href="https://www.meilisearch.com/">MeiliSearch</a>.</p>
<h2 id="heading-elasticsearch-key-terms">Elasticsearch Key Terms</h2>
<p>In this section, we'll go over some important terminology used in Elasticsearch. To ease your understanding, I'll draw comparisons to common database concepts.</p>
<ul>
<li><p><strong>Index</strong>: This serves as the storage location for the data you're going to search. It's like a database in Elasticsearch terms, and index names, like database names, must be unique.</p>
</li>
<li><p><strong>Document:</strong> The smallest unit of information stored within an index. It's structurally similar to a MongoDB document and analogous to a row in a SQL database.</p>
</li>
<li><p><strong>Mapping:</strong> Mapping refers to sets of rules or instructions that define how documents and fields are stored in the Elasticsearch index.</p>
</li>
<li><p><strong>Score:</strong> A number Elasticsearch generates to indicate how relevant each indexed document is to a search query.</p>
</li>
<li><p><strong>Analyzer:</strong> When data is sent to the Elasticsearch engine for indexing, it first passes through an analyzer, which processes the text before indexing. This is done via tokenizers and filters.</p>
</li>
<li><p><strong>Tokenizer:</strong> Converts the raw, unstructured text sent to the Elasticsearch engine into structured tokens for further processing and storage.</p>
</li>
<li><p><strong>Aggregator:</strong> This tool performs detailed analysis on the tokens stored in the index to generate actionable insights, and is one of the engine's key advantages. MongoDB's aggregation framework offers similar functionality.</p>
</li>
<li><p><strong>Filter</strong>: A set of instructions that modifies the tokens generated during analysis, for example by removing stop words or lowercasing text.</p>
</li>
<li><p><strong>Bulk index:</strong> This refers to indexing more than one document at once. You typically do this when indexing a database with pre-existing content.</p>
</li>
</ul>
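<p>To make the analyzer, tokenizer, and filter terms concrete, here is a toy JavaScript sketch of the stages a sentence passes through on its way into an index. Real Elasticsearch analyzers are far more sophisticated; this only illustrates the flow.</p>

```javascript
// Toy analysis pipeline: a tokenizer splits text into tokens (roughly what
// the "standard" tokenizer does), then filters transform the token stream.
const tokenize = (text) => text.split(/\W+/).filter(Boolean);
const lowercaseFilter = (tokens) => tokens.map((t) => t.toLowerCase());

const analyze = (text, filters = [lowercaseFilter]) =>
  filters.reduce((tokens, filter) => filter(tokens), tokenize(text));

// analyze("Elasticsearch makes Search FAST")
//   → ["elasticsearch", "makes", "search", "fast"]
```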
<h2 id="heading-how-to-set-up-elasticsearch">How to Set Up Elasticsearch</h2>
<p>For this tutorial, we'll run Elasticsearch locally. Hosted versions of Elasticsearch also exist and work just as well.</p>
<p><a href="https://www.elastic.co/docs/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows">Here</a> is a guide detailing how to set up Elasticsearch on Windows. For non-Windows users, you can install Elasticsearch on <a href="https://www.elastic.co/docs/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos">Linux/macOS</a> or use <a href="https://www.elastic.co/docs/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker">Docker</a>.</p>
<p><strong>Note</strong> for Windows users: run Elasticsearch as an Administrator to avoid installation errors.</p>
<p>After successful installation, you can test if it's functioning by navigating to <code>localhost:9200</code> which serves as the default local endpoint for Elasticsearch. There you'll see a success message on the screen similar to the image below:</p>
<img src="https://cdn.hashnode.com/uploads/covers/64bba6ecb09308034572f437/ad94560d-7629-45f1-a94d-eaed0b61cefe.png" alt="elastic search localhost homepage" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>With that, we'll move on to setting up our project and integrating Elasticsearch into it.</p>
<h2 id="heading-how-to-set-up-the-demo-project">How to Set Up the Demo Project</h2>
<p>For this tutorial, we'll use a ready-built forum backend application written with Node.js and Express. Here is the link to the project.</p>
<p>To get the project up and running, clone the repository and run:</p>
<p><code>npm start</code></p>
<p>MySQL serves as the default database for this tutorial. Let's now proceed to the next section.</p>
<h2 id="heading-how-to-set-up-elasticsearch-in-your-project">How to Set Up Elasticsearch in Your Project</h2>
<p>The existing demo project is a backend implementation of a forum site which allows users to post text content and facilitate discussions through category-based threads.</p>
<p>Elasticsearch is great for ensuring that users can sift through these posts and threads to accurately locate key content using distinct keywords. This is more effective than using traditional database search queries which can be cumbersome.</p>
<p>To set up Elasticsearch, start by installing the Elasticsearch <code>npm</code> package. To do this, run the command below in your project directory:</p>
<pre><code class="language-shell">npm install @elastic/elasticsearch
</code></pre>
<p>After successful installation, create a <code>config.js</code> file where you'll set up the client that connects to your Elasticsearch instance.</p>
<pre><code class="language-javascript">const { Client } = require('@elastic/elasticsearch');

const esClient = new Client({
  node: 'http://localhost:9200',
  auth: {
    username: process.env.ELASTICSEARCH_USERNAME,
    password: process.env.ELASTICSEARCH_PASSWORD
  },
  maxRetries: 5,
  requestTimeout: 60000,
  tls: {
    rejectUnauthorized: process.env.NODE_ENV !== 'development'
  }
});

module.exports = esClient;
</code></pre>
<p>To access and use Elasticsearch's capabilities within your backend application, you'll need to set up and configure the Elasticsearch client. The details are specified in the config file above.</p>
<p>As mentioned earlier, Elasticsearch listens on port 9200 locally by default, so the <code>node</code> option points at <code>http://localhost:9200</code>. A hosted Elasticsearch node works the same way; just supply its URL instead.</p>
<p>Next, the config file provides the authentication credentials required to access Elasticsearch. The username and password are supplied within the <code>auth</code> object. If you're running Elasticsearch locally, authentication may not be required unless security is enabled.</p>
<p><code>maxRetries</code> caps the number of unsuccessful attempts before the client gives up; here we've pegged it at 5. <code>requestTimeout</code> is the time in milliseconds after which a request is aborted if no response arrives.</p>
<p>Once the config file is complete, import it and initialize the Elasticsearch client when your backend starts.</p>
<h2 id="heading-how-to-work-with-indexes-in-elasticsearch">How to Work with Indexes in Elasticsearch</h2>
<p>Before we start harnessing the full power of Elasticsearch, we need to customize its search capabilities within the backend of the project. This involves setting up an index within the Elasticsearch Engine that indexes all posts made to the backend application.</p>
<pre><code class="language-javascript">const esClient = require('./config');

// The index name used throughout this article; 'posts' is an example value.
const INDEX_NAME = 'posts';

const setupIndex = async () =&gt; {
  try {
    const indexExists = await esClient.indices.exists({
      index: INDEX_NAME
    });

    if (indexExists) {
      console.log(`Index "${INDEX_NAME}" already exists`);
      return;
    }

    await esClient.indices.create({
      index: INDEX_NAME,
      ...indexMapping
    });

    console.log(`Index "${INDEX_NAME}" created`);
  } catch (err) {
    console.error(err);
    throw err;
  }
};
</code></pre>
<p>The code above creates a new index. When you invoke the <code>setupIndex()</code> function, it first checks whether an index with your chosen name (<code>INDEX_NAME</code>) already exists.</p>
<p>The function terminates early if the index name already exists (to prevent duplication). Otherwise, it creates an index with that unique name, applying the index mapping rules (which we'll discuss shortly).</p>
<p>After creating the index, you'll see a success message in your application console.</p>
<h3 id="heading-how-to-delete-an-index">How to Delete an Index</h3>
<p>After a while, an index may no longer serve its purpose and you may need to remove it from Elasticsearch.</p>
<p>You can do this by executing the <code>esClient.indices.delete()</code> command as shown below:</p>
<pre><code class="language-javascript">const deleteIndex = async () =&gt; {
  try {
    await esClient.indices.delete({ index: INDEX_NAME });
    console.log(`${INDEX_NAME} deleted`);
  } catch (err) {
    console.error("Error deleting index:", err);
  }
};
</code></pre>
<h3 id="heading-how-to-delete-a-post-within-an-index">How to Delete a Post within an Index</h3>
<p>Sometimes posts get deleted or modified. Users may also get banned, after which you'd want to remove their content from storage.</p>
<p>In these cases, you'll want to ensure true deletion: removing the record both from the database and from the Elasticsearch index.</p>
<p>To do this, call the <code>esClient.delete()</code> function, passing the index name and the unique ID of the post you want to delete as options.</p>
<pre><code class="language-javascript">const deletePost = async (postId) =&gt; {
  try {
    await esClient.delete({
      index: INDEX_NAME,
      id: postId.toString(),
    });

    console.log("Post successfully deleted");
    return { success: true, postId };
  } catch (err) {
    console.error(err);
    throw err;
  }
};
</code></pre>
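<p>Note that <code>deletePost</code> above only removes the Elasticsearch document. For the true deletion described earlier, the database row has to go too. A minimal sketch, assuming hypothetical <code>db</code> and <code>es</code> clients are injected (the <code>db.deletePost</code> method is an assumption, not part of the demo project):</p>

```javascript
// Remove a post from both stores so the index can't serve stale hits.
// Deleting from the database first means a failed ES delete leaves only
// an orphaned index entry (harmless and re-deletable), never a live row
// that search can no longer find.
async function deletePostEverywhere(db, es, indexName, postId) {
  await db.deletePost(postId); // source of truth first
  await es.delete({ index: indexName, id: String(postId) });
  return { success: true, postId };
}
```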
<h3 id="heading-how-to-index-a-post">How to Index a Post</h3>
<p>After setting up the Elasticsearch Index, you'll want to automatically index posts made to the database into the Elasticsearch index.</p>
<p>To do this, you'll need to make sure the post is compatible with your index schema via the <code>transformPostToESDoc</code> function. This function extracts and formats the post data so it matches the Elasticsearch document structure.</p>
<pre><code class="language-javascript">const transformPostToESDoc = (post) =&gt; {
  return {
    id: post.id,
    title: post.title,
    content: post.body,
    author: post.author,
    category: post.category,
    tags: post.tags,
    views: post.views || 0,
    published_at: post.created_at
  };
};

const indexPost = async (postId) =&gt; {
  try {
    const postRepo = await getPostRepo();
    const post = await postRepo.findOne({ where: { id: postId } });

    if (!post) {
      throw new Error("Post not available");
    }

    const esDocument = transformPostToESDoc(post);

    await esClient.index({
      index: INDEX_NAME,
      id: post.id.toString(),
      document: esDocument
    });

    console.log("Post successfully indexed");
    return { success: true, postId };
  } catch (err) {
    console.error(err);
    throw err;
  }
};
</code></pre>
<p>The post to be indexed must have a unique ID. For convenience, we reuse the unique primary-key ID that databases generate by default. Optionally, you can generate unique post IDs yourself, for example with a UUID library.</p>
<p>The post data is then passed to the <code>esClient.index()</code> function as the document to be indexed. We also added appropriate error handling to prevent the app from crashing if indexing fails.</p>
<h3 id="heading-how-to-define-elastic-search-mapping-rules">How to Define Elasticsearch Mapping Rules</h3>
<p>Elasticsearch mappings define how your data is stored and indexed. They specify the data type of each field and how text is analyzed for search.</p>
<p>In the example below, we'll define an index configuration that includes custom analyzers for autocomplete and mappings for each post field (like title, content, and author).</p>
<pre><code class="language-javascript">const indexMapping = {
  settings: {
    analysis: {
      analyzer: {
        autocomplete: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase', 'autocomplete_filter']
        },
        autocomplete_search: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase']
        }
      },
      filter: {
        autocomplete_filter: {
          type: 'edge_ngram',
          min_gram: 2,
          max_gram: 10
        }
      }
    }
  },
  mappings: {
    properties: {
      id: { type: 'integer' },
      title: {
        type: 'text',
        analyzer: 'autocomplete',
        search_analyzer: 'autocomplete_search',
        fields: {
          keyword: { type: 'keyword' },
          standard: { type: 'text' }
        }
      },
      content: {
        type: 'text',
        analyzer: 'standard'
      },
      category: {
        type: 'keyword'
      },
      tags: { type: 'keyword' },
      author: {
        type: 'text',
        fields: {
          keyword: { type: 'keyword' }
        }
      },
      views: { type: 'integer' },
      published_at: { type: 'date' }
    }
  }
};
</code></pre>
<p>The <code>indexMapping</code> object defines how Elasticsearch should store and process your data. It consists of two main parts: <code>settings</code> and <code>mappings</code>.</p>
<p>The <code>mappings</code> section defines the structure of your documents. Each field (like <code>title</code>, <code>content</code>, or <code>author</code>) has a type such as <code>text</code>, <code>keyword</code>, <code>integer</code>, or <code>date</code>. This tells Elasticsearch how to store and search that field.</p>
<p>For text fields, we can also define analyzers. Analyzers control how text is broken into smaller pieces (tokens) during indexing and search.</p>
<p>In the <code>settings</code> section, we defined a custom analyzer for autocomplete. This uses an <code>edge_ngram</code> filter to generate partial word matches, so users can find results as they type. We also defined a separate <code>search_analyzer</code> to ensure that search queries are processed correctly.</p>
<p>Together, these settings allow you to support features like autocomplete while keeping search results accurate and efficient.</p>
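<p>To see what the <code>edge_ngram</code> filter actually emits, here is a small JavaScript approximation using the same <code>min_gram</code>/<code>max_gram</code> settings as the mapping above (a sketch for illustration, not Elasticsearch's implementation):</p>

```javascript
// Edge n-grams: every prefix of a token between minGram and maxGram
// characters long. Indexing these is what lets "sea" match "search".
const edgeNgrams = (token, minGram = 2, maxGram = 10) => {
  const grams = [];
  for (let len = minGram; len <= Math.min(maxGram, token.length); len++) {
    grams.push(token.slice(0, len));
  }
  return grams;
};

// edgeNgrams("search") → ["se", "sea", "sear", "searc", "search"]
```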
<h2 id="heading-search-implementation">Search Implementation</h2>
<p>To implement your search functionality, you'll need to build out the API: the business-logic service and the API route. The route accepts <code>GET</code> requests with the search term attached as a query parameter, and returns results as a JSON document.</p>
<p>Then you'll implement the search service function, using the engine's full-text capabilities to search for phrases within the index. In line with best practices, you'll paginate results to avoid returning unnecessarily large responses.</p>
<p>The search request consists of the index name, pagination parameters (<code>from</code> and <code>size</code>) that control which slice of results is returned, and a query object specifying how the Elasticsearch engine should match documents.</p>
<pre><code class="language-javascript">const searchElastic = async (query, page = 1, size = 10) =&gt; {
  const searchQuery = {
    index: INDEX_NAME,
    from: (page - 1) * size,
    size,
    query: {
      bool: {
        must: [
          {
            multi_match: {
              query,
              fields: ["title^3", "content"],
              type: "best_fields",
              fuzziness: "AUTO"
            }
          }
        ]
      }
    }
  };

  const result = await esClient.search(searchQuery);
  return result.hits.hits;
};
</code></pre>
<p>In the code above, the function is named <code>searchElastic</code>. It takes three parameters: <code>query</code>, <code>page</code>, and <code>size</code>; <code>page</code> and <code>size</code> default to 1 and 10 if not supplied.</p>
<p>The <code>size</code> parameter caps the number of documents returned per request, while <code>from</code> is computed as <code>(page - 1) * size</code> to skip the results belonging to earlier pages.</p>
<p>The query uses a <code>multi_match</code> clause to search across multiple fields, such as <code>title</code> and <code>content</code>. The <code>title^3</code> syntax boosts matches in the title, making them more relevant than matches in other fields.</p>
<p>We also included a <code>must</code> clause which defines conditions that documents must match to be included in the results.</p>
<p>The search results are usually ranked based on their degree of relevance to the search query.</p>
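<p>Each element of <code>result.hits.hits</code> carries the matched document under <code>_source</code> plus metadata such as <code>_score</code>. A small helper (an illustrative sketch, not part of the demo project) flattens them into the shape you'd return from your API:</p>

```javascript
// Flatten raw Elasticsearch hits into plain result objects, keeping the
// relevance score alongside the document fields.
const formatHits = (hits) =>
  hits.map((hit) => ({ id: hit._id, score: hit._score, ...hit._source }));

// formatHits([{ _id: "1", _score: 2.3, _source: { title: "Hello" } }])
//   → [{ id: "1", score: 2.3, title: "Hello" }]
```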
<h2 id="heading-full-code">Full Code</h2>
<p>With this, you've completed this tutorial and have configured Elasticsearch to index posts made to your database. Here's the full code:</p>
<ol>
<li>Elasticsearch Client (config.js):</li>
</ol>
<pre><code class="language-javascript">const { Client } = require('@elastic/elasticsearch');

const esClient = new Client({
  node: 'http://localhost:9200',
  auth: {
    username: process.env.ELASTICSEARCH_USERNAME,
    password: process.env.ELASTICSEARCH_PASSWORD
  },
  maxRetries: 5,
  requestTimeout: 60000,
  tls: {
    rejectUnauthorized: process.env.NODE_ENV !== 'development'
  }
});

module.exports = esClient;
</code></pre>
<ol start="2">
<li>Index mapping:</li>
</ol>
<pre><code class="language-javascript">const indexMapping = {
  settings: {
    analysis: {
      analyzer: {
        autocomplete: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase', 'autocomplete_filter']
        },
        autocomplete_search: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase']
        }
      },
      filter: {
        autocomplete_filter: {
          type: 'edge_ngram',
          min_gram: 2,
          max_gram: 10
        }
      }
    }
  },
  mappings: {
    properties: {
      id: { type: 'integer' },
      title: {
        type: 'text',
        analyzer: 'autocomplete',
        search_analyzer: 'autocomplete_search',
        fields: {
          keyword: { type: 'keyword' },
          standard: { type: 'text' }
        }
      },
      content: {
        type: 'text',
        analyzer: 'standard'
      },
      category: {
        type: 'keyword'
      },
      tags: { type: 'keyword' },
      author: {
        type: 'text',
        fields: {
          keyword: { type: 'keyword' }
        }
      },
      views: { type: 'integer' },
      published_at: { type: 'date' }
    }
  }
};
</code></pre>
<ol start="3">
<li>Create index:</li>
</ol>
<pre><code class="language-javascript">const setupIndex = async () =&gt; {
  try {
    const indexExists = await esClient.indices.exists({
      index: INDEX_NAME
    });

    if (indexExists) {
      console.log(`Index "${INDEX_NAME}" already exists`);
      return;
    }

    await esClient.indices.create({
      index: INDEX_NAME,
      ...indexMapping
    });

    console.log(`Index "${INDEX_NAME}" created`);
  } catch (err) {
    console.error(err);
    throw err;
  }
};
</code></pre>
<ol start="4">
<li>Delete index:</li>
</ol>
<pre><code class="language-javascript">const deleteIndex = async () =&gt; {
  try {
    await esClient.indices.delete({ index: INDEX_NAME });
    console.log(`${INDEX_NAME} deleted`);
  } catch (err) {
    console.error("Error deleting index:", err);
  }
};
</code></pre>
<ol start="5">
<li>Delete document (post):</li>
</ol>
<pre><code class="language-javascript">const deletePost = async (postId) =&gt; {
  try {
    await esClient.delete({
      index: INDEX_NAME,
      id: postId.toString()
    });

    console.log("Post successfully deleted");
    return { success: true, postId };
  } catch (err) {
    console.error(err);
    throw err;
  }
};
</code></pre>
<ol start="6">
<li>Transform and index post:</li>
</ol>
<pre><code class="language-javascript">const transformPostToESDoc = (post) =&gt; {
  return {
    id: post.id,
    title: post.title,
    content: post.body,
    author: post.author,
    category: post.category,
    tags: post.tags,
    views: post.views || 0,
    published_at: post.created_at
  };
};

const indexPost = async (postId) =&gt; {
  try {
    const postRepo = await getPostRepo();
    const post = await postRepo.findOne({ where: { id: postId } });

    if (!post) {
      throw new Error("Post not available");
    }

    const esDocument = transformPostToESDoc(post);

    await esClient.index({
      index: INDEX_NAME,
      id: post.id.toString(),
      document: esDocument
    });

    console.log("Post successfully indexed");
    return { success: true, postId };
  } catch (err) {
    console.error(err);
    throw err;
  }
};
</code></pre>
<ol start="7">
<li>Search function:</li>
</ol>
<pre><code class="language-javascript">const searchElastic = async (query, page = 1, size = 10) =&gt; {
  const searchQuery = {
    index: INDEX_NAME,
    from: (page - 1) * size,
    size,
    query: {
      bool: {
        must: [
          {
            multi_match: {
              query,
              fields: ["title^3", "content"],
              type: "best_fields",
              fuzziness: "AUTO"
            }
          }
        ]
      }
    }
  };

  const result = await esClient.search(searchQuery);
  return result.hits.hits;
};
</code></pre>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>Now you know how to use Elasticsearch to improve search in your web applications. Elasticsearch is language-agnostic, which lets you use it across programming languages and frameworks. Its large community also provides helpful guides to make onboarding easier.</p>
<p>To further harness Elasticsearch's power, you can explore the other tools in the <strong>ELK</strong> stack (Elasticsearch, Logstash, and Kibana), which help you generate high-quality visualizations of your data, especially for enterprise applications.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>A fast and reliable search experience is non-negotiable in modern web applications, and Elasticsearch is a solid go-to for getting it done.</p>
<p>If you would like to read other articles that will enhance your tech journey, feel free to check out <a href="https://portfolio-oluwatobi.netlify.app/">my website here</a>. Stay active!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build an Online Marketplace with Next.js, Express, and Stripe Connect ]]>
                </title>
                <description>
                    <![CDATA[ Have you ever wondered how platforms like Etsy, Uber, or Teachable handle payments for thousands of sellers? The answer is a multi-vendor marketplace: an application where merchants can sign up, list  ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-online-marketplace-with-next-js-express-stripe-connect/</link>
                <guid isPermaLink="false">69d7ca9dfa7251682ec4b098</guid>
                
                    <category>
                        <![CDATA[ stripe ]]>
                    </category>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Next.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Web Development ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Michael Okolo ]]>
                </dc:creator>
                <pubDate>Thu, 09 Apr 2026 15:49:49 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/1181805a-87ae-440d-9673-64efeb073aad.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Have you ever wondered how platforms like Etsy, Uber, or Teachable handle payments for thousands of sellers? The answer is a <strong>multi-vendor marketplace</strong>: an application where merchants can sign up, list products or services, and receive payments directly from customers.</p>
<p>In this handbook, you'll build a complete marketplace from scratch using TypeScript. You won't need a traditional database. Instead, you'll use Stripe as your product catalog and payment engine.</p>
<p>This is how many real-world marketplaces work: Stripe stores the products, prices, and customer data, while your application handles the user experience.</p>
<p>Here's what you'll build:</p>
<ol>
<li><p>A merchant onboarding flow where sellers create accounts and connect with Stripe</p>
</li>
<li><p>A product management system where merchants can add and list products directly through Stripe</p>
</li>
<li><p>A checkout flow that supports both one-time payments and recurring subscriptions</p>
</li>
<li><p>Webhooks that listen for payment events in real time</p>
</li>
<li><p>A billing portal where customers can manage their subscriptions</p>
</li>
<li><p>A complete storefront where customers can browse and buy products</p>
</li>
</ol>
<p>You can also grab the complete source code from the GitHub repository linked at the end.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ul>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-what-is-stripe-connect">What is Stripe Connect?</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-the-project">How to Set Up the Project</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-the-backend">How to Set Up the Backend</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-express-backend">How to Build the Express Backend</a></p>
</li>
<li><p><a href="#heading-how-to-handle-merchant-onboarding">How to Handle Merchant Onboarding</a></p>
<ul>
<li><p><a href="#heading-how-to-create-a-connected-account">How to Create a Connected Account</a></p>
</li>
<li><p><a href="#heading-how-to-create-the-onboarding-link">How to Create the Onboarding Link</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-how-to-check-account-status">How to Check Account Status</a></p>
</li>
<li><p><a href="#heading-how-to-create-products-through-stripe">How to Create Products Through Stripe</a></p>
</li>
<li><p><a href="#heading-how-to-fetch-products">How to Fetch Products</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-checkout-flow">How to Build the Checkout Flow</a></p>
</li>
<li><p><a href="#heading-how-to-handle-webhooks">How to Handle Webhooks</a></p>
</li>
<li><p><a href="#heading-how-to-configure-webhooks-in-the-stripe-dashboard">How to Configure Webhooks in the Stripe Dashboard</a></p>
</li>
<li><p><a href="#heading-how-to-test-webhooks-locally">How to Test Webhooks Locally</a></p>
</li>
<li><p><a href="#heading-how-to-add-the-billing-portal">How to Add the Billing Portal</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-nextjs-frontend">How to Build the Next.js Frontend</a></p>
</li>
<li><p><a href="#heading-how-to-create-the-account-context">How to Create the Account Context</a></p>
</li>
<li><p><a href="#heading-how-to-create-the-account-status-hook">How to Create the Account Status Hook</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-merchant-onboarding-component">How to Build the Merchant Onboarding Component</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-product-create-product-list-and-checkout">How to Build the Product Create, Product List and Checkout</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-product-form">How to Build the Product Form</a></p>
</li>
<li><p><a href="#heading-how-to-build-the-main-page">How to Build the Main Page</a></p>
</li>
<li><p><a href="#heading-how-to-test-the-full-flow">How to Test the Full Flow</a></p>
</li>
<li><p><a href="#heading-how-the-payment-split-works">How the Payment Split Works</a></p>
</li>
<li><p><a href="#heading-next-steps">Next Steps</a></p>
</li>
<li><p><a href="#heading-acknowledgements">Acknowledgements</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before you begin, make sure you have the following:</p>
<ol>
<li><p>Node.js (version 18 or higher) installed on your machine</p>
</li>
<li><p>A basic understanding of React, TypeScript, and REST APIs</p>
</li>
<li><p>A Stripe account (sign up for free at <a href="http://stripe.com">stripe.com</a>)</p>
</li>
<li><p>A code editor like VS Code</p>
</li>
</ol>
<p>You do <strong>not</strong> need a database for this project. Stripe will store your products, prices, and customer information. This keeps the architecture simple and mirrors how many production marketplaces actually work.</p>
<h2 id="heading-what-is-stripe-connect"><strong>What is Stripe Connect?</strong></h2>
<p>Stripe Connect is a set of APIs designed for platforms and marketplaces. It lets you create accounts for your merchants (Stripe calls them "connected accounts"), route payments to them, and take a platform fee on every transaction.</p>
<p>In this tutorial, you will use Stripe’s <strong>V2 Accounts API</strong>, which is the newer and recommended way to create connected accounts. With the V2 API, you configure what each account can do (accept card payments, receive payouts) through a configuration object, and Stripe handles all compliance and identity verification through a hosted onboarding flow.</p>
<p>Here's how the payment flow works:</p>
<ol>
<li><p>A customer selects a product and clicks checkout on your marketplace.</p>
</li>
<li><p>Your server creates a Stripe Checkout Session linked to the merchant’s connected account.</p>
</li>
<li><p>The customer pays on Stripe’s hosted checkout page.</p>
</li>
<li><p>Stripe automatically splits the payment: the merchant gets their share, and your platform keeps an application fee.</p>
</li>
<li><p>Stripe sends a webhook event to your server confirming the payment.</p>
</li>
<li><p>The merchant can view their earnings and withdraw funds from their Stripe dashboard.</p>
</li>
</ol>
<h2 id="heading-how-to-set-up-the-project"><strong>How to Set Up the Project</strong></h2>
<p>Create a project folder with separate directories for your backend and frontend:</p>
<pre><code class="language-shell">mkdir marketplace &amp;&amp; cd marketplace
mkdir server client
</code></pre>
<h2 id="heading-how-to-set-up-the-backend"><strong>How to Set Up the Backend</strong></h2>
<p>Navigate into the server directory and initialize a TypeScript project:</p>
<pre><code class="language-shell">cd server
npm init -y
npm install express cors dotenv stripe
npm install -D typescript ts-node @types/express @types/cors @types/node
npx tsc --init
mkdir src
</code></pre>
<p>Open tsconfig.json and update it with these settings:</p>
<pre><code class="language-json">{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true
  },
  "include": ["src/**/*"]
}
</code></pre>
<p>Then create a .env file in the server root:</p>
<pre><code class="language-plaintext">STRIPE_SECRET_KEY=sk_test_your_key_here
DOMAIN=http://localhost:3000
</code></pre>
<p>You can find your Stripe test secret key in the Stripe Dashboard under Developers &gt; API Keys. The DOMAIN variable tells your server where to redirect customers after checkout.</p>
<p>Add these scripts to your package.json:</p>
<pre><code class="language-json">{
&nbsp; "scripts": {
    "dev": "ts-node src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
</code></pre>
<h2 id="heading-how-to-build-the-express-backend"><strong>How to Build the Express Backend</strong></h2>
<p>Create the file src/index.ts. This will be your entire backend. Let’s start with the setup and imports:</p>
<pre><code class="language-typescript">import express, { Request, Response, Router } from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import Stripe from 'stripe';

dotenv.config();

const app = express();
const router = Router();
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);

app.use(cors({ origin: process.env.DOMAIN }));
app.use(express.static('public'));
</code></pre>
<p>Notice that we don't import any database client. Stripe is our data layer. Every product, price, customer, and transaction lives in Stripe. Your Express server is a thin orchestration layer that talks to the Stripe API on behalf of your frontend.</p>
<p>We also mount <code>express.static("public")</code> so you can serve static files later if needed. The webhook endpoint needs the raw request body, so we'll register it before the JSON parser. Let’s add that now.</p>
<h2 id="heading-how-to-handle-merchant-onboarding"><strong>How to Handle Merchant Onboarding</strong></h2>
<p>The first thing a merchant needs to do is create an account on your platform and connect it to Stripe. This involves two steps: creating a connected account, and then redirecting the merchant to Stripe’s hosted onboarding form.</p>
<h3 id="heading-how-to-create-a-connected-account">How to Create a Connected Account</h3>
<p>Add the following route to your src/index.ts:</p>
<pre><code class="language-typescript">// Type definitions for request bodies
interface CreateAccountBody {
  email: string;
}
interface AccountIdBody {
  accountId: string;
}

// Create a Connected Account using Stripe V2 API
router.post(
  '/create-connect-account',
  async (req: Request&lt;{}, {}, CreateAccountBody&gt;, res: Response) =&gt; {
    try {
      const account = await stripe.v2.core.accounts.create({
        display_name: req.body.email,
        contact_email: req.body.email,
        dashboard: 'full',
        defaults: {
          responsibilities: {
            fees_collector: 'stripe',
            losses_collector: 'stripe',
          },
        },
        identity: {
          country: 'GB',
          entity_type: 'company',
        },
        configuration: {
          customer: {},
          merchant: {
            capabilities: {
              card_payments: { requested: true },
            },
          },
        },
      });
      res.json({ accountId: account.id });
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Unknown error';
      res.status(500).json({ error: message });
    }
  },
);
</code></pre>
<p>Let’s break down what this code does. The <code>stripe.v2.core.accounts.create()</code> method creates a new connected account using Stripe’s V2 API. Here are the key configuration options:</p>
<ol>
<li><p><code>dashboard: "full"</code> gives the merchant access to their own Stripe dashboard where they can view payments, manage payouts, and handle disputes.</p>
</li>
<li><p><code>responsibilities</code> tells Stripe who collects fees and who is liable for losses. Setting both to "stripe" means Stripe handles this, which is the simplest configuration.</p>
</li>
<li><p><code>identity</code> sets the country and entity type. Change "GB" to your merchants’ country code (for example, "US" for the United States).</p>
</li>
<li><p><code>configuration.merchant.capabilities</code> requests the <code>card_payments</code> capability, which lets the merchant accept credit card payments.</p>
</li>
</ol>
<h3 id="heading-how-to-create-the-onboarding-link">How to Create the Onboarding Link</h3>
<p>After creating the account, you need to redirect the merchant to Stripe’s hosted onboarding form. Add this route:</p>
<pre><code class="language-typescript">// Create Account Link for onboarding
router.post('/create-account-link', async (req: Request&lt;{}, {}, AccountIdBody&gt;, res: Response) =&gt; {
  const { accountId } = req.body;
  try {
    const accountLink = await stripe.v2.core.accountLinks.create({
      account: accountId,
      use_case: {
        type: 'account_onboarding',
        account_onboarding: {
          configurations: ['merchant', 'customer'],
          refresh_url: `${process.env.DOMAIN}`,
          return_url: `${process.env.DOMAIN}?accountId=${accountId}`,
        },
      },
    });
    res.json({ url: accountLink.url });
  } catch (error) {
    const message = error instanceof Error ? error.message : 'Unknown error';
    res.status(500).json({ error: message });
  }
});
</code></pre>
<p>The <code>accountLinks.create()</code> method generates a temporary URL that takes the merchant to Stripe’s onboarding form. On that form, Stripe collects the merchant’s identity documents, bank account details, and tax information. You don't need to build any of this yourself.</p>
<p>The <code>return_url</code> is where Stripe redirects the merchant after they complete onboarding. Notice that you append the <code>accountId</code> as a query parameter so your frontend can pick it up and store it.</p>
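<p>On the return page, your frontend can read that parameter straight from the URL. Here’s a minimal sketch of the idea using the standard <code>URL</code> API (the Next.js app built later in this tutorial does the same thing with <code>useSearchParams</code>):</p>
<pre><code class="language-typescript">// Illustrative only: extract the accountId that Stripe appends
// to the return_url after onboarding completes
function accountIdFromReturnUrl(href: string): string | null {
  return new URL(href).searchParams.get('accountId');
}

console.log(accountIdFromReturnUrl('http://localhost:3000/?accountId=acct_123')); // acct_123
console.log(accountIdFromReturnUrl('http://localhost:3000/')); // null
</code></pre>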
<h2 id="heading-how-to-check-account-status"><strong>How to Check Account Status</strong></h2>
<p>You need a way to check whether a merchant has finished onboarding and is ready to accept payments. Add this route:</p>
<pre><code class="language-typescript">// Get Connected Account Status
router.get(
  '/account-status/:accountId',
  async (req: Request&lt;{ accountId: string }&gt;, res: Response) =&gt; {
    try {
      const account = await stripe.v2.core.accounts.retrieve(req.params.accountId, {
        include: ['requirements', 'configuration.merchant'],
      });
      const payoutsEnabled =
        account.configuration?.merchant?.capabilities?.stripe_balance?.payouts?.status === 'active';
      const chargesEnabled =
        account.configuration?.merchant?.capabilities?.card_payments?.status === 'active';
      const summaryStatus = account.requirements?.summary?.minimum_deadline?.status;
      const detailsSubmitted = !summaryStatus || summaryStatus === 'eventually_due';
      res.json({
        id: account.id,
        payoutsEnabled,
        chargesEnabled,
        detailsSubmitted,
        requirements: account.requirements?.entries,
      });
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Unknown error';
      res.status(500).json({ error: message });
    }
  },
);
</code></pre>
<p>This route retrieves the connected account and checks three important statuses:</p>
<ul>
<li><p><code>chargesEnabled</code> tells you if the merchant can accept payments.</p>
</li>
<li><p><code>payoutsEnabled</code> tells you if they can receive payouts to their bank account.</p>
</li>
<li><p><code>detailsSubmitted</code> tells you if they have completed the onboarding form.</p>
</li>
</ul>
<p>Your frontend will use these flags to show or hide features.</p>
<h2 id="heading-how-to-create-products-through-stripe"><strong>How to Create Products Through Stripe</strong></h2>
<p>Instead of storing products in a database, you'll create them directly in Stripe. Each product is created on the merchant’s connected account using the <code>stripeAccount</code> header. This means each merchant has their own isolated product catalog inside Stripe.</p>
<pre><code class="language-typescript">// Type definition for product creation
interface CreateProductBody {
  productName: string;
  productDescription: string;
  productPrice: number;
  accountId: string;
}
// Create a product on the connected account
router.post('/create-product', async (req: Request&lt;{}, {}, CreateProductBody&gt;, res: Response) =&gt; {
  const { productName, productDescription, productPrice, accountId } = req.body;
  try {
    // Create the product on the connected account
    const product = await stripe.products.create(
      {
        name: productName,
        description: productDescription,
      },
      { stripeAccount: accountId },
    );
    // Create a price for the product
    const price = await stripe.prices.create(
      {
        product: product.id,
        unit_amount: productPrice,
        currency: 'usd',
      },
      { stripeAccount: accountId },
    );
    res.json({
      productName,
      productDescription,
      productPrice,
      priceId: price.id,
    });
  } catch (error) {
    const message = error instanceof Error ? error.message : 'Unknown error';
    res.status(500).json({ error: message });
  }
});
</code></pre>
<p>There are two Stripe API calls happening here. First, <code>stripe.products.create()</code> creates the product (name and description). Then <code>stripe.prices.create()</code> creates a price for that product (amount and currency).</p>
<p>Stripe separates products from prices because a single product can have multiple prices — for example, a monthly plan and an annual plan.</p>
<p>The <code>{ stripeAccount: accountId }</code> option on both calls tells Stripe to create these resources on the merchant’s connected account, not on your platform account. This is a critical detail: without it, the products would be created on your platform’s account and the merchant would never see them.</p>
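<p>To make the product/price separation concrete, here is a sketch of what two price payloads for the same product could look like. The product ID and amounts are placeholders; in the tutorial’s code, each object would be passed to <code>stripe.prices.create()</code> along with the <code>stripeAccount</code> option:</p>
<pre><code class="language-typescript">// Hypothetical payloads: one product, two recurring prices
interface PriceParams {
  product: string;
  unit_amount: number; // smallest currency unit (cents for USD)
  currency: string;
  recurring?: { interval: 'month' | 'year' };
}

const productId = 'prod_example123'; // placeholder ID

const monthlyPrice: PriceParams = {
  product: productId,
  unit_amount: 1000, // $10.00 per month
  currency: 'usd',
  recurring: { interval: 'month' },
};

const annualPrice: PriceParams = {
  product: productId,
  unit_amount: 10000, // $100.00 per year, i.e. two months free
  currency: 'usd',
  recurring: { interval: 'year' },
};

// Effective monthly cost of the annual plan, in cents:
console.log((annualPrice.unit_amount / 12).toFixed(2)); // 833.33
</code></pre>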
<h2 id="heading-how-to-fetch-products"><strong>How to Fetch Products</strong></h2>
<p>Add a route to list all products for a given merchant:</p>
<pre><code class="language-typescript">// Fetch products for a specific account
router.get('/products/:accountId', async (req: Request&lt;{ accountId: string }&gt;, res: Response) =&gt; {
  const { accountId } = req.params;
  try {
    const options: Stripe.RequestOptions = {};
    if (accountId !== 'platform') {
      options.stripeAccount = accountId;
    }
    const prices = await stripe.prices.list(
      {
        expand: ['data.product'],
        active: true,
        limit: 100,
      },
      options,
    );
    const products = prices.data.map((price) =&gt; {
      const product = price.product as Stripe.Product;
      return {
        id: product.id,
        name: product.name,
        description: product.description,
        price: price.unit_amount,
        priceId: price.id,
        period: price.recurring ? price.recurring.interval : null,
      };
    });
    res.json(products);
  } catch (error) {
    const message = error instanceof Error ? error.message : 'Unknown error';
    res.status(500).json({ error: message });
  }
});
</code></pre>
<p>This route fetches all active prices from a merchant’s Stripe account and expands the product data (using <code>expand: ['data.product']</code>) so you get the product name and description in the same API call. The <code>period</code> field will be <code>null</code> for one-time products and <code>"month"</code> or <code>"year"</code> for subscriptions.</p>
<h2 id="heading-how-to-build-the-checkout-flow"><strong>How to Build the Checkout Flow</strong></h2>
<p>Your checkout flow needs to handle two scenarios: one-time payments for individual products, and recurring subscriptions. Stripe’s Checkout Sessions handle both — you just need to set the mode based on the price type.</p>
<pre><code class="language-typescript">// Type definition for checkout
interface CheckoutBody {
  priceId: string;
  accountId: string;
}
// Create checkout session
router.post(
  '/create-checkout-session',
  async (req: Request&lt;{}, {}, CheckoutBody&gt;, res: Response) =&gt; {
    const { priceId, accountId } = req.body;
    try {
      // Retrieve the price to determine if it is
      // one-time or recurring
      const price = await stripe.prices.retrieve(priceId, { stripeAccount: accountId });
      const isSubscription = price.type === 'recurring';
      const mode = isSubscription ? 'subscription' : 'payment';
      const session = await stripe.checkout.sessions.create(
        {
          line_items: [
            {
              price: priceId,
              quantity: 1,
            },
          ],
          mode,
          success_url: `${process.env.DOMAIN}/done?session_id={CHECKOUT_SESSION_ID}`,
          cancel_url: `${process.env.DOMAIN}`,
          ...(isSubscription
            ? {
                subscription_data: {
                  application_fee_percent: 10,
                },
              }
            : {
                payment_intent_data: {
                  application_fee_amount: 123,
                },
              }),
        },
        { stripeAccount: accountId },
      );
      res.redirect(303, session.url as string);
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Unknown error';
      res.status(500).json({ error: message });
    }
  },
);
</code></pre>
<p>Here's what this route does step by step. First, it retrieves the price from the merchant’s connected account to check whether it is a one-time price or a recurring subscription. Then it creates a Checkout Session with the appropriate mode — either "payment" or "subscription".</p>
<p>The application fee is your platform’s cut of the transaction. For one-time payments, <code>application_fee_amount</code> is a flat amount in the smallest currency unit (cents for USD), so this example charges $1.23. For subscriptions, <code>application_fee_percent</code> takes a percentage of each invoice, 10% in this example. For a real marketplace, you would likely calculate the one-time fee as a percentage of the product price as well.</p>
<p>Notice that the fee parameter goes inside <code>subscription_data</code> (as <code>application_fee_percent</code>) for subscriptions but inside <code>payment_intent_data</code> (as <code>application_fee_amount</code>) for one-time payments. This is a Stripe requirement — the two modes use different configuration objects.</p>
<p>Finally, the route uses <code>res.redirect(303, session.url)</code> to send the customer directly to Stripe’s hosted checkout page.</p>
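<p>If you want the one-time fee to scale with the price instead of being a flat amount, the calculation is straightforward. Here’s a sketch, assuming a 10% platform rate (the helper name and rate are just examples, not part of the Stripe API):</p>
<pre><code class="language-typescript">// Hypothetical helper: compute the application fee as a percentage
// of the price, in the smallest currency unit, as Stripe expects
function applicationFeeFor(unitAmount: number, feePercent = 10): number {
  return Math.round((unitAmount * feePercent) / 100);
}

console.log(applicationFeeFor(4999)); // 500, i.e. a $5.00 fee on a $49.99 product
</code></pre>
<p>The checkout route already retrieves the price, so you could read its <code>unit_amount</code> and pass the result as <code>payment_intent_data.application_fee_amount</code>.</p>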
<h2 id="heading-how-to-handle-webhooks"><strong>How to Handle Webhooks</strong></h2>
<p>Webhooks are how Stripe tells your server about events that happen asynchronously — like a successful payment, a failed charge, or a subscription cancellation.</p>
<p>In a production marketplace, you should <strong>never</strong> rely solely on redirect URLs to confirm payments. A customer might close their browser before the redirect completes. Webhooks are your source of truth.</p>
<p>Add the webhook endpoint <strong>before</strong> the JSON body parser. Stripe sends webhook payloads as raw bytes, and you need the raw body to verify the signature:</p>
<pre><code class="language-typescript">// IMPORTANT: Register this BEFORE app.use(express.json())
app.post(
  '/api/webhook',
  express.raw({ type: 'application/json' }),
  (req: Request, res: Response) =&gt; {
    let event: Stripe.Event = JSON.parse(req.body.toString());

    // If you have an endpoint secret, verify the signature for security
    const endpointSecret = process.env.WEBHOOK_SECRET;
    if (endpointSecret) {
      const signature = req.headers['stripe-signature'] as string;
      try {
        event = stripe.webhooks.constructEvent(req.body, signature, endpointSecret) as Stripe.Event;
      } catch (err) {
        const message = err instanceof Error ? err.message : 'Unknown error';
        console.log('Webhook signature verification failed:', message);
        res.sendStatus(400);
        return;
      }
    }

    // Handle the event
    switch (event.type) {
      case 'checkout.session.completed': {
        const session = event.data.object as Stripe.Checkout.Session;
        console.log('Payment successful for session:', session.id);
        // Fulfill the order: send email, grant access, update your records, and so on
        break;
      }
      case 'checkout.session.expired': {
        const session = event.data.object as Stripe.Checkout.Session;
        console.log('Session expired:', session.id);
        // Optionally notify the customer or clean up any pending records
        break;
      }
      case 'checkout.session.async_payment_succeeded': {
        const session = event.data.object as Stripe.Checkout.Session;
        console.log('Delayed payment succeeded for session:', session.id);
        // Fulfill the order now that payment cleared
        break;
      }
      case 'checkout.session.async_payment_failed': {
        const session = event.data.object as Stripe.Checkout.Session;
        console.log('Payment failed for session:', session.id);
        // Notify the customer that payment failed
        break;
      }
      case 'customer.subscription.deleted': {
        const subscription = event.data.object as Stripe.Subscription;
        console.log('Subscription cancelled:', subscription.id);
        // Revoke access for the customer
        break;
      }
      default:
        console.log('Unhandled event type:', event.type);
    }
    res.send();
  },
);
</code></pre>
<p>The webhook handler checks for five key events.</p>
<ul>
<li><p><code>checkout.session.completed</code> fires when a payment succeeds — this is where you would fulfill an order, send a confirmation email, or grant access.</p>
</li>
<li><p><code>checkout.session.expired</code> fires when a session expires before the customer completes payment.</p>
</li>
<li><p><code>checkout.session.async_payment_succeeded</code> fires when a delayed payment method (like a bank transfer) finally goes through.</p>
</li>
<li><p><code>checkout.session.async_payment_failed</code> fires when a delayed payment method fails.</p>
</li>
<li><p>And <code>customer.subscription.deleted</code> fires when a subscription is cancelled.</p>
</li>
</ul>
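<p>One thing to keep in mind: Stripe may deliver the same event more than once, so your fulfillment logic should be idempotent. Here’s a sketch of one common approach, keyed off the event’s unique ID (shown with an in-memory map for simplicity; a real app would persist processed IDs in durable storage):</p>
<pre><code class="language-typescript">// Hypothetical idempotency guard for webhook fulfillment
const processedEvents: { [id: string]: boolean } = {};

// Returns true the first time an event ID is seen, false on duplicates
function shouldFulfill(eventId: string): boolean {
  if (processedEvents[eventId]) {
    return false; // duplicate delivery: skip fulfillment
  }
  processedEvents[eventId] = true;
  return true;
}

// Inside the switch, guard the fulfillment step so retries are harmless:
let emailsSent = 0;
if (shouldFulfill('evt_123')) {
  emailsSent += 1;
}
if (shouldFulfill('evt_123')) {
  emailsSent += 1; // never runs: duplicate
}
console.log(emailsSent); // 1
</code></pre>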
<h2 id="heading-how-to-configure-webhooks-in-the-stripe-dashboard"><strong>How to Configure Webhooks in the Stripe Dashboard</strong></h2>
<p>Before you can receive webhook events, you need to tell Stripe where to send them and which events you care about. Follow these steps:</p>
<ol>
<li><p>Go to the Stripe Dashboard and navigate to Developers &gt; Webhooks.</p>
</li>
<li><p>Click "Add destination."</p>
</li>
<li><p>Under the account type, select "Connected and V2 accounts" since your payments go through connected merchant accounts.</p>
</li>
<li><p>Under "Events to listen for," click "All events" and select the following five events:</p>
<ul>
<li><p><code>checkout.session.async_payment_succeeded</code> — Occurs when a payment intent using a delayed payment method finally succeeds.</p>
</li>
<li><p><code>checkout.session.completed</code> — Occurs when a Checkout Session has been successfully completed.</p>
</li>
<li><p><code>checkout.session.expired</code> — Occurs when a Checkout Session expires before completion.</p>
</li>
<li><p><code>checkout.session.async_payment_failed</code> — Occurs when a payment intent using a delayed payment method fails.</p>
</li>
<li><p><code>customer.subscription.deleted</code> — Occurs whenever a customer’s subscription ends.</p>
</li>
</ul>
</li>
<li><p>Enter your webhook endpoint URL. For production, this would be something like <a href="https://yourdomain.com/api/webhook">https://yourdomain.com/api/webhook</a>. For local development, you will use the Stripe CLI instead (covered next).</p>
</li>
<li><p>Click "Add destination" to save.</p>
</li>
</ol>
<h2 id="heading-how-to-test-webhooks-locally"><strong>How to Test Webhooks Locally</strong></h2>
<p>For local development, you don't need to expose your server to the internet. Install the Stripe CLI and run:</p>
<pre><code class="language-shell">brew install stripe/stripe-cli/stripe
stripe login
stripe listen --forward-to localhost:4242/api/webhook
</code></pre>
<p>The CLI will print a webhook signing secret that starts with <code>whsec_</code>. Add this to your .env file as <code>WEBHOOK_SECRET</code>. The CLI intercepts all webhook events from Stripe and forwards them to your local server, so you can test the full payment flow without deploying anything.</p>
<h2 id="heading-how-to-add-the-billing-portal"><strong>How to Add the Billing Portal</strong></h2>
<p>The billing portal lets customers manage their subscriptions without you building any UI for it. Stripe hosts the entire experience — customers can update their payment method, change plans, or cancel their subscription.</p>
<pre><code class="language-typescript">// Create a billing portal session
router.post('/create-portal-session', async (req: Request, res: Response) =&gt; {
  const { session_id } = req.body as { session_id: string };

  try {
    const session = await stripe.checkout.sessions.retrieve(session_id);

    const portalSession = await stripe.billingPortal.sessions.create({
      customer_account: session.customer_account as string,
      return_url: `${process.env.DOMAIN}?session_id=${session_id}`,
    });

    res.redirect(303, portalSession.url);
  } catch (error) {
    const message = error instanceof Error ? error.message : 'Unknown error';
    res.status(500).json({ error: message });
  }
});
</code></pre>
<p>This route takes a <code>session_id</code> from a previous checkout, retrieves the associated customer, and creates a billing portal session. The <code>customer_account</code> field links the portal to the correct connected account so the customer sees only their subscriptions with that specific merchant.</p>
<p>Now add the JSON parser and mount the router. This must come <strong>after</strong> the webhook route:</p>
<pre><code class="language-typescript">// JSON and URL-encoded parsers (AFTER webhook route)
app.use(express.urlencoded({ extended: true }));
app.use(express.json());

// Mount all routes under /api
app.use('/api', router);
const PORT: number = parseInt(process.env.PORT || '4242', 10);
app.listen(PORT, () =&gt; {
  console.log(`Server running on port ${PORT}`);
});
</code></pre>
<h2 id="heading-how-to-build-the-nextjs-frontend"><strong>How to Build the Next.js Frontend</strong></h2>
<p>Navigate to the client directory and create a new Next.js project with TypeScript:</p>
<pre><code class="language-shell">cd ../client
npx create-next-app@latest . --typescript --app --tailwind --eslint
npm install axios
</code></pre>
<h2 id="heading-how-to-create-the-account-context"><strong>How to Create the Account Context</strong></h2>
<p>You need a way to share the merchant’s account ID across all components. Create a context provider at <code>contexts/AccountContext.tsx</code>:</p>
<pre><code class="language-typescript">'use client';
import { createContext, useContext, useState, ReactNode } from 'react';
import { useSearchParams } from 'next/navigation';

interface AccountContextType {
  accountId: string | null;
  setAccountId: (id: string | null) =&gt; void;
}

const AccountContext = createContext&lt;AccountContextType | undefined&gt;(undefined);

export function useAccount(): AccountContextType {
  const context = useContext(AccountContext);
  if (!context) {
    throw new Error('useAccount must be used within AccountProvider');
  }
  return context;
}

export function AccountProvider({ children }: { children: ReactNode }) {
  const searchParams = useSearchParams();
  const [accountId, setAccountId] = useState&lt;string | null&gt;(searchParams.get('accountId'));

  return (
    &lt;AccountContext.Provider value={{ accountId, setAccountId }}&gt;
      {children}
    &lt;/AccountContext.Provider&gt;
  );
}
</code></pre>
<p>This context stores the current merchant’s account ID and makes it available throughout the app. On initial load, it checks the URL for an accountId query parameter — this is how Stripe’s onboarding redirect passes the account ID back to your app.</p>
<h2 id="heading-how-to-create-the-account-status-hook"><strong>How to Create the Account Status Hook</strong></h2>
<p>Create a custom hook at <code>hooks/useAccountStatus.ts</code> that polls the account status:</p>
<pre><code class="language-typescript">'use client';
import { useState, useEffect } from 'react';
import { useAccount } from '@/contexts/AccountContext';
interface AccountStatus {
  id: string;
  payoutsEnabled: boolean;
  chargesEnabled: boolean;
  detailsSubmitted: boolean;
}
export default function useAccountStatus() {
  const [accountStatus, setAccountStatus] = useState&lt;AccountStatus | null&gt;(null);
  const { accountId, setAccountId } = useAccount();
  useEffect(() =&gt; {
    if (!accountId) return;
    const fetchStatus = async () =&gt; {
      try {
        const res = await fetch(`http://localhost:4242/api/account-status/${accountId}`);
        if (!res.ok) throw new Error('Failed to fetch');
        const data: AccountStatus = await res.json();
        setAccountStatus(data);
      } catch {
        setAccountId(null);
      }
    };
    fetchStatus();
    const interval = setInterval(fetchStatus, 5000);
    return () =&gt; clearInterval(interval);
  }, [accountId, setAccountId]);
  return {
    accountStatus,
    needsOnboarding: !accountStatus?.chargesEnabled &amp;&amp; !accountStatus?.detailsSubmitted,
  };
}
</code></pre>
<p>This hook polls the account status every 5 seconds. This is important because Stripe’s onboarding is asynchronous — a merchant might complete the form, but it can take a moment for Stripe to verify their details and activate their account. The <code>needsOnboarding</code> flag tells your UI whether to show the onboarding button or the merchant dashboard.</p>
<h2 id="heading-how-to-build-the-merchant-onboarding-component"><strong>How to Build the Merchant Onboarding Component</strong></h2>
<p>Create <code>components/ConnectOnboarding.tsx</code>:</p>
<pre><code class="language-typescript">'use client';
import { useState } from 'react';
import { useAccount } from '@/contexts/AccountContext';
import useAccountStatus from '@/hooks/useAccountStatus';
const API_URL = 'http://localhost:4242/api';
export default function ConnectOnboarding() {
  const [email, setEmail] = useState&lt;string&gt;('');
  const { accountId, setAccountId } = useAccount();
  const { accountStatus, needsOnboarding } = useAccountStatus();
  const handleCreateAccount = async () =&gt; {
    const res = await fetch(`${API_URL}/create-connect-account`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ email }),
    });
    const data = await res.json();
    setAccountId(data.accountId);
  };
  const handleStartOnboarding = async () =&gt; {
    const res = await fetch(`${API_URL}/create-account-link`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ accountId }),
    });
    const data = await res.json();
    window.location.href = data.url;
  };
  if (!accountId) {
    return (
      &lt;div className="max-w-md mx-auto p-6"&gt;
        &lt;h2 className="text-xl font-bold mb-4"&gt;Create Your Seller Account&lt;/h2&gt;
        &lt;input
          type="email"
          placeholder="Your email"
          value={email}
          onChange={(e) =&gt; setEmail(e.target.value)}
          className="w-full border p-2 rounded mb-4"
        /&gt;
        &lt;button
          onClick={handleCreateAccount}
          className="w-full bg-green-600 text-white p-2 rounded hover:bg-green-700"
        &gt;
          Create Connect Account
        &lt;/button&gt;
      &lt;/div&gt;
    );
  }
  return (
    &lt;div className="max-w-md mx-auto p-6"&gt;
      &lt;h3 className="font-semibold mb-2"&gt;Account: {accountId} &lt;/h3&gt;
      &lt;p className="mb-2"&gt;Charges: {accountStatus?.chargesEnabled ? 'Active' : 'Pending'} &lt;/p&gt;
      &lt;p className="mb-4"&gt;Payouts: {accountStatus?.payoutsEnabled ? 'Active' : 'Pending'} &lt;/p&gt;
      {needsOnboarding &amp;&amp; (
        &lt;button
          onClick={handleStartOnboarding}
          className="bg-purple-600 text-white px-6 py-2 rounded hover:bg-purple-700"
        &gt;
          Complete Onboarding
        &lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}
</code></pre>
<p>This component handles both states of the merchant experience. If no account exists, it shows a simple email form. After account creation, it shows the account status and an onboarding button if needed.</p>
<h2 id="heading-how-to-build-the-product-create-product-list-and-checkout"><strong>How to Build the Product Create, Product List and Checkout</strong></h2>
<p>Create <code>components/Products.tsx</code>:</p>
<pre><code class="language-typescript">'use client';
import { useState, useEffect } from 'react';
import { useAccount } from '@/contexts/AccountContext';
import useAccountStatus from '@/hooks/useAccountStatus';
const API_URL = 'http://localhost:4242/api';
interface Product {
  id: string;
  name: string;
  description: string | null;
  price: number | null;
  priceId: string;
  period: string | null;
}
export default function Products() {
  const { accountId } = useAccount();
  const { needsOnboarding } = useAccountStatus();
  const [products, setProducts] = useState&lt;Product[]&gt;([]);
  useEffect(() =&gt; {
    if (!accountId || needsOnboarding) return;
    const fetchProducts = async () =&gt; {
      const res = await fetch(`${API_URL}/products/${accountId}`);
      const data: Product[] = await res.json();
      setProducts(data);
    };
    fetchProducts();
    const interval = setInterval(fetchProducts, 5000);
    return () =&gt; clearInterval(interval);
  }, [accountId, needsOnboarding]);
  return (
    &lt;div className="grid grid-cols-1 md:grid-cols-3 gap-6 mt-6"&gt;
      {' '}
      {products.map((product) =&gt; (
        &lt;div key={product.priceId} className="border rounded-lg p-4 shadow-sm"&gt;
          &lt;h3 className="text-lg font-semibold"&gt;{product.name}&lt;/h3&gt;

          &lt;p className="text-gray-600 mt-1"&gt;{product.description}&lt;/p&gt;

          &lt;p className="text-xl font-bold mt-3"&gt;
            ${((product.price ?? 0) / 100).toFixed(2)}
            {product.period ? ` / ${product.period}` : ''}
          &lt;/p&gt;

          &lt;form action={`${API_URL}/create-checkout-session`} method="POST"&gt;
            &lt;input type="hidden" name="priceId" value={product.priceId} /&gt;
            &lt;input type="hidden" name="accountId" value={accountId ?? ''} /&gt;
            &lt;button
              type="submit"
              className="mt-4 w-full bg-blue-600 text-white py-2 rounded hover:bg-blue-700"
            &gt;
              {product.period ? 'Subscribe' : 'Buy Now'}
            &lt;/button&gt;
          &lt;/form&gt;
        &lt;/div&gt;
      ))}
    &lt;/div&gt;
  );
}

</code></pre>
<p>The Products component fetches all products from the merchant’s Stripe account and displays them in a responsive grid. The checkout button submits a form directly to your backend, which redirects the customer to Stripe’s hosted checkout page. Notice how the button text changes based on whether the product is a one-time purchase or a subscription.</p>
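<p>One detail worth noting: because this is a plain HTML form post, the request body arrives URL-encoded rather than as JSON, so the backend must parse it accordingly (if the server uses Express, the <code>express.urlencoded()</code> middleware handles this). As a small, self-contained illustration of what the server receives — the values here are invented:</p>
<pre><code class="language-typescript">// Hypothetical illustration: an HTML form POST is URL-encoded, not JSON.
// These example values stand in for a real Stripe price and account ID.
const body = new URLSearchParams({ priceId: 'price_123', accountId: 'acct_456' });
const priceId = body.get('priceId');     // 'price_123'
const accountId = body.get('accountId'); // 'acct_456'
</code></pre>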
<h2 id="heading-how-to-build-the-product-form"><strong>How to Build the Product Form</strong></h2>
<p>Merchants need a way to add products from the frontend. Create <code>components/ProductForm.tsx</code>:</p>
<pre><code class="language-typescript">'use client';
import { useState } from 'react';
import { useAccount } from '@/contexts/AccountContext';
import useAccountStatus from '@/hooks/useAccountStatus';
const API_URL = 'http://localhost:4242/api';
interface ProductFormData {
  productName: string;
  productDescription: string;
  productPrice: number;
}
export default function ProductForm() {
  const { accountId } = useAccount();
  const { needsOnboarding } = useAccountStatus();
  const [showForm, setShowForm] = useState&lt;boolean&gt;(false);
  const [formData, setFormData] = useState&lt;ProductFormData&gt;({
    productName: '',
    productDescription: '',
    productPrice: 1000,
  });
  const handleSubmit = async (e: React.FormEvent): Promise&lt;void&gt; =&gt; {
    e.preventDefault();
    if (!accountId || needsOnboarding) return;
    await fetch(`${API_URL}/create-product`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        ...formData,
        accountId,
      }),
    });
    // Reset form and hide it
    setFormData({
      productName: '',
      productDescription: '',
      productPrice: 1000,
    });
    setShowForm(false);
  };
  // Only show the form if the merchant has completed
  // onboarding and can accept charges
  if (!accountId || needsOnboarding) return null;
  return (
    &lt;div className="my-6"&gt;
      &lt;button
        onClick={() =&gt; setShowForm(!showForm)}
        className="bg-green-600 text-white px-4 py-2 rounded hover:bg-green-700"
      &gt;
        {showForm ? 'Cancel' : 'Add New Product'}
      &lt;/button&gt;

      {showForm &amp;&amp; (
        &lt;form onSubmit={handleSubmit} className="mt-4 max-w-md space-y-4"&gt;
          &lt;div&gt;
            &lt;label className="block text-sm font-medium mb-1"&gt;Product Name&lt;/label&gt;

            &lt;input
              type="text"
              value={formData.productName}
              onChange={(e) =&gt;
                setFormData({
                  ...formData,
                  productName: e.target.value,
                })
              }
              className="w-full border p-2 rounded"
              required
            /&gt;
          &lt;/div&gt;

          &lt;div&gt;
            &lt;label className="block text-sm font-medium mb-1"&gt;Description&lt;/label&gt;
            &lt;input
              type="text"
              value={formData.productDescription}
              onChange={(e) =&gt;
                setFormData({
                  ...formData,
                  productDescription: e.target.value,
                })
              }
              className="w-full border p-2 rounded"
            /&gt;
          &lt;/div&gt;
          &lt;div&gt;
            &lt;label className="block text-sm font-medium mb-1"&gt;Price (in cents)&lt;/label&gt;

            &lt;input
              type="number"
              value={formData.productPrice}
              onChange={(e) =&gt;
                setFormData({
                  ...formData,
                  productPrice: parseInt(e.target.value),
                })
              }
              className="w-full border p-2 rounded"
              required
            /&gt;
          &lt;/div&gt;
          &lt;button
            type="submit"
            className="bg-blue-600 text-white px-4 py-2 rounded hover:bg-blue-700"
          &gt;
            Create Product
          &lt;/button&gt;
        &lt;/form&gt;
      )}
    &lt;/div&gt;
  );
}
</code></pre>
<p>This component only renders after the merchant has completed onboarding (the <code>if (!accountId || needsOnboarding) return null</code> check at the top). It toggles a form where the merchant enters a product name, description, and price in cents. When submitted, it calls your <code>/api/create-product</code> endpoint, which creates both the product and its price on the merchant’s connected Stripe account.</p>
<p>The price field uses cents because that is what Stripe expects. So if a merchant wants to sell a product for $25.00, they enter 2500. In a production app, you would add a friendlier input that lets merchants type $25.00 and converts it to cents automatically.</p>
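<p>That conversion is a small helper. The function name and validation below are an illustrative sketch, not part of the tutorial's codebase:</p>
<pre><code class="language-typescript">// Hypothetical helper: convert a merchant-entered dollar amount like "$25.00"
// into the integer cents Stripe expects.
function dollarsToCents(input: string): number {
  // Strip "$", commas, and whitespace, then parse.
  const value = Number(input.replace(/[$,\s]/g, ''));
  if (!Number.isFinite(value)) {
    throw new Error(`Invalid price: ${input}`);
  }
  // Round to avoid floating-point drift (19.99 * 100 is 1998.9999999999998).
  return Math.round(value * 100);
}
</code></pre>
<p>With something like this in place, the form could accept <code>$25.00</code> and submit <code>2500</code> to the backend.</p>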
<h2 id="heading-how-to-build-the-main-page"><strong>How to Build the Main Page</strong></h2>
<p>Finally, put it all together in <code>app/page.tsx</code>:</p>
<pre><code class="language-typescript">'use client';
import { AccountProvider } from '@/contexts/AccountContext';
import ConnectOnboarding from '@/components/ConnectOnboarding';
import Products from '@/components/Products';
import ProductForm from '@/components/ProductForm';
export default function Home() {
  return (
    &lt;AccountProvider&gt;
      {' '}
      &lt;main className="max-w-6xl mx-auto p-8"&gt;
        &lt;h1 className="text-3xl font-bold mb-8"&gt; Marketplace Dashboard &lt;/h1&gt;
        &lt;ConnectOnboarding /&gt;
        &lt;ProductForm /&gt;
        &lt;Products /&gt;
      &lt;/main&gt;
    &lt;/AccountProvider&gt;
  );
}
</code></pre>
<h2 id="heading-how-to-test-the-full-flow"><strong>How to Test the Full Flow</strong></h2>
<p>Start both servers:</p>
<pre><code class="language-shell"># Terminal 1 - Backend
cd server
npm run dev

# Terminal 2 - Frontend
cd client
npm run dev

# Terminal 3 - Stripe webhook listener
stripe listen --forward-to localhost:4242/api/webhook
</code></pre>
<p>Now test the complete flow:</p>
<ol>
<li><p>Go to <a href="http://localhost:3000">http://localhost:3000</a> and enter an email to create a merchant account.</p>
</li>
<li><p>Click "Complete Onboarding" and fill out Stripe’s test onboarding form. Use test data like 000-000-0000 for the phone number and 0000 for the last four digits of SSN.</p>
</li>
<li><p>Wait a few seconds for the account status to update. Once charges are active, you can add products.</p>
</li>
<li><p>Create a product using the product form (set the price in cents — for example, 2500 for $25.00).</p>
</li>
<li><p>Click "Buy Now" on a product to start the checkout flow.</p>
</li>
<li><p>On Stripe’s checkout page, use the test card number 4242 4242 4242 4242 with any future expiry date and any CVC.</p>
</li>
<li><p>Check your terminal — you should see the webhook event confirming the payment.</p>
</li>
<li><p>Check the Stripe Dashboard to see the payment, the application fee, and the transfer to the connected account.</p>
</li>
</ol>
<h2 id="heading-how-the-payment-split-works"><strong>How the Payment Split Works</strong></h2>
<p>Here is exactly what happens when a customer pays $25.00 for a product:</p>
<ol>
<li><p>The customer pays $25.00 on Stripe’s checkout page.</p>
</li>
<li><p>Stripe deducts its processing fee (approximately 2.9% + $0.30 for US cards).</p>
</li>
<li><p>Your platform takes the application fee you set ($1.23 in our example).</p>
</li>
<li><p>The remaining amount is transferred to the merchant’s connected Stripe account.</p>
</li>
<li><p>The merchant can withdraw their funds to their bank account from the Stripe Dashboard.</p>
</li>
</ol>
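<p>As rough arithmetic, the split for that $25.00 charge looks like this. Stripe's actual fee schedule varies by country, card type, and account, so treat these numbers as an approximation:</p>
<pre><code class="language-typescript">// Approximate split for a $25.00 charge, in cents.
const amount = 2500;                               // what the customer pays
const stripeFee = Math.round(amount * 0.029) + 30; // roughly 2.9% + $0.30 = 103
const applicationFee = 123;                        // the $1.23 platform fee
const merchantNet = amount - stripeFee - applicationFee;
// 2500 - 103 - 123 = 2274, so the merchant receives about $22.74
</code></pre>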
<p>You control the application fee in the checkout route. In a production marketplace, you would calculate this as a percentage of the transaction. For example, to take a 10% fee:</p>
<pre><code class="language-plaintext">onst applicationFee = Math.round(
&nbsp; (price.unit_amount ?? 0) * 0.1
);
</code></pre>
<h2 id="heading-next-steps"><strong>Next Steps</strong></h2>
<p>You now have a working marketplace. Here are improvements to consider for production:</p>
<ul>
<li><p>Add authentication with NextAuth.js so merchants can securely log in and manage their accounts across sessions.</p>
</li>
<li><p>Add runtime validation with Zod to validate all request bodies before they reach Stripe.</p>
</li>
<li><p>Add image uploads for products using Cloudinary or AWS S3, then pass the image URL to Stripe’s product metadata.</p>
</li>
<li><p>Build separate merchant and customer views. Right now the app combines both experiences on one page.</p>
</li>
<li><p>Deploy your backend to Railway or Render and your frontend to Vercel. Update the webhook URL in your Stripe Dashboard to point to your production server.</p>
</li>
</ul>
<p>You can find the complete source code for this tutorial on GitHub: <a href="https://github.com/michaelokolo/marketplace">https://github.com/michaelokolo/marketplace</a></p>
<h2 id="heading-acknowledgements"><strong>Acknowledgements</strong></h2>
<p>Some API usage patterns in this tutorial are inspired by examples from the <a href="https://docs.stripe.com">official Stripe documentation</a>. These examples were adapted to demonstrate how to build a complete multi-vendor marketplace architecture.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this handbook, you built a complete online marketplace where merchants can onboard through Stripe Connect, create products stored directly in Stripe, and receive payments from customers — all without a traditional database.</p>
<p>You learned how to use Stripe’s V2 Accounts API for merchant onboarding, create products and prices on connected accounts, build a checkout flow that handles both one-time payments and subscriptions, listen for payment events with webhooks, and give customers a billing portal to manage their subscriptions.</p>
<p>The key insight is that Stripe Connect handles the hardest parts of running a marketplace — payment splitting, tax compliance, identity verification, and fund transfers. Your job is to build a great user experience on top of it.</p>
<p>If you found this tutorial helpful, share it with someone who is learning to build full-stack applications. Happy coding!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up OpenClaw and Design an A2A Plugin Bridge ]]>
                </title>
                <description>
                    <![CDATA[ OpenClaw is getting attention because it turns a popular AI idea into something you can actually run yourself. Instead of opening one more browser tab, you run a Gateway on your own machine or server  ]]>
                </description>
                <link>https://www.freecodecamp.org/news/openclaw-a2a-plugin-architecture-guide/</link>
                <guid isPermaLink="false">69d542ca5da14bc70e7c1559</guid>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Open Source ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ software architecture ]]>
                    </category>
                
                    <category>
                        <![CDATA[ APIs ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Nataraj Sundar ]]>
                </dc:creator>
                <pubDate>Tue, 07 Apr 2026 17:45:46 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/4be03b02-d128-49e9-afcb-fea0f771e746.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>OpenClaw is getting attention because it turns a popular AI idea into something you can actually run yourself. Instead of opening one more browser tab, you run a Gateway on your own machine or server and connect it to communication tools you already use.</p>
<p>That matters because OpenClaw is self-hosted, multi-channel, open source, and built around agent workflows such as sessions, tools, plugins, and multi-agent routing. It feels less like a toy chatbot and more like an operator-controlled agent runtime.</p>
<p>In this guide, you'll do three things. First, you'll learn what OpenClaw is and why developers are paying attention to it. Second, you'll get it running the beginner-friendly way through the dashboard. Third, you'll walk through an original design contribution: a proposed OpenClaw-to-A2A plugin architecture and a <a href="https://github.com/natarajsundar/openclaw-a2a-secure-agent-runtime"><code>proof-of-concept</code></a> relay that shows how OpenClaw’s session model could map to the A2A protocol.</p>
<p>That last part is important, so I want to frame it carefully. The A2A integration in this article is <strong>not</strong> presented as a built-in OpenClaw feature. It's a documented architecture proposal built on top of the extension points OpenClaw already exposes.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>This guide is beginner-friendly for OpenClaw itself, but it assumes a few basics so you can follow the architecture and proof-of-concept sections comfortably.</p>
<p>Before you continue, you should be familiar with:</p>
<ul>
<li><p>Basic JavaScript or Node.js (reading and running scripts)</p>
</li>
<li><p>How HTTP APIs work (requests, responses, JSON payloads)</p>
</li>
<li><p>Using a terminal to run commands</p>
</li>
<li><p>High-level concepts like services, APIs, or microservices</p>
</li>
</ul>
<p>You don't need prior experience with OpenClaw or A2A. The setup steps walk through everything you need to get started.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ol>
<li><p><a href="#heading-what-openclaw-is">What OpenClaw Is</a></p>
</li>
<li><p><a href="#heading-why-developers-are-paying-attention-to-openclaw">Why Developers Are Paying Attention to OpenClaw</a></p>
</li>
<li><p><a href="#heading-what-the-a2a-protocol-is">What the A2A Protocol Is</a></p>
</li>
<li><p><a href="#heading-how-openclaw-and-a2a-relate">How OpenClaw and A2A Relate</a></p>
</li>
<li><p><a href="#heading-what-you-need-before-you-start">What You Need Before You Start</a></p>
</li>
<li><p><a href="#heading-step-1-install-openclaw">Install OpenClaw</a></p>
</li>
<li><p><a href="#heading-step-2-run-the-onboarding-wizard">Run the Onboarding Wizard</a></p>
</li>
<li><p><a href="#heading-step-3-check-the-gateway-and-open-the-dashboard">Check the Gateway and Open the Dashboard</a></p>
</li>
<li><p><a href="#heading-step-4-use-openclaw-as-a-private-coding-assistant">Use OpenClaw as a Private Coding Assistant</a></p>
</li>
<li><p><a href="#heading-step-5-understand-multi-agent-routing">Understand Multi Agent Routing</a></p>
</li>
<li><p><a href="#heading-where-a2a-could-fit-later">Where A2A Could Fit Later</a></p>
</li>
<li><p><a href="#heading-a-proposed-openclaw-to-a2a-plugin-architecture">A Proposed OpenClaw to A2A Plugin Architecture</a></p>
</li>
<li><p><a href="#heading-build-the-proof-of-concept-relay">Build the Proof of Concept Relay</a></p>
</li>
<li><p><a href="#heading-how-the-proof-of-concept-maps-to-a-real-openclaw-plugin">How the Proof of Concept Maps to a Real OpenClaw Plugin</a></p>
</li>
<li><p><a href="#heading-security-notes-before-you-go-further">Security Notes Before You Go Further</a></p>
</li>
<li><p><a href="#heading-final-thoughts">Final Thoughts</a></p>
</li>
</ol>
<h2 id="heading-what-openclaw-is">What OpenClaw Is</h2>
<p>According to the <a href="https://docs.openclaw.ai/">official docs</a>, OpenClaw is a self-hosted gateway that connects chat apps like WhatsApp, Telegram, Discord, iMessage, and a browser dashboard to AI agents.</p>
<p>That wording is useful because it tells you where OpenClaw sits in the stack. It's not just a model wrapper. It's a Gateway that handles sessions, routing, and app connections, while agents, tools, plugins, and providers do the actual work.</p>
<p>Here is the simplest mental model:</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/ad5f3295-8fdf-4f9c-8488-f69808850295.png" alt="Diagram showing OpenClaw architecture where multiple chat apps and a browser dashboard connect to a central Gateway, which routes requests to different agents that use model providers and tools." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>If you're new to the project, this is the practical way to think about it:</p>
<ul>
<li><p>your chat apps are the front door</p>
</li>
<li><p>the Gateway is the traffic and control layer</p>
</li>
<li><p>the agent is the reasoning layer</p>
</li>
<li><p>the model provider and tools are what let the agent actually do work</p>
</li>
</ul>
<p>That's one reason OpenClaw feels different from a normal browser-only assistant.</p>
<h2 id="heading-why-developers-are-paying-attention-to-openclaw">Why Developers Are Paying Attention to OpenClaw</h2>
<p>OpenClaw is getting a lot of attention for a few reasons.</p>
<p>The first reason is control. The docs position OpenClaw as self-hosted and multi-channel, which means you can run it on your own machine or server instead of depending on a fully hosted assistant.</p>
<p>The second reason is that OpenClaw already looks like an agent platform. The docs talk about sessions, plugins, tools, skills, multi-agent routing, and ACP-backed external coding harnesses. That's a much richer story than “ask a model a question in a web page.”</p>
<p>The third reason is workflow fit. A lot of people don't want another inbox. They want an assistant that can live in the tools they already check every day.</p>
<p>There's also a broader industry trend behind the hype. Developers are actively looking for ways to connect multiple agents and multiple tools without giving up visibility into what's happening. OpenClaw sits directly in that conversation.</p>
<h2 id="heading-what-the-a2a-protocol-is">What the A2A Protocol Is</h2>
<p>A2A, short for Agent2Agent, is an open protocol for communication between agent systems. The <a href="https://a2a-protocol.org/latest/specification/">A2A specification</a> says its purpose is to help independent agent systems discover each other, negotiate interaction modes, manage collaborative tasks, and exchange information without exposing internal memory, tools, or proprietary logic.</p>
<p>That last point matters. A2A is about interoperability between agent systems, not about exposing all of one agent's internals to another.</p>
<p>A2A introduces a few core concepts that are worth learning early:</p>
<ul>
<li><p><strong>Agent Card</strong>: a JSON description of the remote agent, its URL, skills, capabilities, and auth requirements</p>
</li>
<li><p><strong>Task</strong>: the main unit of remote work</p>
</li>
<li><p><strong>Artifact</strong>: the output of a task</p>
</li>
<li><p><strong>Context ID</strong>: a stable interaction boundary across multiple related turns</p>
</li>
</ul>
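<p>To make the Agent Card concept concrete, here is a trimmed example written as a TypeScript object. The field names follow the A2A specification at the time of writing — consult the spec for the authoritative schema — and the agent, URL, and skill shown are invented:</p>
<pre><code class="language-typescript">// Illustrative A2A Agent Card. Field names follow the A2A specification,
// but this particular agent, its URL, and its skill are made up.
const agentCard = {
  name: 'Research Specialist',
  description: 'Summarizes papers and answers literature questions.',
  url: 'https://agents.example.com/a2a',
  version: '1.0.0',
  capabilities: { streaming: true, pushNotifications: false },
  defaultInputModes: ['text/plain'],
  defaultOutputModes: ['text/plain'],
  skills: [
    {
      id: 'summarize-paper',
      name: 'Summarize a paper',
      description: 'Produce a structured summary of a linked paper.',
      tags: ['research', 'summarization'],
    },
  ],
};
</code></pre>
<p>A client reads a card like this to decide whether the remote agent has a skill worth delegating to, and which URL and auth scheme to use when sending it a task.</p>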
<p>A2A tasks follow a fairly clean lifecycle:</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/3b5a43e8-dabd-45e3-bff1-0081e2b37e0d.png" alt="State diagram illustrating the A2A task lifecycle including submitted, working, input required, completed, failed, rejected, and canceled states." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>The A2A docs also explain that A2A and MCP are complementary, not competing. A2A is for agent-to-agent collaboration. MCP is for agent-to-tool communication.</p>
<p>That distinction is useful when you compare A2A with OpenClaw, because OpenClaw already has strong local tool and session concepts.</p>
<h2 id="heading-how-openclaw-and-a2a-relate">How OpenClaw and A2A Relate</h2>
<p>OpenClaw and A2A are not the same thing, but they line up in interesting ways.</p>
<p>OpenClaw already documents several features that point in a multi-agent direction:</p>
<ul>
<li><p><a href="https://docs.openclaw.ai/concepts/multi-agent/">multi-agent routing</a> for multiple isolated agents in one running Gateway</p>
</li>
<li><p><a href="https://docs.openclaw.ai/concepts/session-tool/">session tools</a> such as <code>sessions_send</code> and <code>sessions_spawn</code></p>
</li>
<li><p>a <a href="https://docs.openclaw.ai/tools/plugin/">plugin system</a> that can register tools, HTTP routes, Gateway RPC methods, and background services</p>
</li>
<li><p><a href="https://docs.openclaw.ai/tools/acp-agents/">ACP support</a> and the <a href="https://docs.openclaw.ai/cli/acp"><code>openclaw acp</code> bridge</a> for external coding clients</p>
</li>
</ul>
<p>But it's still important to stay precise here.</p>
<p>OpenClaw documents ACP, plugins, and local multi-agent coordination today. The docs I checked do <strong>not</strong> describe native A2A support as a first-class built-in capability.</p>
<p>That means the honest claim is this:</p>
<p><strong>OpenClaw can be meaningfully connected to A2A in theory because the architectural pieces line up, but the A2A bridge still has to be built.</strong></p>
<h3 id="heading-acp-versus-a2a">ACP versus A2A</h3>
<p>ACP and A2A solve different problems.</p>
<p>ACP in OpenClaw today is about bridging an IDE or coding client to a Gateway-backed session.</p>
<p>A2A is about one agent system talking to another agent system across a protocol boundary.</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/9790f239-528c-422f-bbc5-3e82c7f1a171.png" alt="Diagram showing A2A interaction where an OpenClaw agent communicates through a plugin to discover a remote agent via an Agent Card and send tasks for execution." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/c4d4279b-3099-4c1b-92b6-3eaf817a6e84.png" alt="Diagram showing ACP flow where an IDE or coding client connects through an OpenClaw ACP bridge to a Gateway-backed session." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>That difference is one reason I prefer the phrase <strong>plugin bridge</strong> here instead of <strong>native A2A support</strong>.</p>
<h2 id="heading-what-you-need-before-you-start">What You Need Before You Start</h2>
<p>The easiest first run does <strong>not</strong> require WhatsApp, Telegram, or Discord.</p>
<p>The OpenClaw onboarding docs say the fastest first chat is the dashboard. That makes this a much more approachable beginner setup.</p>
<p>Before you start, you'll need:</p>
<ol>
<li><p>Node 24 if possible, or Node 22.16+ for compatibility</p>
</li>
<li><p>an API key for the model provider you want to use</p>
</li>
<li><p>If you're on Windows, WSL2 is the recommended path for the full experience. Native Windows works for core CLI and Gateway flows, but the docs call out caveats and position WSL2 as the more stable setup.</p>
</li>
<li><p>about five minutes for the first dashboard-based run</p>
</li>
</ol>
<h2 id="heading-step-1-install-openclaw">Step 1: Install OpenClaw</h2>
<p>The official getting-started page recommends the installer script.</p>
<p>On macOS, Linux, or WSL2, run:</p>
<pre><code class="language-bash">curl -fsSL https://openclaw.ai/install.sh | bash
</code></pre>
<p>On Windows PowerShell, the docs show this:</p>
<pre><code class="language-powershell">iwr -useb https://openclaw.ai/install.ps1 | iex
</code></pre>
<p>If you're on Windows, the platform docs recommend installing WSL2 first:</p>
<pre><code class="language-powershell">wsl --install
</code></pre>
<p>Then open Ubuntu and continue with the Linux commands there.</p>
<h2 id="heading-step-2-run-the-onboarding-wizard">Step 2: Run the Onboarding Wizard</h2>
<p>Once the CLI is installed, run the onboarding wizard.</p>
<pre><code class="language-bash">openclaw onboard --install-daemon
</code></pre>
<p>The onboarding wizard is the recommended path in the docs. It configures auth, gateway settings, optional channels, skills, and workspace defaults in one guided flow.</p>
<p>The most beginner-friendly choice is to keep the first run simple. Don't worry about chat apps yet. Get the local Gateway working first.</p>
<h2 id="heading-step-3-check-the-gateway-and-open-the-dashboard">Step 3: Check the Gateway and Open the Dashboard</h2>
<p>After onboarding, verify that the Gateway is running.</p>
<pre><code class="language-bash">openclaw gateway status
</code></pre>
<p>Then open the dashboard:</p>
<pre><code class="language-bash">openclaw dashboard
</code></pre>
<p>The docs call this the fastest first chat because it avoids channel setup. It's also the safest way to start, because the dashboard is local and the OpenClaw docs clearly say the Control UI is an admin surface and should not be exposed publicly.</p>
<p>The beginner setup flow looks like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/eab78250-65d6-4d97-be3d-bf7167b9099e.png" alt="Sequence diagram showing OpenClaw setup flow from installation and onboarding to starting the Gateway and opening the dashboard for the first chat." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>If you can chat in the dashboard, your day-zero setup is working.</p>
<h2 id="heading-step-4-use-openclaw-as-a-private-coding-assistant">Step 4: Use OpenClaw as a Private Coding Assistant</h2>
<p>The best first use case is not to drop OpenClaw into a public group chat.</p>
<p>Use it as a private coding assistant in the dashboard.</p>
<p>For example, try a prompt like this:</p>
<blockquote>
<p>I am building a small Node.js utility that reads Markdown files and generates a table of contents. Turn this idea into a project plan, a README outline, and the first five implementation tasks.</p>
</blockquote>
<p>That kind of prompt is ideal for a first run because it gives you something concrete back right away.</p>
<p>You can also use it to:</p>
<ol>
<li><p>turn rough notes into a plan,</p>
</li>
<li><p>summarize a bug report into action items,</p>
</li>
<li><p>draft a README,</p>
</li>
<li><p>propose a folder structure, or</p>
</li>
<li><p>write a safe first implementation checklist.</p>
</li>
</ol>
<p>That is already enough to make OpenClaw useful before you touch any advanced protocol work.</p>
<h2 id="heading-step-5-understand-multi-agent-routing">Step 5: Understand Multi Agent Routing</h2>
<p>Once the basic setup is working, it helps to understand OpenClaw’s local multi-agent model.</p>
<p>The docs describe multi-agent routing as a way to run multiple isolated agents in one Gateway, with separate workspaces, state directories, and sessions.</p>
<p>That means you can imagine setups like this:</p>
<ul>
<li><p>a personal assistant</p>
</li>
<li><p>a coding assistant</p>
</li>
<li><p>a research assistant</p>
</li>
<li><p>an alerts assistant</p>
</li>
</ul>
<p>OpenClaw already has a model for that:</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/c640a7c4-0421-4513-a2c2-658916504e3b.png" alt="Diagram illustrating OpenClaw multi-agent routing where incoming messages are matched to different agents such as main, coding, and alerts, each with separate sessions." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>You don't need to set this up on day one.</p>
<p>But it matters for the A2A discussion, because once you understand how OpenClaw routes work between local agents, it becomes much easier to think about routing work to <strong>remote</strong> agents through a protocol like A2A.</p>
<h2 id="heading-where-a2a-could-fit-later">Where A2A Could Fit Later</h2>
<p>A2A could fit into OpenClaw in two broad ways.</p>
<h3 id="heading-option-1-openclaw-as-an-a2a-client">Option 1: OpenClaw as an A2A Client</h3>
<p>In this model, OpenClaw stays your personal edge assistant.</p>
<p>It receives a request from the dashboard or a chat app, decides the task needs a specialist, discovers a remote A2A agent through an Agent Card, sends the task, waits for updates or artifacts, and translates the result back into a normal OpenClaw reply.</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/99a2e611-54ac-4c0f-8f8f-c1ce3246bb96.png" alt="Diagram showing OpenClaw acting as an A2A client, delegating tasks from a local session to a remote agent via an Agent Card and returning results to the user." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>This is the cleaner story for a personal assistant. OpenClaw stays the front door, and A2A becomes a delegation path behind the scenes.</p>
<h3 id="heading-option-2-openclaw-as-an-a2a-server">Option 2: OpenClaw as an A2A Server</h3>
<p>In this model, OpenClaw exposes some of its own capabilities to other agents.</p>
<p>A plugin could theoretically publish an A2A Agent Card, advertise a narrow skill set, accept A2A tasks, and map those tasks into OpenClaw sessions or sub-agent runs.</p>
<p>That's technically plausible because the plugin system can register HTTP routes, tools, Gateway methods, and background services.</p>
<p>It's also the riskier direction for a personal assistant, which is why I think <strong>client-first</strong> is the right starting point.</p>
<h2 id="heading-a-proposed-openclaw-to-a2a-plugin-architecture">A Proposed OpenClaw to A2A Plugin Architecture</h2>
<p>This section is my original contribution to this article.</p>
<p>I think the cleanest first architecture is <strong>not</strong> a full bidirectional bridge. It's a narrow outbound delegation plugin that lets OpenClaw call a small allowlist of remote A2A agents.</p>
<p>The design goal is simple:</p>
<p><strong>Reuse OpenClaw for user-facing conversations and local tool access, but use A2A only when a remote specialist agent is the best place to do the work.</strong></p>
<p>Here is the architecture I would start with:</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/e88f06dd-f108-48b2-a9ee-b74eac6b733b.png" alt="Architecture diagram of an OpenClaw-to-A2A plugin showing components such as delegation tool, policy engine, Agent Card cache, session-to-task mapper, task poller, and remote A2A agent." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<h3 id="heading-why-this-design-is-a-good-fit-for-openclaw">Why This Design is a Good Fit for OpenClaw</h3>
<p>This proposal is grounded in extension points OpenClaw already documents.</p>
<p>A plugin can register:</p>
<ul>
<li><p>an <strong>agent tool</strong> for delegation,</p>
</li>
<li><p>a <strong>Gateway method</strong> for health and diagnostics,</p>
</li>
<li><p>an <strong>HTTP route</strong> for future callbacks or webhook verification, and</p>
</li>
<li><p>a <strong>background service</strong> for cache warming, task subscriptions, or cleanup.</p>
</li>
</ul>
<p>That means the bridge doesn't have to modify OpenClaw core to be credible.</p>
<h3 id="heading-the-mapping-table">The Mapping Table</h3>
<p>The most important design decision is how to map OpenClaw’s session model to A2A’s task model.</p>
<p>Here is the mapping I recommend:</p>
<table>
<thead>
<tr>
<th>OpenClaw concept</th>
<th>A2A concept</th>
<th>Why this mapping works</th>
</tr>
</thead>
<tbody><tr>
<td><code>sessionKey</code></td>
<td><code>contextId</code></td>
<td>A single OpenClaw conversation should keep a stable remote context across related delegated turns</td>
</tr>
<tr>
<td>one delegated remote call</td>
<td>one <code>Task</code></td>
<td>each remote specialization request becomes a discrete unit of work</td>
</tr>
<tr>
<td>plugin tool call</td>
<td><code>SendMessage</code></td>
<td>the delegation tool is the natural point where the local agent crosses the protocol boundary</td>
</tr>
<tr>
<td>remote output</td>
<td><code>Artifact</code></td>
<td>A2A wants task outputs returned as artifacts rather than chat-only replies</td>
</tr>
<tr>
<td>plugin HTTP route</td>
<td>callback or future push handler</td>
<td>gives you a place to verify webhooks if you later adopt async push</td>
</tr>
<tr>
<td>Gateway method</td>
<td>status endpoint</td>
<td>gives operators a direct way to inspect relay health without going through the model</td>
</tr>
<tr>
<td>background service</td>
<td>polling or cache work</td>
<td>keeps asynchronous and maintenance work out of the tool call path</td>
</tr>
</tbody></table>
<p>This is the key architectural claim in the article:</p>
<p><strong>Treat the OpenClaw session as the long-lived conversational boundary, and treat each remote A2A task as one delegated execution inside that boundary.</strong></p>
<p>That preserves both sides cleanly.</p>
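<p>The <code>sessionKey</code>-to-<code>contextId</code> row is the one worth sketching in code. Here is a minimal in-memory version of that mapping; the names are illustrative, and the repo's <code>session-task-map.mjs</code> may shape this differently:</p>
<pre><code class="language-js">// Hypothetical in-memory session-to-context map, keyed by the local
// sessionKey plus the remote agent base URL.
const mappings = new Map();

function mappingKey(sessionKey, remoteBaseUrl) {
    return sessionKey + "::" + remoteBaseUrl;
}

// Record one delegated turn: the contextId stays stable across turns,
// while each turn gets a fresh taskId.
function recordDelegation(sessionKey, remoteBaseUrl, contextId, taskId) {
    mappings.set(mappingKey(sessionKey, remoteBaseUrl), { contextId, taskId });
}

// Reuse the remote contextId for later turns in the same local session.
function latestForSession(sessionKey, remoteBaseUrl) {
    return mappings.get(mappingKey(sessionKey, remoteBaseUrl)) ?? null;
}
</code></pre>
<p>The point of the sketch is the invariant, not the storage: one local session maps to one remote context, and every delegated turn inside it is a new task.</p>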
<h3 id="heading-the-design-in-one-sentence">The Design in One Sentence</h3>
<p>The <code>a2a_delegate</code> tool should:</p>
<ol>
<li><p>resolve an allowlisted remote Agent Card,</p>
</li>
<li><p>reuse an existing A2A <code>contextId</code> for the current <code>sessionKey</code> when possible,</p>
</li>
<li><p>create a fresh remote <code>Task</code> for the new delegated turn,</p>
</li>
<li><p>normalize remote artifacts back into a simple local answer, and</p>
</li>
<li><p>never expose the whole OpenClaw Gateway directly to the public internet.</p>
</li>
</ol>
<p>I like this design because it is incremental, testable, and consistent with OpenClaw’s personal-assistant trust model.</p>
<h2 id="heading-build-the-proof-of-concept-relay">Build the Proof of Concept Relay</h2>
<p>To make the architecture concrete, I built a small proof-of-concept relay.</p>
<p><a href="https://github.com/natarajsundar/openclaw-a2a-secure-agent-runtime">https://github.com/natarajsundar/openclaw-a2a-secure-agent-runtime</a></p>
<p>It's intentionally small. It doesn't try to become a full production plugin. Instead, it proves the hardest conceptual part of the bridge: how to map one OpenClaw session to a reusable A2A context while creating a fresh A2A task per delegated turn.</p>
<p>Here's the repository layout:</p>
<pre><code class="language-plaintext">openclaw-a2a-secure-agent-runtime/
├── README.md
├── package.json
├── examples/
│   └── openclaw-plugin-entry.example.ts
├── src/
│   ├── a2a-client.mjs
│   ├── agent-card-cache.mjs
│   ├── demo.mjs
│   ├── mock-remote-agent.mjs
│   ├── openclaw-a2a-relay.mjs
│   ├── session-task-map.mjs
│   └── utils.mjs
└── test/
    └── relay.test.mjs
</code></pre>
<p>The PoC does six things:</p>
<ol>
<li><p>fetches a remote Agent Card from <code>/.well-known/agent-card.json</code>,</p>
</li>
<li><p>caches it with simple <code>ETag</code> revalidation,</p>
</li>
<li><p>records local <code>sessionKey</code> to remote <code>contextId</code> mappings,</p>
</li>
<li><p>sends an A2A <code>SendMessage</code> request,</p>
</li>
<li><p>polls <code>GetTask</code> until the task finishes, and</p>
</li>
<li><p>converts the remote artifact into a local text answer.</p>
</li>
</ol>
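<p>Step 2 is worth a small sketch. Assuming the remote server returns an <code>ETag</code> header and honors <code>If-None-Match</code>, a minimal Agent Card cache could look like this (the names are illustrative, not the repo's actual API):</p>
<pre><code class="language-js">// Hypothetical Agent Card cache with ETag revalidation.
const cardCache = new Map();

async function fetchAgentCard(baseUrl) {
    const url = baseUrl + "/.well-known/agent-card.json";
    const cached = cardCache.get(url);

    const headers = {};
    if (cached?.etag) {
        headers["If-None-Match"] = cached.etag;
    }

    const response = await fetch(url, { headers });
    if (cached) {
        if (response.status === 304) {
            // Unchanged since the last fetch: serve the cached card.
            return cached.card;
        }
    }

    const card = await response.json();
    cardCache.set(url, { card, etag: response.headers.get("ETag") });
    return card;
}
</code></pre>
<p>Revalidation matters here because Agent Cards can change as a remote agent evolves, and you want to notice that without refetching the card on every delegation.</p>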
<h3 id="heading-run-the-demo">Run the Demo</h3>
<p>The repo uses only built-in Node.js modules.</p>
<pre><code class="language-shell">cd openclaw-a2a-secure-agent-runtime
npm run demo
</code></pre>
<p>The demo spins up a mock remote A2A server, delegates one task, delegates a second task from the <strong>same</strong> local session, and shows that the same remote <code>contextId</code> is reused.</p>
<h3 id="heading-the-core-relay-idea">The Core Relay Idea</h3>
<p>This is the important logic in plain English:</p>
<ol>
<li><p>look up the most recent remote mapping for the current OpenClaw <code>sessionKey</code></p>
</li>
<li><p>reuse the old <code>contextId</code> if one exists</p>
</li>
<li><p>create a fresh A2A <code>Task</code> for the new request</p>
</li>
<li><p>poll until that task becomes <code>TASK_STATE_COMPLETED</code></p>
</li>
<li><p>turn the returned artifact into a normal text result that OpenClaw can send back to the user</p>
</li>
</ol>
<p>That makes the bridge predictable.</p>
<p>Here's a shortened version of the relay logic:</p>
<pre><code class="language-js">const previous = await sessionTaskMap.latestForSession(sessionKey, remoteBaseUrl);
const contextId = previous?.contextId ?? crypto.randomUUID();

const sendResult = await client.sendMessage({
  text,
  contextId,
  metadata: {
    openclawSessionKey: sessionKey,
    requestedSkillId: skillId,
  },
});

let task = sendResult.task;
while (!isTerminalTaskState(task.status?.state)) {
  await sleep(pollIntervalMs);
  task = await client.getTask(task.id);
}

return {
  contextId,
  taskId: task.id,
  answer: taskArtifactsToText(task),
};
</code></pre>
<p>That's the heart of the design.</p>
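<p>The snippet above leans on three helpers it doesn't define. Here is one plausible way to fill them in. The state names follow the A2A task lifecycle, and note that failed, canceled, and rejected tasks must also end the polling loop, not just completed ones:</p>
<pre><code class="language-js">// Sketches of the helpers used by the relay snippet above.
const TERMINAL_STATES = new Set([
    "TASK_STATE_COMPLETED",
    "TASK_STATE_FAILED",
    "TASK_STATE_CANCELLED",
    "TASK_STATE_REJECTED",
]);

function isTerminalTaskState(state) {
    return TERMINAL_STATES.has(state);
}

function sleep(ms) {
    return new Promise(function (resolve) {
        setTimeout(resolve, ms);
    });
}

// Flatten the text parts from every returned artifact into one answer
// string that OpenClaw can hand back to the user.
function taskArtifactsToText(task) {
    const pieces = [];
    for (const artifact of task.artifacts ?? []) {
        for (const part of artifact.parts ?? []) {
            if (typeof part.text === "string") {
                pieces.push(part.text);
            }
        }
    }
    return pieces.join("\n");
}
</code></pre>
<p>In a production relay you would also add a timeout around the polling loop so a stuck remote task cannot block the tool call forever.</p>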
<h3 id="heading-why-this-repo-is-a-useful-proof-of-concept">Why This Repo is a Useful Proof of Concept</h3>
<p>A lot of “integration” articles stay too abstract. This repo avoids that problem in three ways.</p>
<p>First, it makes the session-to-context mapping explicit.</p>
<p>Second, it includes a mock remote A2A agent so you can test the flow without needing a large external setup.</p>
<p>Third, it includes a test that checks the most important invariant: repeated delegations from one local OpenClaw session reuse the same A2A context.</p>
<p>That is the piece I most wanted to make concrete, because it is where architecture turns into implementation.</p>
<h2 id="heading-how-the-proof-of-concept-maps-to-a-real-openclaw-plugin">How the Proof of Concept Maps to a Real OpenClaw Plugin</h2>
<p>The proof of concept is the relay core.</p>
<p>A real OpenClaw plugin would wrap that relay with four extension surfaces that the OpenClaw docs already describe.</p>
<h3 id="heading-1-a-delegation-tool">1: A Delegation Tool</h3>
<p>This is the main entry point.</p>
<p>A plugin would register an optional tool like <code>a2a_delegate</code> so the local agent can explicitly choose to delegate work.</p>
<p>That tool should be optional, not always-on, because remote delegation is a side effect and should be easy to disable.</p>
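<p>To make the boundary concrete, here is a hypothetical registration sketch. The real <code>registerTool</code> signature comes from the OpenClaw plugin docs, so treat this shape as an illustration of the delegation boundary rather than the actual interface:</p>
<pre><code class="language-js">// Hypothetical plugin wiring: all names below are assumptions.
function registerDelegationTool(api, relay, allowlist) {
    api.registerTool({
        name: "a2a_delegate",
        description: "Delegate one task to an allowlisted remote A2A agent",
        optional: true, // keep remote delegation easy to disable
        async execute(input, context) {
            // The allowlist check happens here, before anything leaves
            // the local trust boundary.
            if (!allowlist.includes(input.remoteBaseUrl)) {
                throw new Error("remote agent is not on the allowlist");
            }
            return relay.delegate({
                sessionKey: context.sessionKey,
                remoteBaseUrl: input.remoteBaseUrl,
                skillId: input.skillId,
                text: input.text,
            });
        },
    });
}
</code></pre>
<p>The important design choice is that the allowlist check lives inside the tool itself, so no prompt or model decision can route work to an unlisted remote agent.</p>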
<h3 id="heading-2-a-gateway-method-for-diagnostics">2: A Gateway Method for Diagnostics</h3>
<p>A method like <code>a2a.status</code> would let you inspect whether the relay is healthy, which remote cards are cached, and whether any tasks are still being tracked.</p>
<p>That is much better than asking the model to “tell me if the bridge is healthy.”</p>
<h3 id="heading-3-a-plugin-http-route">3: A Plugin HTTP Route</h3>
<p>You may not need this on day one.</p>
<p>But once you move beyond polling and want push-style callbacks or webhook verification, a plugin route gives you the right boundary for that work.</p>
<h3 id="heading-4-a-background-service">4: A Background Service</h3>
<p>A small service is a clean place to do cache warming, cleanup, or later subscription handling.</p>
<p>That keeps the tool path focused on delegation instead of maintenance work.</p>
<p>If I were turning this into a real plugin package, I would sequence the work in this order:</p>
<ol>
<li><p>wrap the relay in <code>registerTool</code>,</p>
</li>
<li><p>add a small config schema with an allowlist of remote agents,</p>
</li>
<li><p>add <code>a2a.status</code>,</p>
</li>
<li><p>keep polling as the first async model,</p>
</li>
<li><p>add a callback route only if a real use case needs it.</p>
</li>
</ol>
<p>That is the most practical path from theory to a real extension.</p>
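<p>Step 2, the config schema, is what enforces the allowlist in practice. A minimal sketch might look like this (the field names are assumptions, not OpenClaw's real config schema, and the URL is a placeholder):</p>
<pre><code class="language-js">// Hypothetical allowlist-first plugin config.
const defaultConfig = {
    remoteAgents: [
        {
            name: "research-specialist",
            baseUrl: "https://research-agent.example", // placeholder URL
            allowedSkills: ["summarize", "search"],
        },
    ],
    pollIntervalMs: 1500,
    taskTimeoutMs: 120000,
};

// Both the agent and the specific skill must be allowlisted.
function isAllowed(config, baseUrl, skillId) {
    const agent = config.remoteAgents.find(function (candidate) {
        return candidate.baseUrl === baseUrl;
    });
    if (!agent) {
        return false;
    }
    return agent.allowedSkills.includes(skillId);
}
</code></pre>
<p>Scoping the allowlist to skills, not just hosts, means a remote agent that later advertises new capabilities doesn't silently gain them in your deployment.</p>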
<p>I tested the relay flow locally with the mock remote agent and confirmed that repeated delegations from the same local session reused the same remote <code>contextId</code>.</p>
<h2 id="heading-security-notes-before-you-go-further">Security Notes Before You Go Further</h2>
<p>This is the section you should not skip.</p>
<p>The OpenClaw security docs explicitly say the project assumes a <strong>personal assistant</strong> trust model: one trusted operator boundary per Gateway. They also say a shared Gateway for mutually untrusted or adversarial users is not the supported boundary model.</p>
<p>That has a direct consequence for A2A.</p>
<p>A2A is designed for communication across agent systems and organizational boundaries. That is powerful, but it is also a different threat model from a single private OpenClaw deployment.</p>
<p>So the safer design is <strong>not</strong> this:</p>
<ul>
<li><p>expose your personal OpenClaw Gateway publicly,</p>
</li>
<li><p>let arbitrary remote agents reach it,</p>
</li>
<li><p>and hope the tool boundaries are enough.</p>
</li>
</ul>
<p>The safer design is closer to this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/694ca88d5ac09a5d68c63854/5ab4460a-6c00-4880-a29c-ddc1db00b5fa.png" alt="Diagram illustrating separation between a private OpenClaw deployment and an external A2A interoperability boundary, highlighting secure delegation through a controlled relay." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>This diagram shows two separate trust boundaries.</p>
<p>On the left is your <strong>private OpenClaw deployment</strong>. This includes your Gateway, your sessions, your workspace, and any credentials or tools your agent can access. This boundary is designed for a single trusted operator.</p>
<p>On the right is the <strong>external A2A ecosystem</strong>, where remote agents live. These agents may belong to other teams or organizations and operate under different security assumptions.</p>
<p>The key idea is that communication between these two sides should happen through a <strong>controlled relay layer</strong>, not by directly exposing your OpenClaw Gateway. The relay enforces allowlists, limits what data is sent out, and ensures that remote agents cannot directly access your local tools or state.</p>
<p>This separation lets you experiment with agent interoperability while keeping your personal assistant environment safe.</p>
<p>In plain English, keep your personal assistant boundary private.</p>
<p>If you experiment with A2A, treat that as a <strong>separate exposure boundary</strong> with its own allowlists, auth, and operational controls.</p>
<p>That is why the proof-of-concept relay in this article starts with an explicit remote allowlist.</p>
<h3 id="heading-why-this-design-and-not-the-other-one">Why This Design and Not the Other One?</h3>
<p>A natural question is why this article proposes an <strong>outbound-only A2A bridge first</strong>, instead of immediately building a full bidirectional or server-style integration.</p>
<p>The short answer is that OpenClaw’s current design is centered around a <strong>personal assistant trust boundary</strong>, where one operator controls the Gateway, sessions, and tools. Introducing external agents into that environment requires careful control over what is exposed.</p>
<p>Starting with outbound delegation gives you a safer and more incremental path.</p>
<p>Outbound-only first means:</p>
<ul>
<li><p>preserving the personal-assistant trust boundary, so your local OpenClaw deployment remains private and operator-controlled</p>
</li>
<li><p>avoiding exposing the OpenClaw Gateway as a public A2A server before you have strong auth, policy, and monitoring in place</p>
</li>
<li><p>allowing you to test remote delegation patterns (Agent Cards, tasks, artifacts) without committing to full interoperability complexity</p>
</li>
<li><p>keeping OpenClaw as the user-facing control plane, while remote agents act as optional specialists</p>
</li>
</ul>
<p>This approach follows a common systems design pattern: start with <strong>controlled outbound integration</strong>, validate behavior and constraints, and only then consider expanding to inbound or bidirectional communication.</p>
<p>In practice, this means you can experiment with A2A safely, learn how the models fit together, and evolve the system without introducing unnecessary risk early on.</p>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>OpenClaw is worth learning because it gives you a self-hosted assistant that can live in the communication tools you already use.</p>
<p>The simplest beginner path is still the right one:</p>
<ol>
<li><p>install it,</p>
</li>
<li><p>run onboarding,</p>
</li>
<li><p>check the Gateway,</p>
</li>
<li><p>open the dashboard,</p>
</li>
<li><p>try one private workflow.</p>
</li>
</ol>
<p>That is already a real end-to-end setup.</p>
<p>A2A belongs in the conversation because it gives you a credible way to connect OpenClaw to remote specialist agents later.</p>
<p>But the most important thing in this article isn't the buzzword. It's the boundary design.</p>
<p>If you keep OpenClaw as the private user-facing edge and use a narrow plugin bridge for outbound delegation, the OpenClaw session model and the A2A task model can fit together cleanly.</p>
<p>That is the architectural idea I wanted to make concrete here.</p>
<h3 id="heading-diagram-attribution">Diagram Attribution</h3>
<p>All diagrams in this article were created by the author specifically for this guide.</p>
<h2 id="heading-further-reading">Further Reading</h2>
<ul>
<li><p><a href="https://docs.openclaw.ai/">OpenClaw docs home</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/start/getting-started">OpenClaw Getting Started</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/start/wizard">OpenClaw Onboarding Wizard</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/concepts/multi-agent/">OpenClaw Multi-Agent Routing</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/concepts/session-tool/">OpenClaw Session Tools</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/tools/plugin/">OpenClaw Plugin System</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/plugins/agent-tools">OpenClaw Plugin Agent Tools</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/cli/acp">OpenClaw ACP bridge</a></p>
</li>
<li><p><a href="https://docs.openclaw.ai/gateway/security">OpenClaw Security</a></p>
</li>
<li><p><a href="https://a2a-protocol.org/latest/specification/">A2A specification</a></p>
</li>
<li><p><a href="https://a2a-protocol.org/latest/topics/agent-discovery/">A2A Agent Discovery</a></p>
</li>
<li><p><a href="https://a2a-protocol.org/latest/topics/a2a-and-mcp/">A2A and MCP</a></p>
</li>
<li><p><a href="https://a2a-protocol.org/latest/definitions/">A2A protocol definition and schema</a></p>
</li>
<li><p><a href="https://a2a-protocol.org/latest/announcing-1.0/">A2A version 1.0 announcement</a></p>
</li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Set Up WebAuthn in Node.js for Passwordless Biometric Login ]]>
                </title>
                <description>
                    <![CDATA[ JWT auth feels clean until a stolen token still looks valid to your server. That's the real problem: a bearer token proves possession of a token, but it doesn't prove possession of a trusted device. I ]]>
                </description>
                <link>https://www.freecodecamp.org/news/set-up-webauthn-in-node-js-for-passwordless-biometric-login/</link>
                <guid isPermaLink="false">69bc51d6b238fd45a32f1cb0</guid>
                
                    <category>
                        <![CDATA[ #webauthn ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ biometric authentication ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Sumit Saha ]]>
                </dc:creator>
                <pubDate>Thu, 19 Mar 2026 19:43:18 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/8cc356c5-b011-4316-8236-a0443eb2cc03.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>JWT auth feels clean until a stolen token still looks valid to your server. That's the real problem: a bearer token proves possession of a token, but it doesn't prove possession of a trusted device. If an attacker gets a reusable token, replay starts to look like a normal login.</p>
<p>WebAuthn changes the shape of the system. The private key stays on the user's device. Your server stores a public key, a credential ID, and a counter. Each registration or login signs a fresh challenge. The browser, the authenticator, and your backend all take part in the ceremony.</p>
<p>This guide walks you through the full path in Node.js. You'll set up the backend, wire registration and login, store passkeys correctly, replace long-lived bearer auth with short server sessions, support backup devices, and add step-up verification for risky actions.</p>
<p><strong>Warning</strong>: WebAuthn only works in secure contexts. Use <code>localhost</code> for local development, and use HTTPS everywhere else.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-why-jwt-alone-falls-short">Why JWT Alone Falls Short</a></p>
</li>
<li><p><a href="#heading-what-webauthn-changes">What WebAuthn Changes</a></p>
</li>
<li><p><a href="#heading-initialize-the-project">Initialize the Project</a></p>
</li>
<li><p><a href="#heading-install-dependencies">Install Dependencies</a></p>
</li>
<li><p><a href="#heading-define-the-data-model">Define the Data Model</a></p>
</li>
<li><p><a href="#heading-build-the-server-foundation">Build the Server Foundation</a></p>
</li>
<li><p><a href="#heading-registration-ceremony">Registration Ceremony</a></p>
</li>
<li><p><a href="#heading-authentication-ceremony">Authentication Ceremony</a></p>
</li>
<li><p><a href="#heading-what-replaces-the-long-lived-jwt">What Replaces the Long-Lived JWT</a></p>
</li>
<li><p><a href="#heading-multi-device-and-recovery-logic">Multi-Device and Recovery Logic</a></p>
</li>
<li><p><a href="#heading-step-up-authentication-for-sensitive-actions">Step-Up Authentication for Sensitive Actions</a></p>
</li>
<li><p><a href="#heading-recap">Recap</a></p>
</li>
<li><p><a href="#heading-try-it-yourself">Try it Yourself</a></p>
</li>
<li><p><a href="#heading-final-words">Final Words</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow along and get the most out of this guide, you should have:</p>
<ul>
<li><p>Basic knowledge of JavaScript and Node.js.</p>
</li>
<li><p>Basic knowledge of TypeScript and Express.</p>
</li>
<li><p>A basic understanding of the frontend-to-backend flow. You should be able to follow <code>fetch()</code> requests from the browser to the server and back.</p>
</li>
<li><p>A modern browser and a passkey-capable authenticator, such as Touch ID, Face ID, Windows Hello, Android biometrics, or a security key.</p>
</li>
<li><p>Local testing on <code>localhost</code>. The demo uses <code>localhost</code> as the relying party ID and origin during development.</p>
</li>
<li><p>No prior WebAuthn knowledge required. The article explains the registration and authentication flow step by step.</p>
</li>
</ul>
<h2 id="heading-why-jwt-alone-falls-short">Why JWT Alone Falls Short</h2>
<p>JWTs are not the villain.</p>
<p>The weak point is the usual deployment pattern around JWTs. Teams often place long-lived tokens in places attackers love, then trust those tokens for too long.</p>
<p>The failure path usually looks like this:</p>
<ul>
<li><p>Your server issues a reusable bearer token.</p>
</li>
<li><p>The browser stores it.</p>
</li>
<li><p>Malware, XSS, session theft, or a fake login flow grabs it.</p>
</li>
<li><p>The attacker replays it.</p>
</li>
<li><p>Your backend sees a valid token and treats the request as real.</p>
</li>
</ul>
<p>That pattern falls apart fast on high-risk routes. Admin actions, money movement, payout approval, email change, API key creation, or destructive writes deserve stronger proof.</p>
<p>WebAuthn gives you stronger proof because the secret never leaves the authenticator.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/4cecbde6-69be-4790-8c1c-68fb28d3cc00.png" alt="JWT replay vs WebAuthn challenge flow diagram" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In the above diagram, the left side (reusable JWT path) shows the risk of reusable tokens. A server issues a JWT after login. The browser stores the token. An attacker steals the token and sends requests. The backend accepts the token until expiration. Replay becomes possible.</p>
<p>On the right side (WebAuthn path), the flow changes. The server sends a fresh challenge for each login. Your device signs the challenge using a private key stored inside the authenticator. The server verifies the signature before creating a short session.</p>
<p>The key point is simple: JWTs rely on a stored bearer secret, while WebAuthn relies on device-bound cryptographic proof created for a single challenge.</p>
<h2 id="heading-what-webauthn-changes">What WebAuthn Changes</h2>
<p>WebAuthn uses asymmetric cryptography.</p>
<p>The authenticator creates a key pair. The private key stays on the device. Your backend stores the public key and uses it later to verify signatures. During login, your server sends a fresh challenge. The device signs it, and your backend verifies the result against the stored public key.</p>
<p>That changes three things at once:</p>
<ul>
<li><p>The browser never receives a reusable password secret.</p>
</li>
<li><p>A stolen public key is useless for login.</p>
</li>
<li><p>Each ceremony depends on a fresh server challenge.</p>
</li>
</ul>
<p>On the web, passkeys ride on top of WebAuthn. A passkey might live on the local device, a synced platform account, or a physical security key. In practice, your app still deals with the same core objects: credential ID, public key, transports, counter, device type, and backup state.</p>
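<p>If the asymmetric idea is new, a tiny Node.js sketch may help. This is not the WebAuthn wire format, just the underlying sign-a-fresh-challenge pattern, shown with Ed25519 keys from <code>node:crypto</code>:</p>
<pre><code class="language-js">import { generateKeyPairSync, sign, verify, randomBytes } from "node:crypto";

// The authenticator creates the key pair; the server only ever stores publicKey.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Server side: issue a fresh random challenge for each ceremony.
const challenge = randomBytes(32);

// Device side: sign the challenge with the private key that never leaves it.
const signature = sign(null, challenge, privateKey);

// Server side: verify the signature against the stored public key.
const ok = verify(null, challenge, publicKey, signature);
</code></pre>
<p>A stolen <code>signature</code> is useless for the next login, because the server will issue a different challenge. That is the replay resistance the JWT comparison was pointing at.</p>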
<h2 id="heading-initialize-the-project">Initialize the Project</h2>
<p>So far, we've talked about why WebAuthn matters and how it changes the login experience. Now it's time to build a small project so you can see the whole flow working end to end.</p>
<p>In this demo, you'll create a simple Node.js app where a user can register a passkey, sign in with that passkey, and then access protected routes through a short-lived session. The idea here is not to build a polished full-stack product. The idea is to build the core backend flow clearly, so you can understand how registration, login, sessions, and step-up verification connect to each other.</p>
<p>Before we start, make sure Node.js and npm are installed on your machine. You can install Node.js from the official website, or use nvm if you prefer managing multiple Node versions.</p>
<pre><code class="language-shell">node -v
npm -v
</code></pre>
<p>The expected output is a Node LTS version and an npm version number.</p>
<p>Next, create a new project folder and initialize the basic structure:</p>
<pre><code class="language-shell">mkdir webauthn-node-demo
cd webauthn-node-demo
npm init -y
npx tsc --init
mkdir src
</code></pre>
<h2 id="heading-install-dependencies">Install Dependencies</h2>
<p>Now that your project is initialized, install the required packages.</p>
<ol>
<li><strong>TypeScript and tsx:</strong> TypeScript types the backend while <code>tsx</code> runs TypeScript files during development.</li>
</ol>
<pre><code class="language-shell">npm install -D typescript tsx @types/node
npx tsc -v
npx tsx --version
</code></pre>
<ol start="2">
<li><strong>Express and session management:</strong> Express handles the HTTP routes, and <code>express-session</code> stores short-lived server session state.</li>
</ol>
<pre><code class="language-shell">npm install express express-session @types/express @types/express-session
</code></pre>
<ol start="3">
<li><strong>SimpleWebAuthn:</strong> <code>@simplewebauthn/server</code> generates registration options and verifies responses. <code>@simplewebauthn/browser</code> starts browser-side flows.</li>
</ol>
<pre><code class="language-shell">npm install @simplewebauthn/server @simplewebauthn/browser
</code></pre>
<p>Open your <code>package.json</code> and update the "scripts" block to include these commands:</p>
<pre><code class="language-json">{
    "scripts": {
        "dev": "tsx watch src/app.ts",
        "build": "tsc",
        "start": "node dist/app.js"
    }
}
</code></pre>
<h2 id="heading-define-the-data-model">Define the Data Model</h2>
<p>Before we write the routes, we need to decide how this app will store passkeys.</p>
<p>With password-based login, you usually store a password hash. With WebAuthn, you store something different. After a user registers a passkey, your server needs to keep the credential data required to verify future login attempts. That includes things like the credential ID, public key, counter, and some metadata about the authenticator.</p>
<p>This is why the data model matters from the beginning. Registration and authentication both depend on this stored data, so it's worth making the structure clear before we move deeper into the flow.</p>
<p>A good way to think about it is this. A user can have one or more passkeys, and each passkey should be stored as its own record linked to that user.</p>
<p>Create a new file named <code>src/app.ts</code>. We'll build the backend in this file, and we'll start by defining the data model at the top.</p>
<pre><code class="language-typescript">// src/app.ts
type Passkey = {
    id: string;
    publicKey: Uint8Array;
    counter: number;
    deviceType: "singleDevice" | "multiDevice";
    backedUp: boolean;
    transports?: string[];
};

type User = {
    id: string;
    email: string;
    webAuthnUserID: Uint8Array;
    passkeys: Passkey[];
};

const users = new Map&lt;string, User&gt;();

function findUserByEmail(email: string) {
    return [...users.values()].find((user) =&gt; user.email === email);
}
</code></pre>
<p>What matters here:</p>
<ul>
<li><p><code>id</code> identifies the credential later.</p>
</li>
<li><p><code>publicKey</code> verifies future signatures.</p>
</li>
<li><p><code>counter</code> helps detect cloned or misbehaving authenticators.</p>
</li>
<li><p><code>deviceType</code> and <code>backedUp</code> give you useful recovery signals.</p>
</li>
<li><p><code>webAuthnUserID</code> should be a stable binary value, stored once per user.</p>
</li>
</ul>
<p>Tip: If your database returns <code>Buffer</code> or another binary wrapper for <code>publicKey</code>, convert it back to <code>Uint8Array</code> before verification.</p>
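<p>One way to implement that tip is a small normalization helper. The serialized-Buffer branch is an assumption about what your driver returns after a JSON round trip; adjust it to your database:</p>
<pre><code class="language-js">// Normalize whatever binary wrapper the database returns into the
// Uint8Array shape the verification step expects.
function toUint8Array(value) {
    if (value instanceof Uint8Array) {
        return value; // covers Node Buffers too, since Buffer extends Uint8Array
    }
    if (value instanceof ArrayBuffer) {
        return new Uint8Array(value);
    }
    if (Array.isArray(value?.data)) {
        // e.g. a Buffer serialized through JSON: { type: "Buffer", data: [...] }
        return Uint8Array.from(value.data);
    }
    throw new TypeError("unsupported binary value");
}
</code></pre>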
<h2 id="heading-build-the-server-foundation">Build the Server Foundation</h2>
<p>Next, append the core Express app and relying party settings to <code>src/app.ts</code>:</p>
<pre><code class="language-typescript">// src/app.ts
import express from "express";
import session from "express-session";
import { randomBytes, randomUUID } from "node:crypto";
import {
    generateAuthenticationOptions,
    generateRegistrationOptions,
    verifyAuthenticationResponse,
    verifyRegistrationResponse,
    type WebAuthnCredential,
} from "@simplewebauthn/server";

const rpName = "Node Auth Lab";
const rpID = "localhost";
const origin = "http://localhost:3000";

declare module "express-session" {
    interface SessionData {
        currentChallenge?: string;
        pendingUserId?: string;
        userId?: string;
        stepUpUntil?: number;
    }
}

const app = express();

app.use(express.json());

app.use(
    session({
        secret: "replace-this-in-production",
        resave: false,
        saveUninitialized: false,
        cookie: {
            httpOnly: true,
            sameSite: "lax",
            secure: false, // localhost only; set to true behind HTTPS in production
            maxAge: 10 * 60 * 1000,
        },
    }),
);
</code></pre>
<p>This gives you the shared state you need for:</p>
<ul>
<li><p>registration challenge tracking</p>
</li>
<li><p>authentication challenge tracking</p>
</li>
<li><p>logged-in session state</p>
</li>
<li><p>short step-up windows for risky actions</p>
</li>
</ul>
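<p>Two of the values above are development-only placeholders: the hardcoded secret and <code>secure: false</code>. Before production, the secret should come from configuration and the cookie should require HTTPS. A sketch, assuming a <code>SESSION_SECRET</code> environment variable (an assumption, not something set up earlier):</p>
<pre><code class="language-typescript">// Hypothetical production values; SESSION_SECRET is an assumed env var.
const sessionSecret =
    process.env.SESSION_SECRET ?? "replace-this-in-production";

const productionCookie = {
    httpOnly: true,
    sameSite: "lax" as const,
    secure: true, // only send the session cookie over HTTPS
    maxAge: 10 * 60 * 1000,
};
</code></pre>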
<h2 id="heading-registration-ceremony">Registration Ceremony</h2>
<p>Registration is the part where a user's device creates a new passkey and connects it to their account.</p>
<p>You will often see the word ceremony in WebAuthn documentation. In simple terms, it refers to the full exchange between your server, the browser, and the authenticator during registration or login. So when we say registration ceremony, we simply mean the full process of creating and verifying a new passkey.</p>
<p>There are three parts involved here:</p>
<ul>
<li><p>Your backend prepares the registration options and challenge.</p>
</li>
<li><p>The browser starts the WebAuthn request.</p>
</li>
<li><p>Then the authenticator, such as a phone, laptop, or security key, creates the key pair and returns the result.</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/d1c87944-a244-4b82-b7ac-d10f109b1024.png" alt="WebAuthn registration flow" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In the above diagram, the registration ceremony links a new passkey to your account. The browser asks the server for registration options. The server generates a challenge and returns a JSON configuration. Next, the browser starts the WebAuthn ceremony. Your authenticator creates a new key pair after biometric or security key verification. The private key stays inside the device. Then the browser sends the attestation response to the server.</p>
<p>Verification happens next. The server checks challenge, origin, and relying party ID. After validation, the server stores credential ID, public key, counter value, device type, and backup state. The account now holds a passkey.</p>
<p>Now that the overall flow is clear, let's build it step by step. We'll start by returning registration options from the backend.</p>
<h3 id="heading-1-return-registration-options-from-the-backend">1. Return Registration Options from the Backend</h3>
<p>This endpoint creates a new user if needed, generates the registration options, and stores the challenge server-side.</p>
<p>Append the following route to your <code>src/app.ts</code> file:</p>
<pre><code class="language-typescript">// src/app.ts
app.post("/auth/register/options", async (req, res) =&gt; {
    const { email } = req.body;

    if (!email) {
        return res.status(400).json({ error: "Email is required" });
    }

    let user = findUserByEmail(email);

    if (!user) {
        user = {
            id: randomUUID(),
            email,
            webAuthnUserID: randomBytes(32),
            passkeys: [],
        };

        users.set(user.id, user);
    }

    const options = await generateRegistrationOptions({
        rpName,
        rpID,
        userName: user.email,
        userDisplayName: user.email,
        userID: user.webAuthnUserID,
        attestationType: "none",
        excludeCredentials: user.passkeys.map((passkey) =&gt; ({
            id: passkey.id,
            transports: passkey.transports,
        })),
        authenticatorSelection: {
            residentKey: "preferred",
            userVerification: "preferred",
        },
    });

    req.session.currentChallenge = options.challenge;
    req.session.pendingUserId = user.id;

    res.json(options);
});
</code></pre>
<p>A few decisions here matter:</p>
<ul>
<li><p><code>attestationType: 'none'</code> keeps the flow lighter unless you need richer device provenance.</p>
</li>
<li><p><code>excludeCredentials</code> stops duplicate registration of the same authenticator.</p>
</li>
<li><p><code>userVerification: 'preferred'</code> lets the browser lean toward biometrics or local device unlock.</p>
</li>
</ul>
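<p>For reference, the JSON your endpoint returns looks roughly like this. The shape comes from <code>generateRegistrationOptions()</code>; the values below are abridged and made up for illustration:</p>
<pre><code class="language-typescript">// Illustrative response from POST /auth/register/options (fake values).
const exampleOptions = {
    challenge: "dGhpcy1pcy1hLWZha2UtY2hhbGxlbmdl",
    rp: { name: "Node Auth Lab", id: "localhost" },
    user: {
        id: "ZmFrZS11c2VyLWhhbmRsZQ",
        name: "user@example.com",
        displayName: "user@example.com",
    },
    pubKeyCredParams: [
        { alg: -8, type: "public-key" }, // Ed25519
        { alg: -7, type: "public-key" }, // ES256
        { alg: -257, type: "public-key" }, // RS256
    ],
    attestation: "none",
    excludeCredentials: [],
    authenticatorSelection: {
        residentKey: "preferred",
        userVerification: "preferred",
    },
};
</code></pre>
<p>The browser never needs you to understand every field here. It only needs the object passed through to <code>startRegistration()</code> untouched.</p>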
<h3 id="heading-2-start-registration-in-the-browser">2. Start Registration in the Browser</h3>
<p>On the browser side, you ask your backend for options, then pass them into <code>startRegistration()</code>.</p>
<p>Now, create a new file at <code>src/browser.ts</code> to handle the client-side WebAuthn interactions. Add the following function:</p>
<pre><code class="language-typescript">// src/browser.ts
import { startRegistration } from "@simplewebauthn/browser";

export async function registerPasskey(email: string) {
    const optionsResp = await fetch("/auth/register/options", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify({ email }),
    });

    const optionsJSON = await optionsResp.json();

    const registrationResponse = await startRegistration({ optionsJSON });

    const verifyResp = await fetch("/auth/register/verify", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify(registrationResponse),
    });

    return verifyResp.json();
}
</code></pre>
<p><strong>Note:</strong> Browsers can't run TypeScript or bare npm imports directly. In a real application, you would import <code>src/browser.ts</code> into your frontend framework (React, Vue, and so on) or bundle it using a tool like Vite, Webpack, or esbuild before serving it to the client.</p>
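<p>As one concrete (hypothetical) setup, esbuild can bundle this file into something a plain <code>&lt;script&gt;</code> tag can load:</p>
<pre><code class="language-plaintext">npx esbuild src/browser.ts --bundle --format=esm --outfile=public/browser.js
</code></pre>
<p>The output path and format here are assumptions; adjust them to match how your server serves static files.</p>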
<p>Under the hood, the browser now speaks to the authenticator. That might trigger Face ID, Touch ID, Windows Hello, Android biometrics, or a physical security key prompt.</p>
<h3 id="heading-3-verify-the-registration-response-and-save-the-passkey">3. Verify the Registration Response and Save the Passkey</h3>
<p>Once the browser sends the response back, verify it against the challenge and relying party details you stored earlier.</p>
<p>Append this verification route to <code>src/app.ts</code>:</p>
<pre><code class="language-typescript">// src/app.ts
app.post("/auth/register/verify", async (req, res) =&gt; {
    const user = users.get(req.session.pendingUserId ?? "");

    if (!user || !req.session.currentChallenge) {
        return res.status(400).json({ verified: false });
    }

    let verification;

    try {
        verification = await verifyRegistrationResponse({
            response: req.body,
            expectedChallenge: req.session.currentChallenge,
            expectedOrigin: origin,
            expectedRPID: rpID,
        });
    } catch (error) {
        return res.status(400).json({
            verified: false,
            error:
                error instanceof Error ? error.message : "Registration failed",
        });
    }

    if (!verification.verified || !verification.registrationInfo) {
        return res.status(400).json({ verified: false });
    }

    const { credential, credentialDeviceType, credentialBackedUp } =
        verification.registrationInfo;

    user.passkeys.push({
        id: credential.id,
        publicKey: credential.publicKey,
        counter: credential.counter,
        transports: credential.transports,
        deviceType: credentialDeviceType,
        backedUp: credentialBackedUp,
    });

    req.session.currentChallenge = undefined;
    req.session.pendingUserId = undefined;

    res.json({ verified: true });
});
</code></pre>
<p>At this point, the user has no password hash in the hot path. The authenticator now holds the private key. Your server stores only what it needs for later verification.</p>
<h2 id="heading-authentication-ceremony">Authentication Ceremony</h2>
<p>Authentication is the part where the user proves they still have the passkey they registered earlier. Just like registration, this process is also called a ceremony in WebAuthn. Here, the goal is not to create a new credential. The goal is to verify an existing one in a secure way.</p>
<p>There are four parts involved here:</p>
<ul>
<li><p>Your server creates a fresh challenge.</p>
</li>
<li><p>The browser passes that challenge to the authenticator.</p>
</li>
<li><p>The authenticator signs it using the private key stored on the device.</p>
</li>
<li><p>Then the server verifies the response using the public key it stored during registration.</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/0296f66d-47c7-4ca5-ab0c-c5a6f9a8eef8.png" alt="WebAuthn authentication flow" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In the above diagram, login occurs through a challenge-response process. The browser first asks the server for authentication options. The server generates a new challenge and returns the allowed credential list. Your browser then triggers the authenticator. The authenticator signs the challenge using the private key stored on the device. The browser sends the signed assertion to the backend.</p>
<p>Verification follows. The server validates signature, challenge, origin, and credential ID. The stored counter updates. A short session is issued after verification. Login depends on device proof instead of reusable credentials.</p>
<p>With that flow in mind, let's move into the implementation. We'll begin by returning authentication options from the backend.</p>
<h3 id="heading-1-return-authentication-options">1. Return Authentication Options</h3>
<p>Fetch the user, list allowed credentials, and store the new challenge.</p>
<pre><code class="language-typescript">// src/app.ts
app.post("/auth/login/options", async (req, res) =&gt; {
    const { email } = req.body;
    const user = findUserByEmail(email);

    if (!user) {
        return res.status(404).json({ error: "User not found" });
    }

    const options = await generateAuthenticationOptions({
        rpID,
        allowCredentials: user.passkeys.map((passkey) =&gt; ({
            id: passkey.id,
            transports: passkey.transports,
        })),
        userVerification: "preferred",
    });

    req.session.currentChallenge = options.challenge;
    req.session.pendingUserId = user.id;

    res.json(options);
});
</code></pre>
<h3 id="heading-2-start-authentication-in-the-browser">2. Start Authentication in the Browser</h3>
<p>The browser receives the options, then starts the ceremony. Add the login function to your <code>src/browser.ts</code> file:</p>
<pre><code class="language-typescript">// src/browser.ts
import { startAuthentication } from "@simplewebauthn/browser";

export async function loginWithPasskey(email: string) {
    const optionsResp = await fetch("/auth/login/options", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify({ email }),
    });

    const optionsJSON = await optionsResp.json();

    const authenticationResponse = await startAuthentication({ optionsJSON });

    const verifyResp = await fetch("/auth/login/verify", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify(authenticationResponse),
    });

    return verifyResp.json();
}
</code></pre>
<h3 id="heading-3-verify-the-assertion-and-update-the-counter">3. Verify the Assertion and Update the Counter</h3>
<p>This is the moment where the backend decides whether the login is real.</p>
<pre><code class="language-typescript">// src/app.ts
app.post("/auth/login/verify", async (req, res) =&gt; {
    const user = users.get(req.session.pendingUserId ?? "");

    if (!user || !req.session.currentChallenge) {
        return res.status(400).json({ verified: false });
    }

    const passkey = user.passkeys.find((item) =&gt; item.id === req.body.id);

    if (!passkey) {
        return res
            .status(400)
            .json({ verified: false, error: "Passkey not found" });
    }

    const credential: WebAuthnCredential = {
        id: passkey.id,
        publicKey: passkey.publicKey,
        counter: passkey.counter,
        transports: passkey.transports,
    };

    let verification;

    try {
        verification = await verifyAuthenticationResponse({
            response: req.body,
            expectedChallenge: req.session.currentChallenge,
            expectedOrigin: origin,
            expectedRPID: rpID,
            credential,
            requireUserVerification: true,
        });
    } catch (error) {
        return res.status(400).json({
            verified: false,
            error:
                error instanceof Error
                    ? error.message
                    : "Authentication failed",
        });
    }

    if (!verification.verified) {
        return res.status(400).json({ verified: false });
    }

    passkey.counter = verification.authenticationInfo.newCounter;

    req.session.userId = user.id;
    req.session.currentChallenge = undefined;
    req.session.pendingUserId = undefined;

    res.json({ verified: true });
});
</code></pre>
<p>Two details here matter more than they first appear:</p>
<ul>
<li><p><code>requireUserVerification: true</code> rejects any assertion where the authenticator skipped user verification. Note the mismatch with the login options above, which only request <code>userVerification: "preferred"</code>: an authenticator that legitimately skips verification there will then fail this check. Keep the two settings consistent, using <code>"required"</code> in the options if you enforce it here.</p>
</li>
<li><p><code>newCounter</code> should overwrite your stored counter after each successful login.</p>
</li>
</ul>
<p>That counter update is one of the few signals you have for spotting cloned or broken authenticators.</p>
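<p>SimpleWebAuthn performs this comparison for you during verification, but the heuristic is simple enough to sketch. <code>counterLooksCloned</code> is an illustrative function, not part of the library:</p>
<pre><code class="language-typescript">// Hypothetical illustration of the counter check WebAuthn relies on.
function counterLooksCloned(storedCounter: number, newCounter: number): boolean {
    // Authenticators that never increment report 0 forever
    // (common for synced passkeys), so there is nothing to compare.
    if (storedCounter === 0) {
        return false;
    }

    // Otherwise the counter must strictly increase; a repeat or a
    // decrease suggests a cloned authenticator replaying old state.
    return newCounter &lt;= storedCounter;
}
</code></pre>
<p>The practical takeaway is simply to persist <code>newCounter</code> after every successful login, which the route above already does.</p>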
<h2 id="heading-what-replaces-the-long-lived-jwt">What Replaces the Long-lived JWT</h2>
<p>Don't run this full WebAuthn flow, then issue a week-long bearer token and call the job done. That throws away the best part of the design.</p>
<p>A better model is:</p>
<ul>
<li><p>WebAuthn proves identity</p>
</li>
<li><p>the server creates a short session</p>
</li>
<li><p>the browser receives only an HTTP-only session cookie</p>
</li>
<li><p>risky actions ask for a fresh WebAuthn assertion again</p>
</li>
</ul>
<p>A tiny route guard shows the idea:</p>
<pre><code class="language-typescript">// src/app.ts
function requireSession(
    req: express.Request,
    res: express.Response,
    next: express.NextFunction,
) {
    if (!req.session.userId) {
        return res.status(401).json({ error: "Unauthorized" });
    }

    next();
}

app.get("/me", requireSession, (req, res) =&gt; {
    const user = users.get(req.session.userId ?? "");

    if (!user) {
        return res.status(404).json({ error: "User not found" });
    }

    res.json({
        id: user.id,
        email: user.email,
        passkeys: user.passkeys.length,
    });
});
</code></pre>
<p>This keeps the post-login browser state smaller and less reusable. The browser carries only a session cookie. The server owns the session state and its lifetime.</p>
<p>Tip: Short sessions plus fresh WebAuthn for risky actions is a stronger shape than one long bearer token with broad scope.</p>
<h2 id="heading-multi-device-and-recovery-logic">Multi-Device and Recovery Logic</h2>
<p>Strong authentication backfires quickly if a single lost phone locks the user out forever. You need a real backup story from day one.</p>
<p>The clean pattern looks like this:</p>
<ul>
<li><p>register one platform passkey on the user's main device</p>
</li>
<li><p>register one extra credential, such as a security key or another trusted device</p>
</li>
<li><p>verify a contact channel during account setup</p>
</li>
<li><p>expose passkey management in account settings</p>
</li>
<li><p>store device metadata so users see what is registered</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/7c8d8867-38d1-403f-a409-54e0988221d8.png" alt="Multi-device recovery and step-up auth" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In the above diagram, the left side shows the recovery strategy. You start with a primary passkey on your main device. Then you add another trusted authenticator such as a second device or hardware security key. A recovery channel should exist. Device inventory helps track active credentials, and lost devices must be revoked quickly.</p>
<p>The right side focuses on step-up authentication. Sensitive actions require fresh verification. Examples include payouts, email change, API key generation, or destructive operations. When such an action begins, the server issues a new challenge. Your authenticator signs again. Access lasts for a short step-up window before new verification is required.</p>
<p>A simple product rule helps here. Don't hide the second passkey flow inside a deep settings page. Put "Add another passkey" right after the first successful registration.</p>
<p>Passkeys also sync across major platform ecosystems. That helps the user experience, but your backend should still treat each registered credential as a first-class record with its own ID, public key, counter, device type, and backup state.</p>
<p>For account recovery, keep the bar high. Recovery should not become the weakest path in the whole system. Emailed magic links, recovery codes, and support-driven recovery all need rate limits, audit trails, and strict checks.</p>
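<p>Revocation itself can stay small. Here is a sketch of the logic you might put behind a hypothetical <code>DELETE /passkeys/:id</code> route; <code>revokePasskey</code> is an illustrative name, not something defined earlier:</p>
<pre><code class="language-typescript">// Hypothetical helper for a passkey-management endpoint.
// Removes one credential, but refuses to strand the account with none.
function revokePasskey(passkeys: { id: string }[], credentialId: string) {
    const remaining = passkeys.filter((item) =&gt; item.id !== credentialId);

    if (remaining.length === 0) {
        throw new Error("Cannot revoke the last remaining passkey");
    }

    if (remaining.length === passkeys.length) {
        throw new Error("Passkey not found");
    }

    return remaining;
}
</code></pre>
<p>Guarding against removal of the last credential matters: a user with zero passkeys is a user forced through your recovery path, which should stay the exception.</p>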
<h2 id="heading-step-up-authentication-for-sensitive-actions">Step-up Authentication for Sensitive Actions</h2>
<p>Logging in once should not grant silent permission for every dangerous route.</p>
<p>Step-up auth means this:</p>
<ul>
<li><p>the user is already signed in</p>
</li>
<li><p>they try a sensitive action</p>
</li>
<li><p>the server demands a fresh WebAuthn ceremony</p>
</li>
<li><p>the app grants a short window for that one class of actions</p>
</li>
</ul>
<p>You can use step-up auth for:</p>
<ul>
<li><p>payout approval</p>
</li>
<li><p>credential management</p>
</li>
<li><p>email or phone change</p>
</li>
<li><p>API key creation</p>
</li>
<li><p>organization deletion</p>
</li>
<li><p>role elevation</p>
</li>
<li><p>access to billing controls</p>
</li>
</ul>
<p>Start by issuing new authentication options with strict user verification.</p>
<pre><code class="language-typescript">// src/app.ts
app.post("/auth/step-up/options", requireSession, async (req, res) =&gt; {
    const user = users.get(req.session.userId ?? "");

    if (!user) {
        return res.status(404).json({ error: "User not found" });
    }

    const options = await generateAuthenticationOptions({
        rpID,
        allowCredentials: user.passkeys.map((passkey) =&gt; ({
            id: passkey.id,
            transports: passkey.transports,
        })),
        userVerification: "required",
    });

    req.session.currentChallenge = options.challenge;

    res.json(options);
});
</code></pre>
<p>Then verify the response and issue a short step-up window.</p>
<pre><code class="language-typescript">// src/app.ts
app.post("/auth/step-up/verify", requireSession, async (req, res) =&gt; {
    const user = users.get(req.session.userId ?? "");
    const passkey = user?.passkeys.find((item) =&gt; item.id === req.body.id);

    if (!user || !passkey || !req.session.currentChallenge) {
        return res.status(400).json({ verified: false });
    }

    let verification;
    try {
        verification = await verifyAuthenticationResponse({
            response: req.body,
            expectedChallenge: req.session.currentChallenge,
            expectedOrigin: origin,
            expectedRPID: rpID,
            credential: {
                id: passkey.id,
                publicKey: passkey.publicKey,
                counter: passkey.counter,
                transports: passkey.transports,
            },
            requireUserVerification: true,
        });
    } catch (error) {
        return res.status(400).json({
            verified: false,
            error: error instanceof Error ? error.message : "Step-up failed",
        });
    }

    if (!verification.verified) {
        return res.status(400).json({ verified: false });
    }

    passkey.counter = verification.authenticationInfo.newCounter;
    req.session.stepUpUntil = Date.now() + 5 * 60 * 1000;
    req.session.currentChallenge = undefined;

    res.json({ verified: true });
});
</code></pre>
<p>A tiny guard handles the rest.</p>
<pre><code class="language-typescript">// src/app.ts
function requireRecentStepUp(
    req: express.Request,
    res: express.Response,
    next: express.NextFunction,
) {
    if (!req.session.stepUpUntil || req.session.stepUpUntil &lt; Date.now()) {
        return res.status(403).json({ error: "Fresh verification required" });
    }

    next();
}

app.post("/billing/payout", requireSession, requireRecentStepUp, (req, res) =&gt; {
    res.json({ ok: true });
});
</code></pre>
<p>This is where WebAuthn stops being a login feature and starts becoming part of your authorization model.</p>
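<p>On the client, the step-up call mirrors <code>loginWithPasskey()</code>, just pointed at the step-up endpoints. A sketch for <code>src/browser.ts</code>; <code>stepUpWithPasskey</code> is an illustrative name:</p>
<pre><code class="language-typescript">// src/browser.ts
import { startAuthentication } from "@simplewebauthn/browser";

export async function stepUpWithPasskey() {
    // The user is already signed in, so no email is needed here;
    // the session cookie identifies them.
    const optionsResp = await fetch("/auth/step-up/options", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
    });

    const optionsJSON = await optionsResp.json();

    const authenticationResponse = await startAuthentication({ optionsJSON });

    const verifyResp = await fetch("/auth/step-up/verify", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(authenticationResponse),
    });

    return verifyResp.json();
}
</code></pre>
<p>You would call this right before retrying the sensitive request, then resubmit the original action once the step-up window is open.</p>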
<p>Now that all your routes and guards are built, append the server start command to the very bottom of <code>src/app.ts</code> to bring the backend to life:</p>
<pre><code class="language-typescript">// src/app.ts
const PORT = 3000;
app.listen(PORT, () =&gt; {
    console.log(`Server listening on http://localhost:${PORT}`);
});
</code></pre>
<h2 id="heading-recap">Recap</h2>
<p>You started with a common weak pattern: a reusable token trusted for too long. Then you replaced the core proof model:</p>
<ul>
<li><p>the device keeps the private key</p>
</li>
<li><p>the server stores the public key and counter</p>
</li>
<li><p>each ceremony signs a fresh challenge</p>
</li>
<li><p>the app uses short server sessions after verification</p>
</li>
<li><p>risky routes trigger fresh step-up authentication</p>
</li>
</ul>
<p>That's the real shift.</p>
<p>WebAuthn is not a cosmetic login upgrade. It changes where trust lives. Once you move from reusable bearer proof to device-bound cryptographic proof, your Node.js auth stack starts to behave like a modern security system instead of a thin session wrapper.</p>
<h2 id="heading-try-it-yourself">Try it Yourself</h2>
<p>The full source code is available on GitHub. <a href="https://github.com/logicbaselabs/web-authn">Clone the repository</a> here and follow the setup guide in the <code>README</code> to test the biometric login flow locally.</p>
<h2 id="heading-final-words">Final Words</h2>
<p>If you found the information here valuable, feel free to share it with others who might benefit from it.</p>
<p>I’d really appreciate your thoughts – mention me on X <a href="https://x.com/sumit_analyzen">@sumit_analyzen</a> or on Facebook <a href="https://facebook.com/sumit.analyzen">@sumit.analyzen</a>, <a href="https://youtube.com/@logicBaseLabs">watch my coding tutorials</a>, or simply <a href="https://www.linkedin.com/in/sumitanalyzen/">connect with me on LinkedIn</a>.</p>
<p>You can also checkout my official website <a href="https://www.sumitsaha.me">www.sumitsaha.me</a> for more details about me.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Troubleshoot Ghost CMS: Fixing WSL, Docker, and ActivityPub Errors ]]>
                </title>
                <description>
                    <![CDATA[ Setting up Ghost CMS (Content Management System) on your local machine is a great way to develop themes and test new features. But if you're using Windows or Docker, you might run into errors that sto ]]>
                </description>
                <link>https://www.freecodecamp.org/news/fix-ghost-cms-errors/</link>
                <guid isPermaLink="false">69bc3254b238fd45a31f6959</guid>
                
                    <category>
                        <![CDATA[ ghost ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ WSL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ troubleshooting ]]>
                    </category>
                
                    <category>
                        <![CDATA[ debugging ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Abdul Talha ]]>
                </dc:creator>
                <pubDate>Thu, 19 Mar 2026 17:28:52 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/85f5e0bb-26ff-42ce-ba66-afec6df4bb5d.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Setting up Ghost CMS (Content Management System) on your local machine is a great way to develop themes and test new features. But if you're using Windows or Docker, you might run into errors that stop your progress. And debugging takes time away from your actual development work.</p>
<p>In this guide, you'll learn the root causes and exact fixes for three common Ghost CMS deployment errors:</p>
<ul>
<li><p><strong>Error 1:</strong> SQLite installation failures on Windows.</p>
</li>
<li><p><strong>Error 2:</strong> Docker containers crashing with Code 137 (memory limits).</p>
</li>
<li><p><strong>Error 3:</strong> "Loading Interrupted" errors in the ActivityPub Network tab.</p>
</li>
</ul>
<p>By the end of this article, you'll have a stable, working local Ghost setup. You'll know how to properly use WSL for Node.js apps, manage Docker resources, and successfully configure Ghost's new social web features.</p>
<h2 id="heading-error-1-sqlite-installation-failures-on-windows">Error 1: SQLite Installation Failures on Windows</h2>
<h3 id="heading-the-symptom"><strong>The Symptom</strong></h3>
<p>When you run the command <code>ghost install local</code> on a Windows machine, the setup fails. You will see a long list of red text in your terminal that looks like this:</p>
<pre><code class="language-plaintext">Error: Cannot find module 'sqlite3'
...
node-pre-gyp ERR! stack Error: Failed to execute...
...
MSB4019: The imported project "C:\Microsoft.Cpp.Default.props" was not found.
</code></pre>
<p>The error usually mentions "sqlite3" and says it "failed to execute" or is "missing."</p>
<h3 id="heading-the-cause"><strong>The Cause</strong></h3>
<p>Ghost uses SQLite to store your blog's data. SQLite is a "native module." This means it needs a small piece of code that must be built to fit your computer's system perfectly.</p>
<p>Because Ghost was created to run on Linux servers, it expects to find Linux build tools to make these files. Windows uses different tools and a different way of organising files. When the Ghost CLI tries to build the SQLite files on Windows, it can't find the tools it needs, so the installation stops. Using WSL gives Ghost the Linux environment it expects.</p>
<h3 id="heading-how-to-fix-it">How to Fix it:</h3>
<p>You can use Windows Subsystem for Linux (WSL) to create a working setup.</p>
<ol>
<li><p>Open your WSL terminal (like Ubuntu).</p>
</li>
<li><p>Check your tools by running <code>node --version</code>, <code>npm --version</code>, and <code>python3 --version</code>.</p>
</li>
<li><p>Install the Ghost CLI globally inside WSL:</p>
<pre><code class="language-plaintext">npm install -g ghost-cli@latest
</code></pre>
</li>
<li><p>Run the local setup command:</p>
<pre><code class="language-plaintext">ghost install local
</code></pre>
</li>
<li><p>Start the server:</p>
<pre><code class="language-plaintext">ghost start
</code></pre>
</li>
</ol>
<h3 id="heading-how-to-verify">How to Verify:</h3>
<p>Open your web browser and go to <code>http://localhost:2368</code>. You should see the default Ghost welcome page load without errors.</p>
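<p>You can also check from the terminal with <code>curl</code>; a <code>200</code> status in the response headers means Ghost is serving pages:</p>
<pre><code class="language-plaintext">curl -I http://localhost:2368
</code></pre>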
<h2 id="heading-error-2-docker-container-exiting-with-code-137">Error 2: Docker Container Exiting with Code 137</h2>
<h3 id="heading-the-symptom">The Symptom:</h3>
<p>When you're running Ghost using Docker Compose, the containers crash. The terminal logs show <code>Ghost admin container exiting with code 137</code> or <code>Admin service killed due to memory constraints</code>.</p>
<h3 id="heading-the-cause">The Cause:</h3>
<p>So why does this happen? Well, error code 137 means your computer ran out of memory (RAM) and stopped the container. This usually happens if you try to run the full Ghost developer setup (which includes 15+ extra tools) on a standard computer.</p>
<h3 id="heading-how-to-fix-it">How to Fix it:</h3>
<p>To fix this error, you can switch from the complex setup to a simple setup using the official Ghost Docker image.</p>
<p>To do this, first stop and remove the broken containers:</p>
<pre><code class="language-plaintext">docker-compose down -v
docker system prune -a
</code></pre>
<p>Then create a new <code>docker-compose.yml</code> file with only the basic tools (Ghost and a database):</p>
<pre><code class="language-plaintext">services:
  ghost:
    image: ghost:latest
    restart: always
    ports:
      - "2368:2368"
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: yourpassword
      database__connection__database: ghost
      url: http://localhost:2368
    volumes:
      - ghost_content:/var/lib/ghost/content

  mysql:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: yourpassword
      MYSQL_DATABASE: ghost
    volumes:
      - mysql_data:/var/lib/mysql

volumes:
  ghost_content:
  mysql_data:
</code></pre>
<p>Then start the simple setup:</p>
<pre><code class="language-plaintext">docker-compose up -d
</code></pre>
<h3 id="heading-how-to-verify">How to Verify:</h3>
<p>Type <code>docker-compose ps</code> in your terminal. You should see both the <code>ghost</code> and <code>mysql</code> containers listed with a status of "Up".</p>
<h2 id="heading-error-3-loading-interrupted-in-network-analytics">Error 3: "Loading Interrupted" in Network Analytics</h2>
<h3 id="heading-the-symptom">The Symptom:</h3>
<p>When you click the <strong>Analytics → Network</strong> tab in your local Ghost admin panel, the page shows a "Loading Interrupted" error. Your terminal logs show 404 errors and webhook failures:</p>
<pre><code class="language-plaintext">INFO "GET /.ghost/activitypub/v1/feed/reader/" 404 52ms
ERROR No webhook secret found - cannot initialise
</code></pre>
<h3 id="heading-the-cause">The Cause:</h3>
<p>The Network tab acts as an ActivityPub reader, not a normal analytics dashboard. This error happens because ActivityPub is not set up for local use. It needs extra services (Caddy, MySQL, Redis) and a clean web address without port numbers to work.</p>
<h3 id="heading-how-to-fix-it">How to Fix it:</h3>
<p>To fix this error, just run Ghost with its required Docker tools and update your local config file to turn on the social web features.</p>
<p>First, start the required tools (Caddy, MySQL, Redis) from your Ghost folder:</p>
<pre><code class="language-plaintext">SSH_AUTH_SOCK=/dev/null docker compose up -d caddy mysql redis
</code></pre>
<p>Then open your <code>config.local.json</code> file. Set the URL to a clean localhost address (remove the <code>:2368</code> port) and turn on the developer features:</p>
<pre><code class="language-plaintext">{
    "url": "http://localhost",
    "social_web_enabled": true,
    "enableDeveloperExperiments": true
}
</code></pre>
<p>Stop your current Ghost process:</p>
<pre><code class="language-plaintext">pkill -f "yarn dev:ghost"
</code></pre>
<p>And restart Ghost with the new settings:</p>
<pre><code class="language-plaintext">yarn dev:ghost
</code></pre>
<h3 id="heading-how-to-verify">How to Verify:</h3>
<p>Log back into your Ghost admin panel and click <strong>Analytics → Network</strong>. The error message will be gone, and you will see the ActivityPub feed instead.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Local setups can be hard, especially when mixing Windows, Docker, and new features like ActivityPub.</p>
<p>By fixing these three errors, you did more than just get Ghost running. You learned how to bypass Windows limits using WSL, how to manage Docker memory, and how Ghost routes social web traffic.</p>
<p>You now have a stable, fast, and fully working Ghost CMS workspace ready for your content.</p>
<p><strong>Let’s connect!</strong> You can find my latest work on my <a href="https://blog.abdultalha.tech/portfolio"><strong>Technical Writing Portfolio</strong></a> or reach out to me on <a href="https://www.linkedin.com/in/abdul-talha/"><strong>LinkedIn</strong></a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Create REST API Documentation in Node.js Using Scalar ]]>
                </title>
                <description>
                    <![CDATA[ A REST API documentation is a guide that explains how clients can make use of the REST APIs in an application. It details the available endpoints, how to send requests and what responses to expect. It ]]>
                </description>
                <link>https://www.freecodecamp.org/news/rest-api-documentation-with-scalar/</link>
                <guid isPermaLink="false">699f1d4e6049477bf67a9ad8</guid>
                
                    <category>
                        <![CDATA[ documentation ]]>
                    </category>
                
                    <category>
                        <![CDATA[ APIs ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ openai ]]>
                    </category>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Orim Dominic Adah ]]>
                </dc:creator>
                <pubDate>Wed, 25 Feb 2026 16:03:26 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5fc16e412cae9c5b190b6cdd/ec5d63f1-d65f-4852-a297-9c517fc76e2f.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>A REST API documentation is a guide that explains how clients can make use of the REST APIs in an application. It details the available endpoints, how to send requests and what responses to expect. It may also contain explanations of concepts that are specific to the scope of the application.</p>
<p>Without API documentation, application development is considered incomplete because developers cannot build software to interact with it, rendering the application effectively useless.</p>
<p>In this article, you will learn how to create beautiful REST API documentation that also allows you to test the APIs for free using an OpenAPI specification and Scalar in Node.js projects. You will use <a href="https://github.com/asteasolutions/zod-to-openapi">asteasolutions/zod-to-openapi</a> to generate OpenAPI specification and use Scalar to create a web page from the specification.</p>
<p>To get the most out of this article, you should have experience developing REST APIs with Express or NestJS. You should also have experience with documenting REST APIs and using <a href="https://zod.dev/">zod</a>.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-rest-api-documentation-tools-for-nodejs">REST API Documentation Tools for Node.js</a></p>
<ul>
<li><p><a href="#heading-swagger">Swagger</a></p>
</li>
<li><p><a href="#heading-postman">Postman</a></p>
</li>
<li><p><a href="#heading-redoc">ReDoc</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-zod-to-openapi-and-scalar-for-rest-api-documentation">zod-to-openapi and Scalar for REST API Documentation</a></p>
<ul>
<li><a href="#heading-how-zod-to-openapi-works-with-scalar">How zod-to-openapi Works with Scalar</a></li>
</ul>
</li>
<li><p><a href="#heading-benefits-of-using-zod-to-openapi-and-scalar-to-create-rest-api-documentation">Benefits of Using zod-to-openapi and Scalar to Create REST API Documentation</a></p>
<ul>
<li><p><a href="#heading-openapi-specification-support">OpenAPI Specification Support</a></p>
</li>
<li><p><a href="#heading-open-source-and-free-to-use">Open Source and Free to Use</a></p>
</li>
<li><p><a href="#heading-better-documentation-experience">Better Documentation Experience</a></p>
</li>
<li><p><a href="#heading-developer-friendly-ui">Developer-friendly UI</a></p>
</li>
<li><p><a href="#heading-markdown-support">Markdown Support</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-how-to-create-the-api-documentation">How to Create the API Documentation</a></p>
<ul>
<li><p><a href="#heading-set-up-the-project">Set up the Project</a></p>
</li>
<li><p><a href="#heading-how-to-set-up-zod-to-openapi">How to Set Up zod-to-openapi</a></p>
</li>
<li><p><a href="#heading-how-to-generate-the-documentation-ui-with-scalar">How to Generate the Documentation UI with Scalar</a></p>
</li>
<li><p><a href="#heading-document-the-endpoints">Document the Endpoints</a></p>
</li>
</ul>
</li>
<li><p><a href="#heading-how-to-use-scalar-with-nestjs">How to Use Scalar with NestJS</a></p>
</li>
<li><p><a href="#heading-how-to-resolve-content-security-policy-csp-errors-when-used-with-helmet">How to Resolve Content Security Policy (CSP) Errors When Used with Helmet</a></p>
</li>
<li><p><a href="#heading-absence-of-asyncapi-documentation-feature">Absence of AsyncAPI Documentation Feature</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-rest-api-documentation-tools-for-nodejs">REST API Documentation Tools for Node.js</h2>
<p>A variety of tools already exist for documenting REST APIs, each with different strengths and weaknesses depending on the use case. Some are completely free to use while others operate a freemium model, and some provide an interface for testing APIs while others have a presentation-only user interface (UI).</p>
<p>Some of the most popular tools for documenting REST APIs in Node.js projects are listed below:</p>
<ul>
<li><p>Swagger</p>
</li>
<li><p>Postman</p>
</li>
<li><p>Redoc</p>
</li>
</ul>
<h3 id="heading-swagger">Swagger</h3>
<p>In order to document REST APIs in Express with Swagger, you need <a href="https://www.npmjs.com/package/swagger-jsdoc">swagger-jsdoc</a> and <a href="https://www.npmjs.com/package/swagger-ui-express">swagger-ui-express</a>. swagger-jsdoc collates and parses JSDoc-annotated documentation comments in the codebase and generates an OpenAPI specification document. swagger-ui-express uses the generated document to create a web page that renders the API documentation and allows testing the APIs.</p>
<p>One of Swagger’s strengths lies in its support for the <a href="https://swagger.io/docs/specification/v3_0/about/">OpenAPI specification</a> which is an industry standard. Swagger is free to use, has a vibrant open source community and strong support for many programming languages and frameworks. It supports only REST APIs.</p>
<p>Its major drawback is the poor developer experience of manually writing JSDoc comments or YAML for the documentation. The process can be clumsy, and developers can forget to include some annotations. Another drawback is that the JSDoc comments can interfere with reading functional code. Lastly, some developers find its UI dated and uninspiring.</p>
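<p>To illustrate the point, a typical swagger-jsdoc annotation looks roughly like the sketch below (the route and handler here are hypothetical, not from any real project). The OpenAPI fragment lives as YAML inside a comment block, where indentation mistakes are easy to make and nothing type-checks the annotation against the actual handler:</p>
<pre><code class="language-typescript">import express from "express";

const app = express();

// Hypothetical handler, for illustration only
const createUserHandler = (_req: express.Request, res: express.Response) =&gt;
  res.status(201).json({ message: "User created" });

/**
 * @openapi
 * /api/users:
 *   post:
 *     summary: Create user
 *     requestBody:
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             $ref: '#/components/schemas/CreateUser'
 *     responses:
 *       201:
 *         description: User created
 */
app.post("/api/users", createUserHandler);
</code></pre>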
<h3 id="heading-postman">Postman</h3>
<p><a href="https://www.postman.com/">Postman</a> is a cloud-based desktop API client application that allows developers and technical writers to write, test, collaborate on and publish API documentation. Unlike Swagger, it does not require deep programming experience to use most of its features. It is also not limited to REST API documentation — it can document APIs for GraphQL, websockets and gRPC.</p>
<p>Postman provides a UI for filling in the details of an API's documentation. The documentation process is manual, so its content can drift out of sync with that of deployed applications. It is not free to use for teams and collaboration, and because it is a desktop client, it hides the real behaviour of browser interaction with the APIs, such as CORS and streaming.</p>
<h3 id="heading-redoc">ReDoc</h3>
<p><a href="https://redocly.com/docs/redoc">ReDoc</a> is an open-source tool used to generate API documentation from an OpenAPI (Swagger) specification. It supports GraphQL, AsyncAPI and the OpenAPI specification and it renders a more beautiful documentation than Swagger. <a href="https://www.npmjs.com/package/redoc-express">redoc-express</a> is used to document Express REST APIs with Redoc.</p>
<p>Redoc’s major drawback is that its free community edition is presentation-only: it does not support testing the APIs. Like Swagger, it also requires manually updating the application's OpenAPI document via a YAML specification file or JSDoc comments.</p>
<h2 id="heading-zod-to-openapi-and-scalar-for-rest-api-documentation">zod-to-openapi and Scalar for REST API Documentation</h2>
<p><a href="https://github.com/asteasolutions/zod-to-openapi">asteasolutions/zod-to-openapi</a> is a TypeScript library that generates an OpenAPI specification from <a href="https://zod.dev/">zod</a> schemas. Instead of relying on code comments, it provides typed methods for documenting API components so that:</p>
<ul>
<li><p>The library methods serve as guardrails for what to document and how to document it</p>
</li>
<li><p>The documentation is consistent across the codebase</p>
</li>
<li><p>The documentation doesn't negatively affect code readability</p>
</li>
</ul>
<p>A sample snippet used to document a POST request for creating a user with zod-to-openapi is shown below:</p>
<pre><code class="language-typescript">// CreateUser and User are zod schema

registry.registerPath({
  method: "post",
  path: "/api/users",
  summary: "Create user",
  tags: ["users"],
  request: {
    body: {
      content: {
        "application/json": {
          schema: schema.CreateUser, 
        },
      },
      description: "Create user payload",
      required: true,
    },
  },
  responses: {
    201: {
      description: "User created",
      content: {
        "application/json": {
          schema: z.object({
            message: z.string(),
            data: schema.User,
          }),
        },
      },
    },
  },
});
</code></pre>
<p><a href="https://scalar.com/">Scalar</a> is a tool that generates beautiful, organized and searchable API documentation from OpenAPI documents. The documentation generated also supports testing the APIs and this makes Scalar effectively function as an API documentation generator and a lightweight API client. The image below shows a sample documentation generated by Scalar:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/66e28b713f978a0e2cd2b763/56ca6e73-23a8-4245-a8d3-88c1f67b16be.svg" alt="Sample Scalar documentation UI" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<h3 id="heading-how-zod-to-openapi-works-with-scalar">How zod-to-openapi Works with Scalar</h3>
<p>zod-to-openapi provides the functionality to generate an OpenAPI specification from code. Scalar uses the document generated to create a documentation web page that presents the information in the document in an organized and beautiful way that also allows for testing the APIs.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/66e28b713f978a0e2cd2b763/b327b21d-c80f-472c-ba6d-6c63d212faa8.png" alt="b327b21d-c80f-472c-ba6d-6c63d212faa8" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<h2 id="heading-benefits-of-using-zod-to-openapi-and-scalar-to-create-rest-api-documentation">Benefits of Using zod-to-openapi and Scalar to Create REST API Documentation</h2>
<p>When you combine zod-to-openapi and Scalar to create REST API documentation for your Express applications, you get a myriad of benefits. Some of the benefits are explained below:</p>
<h3 id="heading-openapi-specification-support">OpenAPI Specification Support</h3>
<p>The OpenAPI specification is a format for describing REST APIs. It takes into consideration the important components of an API necessary for clients to use it effectively. These components include:</p>
<ul>
<li><p>Request paths, methods, headers, path parameters and query parameters,</p>
</li>
<li><p>Schemas of request and response payloads,</p>
</li>
<li><p>Authentication requirements,</p>
</li>
<li><p>Descriptions for information not accommodated by other components</p>
</li>
</ul>
<p>zod-to-openapi provides methods for all of these to be included in the documentation and it generates an OpenAPI specification-compliant document that Scalar uses to generate the documentation web page.</p>
<h3 id="heading-open-source-and-free-to-use">Open Source and Free to Use</h3>
<p>zod-to-openapi is open source and free to use. It is unlikely to be abandoned or sunsetted soon because, like Ruby on Rails and Laravel, it is a tool its creators use in their day-to-day work.</p>
<p>Scalar is open source too. It has paid plans but the features in the paid plans are only really useful for enterprise applications. The free version supports the necessary features needed to create useful REST API documentation.</p>
<h3 id="heading-better-documentation-experience">Better Documentation Experience</h3>
<p>In terms of user experience, the union of zod-to-openapi and Scalar provides the following benefits when writing documentation with both tools:</p>
<h4 id="heading-guardrails-with-zod-to-openapi-methods">Guardrails with zod-to-openapi Methods</h4>
<p>The methods provided by zod-to-openapi serve as guardrails to ensure that developers don't omit or forget the documentation of important components of APIs. The methods also ensure that these components are documented in an OpenAPI specification-compliant manner through the typed nature of the methods' parameters.</p>
<h4 id="heading-avoid-the-clumsiness-of-comments-and-yaml-files">Avoid the Clumsiness of Comments and YAML Files</h4>
<p>With zod-to-openapi, you don't document the APIs using comments or a YAML file. You document APIs using methods from zod-to-openapi. This removes the cluttering of code with comments and the clumsiness around manually updating large YAML files of OpenAPI specification.</p>
<h4 id="heading-accuracy-and-auto-generation-of-documentation">Accuracy and Auto-generation of Documentation</h4>
<p>When you use zod-to-openapi and Scalar, your API documentation is generated automatically when the application runs. zod-to-openapi does the collation and compilation of the documented APIs, and Scalar creates a web page for it that can be hosted on an API route of the same application. You don't need to manually run CLI commands to generate the documentation.</p>
<p>Another benefit of accuracy and auto-generation is that the job of API documentation is not split between a technical writer and a backend developer. The documentation lives in the code, and this makes development faster and more seamless in terms of API documentation.</p>
<h3 id="heading-developer-friendly-ui">Developer-friendly UI</h3>
<p>While Swagger’s UI is functional, some developers consider its presentation somewhat minimal, particularly when displaying detailed endpoint descriptions. ReDoc improves on visual design but does not offer API testing features. Scalar, on the other hand, delivers a more refined and intuitive interface with greater customization options than ReDoc.</p>
<p>Beyond its design advantages, Scalar provides auto-generated code samples in multiple programming languages. This enables developers to integrate APIs more efficiently using examples tailored to their specific tech stack.</p>
<p>Scalar's UI provides a search feature that allows developers to quickly locate specific sections of the API documentation. It also includes an AI chat interface that enables users to understand how different API endpoints can help address their specific use cases. This approach is more efficient than manually reviewing the entire documentation.</p>
<p>Lastly, you can test the API endpoints from the UI. When you make authenticated API requests, the UI caches authentication tokens so that you don't have to type or paste them for subsequent requests.</p>
<h3 id="heading-markdown-support">Markdown Support</h3>
<p>zod-to-openapi and Scalar have <a href="https://scalar.com/products/api-references/markdown">Markdown support</a>. With Markdown, you can include conceptual documentation and more information about API endpoints that are not supported by the default documentation components like headers and the request body.</p>
<p>You can embed images, include tables and format text in the documentation. You can use Markdown to include notes that explain concepts related to the API.</p>
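<p>As a sketch of what this looks like in practice, the snippet below attaches a Markdown description to a hypothetical health-check endpoint, assuming the <code>registry</code> configured in the setup section of this article. Scalar renders the blockquote and table on the documentation page:</p>
<pre><code class="language-typescript">import z from "zod";
import { registry } from "../lib/openapi.ts";

registry.registerPath({
  method: "get",
  path: "/api/health", // hypothetical endpoint, for illustration only
  summary: "Health check",
  description: `
Returns the current service status.

&gt; **Note:** this endpoint is *not* rate-limited.

| Field  | Meaning               |
| ------ | --------------------- |
| status | \`"ok"\` when healthy |
`,
  responses: {
    200: {
      description: "Service is healthy",
      content: {
        "application/json": {
          schema: z.object({ status: z.string() }),
        },
      },
    },
  },
});
</code></pre>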
<h2 id="heading-how-to-create-the-api-documentation">How to Create the API Documentation</h2>
<p>In this section, you will create an Express CRUD API project that uses zod-to-openapi and Scalar to document its APIs. To practice along, clone the Express starter project from GitHub at <a href="https://github.com/orimdominic/freeCodeCamp-zod-to-openapi-scalar#">orimdominic/freeCodeCamp-zod-to-openapi-scalar</a>.</p>
<h3 id="heading-set-up-the-project">Set up the Project</h3>
<p>After cloning the project:</p>
<ul>
<li><p>install its dependencies using your preferred Node.js package manager</p>
</li>
<li><p>start the server using the <code>serve</code> script</p>
</li>
</ul>
<pre><code class="language-shell"># Install dependencies
npm install

# Start the application
npm run serve
</code></pre>
<p>You should see the following output on the terminal if the application runs successfully:</p>
<pre><code class="language-shell">&gt; freecodecamp-zod-to-openapi-scalar@1.0.0 serve
&gt; node --experimental-strip-types --watch src/index.ts

Listening on :3000
</code></pre>
<p>The project has two modules: Users and Pets.</p>
<p>The router configuration for each module is defined in the <code>router.ts</code> file, while the route controllers are located in the <code>controllers.ts</code> file within each module's folder under <code>src/modules</code>. The controllers do not contain business logic; they simply respond with JSON values generated by the <a href="https://fakerjs.dev/">Faker</a> library.</p>
<h3 id="heading-how-to-set-up-zod-to-openapi">How to Set Up zod-to-openapi</h3>
<p>Install <a href="https://github.com/asteasolutions/zod-to-openapi">asteasolutions/zod-to-openapi</a> using your preferred Node.js package manager. If you use npm, run the code snippet below in your terminal:</p>
<pre><code class="language-shell">npm install @asteasolutions/zod-to-openapi
</code></pre>
<p>After the installation, create a folder called <code>lib</code> (library) in the <code>src</code> folder. In the <code>lib</code> folder, create a file called <code>openapi.ts</code>. The file will house the code that sets up zod-to-openapi for collating the API documentation and generating the OpenAPI specification.</p>
<p>Copy and paste the code snippet below into <code>src/lib/openapi.ts</code>:</p>
<pre><code class="language-typescript">import z from "zod";
import {
  extendZodWithOpenApi,
  OpenApiGeneratorV31,
  OpenAPIRegistry,
} from "@asteasolutions/zod-to-openapi";

extendZodWithOpenApi(z);

export const registry = new OpenAPIRegistry();

export const bearerAuth = registry.registerComponent("securitySchemes", "bearerAuth", {
  type: "http",
  scheme: "bearer",
  bearerFormat: "JWT",
});

export function generateOpenAPIDocument() {
  const generator = new OpenApiGeneratorV31(registry.definitions);

  return generator.generateDocument({
    openapi: "3.1.0",
    info: {
      title: "Users API",
      version: "1.0.0",
      description: `Backend API documentation for users application.`,
    },
    tags: [
      {
        name: "users",
        description: "For operations carried out by admin users",
      },
    ],
    servers: [
      {
        url: "http://localhost:3000",
        description: "Local server",
      },
    ],
  });
}
</code></pre>
<blockquote>
<p>zod-to-openapi v8 requires zod v4. If you use zod v3, you should use v7.3.4 of zod-to-openapi.</p>
</blockquote>
<p><code>extendZodWithOpenApi</code> is a method provided by <code>zod-to-openapi</code> that enhances Zod schemas by adding an <code>openapi</code> method. The <code>openapi</code> method allows you to attach additional documentation to request payloads, responses, parameters, and their properties, which are then displayed in the API documentation rendered by Scalar.</p>
<p>It is important to call <code>extendZodWithOpenApi</code> before loading any files that use the <code>openapi</code> method, otherwise accessing <code>openapi</code> on Zod objects will result in errors.</p>
<p>An alternative is to use the <code>meta</code> method on zod v4 schemas for the additional documentation. For example, <code>schemaOne</code> and <code>schemaTwo</code> in the code snippet below are the same:</p>
<pre><code class="language-typescript">const schemaOne = z
  .string()
  .openapi({ description: 'Name of the user', example: 'Test' });

const schemaTwo = z
  .string()
  .meta({description: 'Name of the user', example: 'Test' });
</code></pre>
<p>The <code>meta</code> method supports all metadata information that you'd normally pass to <code>openapi</code> and will produce exactly the same results.</p>
<p>The <code>OpenAPIRegistry</code> is a utility used to collate API documentation, which is later passed to an OpenAPI specification generator. <code>registry</code> is created from <code>OpenAPIRegistry</code>, exported, and used to document API endpoints and components in modules where it is imported.</p>
<pre><code class="language-plaintext">export const registry = new OpenAPIRegistry();
</code></pre>
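<p>The same <code>registry</code> can also register reusable schema components by name. As a sketch (with a hypothetical <code>User</code> shape, not the starter project's real schema), the snippet below registers a schema that then appears under <code>components.schemas</code> in the generated document and can be referenced by other definitions:</p>
<pre><code class="language-typescript">import z from "zod";
import { registry } from "./openapi.ts";

// Hypothetical User shape, for illustration only
export const UserSchema = registry.register(
  "User",
  z.object({
    id: z.string().openapi({ example: "usr_123" }),
    name: z.string().openapi({ example: "Ada" }),
  }),
);
</code></pre>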
<p><code>bearerAuth</code> is a component created by the <code>registry</code> to represent JWT authentication. When <code>bearerAuth</code> is included in the documentation of an endpoint, the UI renders an input for submitting an authentication token for authenticated requests as shown in the image below.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/66e28b713f978a0e2cd2b763/2bfe72ca-a5ee-4ce5-ba5f-6ed6e1c52774.png" alt="Scalar input UI for submitting JWT" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>In the <code>registerComponent</code> method, the first argument, <code>"securitySchemes"</code>, registers a security scheme component. <code>"bearerAuth"</code>, the second argument, is the name given to the component, and it can be changed to a name that you prefer. It appears in the top right of the authentication token input, shown in the image above. The third argument to <code>registerComponent</code> is an object that defines the component.</p>
<p>When the <code>generateOpenAPIDocument</code> function is executed, it collates all the registry API definitions in the project, generates the OpenAPI specification through <code>generator.generateDocument</code>, and returns the specification as JSON.</p>
<p>The <code>tags</code> property in <code>generator.generateDocument</code> organizes API endpoints into sections on the documentation UI. For example, all API endpoints with the <code>users</code> tag in their registry definition will be placed under the <code>users</code> section of the UI. <code>description</code> can be written in Markdown within template literals.</p>
<p>The <code>servers</code> property is a collection of the servers connected to the application. If you have multiple servers, you have the option of selecting what server to use for the base URL in the documentation UI for making API requests from it.</p>
<p>With this setup in place, when endpoints are documented with the registry, <code>generateOpenAPIDocument</code> will have an OpenAPI specification to return.</p>
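<p>To make the flow concrete, the value <code>generateOpenAPIDocument</code> returns is a plain JSON object. Below is a trimmed sketch of its shape (the real document also carries full request and response schemas under each operation):</p>
<pre><code class="language-typescript">// Trimmed sketch of a generated OpenAPI 3.1 document
const sampleSpec = {
  openapi: "3.1.0",
  info: { title: "Users API", version: "1.0.0" },
  servers: [{ url: "http://localhost:3000", description: "Local server" }],
  paths: {
    "/api/users": {
      post: { summary: "Create user", tags: ["users"] },
    },
  },
  components: {
    securitySchemes: {
      bearerAuth: { type: "http", scheme: "bearer", bearerFormat: "JWT" },
    },
  },
};

// Scalar walks `paths` to build the sidebar and
// `components.securitySchemes` to render the auth token input.
console.log(Object.keys(sampleSpec.paths)); // [ '/api/users' ]
</code></pre>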
<h3 id="heading-how-to-generate-the-documentation-ui-with-scalar">How to Generate the Documentation UI with Scalar</h3>
<p>In this section, you will set up Scalar and connect it to the return value of <code>generateOpenAPIDocument</code>. You will also connect Scalar with an Express route, allowing the application to serve the documentation UI at that route.</p>
<p>Scalar has an <a href="https://scalar.com/products/api-references/integrations/express">Express API reference</a> library that makes it easier for you to connect it with the OpenAPI specification and Express. Install <code>scalar/express-api-reference</code> using your preferred Node.js package manager. If you use npm, use the snippet below:</p>
<pre><code class="language-shell">npm install @scalar/express-api-reference 
</code></pre>
<p>Copy and paste the code snippet below into <code>src/app.ts</code>:</p>
<pre><code class="language-typescript">import express from "express";
import router from "./router.ts";
import { generateOpenAPIDocument } from "./lib/openapi.ts";
import { apiReference } from "@scalar/express-api-reference";

const app = express();
app.use(express.json(), express.urlencoded({ extended: true }));

app.get("/", function (req, res) {
  return res.send("OK");
});

app.use("/api", router);

const apiDocJsonContent = generateOpenAPIDocument();

app.use(
  "/docs", // documentation route
  apiReference({
    content: apiDocJsonContent,
    title: "Users API",
    pageTitle: "Users API",
  }),
);

export default app;
</code></pre>
<p>In the code snippet above, <code>generateOpenAPIDocument</code> is imported from <code>src/lib/openapi.ts</code>, and <code>apiReference</code> is imported from <code>@scalar/express-api-reference</code>. When executed, <code>generateOpenAPIDocument</code> returns the OpenAPI specification, which is stored in <code>apiDocJsonContent</code> so that it is generated once and reused on every request, improving performance.</p>
<p>A <code>GET /docs</code> route is then created, with the Scalar <code>apiReference</code> function acting as the controller. It accepts <code>apiDocJsonContent</code> and returns a web page whenever the <code>GET /docs</code> route is accessed.</p>
<p>With this setup in place, run the application using <code>npm run serve</code> and visit the documentation page at <code>http://localhost:3000/docs</code> in your browser. You should see a user interface similar to the image below:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/66e28b713f978a0e2cd2b763/387698d3-ddcc-4263-8377-428399010892.png" alt="Scalar documentation starter UI" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>To view the codebase at this point, run <code>git checkout set-up-openapi-scalar</code>.</p>
<h3 id="heading-document-the-endpoints">Document the Endpoints</h3>
<p>You have set up zod-to-openapi and connected it with Scalar. You have also hooked it up with a route in the backend application. In this section, you will write code to document the endpoints in the application for generating the OpenAPI specification and rendering it on the documentation UI.</p>
<p>To document the route for creating users (<code>POST /api/users</code>), import <code>registry</code>, the schemas, and zod in <code>src/modules/users/router.ts</code> using the snippet below:</p>
<pre><code class="language-typescript">import z from "zod";
import { registry } from "../../lib/openapi.ts";
import { 
    UserSchema, 
    UserListItemSchema,
    UpdateUserSchema, 
    CreateUserSchema, 
} from "./types.ts";
</code></pre>
<p>Copy and paste the code below above the create user route to document the create user endpoint:</p>
<pre><code class="language-typescript">registry.registerPath({
  method: "post",
  path: "/api/users",
  summary: "Create user",
  tags: ["users"],
  request: {
    body: {
      content: {
        "application/json": {
          schema: CreateUserSchema,
        },
      },
      description: "Create user payload",
      required: true,
    },
  },
  responses: {
    201: {
      description: "User created",
      content: {
        "application/json": {
          schema: z.object({
            message: z.string().openapi({ example: "User created" }),
            data: UserSchema,
          }),
        },
      },
    },
  },
});
</code></pre>
<p>Visit the documentation page and you will see a web page similar to the image below:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/66e28b713f978a0e2cd2b763/22e0f15f-a7ee-4806-ab86-cfe922a3feda.png" alt="Scalar documentation UI for the create user route" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>The UI results of some of the input fields of <code>registry.registerPath</code> are labelled in the image above. The description in the API endpoint is italicised because its value is Markdown in a template string.</p>
<p>By registering the route path for creating users with <code>registry.registerPath</code> and filling its values, you added the documentation of the route to the registry definitions and that makes it included in the OpenAPI specification.</p>
<p>To test the endpoint from the documentation UI:</p>
<ul>
<li><p>click the <em>Test Request</em> button</p>
</li>
<li><p>fill in the payload in the dialog that appears and</p>
</li>
<li><p>click the <em>Send</em> button</p>
</li>
</ul>
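<p>Outside the UI, you can exercise the same endpoint with a few lines of <code>fetch</code> (available globally in Node.js 18+). The payload below is hypothetical; use the fields your <code>CreateUserSchema</code> actually defines, and make sure the starter project's server is running locally first:</p>
<pre><code class="language-typescript">// Assumes the starter project's server is listening on localhost:3000
const res = await fetch("http://localhost:3000/api/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Test", email: "test@example.com" }), // hypothetical payload
});

console.log(res.status, await res.json()); // logs the status code and response body
</code></pre>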
<p>To document the get user by <code>id</code> route (<code>GET /api/users/:id</code>), import <code>bearerAuth</code> from <code>src/lib/openapi.ts</code>, then copy the code snippet below and paste it above the get user by id route definition.</p>
<pre><code class="language-typescript">registry.registerPath({
  method: "get",
  path: "/api/users/{userId}",
  summary: "Get user details by id",
  tags: ["users"],
  security: [{ [bearerAuth.name]: [] }],
  request: {
    params: z.object({ userId: z.int() }),
  },
  responses: {
    200: {
      description: "User retrieved",
      content: { "application/json": { schema: UserSchema } },
    },
  },
});
</code></pre>
<p>When the <code>request.params</code> field is defined using a Zod object, it generates an input UI on the documentation web page that enables users to provide values for path parameters such as <code>userId</code>, highlighted in the image below:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/66e28b713f978a0e2cd2b763/ff4db093-580b-47eb-bd91-2216966566c0.png" alt="Request path parameter from registry definition on Scalar" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>The complete code for the documentation of all endpoints in this section can be accessed by checking out the <code>complete-project</code> branch: run <code>git checkout complete-project</code> in your terminal. It contains documentation for the endpoint for uploading a user photo, which demonstrates how to document endpoints that accept file uploads.</p>
<h2 id="heading-how-to-use-scalar-with-nestjs">How to Use Scalar with NestJS</h2>
<p>Scalar has a library that integrates with NestJS. You can supply the Swagger document created by <a href="https://docs.nestjs.com/openapi/introduction">@nestjs/swagger</a> to the <a href="https://scalar.com/products/api-references/integrations/nestjs">Scalar NestJS integration library</a> to generate the Scalar documentation UI.</p>
<p>In the root folder of your NestJS project, install the Scalar NestJS integration library:</p>
<pre><code class="language-shell">npm install @scalar/nestjs-api-reference
</code></pre>
<p>Update the <code>main.ts</code> file of your NestJS project with the code snippet below:</p>
<pre><code class="language-typescript">import { NestFactory } from '@nestjs/core';
import { apiReference } from '@scalar/nestjs-api-reference';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const options = new DocumentBuilder()
    .setTitle('Cats example')
    .setDescription('The cats API description')
    .setVersion('1.0')
    .addTag('cats')
    .addBearerAuth()
    .build();

  const openApiSpecification = SwaggerModule.createDocument(app, options);
  
  // integrate the documentation with NestJS
  app.use(
    '/api/docs', // documentation route
    apiReference({
      content: openApiSpecification,
    }),
  )

  await app.listen(3000);
  console.log(`Application is running on: ${await app.getUrl()}`);
}

bootstrap();
</code></pre>
<p>With this setup in place, you can visit the <code>/api/docs</code> route in your browser to view the Scalar documentation for your NestJS application.</p>
<h2 id="heading-how-to-resolve-content-security-policy-csp-errors-when-used-with-helmet">How to Resolve Content Security Policy (CSP) Errors When Used with Helmet</h2>
<p>If you use <a href="https://www.npmjs.com/package/helmet">Helmet</a> in your Express or NestJS project, you will encounter <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP">CSP</a> errors when you try to render the Scalar documentation UI. To resolve them, update Helmet's <code>contentSecurityPolicy</code> configuration in your code to use the directives object in the code snippet below:</p>
<pre><code class="language-typescript">{
    directives: {
      defaultSrc: [`'self'`],
      styleSrc: [`'self'`, `'unsafe-inline'`],
      imgSrc: [`'self'`, 'data:', 'validator.swagger.io'],
      scriptSrc: [`'self'`, 'https:', `'unsafe-inline'`],
    },
  }
</code></pre>
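<p>For context, here’s a sketch of how that directives object plugs into Helmet in an Express app. The <code>scalarCspDirectives</code> name is my own; adjust the sources to your deployment. Note that CSP keyword sources like <code>'self'</code> and <code>'unsafe-inline'</code> must keep their inner single quotes, while scheme sources like <code>https:</code> do not:</p>

```typescript
// CSP directives permitting the Scalar UI's inline styles and scripts.
// Mirrors the object above; "validator.swagger.io" allows the badge image.
const scalarCspDirectives = {
  defaultSrc: ["'self'"],
  styleSrc: ["'self'", "'unsafe-inline'"],
  imgSrc: ["'self'", "data:", "validator.swagger.io"],
  scriptSrc: ["'self'", "https:", "'unsafe-inline'"],
};

// Registering Helmet with the custom CSP (assumes `app` is your Express app):
// app.use(helmet({ contentSecurityPolicy: { directives: scalarCspDirectives } }));
```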
<h2 id="heading-absence-of-asyncapi-documentation-feature">Absence of AsyncAPI Documentation Feature</h2>
<p>At the time of writing, Scalar does not fully support rendering AsyncAPI specifications for event-driven architecture APIs, although this feature is under development. You can track its progress through the GitHub issue linked in the documentation to stay informed about its release.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You have learned about zod-to-openapi and how it makes generating an OpenAPI specification for your REST APIs easier than writing comments or large YAML files by hand. You also learned how to use the generated specification document to render a beautiful API documentation UI that doubles as a lightweight API client. Endeavour to implement it in your projects that need a documentation uplift.</p>
<p>Feel free to <a href="https://www.linkedin.com/in/orimdominicadah/">connect with me on LinkedIn</a> for questions or clarifications. Thank you for reading this far and I hope this helps you achieve what you intended to achieve. Don’t hesitate to share this article if you feel that it would help someone else out there. Cheers!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build a Custom PDF Text Extractor with Node.js and TypeScript ]]>
                </title>
                <description>
                    <![CDATA[ Extracting text from PDFs sounds simple until you try to do it. And it can be even more challenging for JavaScript developers, with various libraries to choose from and so on. I encountered this problem while I was building my SaaS app. I scoured thr... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-a-custom-pdf-text-extractor-with-nodejs-and-typescript/</link>
                <guid isPermaLink="false">698e11a1f5e2fbcb44cccc0a</guid>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Chidera Humphrey ]]>
                </dc:creator>
                <pubDate>Thu, 12 Feb 2026 17:45:05 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770918274288/fea825aa-9dc4-468b-abbf-04fb1e74ec22.png" medium="image" />
                <content:encoded>
<![CDATA[ <p>Extracting text from PDFs sounds simple until you try to do it. And it can be even more challenging for JavaScript developers, who face many libraries to choose from and few clear recommendations.</p>
<p>I encountered this problem while building my SaaS app. I scoured Stack Overflow, Reddit, and Quora, but didn't find a satisfying answer. Some solutions were impractical, while others required complex configuration.</p>
<p>After going through the struggle, I said, “You know what? Screw it. Let me build my own little PDF parser.” With the help of Claude and Node.js, I built a custom PDF parser for my SaaS app.</p>
<p>In this tutorial, I’ll show you how I built my custom PDF parser using Node.js and how you can do the same.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-why-build-a-custom-pdf-text-extractor">Why Build a Custom PDF Text Extractor?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-sample-of-what-well-be-building">Sample of What We’ll Be Building</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-the-project">Setting Up the Project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-core-implementation-building-the-extractor">Core Implementation: Building the Extractor</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-adding-page-specific-extraction">Adding Page-Specific Extraction</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-adding-a-lightweight-metadata-only-endpoint">Adding a Lightweight Metadata-Only Endpoint</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-adding-searchfind-functionality">Adding Search/Find Functionality</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-the-searchfind-function">Creating the Search/Find Function</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-handling-edge-cases-and-best-practices">Handling Edge Cases and Best Practices</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-best-practices">Best Practices</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-unit-testing-your-pdf-parser">Unit Testing Your PDF Parser</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deploying-your-pdf-parser-api">Deploying Your PDF Parser API</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-next-steps-integrate-into-your-saas">Next Steps: Integrate Into Your SaaS</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-resources">Resources</a></p>
</li>
</ul>
<h2 id="heading-why-build-a-custom-pdf-text-extractor">Why Build a Custom PDF Text Extractor?</h2>
<p>You might ask yourself: "Why build a custom PDF parser when libraries already exist?"</p>
<p>Popular JavaScript PDF parsers have various trade-offs. Here's a quick comparison of common options:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Library</th><th>Text Extraction</th><th>TypeScript Support</th><th>Dependencies</th><th>Layout/Table Support</th><th>Best For</th></tr>
</thead>
<tbody>
<tr>
<td><strong>pdf-parse</strong></td><td>Basic only</td><td>Partial</td><td>None</td><td>Poor</td><td>Quick, simple text extraction</td></tr>
<tr>
<td><strong>pdfjs-dist</strong></td><td>Advanced</td><td>Full</td><td>None</td><td>Moderate</td><td>Custom parsing &amp; rendering</td></tr>
<tr>
<td><strong>pdf2json</strong></td><td>JSON output</td><td>Partial</td><td>None</td><td>Good for structure</td><td>Exporting structured data</td></tr>
<tr>
<td><strong>pdf-text-extract</strong></td><td>Text only</td><td>None</td><td>Requires Poppler</td><td>Basic</td><td>CLI or simple scripts</td></tr>
</tbody>
</table>
</div><p>These libraries work well for specific use cases, but building your own parser still has advantages:</p>
<ul>
<li><p>You choose the tech stack that fits your application</p>
</li>
<li><p>You add only the features your project needs</p>
</li>
</ul>
<p>And the good news is, you can build a JavaScript-native parser for your project's needs without having to rely on system-level dependencies (like Poppler) or adapt libraries built for different ecosystems.</p>
<p>A custom parser gives you full control without the bloat of unnecessary functionality.</p>
<h2 id="heading-sample-of-what-well-be-building">Sample of What We’ll Be Building</h2>
<p>Here’s a screen recording of our text extractor in action:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770821775518/59ccda88-6913-4084-8d73-2141852530f7.gif" alt="Working demo of the PDF extractor" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow along with this tutorial, I assume:</p>
<ul>
<li><p>You have Node.js installed on your machine. If you don’t have Node.js installed, you can install it from the <a target="_blank" href="https://nodejs.org/en/download">official Node.js website</a>.</p>
</li>
<li><p>You know how to write basic TypeScript code.</p>
</li>
</ul>
<h2 id="heading-setting-up-the-project">Setting Up the Project</h2>
<p>In this section, you’ll set up your project. This project uses TypeScript in Node.js rather than JavaScript.</p>
<p>Don’t worry if you don’t know how to configure TypeScript for Node.js. I’ll show you how to do it in this section.</p>
<h3 id="heading-initializing-a-nodejs-app">Initializing a Node.js app</h3>
<p>Open the folder where you want your project to live, and create a Node.js project:</p>
<pre><code class="lang-bash">npm init -y
</code></pre>
<p>Install the necessary packages:</p>
<pre><code class="lang-bash">npm install cors express express-fileupload pdf-parse
</code></pre>
<ul>
<li><p><code>cors</code>: Enables Cross-Origin Resource Sharing, allowing your API to accept requests from different domains or ports.</p>
</li>
<li><p><code>express-fileupload</code>: Middleware for handling file uploads in Express, making it easy to process uploaded PDFs.</p>
</li>
<li><p><code>pdf-parse</code>: A lightweight PDF parsing library for extracting text and metadata from PDF files.</p>
</li>
<li><p><code>express</code>: The web framework for Node.js that handles routing, middleware, and server setup.</p>
</li>
</ul>
<p>Now let’s continue with our installs:</p>
<pre><code class="lang-bash">npm install -D typescript ts-node @types/node @types/express nodemon prettier dotenv @types/cors @types/express-fileupload
</code></pre>
<p>The <code>-D</code> flag directs <code>npm</code> to install these libraries as development dependencies.</p>
<ul>
<li><p><code>ts-node</code>: Lets you run TypeScript code directly in Node.js without compiling to JavaScript first</p>
</li>
<li><p><code>@types/node</code>: Adds TypeScript type definitions for Node.js core modules like <code>fs</code>, <code>path</code>, and <code>http</code></p>
</li>
<li><p><code>@types/express</code>: Provides TypeScript type definitions for the Express.js framework and its middleware</p>
</li>
<li><p><code>nodemon</code>: Automatically restarts your development server whenever you save changes to your code</p>
</li>
<li><p><code>prettier</code>: A code formatter that ensures consistent style and readability across your entire project</p>
</li>
</ul>
<h3 id="heading-configuring-typescript-in-the-nodejs-app">Configuring TypeScript in the Node.js app</h3>
<p>Let’s start by generating a <code>tsconfig.json</code> file:</p>
<pre><code class="lang-bash">npx tsc --init
</code></pre>
<p>TypeScript projects use the <code>tsconfig.json</code> file to manage the project’s settings. The configuration file is located in the root of your project.</p>
<p>After running the command, you should see a <code>tsconfig.json</code> file that looks like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-comment">// Visit https://aka.ms/tsconfig to read more about this file</span>
  <span class="hljs-attr">"compilerOptions"</span>: {
    <span class="hljs-comment">// File Layout</span>
    <span class="hljs-comment">// "rootDir": "./src",</span>
    <span class="hljs-comment">// "outDir": "./dist",</span>

    <span class="hljs-comment">// Environment Settings</span>
    <span class="hljs-comment">// See also https://aka.ms/tsconfig/module</span>
    <span class="hljs-attr">"module"</span>: <span class="hljs-string">"nodenext"</span>,
    <span class="hljs-attr">"target"</span>: <span class="hljs-string">"esnext"</span>,
    <span class="hljs-attr">"types"</span>: [],
    <span class="hljs-comment">// For nodejs:</span>
    <span class="hljs-comment">// "lib": ["esnext"],</span>
    <span class="hljs-comment">// "types": ["node"],</span>
    <span class="hljs-comment">// and npm install -D @types/node</span>

    <span class="hljs-comment">// Other Outputs</span>
    <span class="hljs-attr">"sourceMap"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"declaration"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"declarationMap"</span>: <span class="hljs-literal">true</span>,

    <span class="hljs-comment">// Stricter Typechecking Options</span>
    <span class="hljs-attr">"noUncheckedIndexedAccess"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"exactOptionalPropertyTypes"</span>: <span class="hljs-literal">true</span>,

    <span class="hljs-comment">// Style Options</span>
    <span class="hljs-comment">// "noImplicitReturns": true,</span>
    <span class="hljs-comment">// "noImplicitOverride": true,</span>
    <span class="hljs-comment">// "noUnusedLocals": true,</span>
    <span class="hljs-comment">// "noUnusedParameters": true,</span>
    <span class="hljs-comment">// "noFallthroughCasesInSwitch": true,</span>
    <span class="hljs-comment">// "noPropertyAccessFromIndexSignature": true,</span>

    <span class="hljs-comment">// Recommended Options</span>
    <span class="hljs-attr">"strict"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"jsx"</span>: <span class="hljs-string">"react-jsx"</span>,
    <span class="hljs-attr">"verbatimModuleSyntax"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"isolatedModules"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"noUncheckedSideEffectImports"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"moduleDetection"</span>: <span class="hljs-string">"force"</span>,
    <span class="hljs-attr">"skipLibCheck"</span>: <span class="hljs-literal">true</span>,
  }
}
</code></pre>
<p>Add <code>"node"</code> to the <code>types</code> array like this:</p>
<pre><code class="lang-json"><span class="hljs-string">"types"</span>: [<span class="hljs-string">"node"</span>]
</code></pre>
<p>Then modify your <code>package.json</code> file with the following code:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"main"</span>: <span class="hljs-string">"index.ts"</span>,
    <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"test"</span>: <span class="hljs-string">"echo \"Error: no test specified\" &amp;&amp; exit 1"</span>,
    <span class="hljs-attr">"dev"</span>: <span class="hljs-string">"nodemon --watch src --ext ts,json --exec \"node --loader ts-node/esm src/server.ts\""</span>,
    <span class="hljs-attr">"build"</span>: <span class="hljs-string">"tsc"</span>,
    <span class="hljs-attr">"start"</span>: <span class="hljs-string">"node src/server.js"</span>
  },
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"module"</span>
}
</code></pre>
<p>This ensures that the entry point of your app is a TypeScript file and that you can use <code>import</code> statements instead of <code>require</code>.</p>
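<p>If you want to sanity-check the setup before writing the real server, you can drop a throwaway placeholder into <code>src/server.ts</code> and run <code>npm run dev</code> (this is a hypothetical stand-in, replaced in the next section):</p>

```typescript
// src/server.ts — temporary placeholder to confirm ts-node + ESM are wired up.
// An ESM-style import from a Node core module; `require` would fail here.
import { platform } from "node:os";

const startupMessage = `TypeScript ESM setup works on ${platform()}`;
console.log(startupMessage);
```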
<p>In the next section, you’re going to build the PDF parser.</p>
<h2 id="heading-core-implementation-building-the-extractor">Core Implementation: Building the Extractor</h2>
<p>After configuring your Node.js app, the next step is to build the PDF parser.</p>
<p>Create a new <code>src</code> directory in your Node.js app, and create a <code>server.ts</code> file inside it (the <code>dev</code> script you configured earlier expects <code>src/server.ts</code>).</p>
<p>Now import the necessary packages for building the PDF parser:</p>
<pre><code class="lang-tsx">import express, { type Request, type Response } from "express";
import fileUpload, { type UploadedFile } from "express-fileupload";
import { PDFParse } from "pdf-parse";
import cors from "cors";

const app = express();
const PORT = process.env.PORT || 8080;
</code></pre>
<p>Let’s understand what’s happening:</p>
<ul>
<li><p><code>fileUpload</code> is the module for uploading files in an Express app. The <code>UploadedFile</code> type is a TypeScript type for the uploaded file.</p>
</li>
<li><p><code>PDFParse</code> is the core parsing module. It provides the basic functionality of parsing PDF files.</p>
</li>
<li><p><code>cors</code> is the middleware that restricts cross-origin requests to the origins you specify.</p>
</li>
<li><p>You created an Express app with the line <code>const app = express();</code>.</p>
</li>
<li><p><code>PORT</code> is the port your app will listen on.</p>
</li>
</ul>
<h3 id="heading-configuring-cors-middleware">Configuring CORS Middleware</h3>
<p>Setting up CORS allows requests only from the origins you specify, which protects your API from unwanted cross-site use.</p>
<pre><code class="lang-tsx">app.use(
  cors({
    origin: ["http://localhost:3000", "https://yourwebsite.com"],
  })
);
</code></pre>
<h3 id="heading-implementing-file-upload-middleware">Implementing File Upload Middleware</h3>
<p>To handle file uploads in your API, you’ll use the <code>express-fileupload</code> middleware. This middleware intercepts incoming file uploads and makes them accessible through <code>req.files</code>.</p>
<p>You can run checks on the incoming file, such as file size and number of files.</p>
<pre><code class="lang-tsx">import fileUpload, { type UploadedFile } from "express-fileupload";

app.use(
  fileUpload({
    limits: { fileSize: 50 * 1024 * 1024 }, // 50 MB limit
    abortOnLimit: true,
  })
);
</code></pre>
<p>Key options:</p>
<ul>
<li><p><code>fileSize</code>: Sets the maximum file size allowed (50 MB in this case)</p>
</li>
<li><p><code>abortOnLimit</code>: When <code>true</code>, automatically rejects uploads that exceed the size limit and prevents further processing</p>
</li>
</ul>
<p>Here’s why this is important:</p>
<ul>
<li><p><strong>Security</strong>: Limits prevent server overload from massive files.</p>
</li>
<li><p><strong>Performance</strong>: Automatically rejects oversized PDFs before processing.</p>
</li>
<li><p><strong>User Experience</strong>: Gives clear error messages for files that are too large.</p>
</li>
</ul>
<h3 id="heading-creating-the-parser-logic">Creating the Parser Logic</h3>
<p>The parser logic is the core function that parses the PDFs. It’s an asynchronous function that extracts text content and metadata from a PDF buffer.</p>
<pre><code class="lang-tsx">async function parsePDF(file: Uint8Array) {
  const parser = new PDFParse(file);
  const data = await parser.getText();
  const info = await parser.getInfo({ parsePageInfo: true });
  return { text: data?.text || "", info, numpages: info?.pages || 0 };
}
</code></pre>
<p>Let’s understand what’s happening in the code:</p>
<ul>
<li><p>The function accepts a <code>Uint8Array</code> buffer containing the raw PDF file data.</p>
</li>
<li><p>You initialized a new <code>PDFParse</code> object with the PDF buffer.</p>
</li>
<li><p>You called <code>getText()</code> to extract all text content from the PDF.</p>
</li>
<li><p>You called <code>getInfo()</code> with <code>parsePageInfo: true</code> to retrieve document information, including page count.</p>
</li>
<li><p>You returned an object containing:</p>
<ul>
<li><p><code>text</code>: The extracted text content (or empty string if none found)</p>
</li>
<li><p><code>info</code>: Document metadata (author, title, creation date, and so on)</p>
</li>
<li><p><code>numpages</code>: Total number of pages in the PDF</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-why-is-the-parser-logic-asynchronous">Why is the parser logic asynchronous?</h4>
<p>Both <code>getText()</code> and <code>getInfo()</code> are asynchronous operations. They require time to parse through the PDF document, so <code>await</code> ensures the operations complete before returning results. This prevents blocking your server while processing large PDF files.</p>
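<p>As a side note, the two calls don’t depend on each other, so you could overlap them with <code>Promise.all</code>. Here’s a toy sketch with mocked parser calls (<code>getTextMock</code> and <code>getInfoMock</code> are stand-ins, not part of pdf-parse):</p>

```typescript
// Stand-ins for parser.getText() and parser.getInfo(); each is async.
const getTextMock = async (): Promise<string> => "extracted text";
const getInfoMock = async (): Promise<{ pages: number }> => ({ pages: 3 });

// Promise.all starts both operations before awaiting either,
// so their waits overlap instead of running back to back.
async function parseConcurrently() {
  const [text, info] = await Promise.all([getTextMock(), getInfoMock()]);
  return { text, numpages: info.pages };
}
```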
<h3 id="heading-creating-the-pdf-upload-and-processing-endpoint">Creating the PDF Upload and Processing Endpoint</h3>
<p>Now that you have the core <code>parsePDF()</code> function, you need an endpoint that accepts file uploads and processes them using this function.</p>
<pre><code class="lang-tsx">app.post("/upload", async (req: Request, res: Response) =&gt; {
  try {
    if (!req.files || !("file" in req.files)) {
      return res.status(400).json({
        error: "No PDF file shared.",
        body: `Body is ${JSON.stringify(req.body)}`,
      });
    }

    const pdfFile = req.files.file as UploadedFile;
    const uint8ArrayData = new Uint8Array(pdfFile.data);
    const result = await parsePDF(uint8ArrayData);
    console.log("PDF parsed successfully: ", result);
    res.json({ result, success: true });
  } catch (error) {
    console.error("Error processing PDF:", error);
    if (error instanceof Error) {
      return res.status(500).json({ error: error.message, success: false });
    }
    res.status(500).json({
      error: "Failed to process PDF due to an unknown error.",
      success: false,
    });
  }
});
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ul>
<li><p>You defined a POST route handler at <code>/upload</code> that processes PDF file uploads. The handler uses <code>req.files</code> to access uploaded files and validate that a "file" field exists in the request.</p>
</li>
<li><p>The handler extracts the uploaded PDF file and converts it to a <code>Uint8Array</code> buffer, which is the required format for the <code>parsePDF()</code> function that performs the actual PDF parsing.</p>
</li>
<li><p>You implemented comprehensive error handling with a try-catch block that:</p>
<ul>
<li><p>Logs errors to the console for debugging purposes</p>
</li>
<li><p>Returns specific error messages when the error is an instance of the <code>Error</code> class</p>
</li>
<li><p>Provides a generic error response for unexpected failures while maintaining the <code>success: false</code> flag for consistent client responses</p>
</li>
</ul>
</li>
</ul>
<p>This route handler creates a PDF processing endpoint that validates inputs, processes files efficiently, and provides clear error feedback.</p>
<h3 id="heading-starting-your-server">Starting Your Server</h3>
<p>The last step is to start your Express server and confirm it’s running correctly.</p>
<pre><code class="lang-tsx">// PORT was already declared at the top of server.ts: process.env.PORT || 8080
app.listen(PORT, () =&gt; {
  console.log(`🚀 Server is running on http://localhost:${PORT}`);
});
</code></pre>
<ul>
<li><p><code>app.listen()</code>: Binds the Express server to the specified PORT and starts listening for incoming requests.</p>
</li>
<li><p>PORT configuration: The server uses the <code>PORT</code> environment variable if set, otherwise defaults to <code>8080</code>.</p>
</li>
<li><p>Callback function: Once the server starts, the callback logs a message to the console with the server URL.</p>
</li>
</ul>
<p>Use the following command to start your server:</p>
<pre><code class="lang-bash">npm run dev
</code></pre>
<p>When the server starts successfully, you'll see the message below in your console:</p>
<pre><code class="lang-bash">🚀 Server is running on http://localhost:8080
</code></pre>
<p>Your PDF parser API is now ready to accept file uploads and process PDFs.</p>
<p>You can verify that your PDF parser is working by sending a PDF to the <code>/upload</code> endpoint using Postman or any API client of your choice.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770821817467/2afa85b4-2042-411a-abd2-410f9cab349f.gif" alt="Working demo of the custom PDF extractor" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Congratulations! You’ve built a custom PDF parser.</p>
<p>This PDF parser is sufficient for simple parsing tasks. But you can extend the functionality of the parser to make it more robust.</p>
<p>In the next sections, you’ll add extra features, such as handling corrupt files.</p>
<h2 id="heading-adding-page-specific-extraction">Adding Page-Specific Extraction</h2>
<p>When working with large PDF documents, extracting the entire file can be inefficient and unnecessary. This feature allows users to specify a page range and extract text only from those pages. This makes your parser more flexible and performant for real-world use cases.</p>
<p>For example, a user might want to extract only pages 5-10 from a 100-page report. By adding optional query parameters <code>startPage</code> and <code>endPage</code> to your endpoint, you give users fine-grained control over which portions of a PDF they want parsed.</p>
<p>In this section, you’ll create a page-specific extraction function and an endpoint to handle parameters from request queries.</p>
<h3 id="heading-creating-the-page-specific-extraction-function">Creating the Page-specific extraction function</h3>
<p>The page-specific extraction function is the core function that parses the specified pages.</p>
<pre><code class="lang-tsx">// function to extract text from a range of pages
async function parsePageRangeFromPDF(
  file: Uint8Array,
  startPage: number,
  endPage: number,
) {
  const parser = new PDFParse(file);
  const info = await parser.getInfo({ parsePageInfo: true });
  const totalPages = Array.isArray(info?.pages)
    ? info.pages.length
    : (info?.pages as number) || 0;

  if (startPage &lt; 1 || endPage &gt; totalPages || startPage &gt; endPage) {
    throw new Error(
      `Invalid page range. PDF has ${totalPages} pages. Please provide a valid range where start &gt;= 1, end &lt;= ${totalPages}, and start &lt;= end.`,
    );
  }

  const data = await parser.getText();

  // Note: pdf-parse doesn't provide direct page filtering, so getText() returns all text.
  // For accurate page range extraction, consider using a different PDF library.
  return { text: data?.text || "", startPage, endPage, totalPages };
}
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ol>
<li><p>You defined an <code>async</code> function <code>parsePageRangeFromPDF</code> that extracts text from a specific range of pages within a PDF document. The function accepts a <code>Uint8Array</code> PDF file and two numeric parameters for the start and end page range.</p>
</li>
<li><p>The function uses the <code>PDFParse</code> library to analyze the PDF structure, first extracting metadata, including the total page count, using <code>parser.getInfo()</code>. It then validates that the requested page range falls within the actual PDF boundaries.</p>
</li>
<li><p>After successful validation, the function extracts all text from the PDF using <code>parser.getText()</code>. Because pdf-parse doesn't offer per-page filtering, the function returns the full extracted text along with metadata about the requested range and total pages.</p>
</li>
</ol>
<p>This function creates a reusable utility for extracting specific page ranges from PDFs with proper validation and error handling.</p>
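<p>The range check can also live in a small standalone predicate, which is easier to unit test in isolation. This is a sketch; the <code>isValidPageRange</code> name is mine, not part of the code above:</p>

```typescript
// Returns true when the requested range fits inside the document:
// start is at least 1, end does not pass the last page, and start <= end.
function isValidPageRange(
  startPage: number,
  endPage: number,
  totalPages: number,
): boolean {
  return startPage >= 1 && endPage <= totalPages && startPage <= endPage;
}
```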
<h3 id="heading-creating-the-page-specific-extraction-endpoint">Creating the Page-Specific Extraction Endpoint</h3>
<p>Now that you’ve created the function for parsing specified pages of a PDF, you’ll create the endpoint for accepting uploads and parsing specified pages.</p>
<pre><code class="lang-tsx">// Page range PDF text extraction endpoint
app.post("/upload-page-range", async (req: Request, res: Response) =&gt; {
  try {
    if (!req.files || !("file" in req.files)) {
      return res.status(400).json({
        error: "No PDF file shared.",
      });
    }

    // Get page range from query params or body
    const startPage = parseInt(
      (req.query.startPage as string) || (req.body?.startPage as string) || "1"
    );
    const endPage = parseInt(
      (req.query.endPage as string) || (req.body?.endPage as string) || "1"
    );

    if (isNaN(startPage) || isNaN(endPage)) {
      return res.status(400).json({
        error:
          "Invalid page range. Please provide valid integers for startPage and endPage.",
      });
    }

    const pdfFile = req.files.file as UploadedFile;
    const uint8ArrayData = new Uint8Array(pdfFile.data);
    const result = await parsePageRangeFromPDF(
      uint8ArrayData,
      startPage,
      endPage
    );
    console.log(
      `Pages ${startPage}-${endPage} extracted successfully: `,
      result
    );
    res.json({ result, success: true });
  } catch (error) {
    console.error("Error processing PDF: ", error);
    if (error instanceof Error) {
      return res.status(400).json({ error: error.message, success: false });
    }
    res.status(500).json({
      error: "Failed to process PDF due to an unknown error.",
      success: false,
    });
  }
});
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ol>
<li><p>You defined a POST route handler at <code>/upload-page-range</code> that extracts text from a specific page range within uploaded PDF files. The handler first validates that a PDF file exists in the request using <code>req.files</code>, returning a 400 error if no file is provided.</p>
</li>
<li><p>The function extracts the <code>startPage</code> and <code>endPage</code> parameters from either query parameters or request body, providing default values of "1" if neither is specified. It then validates that both values are valid integers using <code>isNaN()</code> checks. This ensures robust input handling for page range requests.</p>
</li>
<li><p>Once the PDF is converted to a buffer, it's passed to <code>parsePageRangeFromPDF()</code> to extract the requested page range. The API responds with the extracted text and range details, while errors are clearly categorized: validation issues return 400, server problems return 500.</p>
</li>
</ol>
<p>This endpoint creates a specialized PDF processing route that allows clients to extract text from specific page ranges rather than entire documents.</p>
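<p>The query/body parameter handling follows a pattern you can factor into a pure helper for easier testing. Here’s a sketch; the <code>parsePageParams</code> name is hypothetical:</p>

```typescript
// Parses startPage/endPage from raw string inputs, defaulting both to "1".
// Returns null when either value is not a valid integer, so the caller
// can respond with a 400 error.
function parsePageParams(
  startRaw?: string,
  endRaw?: string,
): { startPage: number; endPage: number } | null {
  const startPage = parseInt(startRaw ?? "1", 10);
  const endPage = parseInt(endRaw ?? "1", 10);
  if (Number.isNaN(startPage) || Number.isNaN(endPage)) return null;
  return { startPage, endPage };
}
```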
<p>Now, you can extract from specific pages using request parameters:</p>
<pre><code class="lang-bash">curl -F <span class="hljs-string">"file=@yourfile.pdf"</span> <span class="hljs-string">"http://localhost:8080/upload-page-range?startPage=5&amp;endPage=7"</span>
</code></pre>
<p>or using request body:</p>
<pre><code class="lang-bash">curl -X POST -F <span class="hljs-string">"file=@yourfile.pdf"</span> \
  -F <span class="hljs-string">"startPage=5"</span> \
  -F <span class="hljs-string">"endPage=7"</span> \
  http://localhost:8080/upload-page-range
</code></pre>
<p>In the next section, you’ll add an endpoint for getting only the metadata of an uploaded file.</p>
<h2 id="heading-adding-a-lightweight-metadata-only-endpoint">Adding a Lightweight Metadata-Only Endpoint</h2>
<p>Creating a lightweight metadata-only endpoint allows your users to quickly validate and inspect PDFs without fully processing the document.</p>
<p>This is useful for previewing document info before processing.</p>
<h3 id="heading-creating-the-metadata-extraction-function">Creating the Metadata Extraction Function</h3>
<p>Add a new function that retrieves only document information:</p>
<pre><code class="lang-tsx">async function getPDFMetadata(file: Uint8Array) {
  const parser = new PDFParse(file);
  const info = await parser.getInfo({ parsePageInfo: true });
  return {
    title: info?.info?.Title || "N/A",
    author: info?.info?.Author || "N/A",
    subject: info?.info?.Subject || "N/A",
    creator: info?.info?.Creator || "N/A",
    producer: info?.info?.Producer || "N/A",
    creationDate: convertPDFDateToReadable(info?.info?.CreationDate || "N/A"),
    modificationDate: convertPDFDateToReadable(info?.info?.ModDate || "N/A"),
    pages: info?.total || 0,
  };
}
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ol>
<li><p>You defined an <code>async</code> function <code>getPDFMetadata</code> that extracts and processes metadata from PDF documents. The function accepts a <code>Uint8Array</code> PDF file buffer and uses the <code>PDFParse</code> library to retrieve document information.</p>
</li>
<li><p>The function extracts key PDF metadata fields, including title, author, subject, creator, and producer, providing fallback values of "N/A" when these fields are missing. This ensures the function always returns a complete metadata object, even for PDFs with missing information.</p>
</li>
<li><p>You implemented date processing using a <code>convertPDFDateToReadable</code> helper function to transform the PDF's specialized date formats into human-readable strings. The function returns a structured object containing all extracted metadata along with the total page count.</p>
</li>
</ol>
<p>This utility function provides a clean interface for extracting and normalizing PDF metadata. It makes it easy to access document information like authorship, creation dates, and page counts in a standardized format.</p>
<p>Here’s the <code>convertPDFDateToReadable</code> function:</p>
<pre><code class="lang-tsx">function convertPDFDateToReadable(pdfDateString: string): string {
  try {
    // Remove "D:" prefix if present
    let dateStr = pdfDateString.startsWith("D:")
      ? pdfDateString.slice(2)
      : pdfDateString;

    // Extract the date components (format: YYYYMMDDHHmmss; the time portion
    // is not needed for the dd/mm/yyyy output)
    const year = dateStr.substring(0, 4);
    const month = dateStr.substring(4, 6);
    const day = dateStr.substring(6, 8);

    // Validate date components (parseInt returns NaN for non-numeric input,
    // such as the "N/A" fallback, and NaN comparisons are always false)
    const monthNum = parseInt(month);
    const dayNum = parseInt(day);

    if (
      Number.isNaN(monthNum) ||
      Number.isNaN(dayNum) ||
      monthNum &lt; 1 ||
      monthNum &gt; 12 ||
      dayNum &lt; 1 ||
      dayNum &gt; 31
    ) {
      throw new Error("Invalid date values");
    }

    // Return in dd/mm/yyyy format
    return `${day}/${month}/${year}`;
  } catch (error) {
    console.error("Error converting PDF date:", error);
    return "Invalid date";
  }
}
</code></pre>
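<p>To see the slicing logic in action, here is a condensed, self-contained copy of the conversion (the sample date string is hypothetical; the <code>D:YYYYMMDDHHmmss</code> form with an optional timezone suffix comes from the PDF specification):</p>

```typescript
// Condensed copy of the conversion logic above, for a standalone demo
function toReadable(pdfDate: string): string {
  const s = pdfDate.startsWith("D:") ? pdfDate.slice(2) : pdfDate;
  return `${s.substring(6, 8)}/${s.substring(4, 6)}/${s.substring(0, 4)}`;
}

console.log(toReadable("D:20260122093000+00'00'")); // prints 22/01/2026
```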
<h3 id="heading-creating-the-metadata-endpoint">Creating the Metadata Endpoint</h3>
<p>Create a POST endpoint that accepts file uploads and returns only metadata:</p>
<pre><code class="lang-tsx">app.post("/metadata", async (req: Request, res: Response) =&gt; {
  try {
    if (!req.files || !("file" in req.files)) {
      return res.status(400).json({
        error: "No PDF file shared.",
      });
    }

    const pdfFile = req.files.file as UploadedFile;
    const uint8ArrayData = new Uint8Array(pdfFile?.data);
    const metadata = await getPDFMetadata(uint8ArrayData);

    console.log("PDF metadata extracted successfully: ", metadata);
    res.json({ metadata, success: true });
  } catch (error) {
    console.error("Error extracting metadata:", error);
    if (error instanceof Error) {
      return res.status(500).json({ error: error.message, success: false });
    }
    res.status(500).json({
      error: "Failed to extract metadata due to an unknown error.",
      success: false,
    });
  }
});
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ol>
<li><p>You defined a POST route handler at <code>/metadata</code> that extracts and returns metadata from uploaded PDF files. The handler begins with validation to ensure a PDF file exists in the request using <code>req.files</code>, returning a 400 error with a clear message if no file is provided.</p>
</li>
<li><p>The function converts the uploaded PDF file to a <code>Uint8Array</code> buffer format, which is required by the <code>getPDFMetadata()</code> utility function you created earlier. This conversion ensures the PDF data is in the proper format for the PDF parsing library to process.</p>
</li>
<li><p>After successfully extracting metadata, the route logs the results and returns them in a structured JSON response. The comprehensive error handling catches any issues during processing and returns appropriate 500 errors with descriptive messages while maintaining the consistent response format.</p>
</li>
</ol>
<p>This endpoint provides a dedicated API for extracting PDF metadata like title, author, creation dates, and page counts. This provides your users with an easy way to analyze PDF document properties without needing to parse the entire file content.</p>
<p>Now, you can extract only the metadata of uploaded files:</p>
<pre><code class="lang-bash">curl -X POST -F <span class="hljs-string">"file=@document.pdf"</span> http://localhost:8080/metadata
</code></pre>
<p>Your response should look like this:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"metadata"</span>: {
        <span class="hljs-attr">"title"</span>: <span class="hljs-string">"MSA"</span>,
        <span class="hljs-attr">"author"</span>: <span class="hljs-string">"N/A"</span>,
        <span class="hljs-attr">"subject"</span>: <span class="hljs-string">"N/A"</span>,
        <span class="hljs-attr">"creator"</span>: <span class="hljs-string">"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/144.0.0.0 Safari/537.36"</span>,
        <span class="hljs-attr">"producer"</span>: <span class="hljs-string">"Skia/PDF m144"</span>,
        <span class="hljs-attr">"creationDate"</span>: <span class="hljs-string">"22/01/2026"</span>,
        <span class="hljs-attr">"modificationDate"</span>: <span class="hljs-string">"22/01/2026"</span>,
        <span class="hljs-attr">"pages"</span>: <span class="hljs-number">26</span>
    },
    <span class="hljs-attr">"success"</span>: <span class="hljs-literal">true</span>
}
</code></pre>
<h2 id="heading-adding-searchfind-functionality">Adding Search/Find Functionality</h2>
<p>When working with large PDFs, finding specific information manually can be time-consuming. The search functionality allows users to locate keywords within a PDF and get immediate results showing which pages contain the keyword and how many times it appears.</p>
<p>This is especially valuable for research, compliance, or document analysis tasks.</p>
<p>For example, a user may wish to find all instances of "invoice" in a 50-page financial report, or locate "clause 3.2" in a legal document. By adding a dedicated search endpoint that accepts a PDF file and a keyword, you give clients the ability to quickly navigate large documents without reading through every page.</p>
<p>In this section, you'll create a search function that finds keywords within PDF text and an endpoint that accepts file uploads along with search queries.</p>
<h3 id="heading-creating-the-searchfind-function">Creating the Search/Find Function</h3>
<p>The search function is the core utility that finds keywords within PDF documents and returns detailed results about their locations.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">searchPDFText</span>(<span class="hljs-params">
  file: <span class="hljs-built_in">Uint8Array</span>,
  searchQuery: <span class="hljs-built_in">string</span>,
  caseSensitive: <span class="hljs-built_in">boolean</span> = <span class="hljs-literal">false</span>
</span>) </span>{
  <span class="hljs-keyword">const</span> parser = <span class="hljs-keyword">new</span> PDFParse(file);
  <span class="hljs-keyword">const</span> info = <span class="hljs-keyword">await</span> parser.getInfo({ parsePageInfo: <span class="hljs-literal">true</span> });
  <span class="hljs-keyword">const</span> totalPages = <span class="hljs-built_in">Array</span>.isArray(info?.pages)
    ? info.pages.length
    : (info?.pages <span class="hljs-keyword">as</span> <span class="hljs-built_in">number</span>) || <span class="hljs-number">0</span>;

  <span class="hljs-keyword">const</span> results = {
    query: searchQuery,
    caseSensitive,
    matchCount: <span class="hljs-number">0</span>,
    matches: [] <span class="hljs-keyword">as</span> <span class="hljs-built_in">Array</span>&lt;{
      page: <span class="hljs-built_in">number</span>;
      text: <span class="hljs-built_in">string</span>;
      position: <span class="hljs-built_in">number</span>;
    }&gt;,
  };

  <span class="hljs-comment">// Extract text from all pages</span>
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> page = <span class="hljs-number">1</span>; page &lt;= totalPages; page++) {
    <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> parser.getText();
    <span class="hljs-keyword">const</span> pageText = data?.text || <span class="hljs-string">""</span>;

    <span class="hljs-comment">// Determine search text based on case sensitivity</span>
    <span class="hljs-keyword">const</span> searchText = caseSensitive ? searchQuery : searchQuery.toLowerCase();
    <span class="hljs-keyword">const</span> compareText = caseSensitive ? pageText : pageText.toLowerCase();

    <span class="hljs-keyword">let</span> searchIndex = <span class="hljs-number">0</span>;
    <span class="hljs-keyword">while</span> ((searchIndex = compareText.indexOf(searchText, searchIndex)) !== <span class="hljs-number">-1</span>) {
      <span class="hljs-comment">// Extract context (100 characters before and after)</span>
      <span class="hljs-keyword">const</span> startContext = <span class="hljs-built_in">Math</span>.max(<span class="hljs-number">0</span>, searchIndex - <span class="hljs-number">50</span>);
      <span class="hljs-keyword">const</span> endContext = <span class="hljs-built_in">Math</span>.min(pageText.length, searchIndex + searchQuery.length + <span class="hljs-number">50</span>);
      <span class="hljs-keyword">const</span> contextText = pageText.substring(startContext, endContext);

      results.matches.push({
        page,
        text: contextText.trim(),
        position: searchIndex,
      });

      results.matchCount++;
      searchIndex += searchText.length;
    }
  }

  <span class="hljs-keyword">return</span> results;
}
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ol>
<li><p>You defined an <code>async</code> function <code>searchPDFText</code> that performs text search within PDF documents with optional case sensitivity. The function accepts a PDF file buffer, a search query string, and a <code>caseSensitive</code> parameter with a default value of <code>false</code> for more flexible searching.</p>
</li>
<li><p>The function uses the <code>PDFParse</code> library to first extract the total page count from the PDF metadata. It then initializes a results object to track the search query, case sensitivity setting, total match count, and individual matches with their page numbers, context text, and positions.</p>
</li>
<li><p>For each page in the PDF, the function extracts text and performs the search using either case-sensitive or case-insensitive comparison based on the parameter. When matches are found, it captures a 100-character context window around each match (50 characters before and after) and records the page number, position, and contextual text in the results.</p>
</li>
</ol>
<p>This function creates a comprehensive PDF search utility that can locate specific text within documents, while providing contextual snippets for each match. This makes it useful for document analysis and content retrieval applications.</p>
<h3 id="heading-creating-the-searchfind-endpoint">Creating the Search/Find Endpoint</h3>
<p>Now that you've created the search function, you'll create an endpoint that accepts file uploads along with a search query. The endpoint also supports case-sensitive searches.</p>
<pre><code class="lang-typescript">app.post(<span class="hljs-string">"/search"</span>, <span class="hljs-keyword">async</span> (req: Request, res: Response) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">if</span> (!req.files || !(<span class="hljs-string">"file"</span> <span class="hljs-keyword">in</span> req.files)) {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({
        error: <span class="hljs-string">"No PDF file shared."</span>,
      });
    }

    <span class="hljs-comment">// Get search query and options</span>
    <span class="hljs-keyword">const</span> query = (req.query.query <span class="hljs-keyword">as</span> <span class="hljs-built_in">string</span>) || (req.body?.query <span class="hljs-keyword">as</span> <span class="hljs-built_in">string</span>);
    <span class="hljs-keyword">const</span> caseSensitive =
      (req.query.caseSensitive <span class="hljs-keyword">as</span> <span class="hljs-built_in">string</span>) === <span class="hljs-string">"true"</span> ||
      req.body?.caseSensitive === <span class="hljs-literal">true</span>;

    <span class="hljs-keyword">if</span> (!query || query.trim() === <span class="hljs-string">""</span>) {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({
        error: <span class="hljs-string">"Search query is required."</span>,
      });
    }

    <span class="hljs-keyword">const</span> pdfFile = req.files.file <span class="hljs-keyword">as</span> UploadedFile;
    <span class="hljs-keyword">const</span> unit8ArrayData = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Uint8Array</span>(pdfFile?.data);
    <span class="hljs-keyword">const</span> results = <span class="hljs-keyword">await</span> searchPDFText(unit8ArrayData, query, caseSensitive);

    <span class="hljs-keyword">if</span> (results.matchCount === <span class="hljs-number">0</span>) {
      <span class="hljs-keyword">return</span> res.json({
        result: results,
        success: <span class="hljs-literal">true</span>,
        message: <span class="hljs-string">"No matches found."</span>,
      });
    }

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Found <span class="hljs-subst">${results.matchCount}</span> matches for "<span class="hljs-subst">${query}</span>"`</span>);
    res.json({ result: results, success: <span class="hljs-literal">true</span> });
  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Error searching PDF:"</span>, error);
    <span class="hljs-keyword">if</span> (error <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Error</span>) {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ error: error.message, success: <span class="hljs-literal">false</span> });
    }
    res.status(<span class="hljs-number">500</span>).json({
      error: <span class="hljs-string">"Failed to search PDF due to an unknown error."</span>,
      success: <span class="hljs-literal">false</span>,
    });
  }
});
</code></pre>
<p>Let's break down what's happening in this code:</p>
<ol>
<li><p>You defined a POST route handler at <code>/search</code> that enables full-text search within uploaded PDF documents. The handler begins with validation to ensure that both a PDF file and a search query are provided, returning 400 errors with descriptive messages if either is missing or empty.</p>
</li>
<li><p>The function extracts the search query and <code>caseSensitive</code> option from either query parameters or the request body, with proper type conversion for the boolean flag. It converts the uploaded PDF to a <code>Uint8Array</code> buffer and passes it to your <code>searchPDFText()</code> utility function along with the search parameters.</p>
</li>
<li><p>The handler provides informative responses based on search results: returning a success response with a "No matches found" message when no matches are detected, or returning the full results when matches exist. Error handling differentiates between client errors (400) for invalid inputs and server errors (500) for processing failures.</p>
</li>
</ol>
<p>This endpoint creates a powerful PDF search API that allows clients to locate specific text within documents with configurable case sensitivity, providing contextual matches and comprehensive results for document analysis applications.</p>
<p>Now, you can search for keywords within PDFs using query parameters.</p>
<p>Search for “example” (case-insensitive):</p>
<pre><code class="lang-bash">curl -F <span class="hljs-string">"file=@document.pdf"</span> <span class="hljs-string">"http://localhost:8080/search?query=example"</span>
</code></pre>
<p>Search for “Example” (case-sensitive):</p>
<pre><code class="lang-bash">curl -F <span class="hljs-string">"file=@document.pdf"</span> <span class="hljs-string">"http://localhost:8080/search?query=Example&amp;caseSensitive=true"</span>
</code></pre>
<p>Or you can pass the same values in the request body:</p>
<pre><code class="lang-bash">curl -X POST -F <span class="hljs-string">"file=@document.pdf"</span> \
  -F <span class="hljs-string">"query=PDF"</span> \
  -F <span class="hljs-string">"caseSensitive=true"</span> \
  http://localhost:8080/search
</code></pre>
<p>Your response should look like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"result"</span>: {
    <span class="hljs-attr">"query"</span>: <span class="hljs-string">"PDF"</span>,
    <span class="hljs-attr">"caseSensitive"</span>: <span class="hljs-literal">false</span>,
    <span class="hljs-attr">"matchCount"</span>: <span class="hljs-number">3</span>,
    <span class="hljs-attr">"matches"</span>: [
      {
        <span class="hljs-attr">"page"</span>: <span class="hljs-number">1</span>,
        <span class="hljs-attr">"text"</span>: <span class="hljs-string">"...This is a PDF document. The PDF format is..."</span>,
        <span class="hljs-attr">"position"</span>: <span class="hljs-number">10</span>
      },
      {
        <span class="hljs-attr">"page"</span>: <span class="hljs-number">2</span>,
        <span class="hljs-attr">"text"</span>: <span class="hljs-string">"...Learn more about PDF standards..."</span>,
        <span class="hljs-attr">"position"</span>: <span class="hljs-number">25</span>
      }
    ]
  },
  <span class="hljs-attr">"success"</span>: <span class="hljs-literal">true</span>
}
</code></pre>
<p>You’ve now added three important features to your PDF parser.</p>
<p>In the next section, we’ll look at handling edge cases.</p>
<h2 id="heading-handling-edge-cases-and-best-practices">Handling Edge Cases and Best Practices</h2>
<p>When building your custom PDF parser, there are some edge cases you should keep in mind if you want to build a more robust and reliable parser.</p>
<p>Below are some edge cases to watch out for:</p>
<h3 id="heading-corrupted-or-malformed-pdfs">Corrupted or Malformed PDFs</h3>
<p>Some users may upload corrupted PDFs – that is, PDFs with invalid structure or corrupted headers. This can cause errors during processing.</p>
<p>You can wrap your parsing operations in a <code>try-catch</code> block to handle the parsing errors gracefully. Also, you’ll want to provide clear error messages that distinguish corrupted files from other errors.</p>
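<p>One way to keep those categories separate is a small classifier, sketched below. The matched phrases are assumptions, so check the actual messages your parser throws:</p>

```typescript
// Sketch: map corrupt-file errors to a client-facing 422 and everything else
// to a generic 500. The regex phrases are assumptions, not the library's exact
// error messages.
function classifyParseError(error: unknown): { status: number; message: string } {
  const msg = error instanceof Error ? error.message : String(error);
  if (/invalid pdf|corrupt|bad xref|malformed/i.test(msg)) {
    return { status: 422, message: "The uploaded PDF appears to be corrupted or malformed." };
  }
  return { status: 500, message: "Failed to process the PDF." };
}
```

<p>Your route's <code>catch</code> block can then call this helper instead of always responding with 500.</p>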
<h3 id="heading-password-protected-pdfs">Password-Protected PDFs</h3>
<p>PDFs can be encrypted with user or owner passwords. This poses a challenge as <code>pdf-parse</code> has limited support for password-protected files.</p>
<p>There are two ways you can solve this problem:</p>
<ul>
<li><p>Implement a mechanism that rejects password-protected files.</p>
</li>
<li><p>Accept a password query you can use to decrypt the files.</p>
</li>
</ul>
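<p>For the rejection approach, a cheap pre-check is possible because encrypted PDFs reference an <code>/Encrypt</code> dictionary in the file trailer. The sketch below scans the raw bytes for that marker; it can false-positive if the literal text appears inside a content stream, so treat it as a hint rather than a guarantee:</p>

```typescript
// Heuristic sketch: flag files that appear to carry an /Encrypt dictionary
function looksEncrypted(data: Uint8Array): boolean {
  return Buffer.from(data).toString("latin1").includes("/Encrypt");
}
```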
<h3 id="heading-scanned-pdfs-image-based">Scanned PDFs (Image-Based)</h3>
<p>PDFs created from scanned documents are images with no extractable text. If you try to parse these documents as is, you’ll get empty or minimal text.</p>
<p>You can implement OCR (Optical Character Recognition) to extract text from scanned PDFs.</p>
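<p>A simple way to decide when to route a file to OCR is to compare the amount of extracted text against the page count. This is a hypothetical heuristic, and the threshold is an assumption you should tune for your documents:</p>

```typescript
// Hypothetical pre-check: if extraction yields almost no text per page, the
// document is likely image-based and should go to an OCR step (for example
// with tesseract.js). The 20-chars-per-page threshold is an assumption.
function needsOCR(extractedText: string, pageCount: number): boolean {
  const charsPerPage = extractedText.trim().length / Math.max(pageCount, 1);
  return charsPerPage < 20;
}
```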
<h3 id="heading-special-characters-and-encoding">Special Characters and Encoding</h3>
<p>Your users may upload PDFs that contain special characters, Unicode symbols, or non-Latin scripts. If your extraction function doesn’t support such characters, your users may lose a good chunk of their files.</p>
<p>You’ll want to make sure that your text extraction can handle UTF-8 encoding and various character sets.</p>
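<p>A defensive normalization pass is one way to handle this. The sketch below uses Unicode NFC normalization so visually identical sequences compare equal, and strips stray control characters that some PDF generators emit:</p>

```typescript
// Sketch: NFC collapses decomposed sequences ("e" + combining accent) into
// single code points; the regex drops control characters but keeps \t, \n, \r
function normalizeExtractedText(text: string): string {
  return text
    .normalize("NFC")
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
}
```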
<h2 id="heading-best-practices">Best Practices</h2>
<p>Below are some best practices to adopt when building your own custom PDF parser:</p>
<p>1. Validate incoming files before processing:</p>
<pre><code class="lang-tsx">function validatePDFFile(pdfFile: UploadedFile): { valid: boolean; error?: string } {
  // Check MIME type
  if (pdfFile.mimetype !== "application/pdf") {
    return { valid: false, error: "Invalid MIME type. Expected application/pdf" };
  }

  // Check file size (already limited by middleware, but double-check)
  const maxSize = 50 * 1024 * 1024; // 50 MB
  if (pdfFile.size &gt; maxSize) {
    return { valid: false, error: "File exceeds maximum size of 50 MB" };
  }

  // Check for empty file
  if (pdfFile.size === 0) {
    return { valid: false, error: "File is empty" };
  }

  // Check file signature (PDF magic bytes)
  const data = new Uint8Array(pdfFile.data); // Buffer is a Uint8Array subclass
  const header = String.fromCharCode(...data.slice(0, 4));
  if (header !== "%PDF") {
    return { valid: false, error: "Invalid PDF file format" };
  }

  return { valid: true };
}
</code></pre>
<p>2. Implement request timeouts to avoid server hangs:</p>
<pre><code class="lang-tsx">// Set timeout for long-running PDF operations
const parseWithTimeout = (file: Uint8Array, timeoutMs = 30000) =&gt; {
  return Promise.race([
    parsePDF(file),
    new Promise((_, reject) =&gt;
      setTimeout(() =&gt; reject(new Error("PDF parsing timeout")), timeoutMs)
    ),
  ]);
};
</code></pre>
<p>3. Implement rate limiting to avoid abuse. You can use the <code>express-rate-limit</code> library to apply rate limiting to your Express apps.</p>
<pre><code class="lang-tsx">import rateLimit from 'express-rate-limit';

const app = express();

// Create the rate limiting middleware
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per `windowMs`
  message:
    'Too many requests from this IP, please try again after 15 minutes',
  standardHeaders: true, // Enable standard RateLimit headers (draft-7)
  legacyHeaders: false, // Disable legacy X-RateLimit-* headers
});

// Apply the rate limiting middleware to all requests
app.use(limiter);
</code></pre>
<p>4. Sanitize each keyword or search query to avoid injection attacks.</p>
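<p>A minimal sanitizer for the <code>/search</code> endpoint’s query might cap the length and strip control characters before the string reaches your logs or any downstream tooling (the 200-character cap is an arbitrary assumption):</p>

```typescript
// Sketch: strip control characters, trim whitespace, and cap the length
function sanitizeQuery(raw: string, maxLength = 200): string {
  return raw
    .replace(/[\u0000-\u001F\u007F]/g, "") // drop control characters
    .trim()
    .slice(0, maxLength);
}
```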
<h2 id="heading-unit-testing-your-pdf-parser"><strong>Unit Testing Your PDF Parser</strong></h2>
<p>Testing is critical when building PDF processing tools, as real-world PDFs vary widely in structure, encoding, and complexity. Jest provides an excellent framework for testing Express endpoints and ensuring your extraction logic works reliably across different scenarios.</p>
<h3 id="heading-setting-up-jest-tests">Setting Up Jest Tests</h3>
<p>The test suite I've created uses Jest with Supertest (an HTTP assertion library) to simulate requests to your API endpoints without running a server.</p>
<p>To start, install Jest, Supertest, and their types:</p>
<pre><code class="lang-bash">npm install --save-dev jest @types/jest supertest @types/supertest ts-jest
</code></pre>
<p>Then update your <code>package.json</code> to include the Jest configuration:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"test"</span>: <span class="hljs-string">"jest"</span>,
    <span class="hljs-attr">"test:watch"</span>: <span class="hljs-string">"jest --watch"</span>
  },
  <span class="hljs-attr">"jest"</span>: {
    <span class="hljs-attr">"preset"</span>: <span class="hljs-string">"ts-jest"</span>,
    <span class="hljs-attr">"testEnvironment"</span>: <span class="hljs-string">"node"</span>,
    <span class="hljs-attr">"extensionsToTreatAsEsm"</span>: [<span class="hljs-string">".ts"</span>],
    <span class="hljs-attr">"moduleNameMapper"</span>: {
      <span class="hljs-attr">"^(\\.{1,2}/.*)\\.js$"</span>: <span class="hljs-string">"$1"</span>
    },
    <span class="hljs-attr">"transform"</span>: {
      <span class="hljs-attr">"^.+\\.tsx?$"</span>: [
        <span class="hljs-string">"ts-jest"</span>,
        {
          <span class="hljs-attr">"useESM"</span>: <span class="hljs-literal">true</span>,
          <span class="hljs-attr">"tsconfig"</span>: {
            <span class="hljs-attr">"module"</span>: <span class="hljs-string">"esnext"</span>
          }
        }
      ]
    }
  }
}
</code></pre>
<h3 id="heading-understanding-the-test-structure">Understanding the Test Structure</h3>
<p>The test file includes comprehensive coverage for all your endpoints. For example, the <code>/upload-page-range</code> endpoint tests verify both happy paths and error handling:</p>
<pre><code class="lang-tsx">describe("POST /upload-page-range", () =&gt; {
  it("should return error when no file is provided", async () =&gt; {
    const response = await request(app)
      .post("/upload-page-range")
      .query({ startPage: 1, endPage: 2 });
    expect(response.status).toBe(400);
    expect(response.body.error).toBe("No PDF file shared.");
  });

  it("should return error for invalid page range", async () =&gt; {
    const mockPdfBuffer = Buffer.from("%PDF-1.4 mock pdf");
    const response = await request(app)
      .post("/upload-page-range")
      .query({ startPage: "invalid", endPage: 2 })
      .attach("file", mockPdfBuffer, "test.pdf");

    expect(response.status).toBe(400);
    expect(response.body.error).toContain("valid integers");
  });

  it("should extract text from page range", async () =&gt; {
    const mockPdfBuffer = Buffer.from("%PDF-1.4 mock pdf");
    const response = await request(app)
      .post("/upload-page-range")
      .query({ startPage: 1, endPage: 2 })
      .attach("file", mockPdfBuffer, "test.pdf");

    expect(response.status).toBe(200);
    expect(response.body.success).toBe(true);
    expect(response.body.result.startPage).toBe(1);
    expect(response.body.result.endPage).toBe(2);
  });
});
</code></pre>
<p>Notice how the tests mock the PDFParse library rather than requiring actual PDF files. This approach makes tests:</p>
<ul>
<li><p><strong>Fast</strong>: No disk I/O, tests run in milliseconds</p>
</li>
<li><p><strong>Reliable</strong>: No dependency on external files that might change</p>
</li>
<li><p><strong>Focused</strong>: Each test verifies specific behavior, not file handling</p>
</li>
</ul>
<p>The mock returns consistent data for all test cases, allowing you to verify your endpoint logic, handle responses correctly, validate parameters properly, and return appropriate error messages.</p>
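<p>A sketch of what that mock can look like is shown below. The returned shape mirrors the <code>getText</code>/<code>getInfo</code> calls used in this article, and the module path in the <code>jest.mock</code> call is an assumption you should adjust to your imports:</p>

```typescript
// A canned parser the tests can stand in for PDFParse; every test gets the
// same predictable data regardless of what "file" was uploaded
function createMockParser() {
  return {
    getText: async () => ({ text: "mock page text" }),
    getInfo: async () => ({ total: 3, info: { Title: "Mock" } }),
  };
}

// In the test file, substitute it for the real library:
// jest.mock("pdf-parse", () => ({
//   PDFParse: jest.fn(() => createMockParser()),
// }));
```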
<h3 id="heading-running-tests">Running Tests</h3>
<p>Execute your test suite with:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Run tests once</span>
npm <span class="hljs-built_in">test</span>

<span class="hljs-comment"># Run tests in watch mode for development</span>
npm run <span class="hljs-built_in">test</span>:watch

<span class="hljs-comment"># Generate coverage report</span>
npm <span class="hljs-built_in">test</span> -- --coverage
</code></pre>
<p>A successful test run confirms that all endpoints – <code>/upload</code>, <code>/metadata</code>, <code>/search</code>, and <code>/upload-page-range</code> – handle valid requests, reject invalid inputs, and return data in the expected format.</p>
<h2 id="heading-deploying-your-pdf-parser-api"><strong>Deploying Your PDF Parser API</strong></h2>
<p>Once your tests pass, you're ready to deploy your Express app. The deployment process depends on your hosting platform, but here are the essentials:</p>
<h3 id="heading-running-locally">Running Locally</h3>
<p>Start your development server with:</p>
<pre><code class="lang-bash">npm run dev
</code></pre>
<p>This runs the server from <code>server.ts</code> using <code>ts-node</code> and Nodemon. The API will be available at <code>http://localhost:8080</code>.</p>
<p>Test your endpoints with curl:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Test the health check</span>
curl http://localhost:8080/

<span class="hljs-comment"># Upload and parse a PDF</span>
curl -F <span class="hljs-string">"file=@sample.pdf"</span> http://localhost:8080/upload

<span class="hljs-comment"># Extract specific pages</span>
curl -F <span class="hljs-string">"file=@sample.pdf"</span> <span class="hljs-string">"http://localhost:8080/upload-page-range?startPage=1&amp;endPage=5"</span>

<span class="hljs-comment"># Search for text</span>
curl -F <span class="hljs-string">"file=@sample.pdf"</span> <span class="hljs-string">"http://localhost:8080/search?query=invoice"</span>

<span class="hljs-comment"># Get metadata only</span>
curl -F <span class="hljs-string">"file=@sample.pdf"</span> http://localhost:8080/metadata
</code></pre>
<h3 id="heading-production-deployment">Production Deployment</h3>
<p>Before deploying to production, build your TypeScript:</p>
<pre><code class="lang-bash">npm run build
</code></pre>
<p>Then start the compiled server:</p>
<pre><code class="lang-bash">npm start
</code></pre>
<p>For cloud platforms like Heroku, AWS, or DigitalOcean, ensure your environment variables are set (particularly the <code>PORT</code> variable). The API is designed to scale horizontally, since it doesn't maintain state. Each request processes independently.</p>
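<p>Resolving the port from the environment is a one-liner (a standard Node pattern; 8080 matches the examples in this article):</p>

```typescript
// Use the platform-provided PORT when present, fall back to 8080 locally
const PORT = Number(process.env.PORT) || 8080;
console.log(`Server will listen on port ${PORT}`);
```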
<p>Consider adding these production improvements:</p>
<ul>
<li><p><strong>Rate limiting</strong>: Prevent abuse with <code>express-rate-limit</code></p>
</li>
<li><p><strong>Logging</strong>: Use Winston or Pino for structured logging</p>
</li>
<li><p><strong>Monitoring</strong>: Set up error tracking with Sentry or similar services</p>
</li>
<li><p><strong>Database</strong>: Store extraction results in MongoDB or PostgreSQL for historical access</p>
</li>
<li><p><strong>Caching</strong>: Cache metadata for frequently accessed PDFs to reduce processing overhead</p>
</li>
</ul>
<h2 id="heading-next-steps-integrate-into-your-saas"><strong>Next Steps: Integrate Into Your SaaS</strong></h2>
<p>This PDF parser is now a production-ready API that you can integrate into any SaaS platform needing document processing capabilities. Here's how to get started:</p>
<p>Fork the repository and customize it for your use case. Add features like:</p>
<ul>
<li><p>Support for additional document formats (DOCX, XLSX, images)</p>
</li>
<li><p>Batch processing endpoints for handling multiple files</p>
</li>
<li><p>Webhook support for asynchronous processing</p>
</li>
<li><p>User authentication and per-user quotas</p>
</li>
<li><p>Advanced text extraction options (tables, forms, structured data)</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building a production-ready PDF parser gives you complete control over document processing while maintaining modularity for future extensions.</p>
<p>You've learned to build an Express API that handles full extraction, page ranges, text search, and metadata retrieval – all with robust error handling and validation patterns that apply to any document processing tool.</p>
<p>This tested, deployable foundation is ready to scale in real applications, whether you're building a SaaS product or adding PDF capabilities to existing systems.</p>
<p>As you integrate these patterns into your projects, consider exploring advanced libraries like <code>pdfjs-dist</code> or <code>pdf-lib</code> while applying the same validation and modular design principles you've mastered here.</p>
<h3 id="heading-resources">Resources</h3>
<ul>
<li><a target="_blank" href="https://github.com/DeraCodings/custom-pdf-parser">GitHub repo</a></li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build a Payroll System with Express and Monnify Using Background Jobs ]]>
                </title>
                <description>
                    <![CDATA[ Processing payroll payments is an important operation for any business. When you need to pay employees simultaneously, you can't afford to have your server hang, get blocking errors, or timeout while waiting for each payment to complete. Building a p... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-a-payroll-system-with-express-and-monnify-using-background-jobs/</link>
                <guid isPermaLink="false">69680d9ead82a9267c20097d</guid>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Express ]]>
                    </category>
                
                    <category>
                        <![CDATA[ TypeScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ PostgreSQL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ handbook ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ David Aniebo ]]>
                </dc:creator>
                <pubDate>Wed, 14 Jan 2026 21:41:50 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768414407566/4384def7-fdc2-4274-888d-d5bd5bd5549b.png" medium="image" />
                <content:encoded>
<![CDATA[ <p>Processing payroll payments is a critical operation for any business. When you need to pay many employees at once, you can't afford to have your server hang, throw blocking errors, or time out while waiting for each payment to complete.</p>
<p>Building a payroll system is an excellent way to practice real-world backend development skills. Unlike simple CRUD applications, payroll systems require you to think about:</p>
<ul>
<li><p><strong>Asynchronous processing</strong>: When you need to pay hundreds of employees, processing payments synchronously can cause your server to timeout. Background jobs with Bull and Redis allow you to handle long-running operations without blocking your API.</p>
</li>
<li><p><strong>Payment gateway integration</strong>: Working with payment APIs like Monnify teaches you how to handle external service integrations, authentication flows, webhook verification, and error handling in production systems.</p>
</li>
<li><p><strong>Data consistency</strong>: Payroll systems need to maintain accurate records. You'll learn about transaction reconciliation, idempotency, and how to handle partial failures gracefully.</p>
</li>
<li><p><strong>Production-ready patterns</strong>: This tutorial covers patterns you'll use in real applications: job queues, webhook handlers, database migrations, and proper error handling.</p>
</li>
</ul>
<p>Whether you're building a fintech application or an HR system, or simply want to understand how payment processing works, the concepts in this tutorial will serve you well. The combination of Express, TypeScript, background jobs, and payment APIs represents a common stack in modern backend development.</p>
<p>In this tutorial, you’ll learn how to build a production-grade payroll engine using Express.js, TypeScript, and Monnify's payment API. You'll implement background job processing with <code>Bull</code> and <code>Redis</code> to handle bulk disbursements efficiently.</p>
<p>By the end, you will have a fully functional payroll system that can:</p>
<ul>
<li><p>Manage employee records with bank account details</p>
</li>
<li><p>Create and process payroll batches</p>
</li>
<li><p>Process bulk payments using Monnify's disbursement API</p>
</li>
<li><p>Handle payment status updates via webhooks</p>
</li>
<li><p>Reconcile transactions to ensure data consistency</p>
</li>
</ul>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-project-architecture-overview">Project Architecture Overview</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-the-project">Setting Up the Project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-configuring-docker-for-postgresql-and-redis">Configuring Docker for PostgreSQL and Redis</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-the-database">Setting Up the Database</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-database-models">Creating Database Models</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-employee-model">Employee Model</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-employee-data-structure-employee-interface">Employee Data Structure (Employee Interface)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-employee-model-class">Employee Model Class</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-auto-generating-employee-ids-generateemployeeid">Auto-Generating Employee IDs (generateEmployeeId)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-an-employee-create">Creating an Employee (create)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-retrieving-all-active-employees-findall">Retrieving All Active Employees (findAll)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-retrieving-an-employee-by-database-id-findbyid">Retrieving an Employee by Database ID (findById)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-retrieving-an-employee-by-employee-identifier-findbyemployeeid">Retrieving an Employee by Employee Identifier (findByEmployeeId)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-updating-an-employee-update">Updating an Employee (update)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-soft-deleting-an-employee-delete">Soft-Deleting an Employee (delete)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-model">Payroll Model</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-status-lifecycle">Payroll Status Lifecycle</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-entity">Payroll Entity</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-item-entity">Payroll Item Entity</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-a-payroll-payrollmodelcreate">Creating a Payroll (PayrollModel.create)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-payroll-records">Fetching Payroll Records</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-updating-payroll-status-payrollmodelupdatestatus">Updating Payroll Status (PayrollModel.updateStatus)</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-payrollitemmodel">PayrollItemModel</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-fetching-payroll-items-payrollitemmodelfindbypayrollid">Fetching Payroll Items (PayrollItemModel.findByPayrollId)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-a-single-payroll-item-payrollitemmodelfindbyid">Fetching a Single Payroll Item (PayrollItemModel.findById)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-updating-payroll-item-status-payrollitemmodelupdatestatus">Updating Payroll Item Status (PayrollItemModel.updateStatus)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-overall-payroll-flow">Overall Payroll Flow</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-building-the-monnify-client">Building the Monnify Client</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-configuration-and-environment-setup">Configuration and Environment Setup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-create-the-monnifyclient-class">Create the MonnifyClient Class</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-axios-client-and-request-interceptor">Axios Client and Request Interceptor</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-authenticate-with-monnify">Authenticate with Monnify</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-automatic-token-refresh-ensureauthenticated">Automatic Token Refresh (ensureAuthenticated)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-initiating-bulk-transfers">Initiating Bulk Transfers</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-authorizing-bulk-transfers-otp-validation">Authorizing Bulk Transfers (OTP Validation)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-transaction-status-lookup">Transaction Status Lookup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-batch-details-retrieval">Batch Details Retrieval</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-wallet-balance-check">Wallet Balance Check</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-implementing-background-job-processing">Implementing Background Job Processing</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-set-up-the-payroll-processing-queue">Set Up the Payroll Processing Queue</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-queue-processor-registration">Queue Processor Registration</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-bulk-payroll-processing-flow-processbulkpayroll">Bulk Payroll Processing Flow (processBulkPayroll)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-building-the-bulk-transfer-payload">Building the Bulk Transfer Payload</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-initiating-bulk-disbursement-via-monnify">Initiating Bulk Disbursement via Monnify</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-storing-transaction-references">Storing Transaction References</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-statistics-reconciliation-updatepayrollstats">Payroll Statistics Reconciliation (updatePayrollStats)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-queue-entry-point-processpayrollitems">Queue Entry Point (processPayrollItems)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-role-in-the-overall-payroll-architecture">Role in the Overall Payroll Architecture</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-the-api-controllers">Creating the API Controllers</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-controller-responsibilities">Controller Responsibilities</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-an-employee-createemployee">Creating an Employee (createEmployee)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-all-employees-getallemployees">Fetching All Employees (getAllEmployees)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-a-single-employee-getemployeebyid">Fetching a Single Employee (getEmployeeById)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-updating-an-employee-updateemployee">Updating an Employee (updateEmployee)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deleting-an-employee-deleteemployee">Deleting an Employee (deleteEmployee)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-error-handling-strategy">Error Handling Strategy</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-role-in-the-overall-payroll-system">Role in the Overall Payroll System</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-controller">Payroll Controller</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-controller-responsibilities-1">Controller Responsibilities</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-creating-a-payroll-createpayroll">Creating a Payroll (createPayroll)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-all-payrolls-getallpayrolls">Fetching All Payrolls (getAllPayrolls)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-a-payroll-with-items-getpayrollbyid">Fetching a Payroll with Items (getPayrollById)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-processing-a-payroll-processpayroll">Processing a Payroll (processPayroll)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-reconciling-payroll-payments-reconcilepayroll">Reconciling Payroll Payments (reconcilePayroll)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-statistics-update-internal-helper">Payroll Statistics Update (Internal Helper)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-fetching-payroll-status-summary-getpayrollstatus">Fetching Payroll Status Summary (getPayrollStatus)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-authorizing-bulk-transfers-authorizebulktransfer">Authorizing Bulk Transfers (authorizeBulkTransfer)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-checking-transaction-status-checktransactionstatus">Checking Transaction Status (checkTransactionStatus)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-checking-wallet-balance-getaccountbalance">Checking Wallet Balance (getAccountBalance)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-error-handling-and-resilience">Error Handling and Resilience</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-role-in-the-overall-payroll-architecture-1">Role in the Overall Payroll Architecture</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-webhook-handlers">Setting Up Webhook Handlers</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-wiring-up-routes">Wiring Up Routes</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-employee-routes">Employee Routes</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-payroll-routes">Payroll Routes</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-main-application-entry-point">Main Application Entry Point</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-testing-the-system">Testing the System</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-webhooks-for-production">Setting Up Webhooks for Production</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-key-takeaways">Key Takeaways</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-references">References:</a></p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you begin, make sure you have the following:</p>
<ul>
<li><p>Node.js (v18 or higher)</p>
</li>
<li><p>Docker and Docker Compose installed</p>
</li>
<li><p>A Monnify merchant account with API credentials</p>
</li>
<li><p>Basic knowledge of TypeScript and Express.js</p>
</li>
<li><p>Familiarity with REST APIs</p>
</li>
</ul>
<p>You'll also need to obtain these credentials from your Monnify dashboard:</p>
<ul>
<li><p>API Key</p>
</li>
<li><p>Secret Key</p>
</li>
<li><p>Contract Code</p>
</li>
<li><p>Webhook Secret (for verifying webhook signatures)</p>
</li>
</ul>
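<p>The webhook secret exists so you can confirm that incoming webhook calls really come from Monnify rather than an attacker. A minimal verification sketch, assuming an HMAC computed over the raw request body (the header name and hash algorithm below are assumptions to check against Monnify's current documentation):</p>

```typescript
import crypto from "crypto";

// Hedged sketch: verify a webhook by recomputing the HMAC of the raw body
// with your secret and comparing it to the signature header Monnify sent.
// "sha512" and the header handling are assumptions; confirm with the docs.
function isValidWebhookSignature(
  rawBody: string,
  signatureHeader: string,
  webhookSecret: string
): boolean {
  const expected = crypto
    .createHmac("sha512", webhookSecret)
    .update(rawBody)
    .digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHeader, "hex");
  // timingSafeEqual avoids leaking information through comparison timing.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

Note that verification must run against the <em>raw</em> request body, so the webhook route needs access to the unparsed payload (for example via <code>express.raw()</code>) rather than the JSON-parsed object.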
<h2 id="heading-project-architecture-overview">Project Architecture Overview</h2>
<p>Here's how the payroll system works:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766393228193/8626c139-776c-491b-b060-2f95a760f32b.png" alt="Payroll system working principle" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p><strong>Key components:</strong></p>
<ol>
<li><p><strong>Express API</strong>: A minimal and flexible Node.js web framework that handles HTTP requests for managing employees and payrolls. Express provides routing, middleware support, and makes it easy to build RESTful APIs.</p>
</li>
<li><p><strong>Bull Queue</strong>: A Redis-based queue library for Node.js that processes payroll jobs asynchronously in the background. Bull handles job retries, scheduling, and provides a reliable way to process long-running tasks without blocking your main application thread.</p>
</li>
<li><p><strong>Redis</strong>: An in-memory data structure store that serves as the backend for Bull queues. Redis stores job data, manages job states (pending, active, completed, failed), and enables distributed job processing across multiple workers.</p>
</li>
<li><p><strong>PostgreSQL</strong>: A relational database that persists employee records, payrolls, and payment items. PostgreSQL's ACID compliance ensures data integrity, and its support for complex queries makes it ideal for financial applications.</p>
</li>
<li><p><strong>Monnify API</strong>: A payment gateway service that handles actual money transfers to employee bank accounts. Monnify provides bulk disbursement capabilities, allowing you to process multiple payments in a single API call, which is essential for payroll systems.</p>
</li>
<li><p><strong>Webhooks</strong>: HTTP callbacks that receive real-time payment status updates from Monnify. When a payment completes or fails, Monnify sends a webhook to your server, allowing you to update your database immediately without polling.</p>
</li>
</ol>
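<p>The division of labor between the first three components can be seen in a toy, in-memory form: the API enqueues a job and returns immediately, and a worker processes it later. Bull adds everything this sketch deliberately omits (Redis persistence, retries, concurrency, and workers in separate processes); the names here are illustrative only:</p>

```typescript
// Toy in-memory illustration of the producer/worker split that Bull provides.
type Job<T> = { id: number; data: T };

class ToyQueue<T> {
  private jobs: Job<T>[] = [];
  private nextId = 1;
  constructor(private handler: (job: Job<T>) => void) {}

  // The API handler calls add() and responds to the client immediately
  // (typically HTTP 202); the slow work happens later in the worker.
  add(data: T): number {
    const id = this.nextId++;
    this.jobs.push({ id, data });
    return id;
  }

  // A worker loop would pull jobs continuously; here we drain synchronously.
  drain(): number {
    let processed = 0;
    while (this.jobs.length > 0) {
      this.handler(this.jobs.shift()!);
      processed++;
    }
    return processed;
  }
}

// Example: enqueue two payroll batches, then let the "worker" process them.
const paid: number[] = [];
const queue = new ToyQueue<{ payrollId: number }>((job) =>
  paid.push(job.data.payrollId)
);
queue.add({ payrollId: 101 });
queue.add({ payrollId: 102 });
const processedCount = queue.drain();
```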
<h2 id="heading-setting-up-the-project">Setting Up the Project</h2>
<p>In this section, we'll initialize a new Node.js project with TypeScript and install all the necessary dependencies. We'll configure TypeScript for type safety and set up the project structure that will support our payroll system.</p>
<p>First, create a new directory and initialize your project:</p>
<pre><code class="lang-bash">mkdir monnify-payroll-system
<span class="hljs-built_in">cd</span> monnify-payroll-system
npm init -y
</code></pre>
<p>Next, install the required dependencies:</p>
<pre><code class="lang-bash">npm install express cors helmet dotenv axios bull ioredis pg swagger-jsdoc swagger-ui-express express-validator
</code></pre>
<p>Then install the development dependencies:</p>
<pre><code class="lang-bash">npm install -D typescript ts-node-dev @types/node @types/express @types/cors @types/pg @types/bull
</code></pre>
<p>Create a <code>tsconfig.json</code> file:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"compilerOptions"</span>: {
    <span class="hljs-attr">"target"</span>: <span class="hljs-string">"ES2020"</span>,
    <span class="hljs-attr">"module"</span>: <span class="hljs-string">"commonjs"</span>,
    <span class="hljs-attr">"lib"</span>: [<span class="hljs-string">"ES2020"</span>],
    <span class="hljs-attr">"outDir"</span>: <span class="hljs-string">"./dist"</span>,
    <span class="hljs-attr">"rootDir"</span>: <span class="hljs-string">"./src"</span>,
    <span class="hljs-attr">"strict"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"esModuleInterop"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"skipLibCheck"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"forceConsistentCasingInFileNames"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"resolveJsonModule"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"declaration"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"declarationMap"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"sourceMap"</span>: <span class="hljs-literal">true</span>
  },
  <span class="hljs-attr">"include"</span>: [<span class="hljs-string">"src/**/*"</span>, <span class="hljs-string">"scripts/**/*"</span>],
  <span class="hljs-attr">"exclude"</span>: [<span class="hljs-string">"node_modules"</span>, <span class="hljs-string">"dist"</span>]
}
</code></pre>
<p>And update your <code>package.json</code> scripts:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"build"</span>: <span class="hljs-string">"tsc"</span>,
    <span class="hljs-attr">"start"</span>: <span class="hljs-string">"node dist/index.js"</span>,
    <span class="hljs-attr">"dev"</span>: <span class="hljs-string">"ts-node-dev --respawn --transpile-only src/index.ts"</span>,
    <span class="hljs-attr">"migrate"</span>: <span class="hljs-string">"ts-node scripts/run-migrations.ts"</span>
  }
}
</code></pre>
<p>Now, create a <code>.env</code> file for your environment variables. You can find all of the Monnify credentials on the <a target="_blank" href="https://app.monnify.com/developer">Monnify developer page</a>:</p>
<pre><code class="lang-plaintext"># Server
PORT=3008
NODE_ENV=development

# Database
DB_HOST=localhost
DB_PORT=5433
DB_NAME=payroll_db
DB_USER=payroll_user
DB_PASSWORD=payroll_password

# Redis
REDIS_HOST=localhost
REDIS_PORT=6379

# Monnify
MONNIFY_API_KEY=your_api_key
MONNIFY_SECRET_KEY=your_secret_key
MONNIFY_BASE_URL=https://sandbox.monnify.com
MONNIFY_CONTRACT_CODE=your_contract_code
MONNIFY_WEBHOOK_SECRET=your_webhook_secret
</code></pre>
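<p>A missing or empty credential would otherwise only surface on the first failed Monnify call, so it helps to fail fast at startup instead. This is a hedged sketch, not part of the tutorial's codebase; only the variable names match the <code>.env</code> file above:</p>

```typescript
// Hypothetical startup check: throw immediately if any required
// configuration value is missing or blank.
function assertRequiredEnv(
  env: Record<string, string | undefined>,
  keys: string[]
): void {
  const missing = keys.filter((k) => !env[k] || env[k]!.trim() === "");
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(", ")}`
    );
  }
}

const REQUIRED = [
  "MONNIFY_API_KEY",
  "MONNIFY_SECRET_KEY",
  "MONNIFY_CONTRACT_CODE",
  "MONNIFY_WEBHOOK_SECRET",
];
// At startup, after dotenv.config():
//   assertRequiredEnv(process.env, REQUIRED);
```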
<h2 id="heading-configuring-docker-for-postgresql-and-redis">Configuring Docker for PostgreSQL and Redis</h2>
<p>Before we can start building our application, we need to set up the infrastructure services: PostgreSQL for data persistence and Redis for job queue management. Using Docker Compose makes it easy to run these services locally with a single command. This approach ensures consistency across development environments and simplifies deployment.</p>
<p>Create a <code>docker-compose.yml</code> file to set up PostgreSQL and Redis:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">postgres:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres:15-alpine</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">monnify-payroll-db</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">POSTGRES_USER:</span> <span class="hljs-string">payroll_user</span>
      <span class="hljs-attr">POSTGRES_PASSWORD:</span> <span class="hljs-string">payroll_password</span>
      <span class="hljs-attr">POSTGRES_DB:</span> <span class="hljs-string">payroll_db</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'5433:5432'</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">postgres_data:/var/lib/postgresql/data</span>
    <span class="hljs-attr">healthcheck:</span>
      <span class="hljs-attr">test:</span> [<span class="hljs-string">'CMD-SHELL'</span>, <span class="hljs-string">'pg_isready -U payroll_user'</span>]
      <span class="hljs-attr">interval:</span> <span class="hljs-string">10s</span>
      <span class="hljs-attr">timeout:</span> <span class="hljs-string">5s</span>
      <span class="hljs-attr">retries:</span> <span class="hljs-number">5</span>

  <span class="hljs-attr">redis:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">redis:7-alpine</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">monnify-payroll-redis</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'6379:6379'</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">redis_data:/data</span>
    <span class="hljs-attr">healthcheck:</span>
      <span class="hljs-attr">test:</span> [<span class="hljs-string">'CMD'</span>, <span class="hljs-string">'redis-cli'</span>, <span class="hljs-string">'ping'</span>]
      <span class="hljs-attr">interval:</span> <span class="hljs-string">10s</span>
      <span class="hljs-attr">timeout:</span> <span class="hljs-string">5s</span>
      <span class="hljs-attr">retries:</span> <span class="hljs-number">5</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">postgres_data:</span>
  <span class="hljs-attr">redis_data:</span>
</code></pre>
<p>Start the services:</p>
<pre><code class="lang-bash">docker-compose up -d
</code></pre>
<p>And verify that both services are running:</p>
<pre><code class="lang-bash">docker-compose ps
</code></pre>
<h2 id="heading-setting-up-the-database">Setting Up the Database</h2>
<p>Now we'll configure the database connection and create the necessary tables. We'll use a connection pool for efficient database access and create migration files to set up our schema. This approach ensures our database structure is version-controlled and can be easily reproduced.</p>
<p>Create the <code>src/config/database.ts</code> file to configure the PostgreSQL connection:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Pool, PoolConfig } <span class="hljs-keyword">from</span> <span class="hljs-string">'pg'</span>;
<span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;

dotenv.config();

<span class="hljs-keyword">const</span> dbName = (process.env.DB_NAME || <span class="hljs-string">'payroll_db'</span>).trim();
<span class="hljs-keyword">if</span> (!dbName) {
  <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Database name (DB_NAME) must be set and cannot be empty'</span>);
}

<span class="hljs-keyword">const</span> config: PoolConfig = {
  host: process.env.DB_HOST || <span class="hljs-string">'localhost'</span>,
  port: <span class="hljs-built_in">parseInt</span>(process.env.DB_PORT || <span class="hljs-string">'5433'</span>),
  database: dbName,
  user: process.env.DB_USER || <span class="hljs-string">'payroll_user'</span>,
  password: process.env.DB_PASSWORD || <span class="hljs-string">'payroll_password'</span>,
  max: <span class="hljs-number">20</span>,
  idleTimeoutMillis: <span class="hljs-number">30000</span>,
  connectionTimeoutMillis: <span class="hljs-number">2000</span>,
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> pool = <span class="hljs-keyword">new</span> Pool(config);

pool.on(<span class="hljs-string">'error'</span>, <span class="hljs-function">(<span class="hljs-params">err: <span class="hljs-built_in">Error</span></span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Unexpected error on idle client'</span>, err);
  process.exit(<span class="hljs-number">-1</span>);
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> query = <span class="hljs-keyword">async</span> (text: <span class="hljs-built_in">string</span>, params?: <span class="hljs-built_in">any</span>[]) =&gt; {
  <span class="hljs-keyword">const</span> start = <span class="hljs-built_in">Date</span>.now();
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> pool.query(text, params);
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Executed query'</span>, { text, duration: <span class="hljs-built_in">Date</span>.now() - start, rows: res.rowCount });
    <span class="hljs-keyword">return</span> res;
  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Database query error'</span>, error);
    <span class="hljs-keyword">throw</span> error;
  }
};
</code></pre>
<p>Now create the migration files. First, create a <code>migrations</code> folder:</p>
<pre><code class="lang-bash">mkdir migrations
</code></pre>
<p>Then create <code>migrations/001_create_employees_table.sql</code>:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Create employees table</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> employees (
  <span class="hljs-keyword">id</span> <span class="hljs-built_in">SERIAL</span> PRIMARY <span class="hljs-keyword">KEY</span>,
  <span class="hljs-keyword">name</span> <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">255</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  email <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">255</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">UNIQUE</span>,
  employee_id <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">UNIQUE</span>,
  salary <span class="hljs-built_in">DECIMAL</span>(<span class="hljs-number">15</span>, <span class="hljs-number">2</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  account_number <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">50</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  bank_code <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">20</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  bank_name <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">255</span>),
  is_active <span class="hljs-built_in">BOOLEAN</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-literal">true</span>,
  created_at <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">CURRENT_TIMESTAMP</span>,
  updated_at <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">CURRENT_TIMESTAMP</span>
);

<span class="hljs-comment">-- Create indexes for faster lookups</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_employees_employee_id <span class="hljs-keyword">ON</span> employees(employee_id);
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_employees_is_active <span class="hljs-keyword">ON</span> employees(is_active);
</code></pre>
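<p>The <code>migrate</code> script in <code>package.json</code> points at <code>scripts/run-migrations.ts</code>, which we haven't written yet. Its core job is simple: read the <code>.sql</code> files in order and execute each one. The sketch below is a hedged illustration of that logic with an injected executor so it can be exercised without a live database; a real script would pass something like <code>pool.query</code> from <code>src/config/database.ts</code> and await each file:</p>

```typescript
import fs from "fs";
import path from "path";

// Hypothetical core of scripts/run-migrations.ts: run every .sql file in
// the migrations directory in name order. The numeric prefixes
// (001_, 002_, ...) make lexicographic order the execution order.
function runMigrations(dir: string, exec: (sql: string) => void): string[] {
  const files = fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".sql"))
    .sort();
  for (const file of files) {
    exec(fs.readFileSync(path.join(dir, file), "utf8"));
  }
  return files; // the order the migrations ran in
}
```

Because the migrations use <code>CREATE TABLE IF NOT EXISTS</code> and <code>CREATE INDEX IF NOT EXISTS</code>, re-running the script against an existing database is safe.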
<p>Now, create <code>migrations/002_create_payrolls_table.sql</code>:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Create payrolls table</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> payrolls (
  <span class="hljs-keyword">id</span> <span class="hljs-built_in">SERIAL</span> PRIMARY <span class="hljs-keyword">KEY</span>,
  payroll_period <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  total_amount <span class="hljs-built_in">DECIMAL</span>(<span class="hljs-number">15</span>, <span class="hljs-number">2</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  total_employees <span class="hljs-built_in">INTEGER</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  <span class="hljs-keyword">status</span> <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">50</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-string">'pending'</span>,
  processed_count <span class="hljs-built_in">INTEGER</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-number">0</span>,
  failed_count <span class="hljs-built_in">INTEGER</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-number">0</span>,
  created_at <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">CURRENT_TIMESTAMP</span>,
  updated_at <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">CURRENT_TIMESTAMP</span>,
  processed_at <span class="hljs-built_in">TIMESTAMP</span>
);

<span class="hljs-comment">-- Create indexes for faster queries</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_payrolls_status <span class="hljs-keyword">ON</span> payrolls(<span class="hljs-keyword">status</span>);
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_payrolls_period <span class="hljs-keyword">ON</span> payrolls(payroll_period);
</code></pre>
<p>Next, create <code>migrations/003_create_payroll_items_table.sql</code>:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Create payroll_items table</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> payroll_items (
  <span class="hljs-keyword">id</span> <span class="hljs-built_in">SERIAL</span> PRIMARY <span class="hljs-keyword">KEY</span>,
  payroll_id <span class="hljs-built_in">INTEGER</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">REFERENCES</span> payrolls(<span class="hljs-keyword">id</span>) <span class="hljs-keyword">ON</span> <span class="hljs-keyword">DELETE</span> <span class="hljs-keyword">CASCADE</span>,
  employee_id <span class="hljs-built_in">INTEGER</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">REFERENCES</span> employees(<span class="hljs-keyword">id</span>) <span class="hljs-keyword">ON</span> <span class="hljs-keyword">DELETE</span> <span class="hljs-keyword">CASCADE</span>,
  amount <span class="hljs-built_in">DECIMAL</span>(<span class="hljs-number">15</span>, <span class="hljs-number">2</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  <span class="hljs-keyword">status</span> <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">50</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-string">'pending'</span>,
  transaction_reference <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">255</span>),
  error_message <span class="hljs-built_in">TEXT</span>,
  processed_at <span class="hljs-built_in">TIMESTAMP</span>,
  created_at <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">CURRENT_TIMESTAMP</span>,
  updated_at <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">CURRENT_TIMESTAMP</span>
);

<span class="hljs-comment">-- Create indexes for faster queries</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_payroll_items_payroll_id <span class="hljs-keyword">ON</span> payroll_items(payroll_id);
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_payroll_items_employee_id <span class="hljs-keyword">ON</span> payroll_items(employee_id);
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_payroll_items_status <span class="hljs-keyword">ON</span> payroll_items(<span class="hljs-keyword">status</span>);
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> idx_payroll_items_transaction_ref <span class="hljs-keyword">ON</span> payroll_items(transaction_reference);
</code></pre>
<p>Then create a migration runner script at <code>scripts/run-migrations.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> fs <span class="hljs-keyword">from</span> <span class="hljs-string">'fs'</span>;
<span class="hljs-keyword">import</span> path <span class="hljs-keyword">from</span> <span class="hljs-string">'path'</span>;
<span class="hljs-keyword">import</span> { pool } <span class="hljs-keyword">from</span> <span class="hljs-string">'../src/config/database'</span>;

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">runMigrations</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> migrationsDir = path.join(__dirname, <span class="hljs-string">'../migrations'</span>);
  <span class="hljs-keyword">const</span> files = fs.readdirSync(migrationsDir).sort();

  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> file <span class="hljs-keyword">of</span> files) {
    <span class="hljs-keyword">if</span> (file.endsWith(<span class="hljs-string">'.sql'</span>)) {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Running migration: <span class="hljs-subst">${file}</span>`</span>);
      <span class="hljs-keyword">const</span> sql = fs.readFileSync(path.join(migrationsDir, file), <span class="hljs-string">'utf-8'</span>);
      <span class="hljs-keyword">await</span> pool.query(sql);
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Completed: <span class="hljs-subst">${file}</span>`</span>);
    }
  }

  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'All migrations completed'</span>);
  <span class="hljs-keyword">await</span> pool.end();
}

runMigrations().catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Migration failed:'</span>, err);
  process.exit(<span class="hljs-number">1</span>);
});
</code></pre>
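<p>The command in the next step assumes a <code>migrate</code> script in your <code>package.json</code>. If you haven’t defined one yet, a minimal entry (assuming <code>ts-node</code> is installed as a dev dependency) might look like this:</p>
<pre><code class="lang-json">{
  "scripts": {
    "migrate": "ts-node scripts/run-migrations.ts"
  }
}
</code></pre>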
<p>Run the migrations:</p>
<pre><code class="lang-bash">npm run migrate
</code></pre>
<h2 id="heading-creating-database-models">Creating Database Models</h2>
<p>In this section, we'll create the data access layer for our payroll system. Models encapsulate all database operations, providing a clean interface for the rest of the application. We'll build two main models: one for managing employees and another for handling payrolls and payroll items.</p>
<p>For each model, I’ll first explain its purpose and key methods, then show you the complete code implementation. This approach helps you understand what each model does before you implement it.</p>
<h3 id="heading-employee-model">Employee Model</h3>
<p>The <code>EmployeeModel</code> serves as the data-access layer for employee records. It handles creating, reading, updating, and soft-deleting employees. The model includes automatic employee ID generation (format: <code>EMP001</code>, <code>EMP002</code>, and so on) and ensures that each employee has the banking details required for payroll disbursement.</p>
<p>Start by creating a new file at <code>src/models/employee.ts</code> where we’ll implement all employee-related database logic.</p>
<p>After creating the file, import a shared database query helper to execute parameterized SQL safely.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { query } <span class="hljs-keyword">from</span> <span class="hljs-string">'../config/database'</span>;
</code></pre>
<p>This keeps raw SQL isolated from controllers and ensures protection against SQL injection.</p>
<h3 id="heading-employee-data-structure-employee-interface">Employee Data Structure (<code>Employee</code> Interface)</h3>
<p>Next, we’ll define the employee interface.</p>
<p>The <code>Employee</code> interface represents a row in the <code>employees</code> database table and captures both operational and audit fields. It includes identifying fields (<code>id</code>, <code>employee_id</code>), personal fields (<code>name</code>, <code>email</code>), payroll fields (<code>salary</code>), banking details (<code>account_number</code>, <code>bank_code</code>, <code>bank_name</code>), operational state (<code>is_active</code>), and timestamps (<code>created_at</code>, <code>updated_at</code>). The <code>is_active</code> flag supports soft deletion, so an employee can be deactivated without permanently removing historical payroll relationships.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> Employee {
  id: <span class="hljs-built_in">number</span>;
  name: <span class="hljs-built_in">string</span>;
  email: <span class="hljs-built_in">string</span>;
  employee_id: <span class="hljs-built_in">string</span>;
  salary: <span class="hljs-built_in">number</span>;
  account_number: <span class="hljs-built_in">string</span>;
  bank_code: <span class="hljs-built_in">string</span>;
  bank_name: <span class="hljs-built_in">string</span>;
  is_active: <span class="hljs-built_in">boolean</span>;
  created_at: <span class="hljs-built_in">Date</span>;
  updated_at: <span class="hljs-built_in">Date</span>;
}
</code></pre>
<p>Now, we’ll define the <code>CreateEmployeeInput</code> interface, which represents the expected payload for creating an employee. It includes required fields such as name, email, salary, and bank details.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> CreateEmployeeInput {
  name: <span class="hljs-built_in">string</span>;
  email: <span class="hljs-built_in">string</span>;
  employee_id?: <span class="hljs-built_in">string</span>;
  salary: <span class="hljs-built_in">number</span>;
  account_number: <span class="hljs-built_in">string</span>;
  bank_code: <span class="hljs-built_in">string</span>;
  bank_name: <span class="hljs-built_in">string</span>;
}
</code></pre>
<p>The <code>employee_id</code> field is optional, allowing the system to auto-generate a unique identifier if one is not provided. This flexibility supports both automated workflows and manual HR data imports.</p>
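<p>To make the two creation paths concrete, here are two illustrative payloads. The interface is repeated so the snippet stands alone, and all values are hypothetical:</p>
<pre><code class="lang-typescript">interface CreateEmployeeInput {
  name: string;
  email: string;
  employee_id?: string;
  salary: number;
  account_number: string;
  bank_code: string;
  bank_name: string;
}

// Let the system auto-generate the employee_id:
const autoId: CreateEmployeeInput = {
  name: 'Ada Lovelace',
  email: 'ada@example.com',
  salary: 120000,
  account_number: '0123456789',
  bank_code: '058',
  bank_name: 'Example Bank',
};

// Or pin a specific identifier, e.g. from a manual HR import:
const manualId: CreateEmployeeInput = { ...autoId, employee_id: 'EMP014' };

console.log(autoId.employee_id);   // undefined
console.log(manualId.employee_id); // EMP014
</code></pre>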
<h3 id="heading-employee-model-class">Employee Model Class</h3>
<p>Next, we’ll define the <code>EmployeeModel</code> class.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> EmployeeModel {
  <span class="hljs-comment">// Class methods will go here</span>
}
</code></pre>
<p>This class encapsulates all database operations related to employee records. It centralizes logic for creating, retrieving, updating, and deleting employees, as well as generating unique sequential employee IDs.</p>
<h3 id="heading-auto-generating-employee-ids-generateemployeeid">Auto-Generating Employee IDs (<code>generateEmployeeId</code>)</h3>
<p>We start by creating the <code>generateEmployeeId</code> method inside the <code>EmployeeModel</code> class.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> generateEmployeeId(): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">string</span>&gt; {
    <span class="hljs-comment">// Get the highest existing employee_id number that matches EMP### pattern</span>
    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`SELECT employee_id FROM employees
       WHERE employee_id LIKE 'EMP%'
       AND LENGTH(employee_id) &gt;= 4
       AND SUBSTRING(employee_id FROM 4) ~ '^[0-9]+$'
       ORDER BY CAST(SUBSTRING(employee_id FROM 4) AS INTEGER) DESC
       LIMIT 1`</span>
    );

    <span class="hljs-keyword">if</span> (result.rows.length === <span class="hljs-number">0</span>) {
      <span class="hljs-keyword">return</span> <span class="hljs-string">'EMP001'</span>;
    }

    <span class="hljs-keyword">const</span> lastId = result.rows[<span class="hljs-number">0</span>].employee_id;
    <span class="hljs-keyword">const</span> numberPart = lastId.substring(<span class="hljs-number">3</span>);
    <span class="hljs-keyword">const</span> lastNumber = <span class="hljs-built_in">parseInt</span>(numberPart, <span class="hljs-number">10</span>);

    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">isNaN</span>(lastNumber)) {
      <span class="hljs-keyword">return</span> <span class="hljs-string">'EMP001'</span>;
    }

    <span class="hljs-keyword">const</span> nextNumber = lastNumber + <span class="hljs-number">1</span>;
    <span class="hljs-comment">// Format as EMP001, EMP002, etc. (3 digits minimum)</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">`EMP<span class="hljs-subst">${nextNumber.toString().padStart(<span class="hljs-number">3</span>, <span class="hljs-string">'0'</span>)}</span>`</span>;
  }
</code></pre>
<p>The private <code>generateEmployeeId</code> method generates a unique employee identifier in a readable sequential format such as <code>EMP001</code>, <code>EMP002</code>, and so on. It queries the database for the highest existing employee ID that matches the expected pattern (<code>EMP</code> prefix followed by numeric digits), orders by the numeric suffix in descending order, and increments the latest number to produce the next ID.</p>
<p>If no matching record exists, it starts from <code>EMP001</code>. The method also protects against malformed data by returning <code>EMP001</code> if parsing fails.</p>
<p>Finally, it ensures formatting consistency by padding the number portion to at least three digits using <code>padStart(3, '0')</code>, which keeps IDs aligned and easy to sort visually.</p>
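<p>The increment step can be isolated as a pure function to see its edge cases. This is a hypothetical helper shown only for illustration, not part of the model:</p>
<pre><code class="lang-typescript">// Hypothetical helper: given the highest existing ID (or null if the
// table is empty), compute the next sequential employee ID.
function nextEmployeeId(lastId: string | null): string {
  if (!lastId) {
    return 'EMP001';
  }
  const lastNumber = parseInt(lastId.substring(3), 10);
  if (isNaN(lastNumber)) {
    return 'EMP001'; // malformed data falls back to the start
  }
  return 'EMP' + String(lastNumber + 1).padStart(3, '0');
}

console.log(nextEmployeeId(null));     // EMP001
console.log(nextEmployeeId('EMP009')); // EMP010
console.log(nextEmployeeId('EMP999')); // EMP1000
</code></pre>
<p>Note that <code>padStart(3, '0')</code> only guarantees a minimum width, so once the sequence passes <code>EMP999</code> the IDs simply widen to four digits.</p>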
<h3 id="heading-creating-an-employee-create">Creating an Employee (<code>create</code>)</h3>
<p>Next, we’ll define the <code>create</code> method, which inserts a new employee record into the database. If the caller does not supply an <code>employee_id</code>, the method generates one automatically using <code>generateEmployeeId</code>.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> create(data: CreateEmployeeInput): <span class="hljs-built_in">Promise</span>&lt;Employee&gt; {
    <span class="hljs-comment">// Auto-generate employee_id if not provided</span>
    <span class="hljs-keyword">let</span> employeeId = data.employee_id;
    <span class="hljs-keyword">if</span> (!employeeId) {
      employeeId = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.generateEmployeeId();
    }

    <span class="hljs-comment">// Check if employee_id already exists (if manually provided)</span>
    <span class="hljs-keyword">if</span> (data.employee_id) {
      <span class="hljs-keyword">const</span> existing = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.findByEmployeeId(data.employee_id);
      <span class="hljs-keyword">if</span> (existing) {
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Employee ID already exists'</span>);
      }
    }

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`INSERT INTO employees (name, email, employee_id, salary, account_number, bank_code, bank_name)
       VALUES ($1, $2, $3, $4, $5, $6, $7)
       RETURNING *`</span>,
      [
        data.name,
        data.email,
        employeeId,
        data.salary,
        data.account_number,
        data.bank_code,
        data.bank_name,
      ]
    );
    <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>];
  }
</code></pre>
<p>Here’s what’s happening in the code:</p>
<p>If an <code>employee_id</code> is manually provided, it validates uniqueness by checking if that ID already exists among active employees, preventing collisions and ensuring each employee has a distinct identifier. After validations, the employee is inserted into the <code>employees</code> table and the new record is returned. This method ensures every employee created has complete banking details required for payroll disbursement.</p>
<h3 id="heading-retrieving-all-active-employees-findall">Retrieving All Active Employees (<code>findAll</code>)</h3>
<p>The <code>findAll</code> method fetches all active employees from the database.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findAll(): <span class="hljs-built_in">Promise</span>&lt;Employee[]&gt; {
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
    <span class="hljs-string">'SELECT * FROM employees WHERE is_active = true ORDER BY created_at DESC'</span>
  );
  <span class="hljs-keyword">return</span> result.rows;
}
</code></pre>
<p>The <code>findAll</code> method returns all active employees (<code>is_active = true</code>) ordered by most recent creation date. This behavior supports common UI patterns such as HR dashboards and payroll selection screens, where only active employees should be visible by default.</p>
<h3 id="heading-retrieving-an-employee-by-database-id-findbyid">Retrieving an Employee by Database ID (<code>findById</code>)</h3>
<p>The <code>findById</code> method retrieves a single employee by the internal numeric primary key (<code>id</code>).</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findById(id: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;Employee | <span class="hljs-literal">null</span>&gt; {
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(<span class="hljs-string">'SELECT * FROM employees WHERE id = $1'</span>, [id]);
  <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>] || <span class="hljs-literal">null</span>;
}
</code></pre>
<p>If the employee does not exist, it returns <code>null</code>. This is typically used for internal operations such as payroll processing, updates, or admin detail views.</p>
<h3 id="heading-retrieving-an-employee-by-employee-identifier-findbyemployeeid">Retrieving an Employee by Employee Identifier (<code>findByEmployeeId</code>)</h3>
<p>The <code>findByEmployeeId</code> method retrieves an active employee using the business-friendly <code>employee_id</code> (for example, <code>EMP014</code>).</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findByEmployeeId(employeeId: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;Employee | <span class="hljs-literal">null</span>&gt; {
    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">'SELECT * FROM employees WHERE employee_id = $1 AND is_active = true'</span>,
      [employeeId]
    );
    <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>] || <span class="hljs-literal">null</span>;
}
</code></pre>
<p>The method filters by <code>is_active = true</code> to prevent selecting deactivated employees during operations like payroll runs or HR searches.</p>
<h3 id="heading-updating-an-employee-update">Updating an Employee (<code>update</code>)</h3>
<p>The <code>update</code> method supports partial updates by dynamically building the SQL <code>SET</code> clause based on the fields present in the update payload. It iterates through the provided properties, includes only those with defined values, and constructs a parameterized query to prevent SQL injection and preserve correctness.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> update(
    id: <span class="hljs-built_in">number</span>,
    data: Partial&lt;CreateEmployeeInput&gt;
  ): <span class="hljs-built_in">Promise</span>&lt;Employee&gt; {
    <span class="hljs-keyword">const</span> fields: <span class="hljs-built_in">string</span>[] = [];
    <span class="hljs-keyword">const</span> values: <span class="hljs-built_in">any</span>[] = [];
    <span class="hljs-keyword">let</span> paramCount = <span class="hljs-number">1</span>;

    <span class="hljs-comment">// Build dynamic update query based on provided fields</span>
    <span class="hljs-built_in">Object</span>.entries(data).forEach(<span class="hljs-function">(<span class="hljs-params">[key, value]</span>) =&gt;</span> {
      <span class="hljs-keyword">if</span> (value !== <span class="hljs-literal">undefined</span>) {
        fields.push(<span class="hljs-string">`<span class="hljs-subst">${key}</span> = $<span class="hljs-subst">${paramCount}</span>`</span>);
        values.push(value);
        paramCount++;
      }
    });

    <span class="hljs-keyword">if</span> (fields.length === <span class="hljs-number">0</span>) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'No fields to update'</span>);
    }

    <span class="hljs-comment">// Always update the updated_at timestamp</span>
    fields.push(<span class="hljs-string">`updated_at = $<span class="hljs-subst">${paramCount}</span>`</span>);
    values.push(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>());
    values.push(id);

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`UPDATE employees SET <span class="hljs-subst">${fields.join(<span class="hljs-string">', '</span>)}</span> WHERE id = $<span class="hljs-subst">${
        paramCount + <span class="hljs-number">1</span>
      }</span> RETURNING *`</span>,
      values
    );
    <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>];
  }
</code></pre>
<p>Here’s what’s happening in the code:</p>
<p>If no fields are provided, it throws an error to avoid performing a meaningless update. It also explicitly updates the <code>updated_at</code> timestamp to ensure accurate audit tracking. Finally, it returns the updated database record, making it easy for controllers to respond with the latest employee state.</p>
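<p>To make the placeholder bookkeeping concrete, here is a pure sketch of the clause-building step as a hypothetical helper (not part of the model):</p>
<pre><code class="lang-typescript">// Hypothetical helper mirroring how update() assembles its SET clause:
// skip undefined fields and number the placeholders sequentially.
function buildSetClause(data: { [key: string]: unknown }) {
  const fields: string[] = [];
  const values: unknown[] = [];
  let paramCount = 1;
  for (const [key, value] of Object.entries(data)) {
    if (value !== undefined) {
      fields.push(key + ' = $' + paramCount);
      values.push(value);
      paramCount++;
    }
  }
  return { fields, values, paramCount };
}

const { fields, values, paramCount } = buildSetClause({
  salary: 90000,
  bank_name: undefined, // omitted from the query entirely
  email: 'ada@example.com',
});

console.log(fields.join(', ')); // salary = $1, email = $2
console.log(values);            // [ 90000, 'ada@example.com' ]
console.log(paramCount);        // 3
</code></pre>
<p>With two surviving fields, <code>updated_at</code> takes placeholder <code>$3</code> and the <code>WHERE</code> clause uses <code>$4</code>, which is exactly the <code>paramCount + 1</code> indexing in <code>update</code>.</p>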
<h3 id="heading-soft-deleting-an-employee-delete">Soft-Deleting an Employee (<code>delete</code>)</h3>
<p>Finally, instead of permanently removing the employee record, the <code>delete</code> method performs a soft delete by setting <code>is_active = false</code> and updating the <code>updated_at</code> timestamp.</p>
<p>This approach preserves historical payroll references and audit trails while excluding inactive employees from standard queries like <code>findAll</code>. It’s especially important in payroll systems where historical payment records must remain valid and traceable even after an employee leaves the organization.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> <span class="hljs-keyword">delete</span>(id: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
  <span class="hljs-keyword">await</span> query(
    <span class="hljs-string">'UPDATE employees SET is_active = false, updated_at = NOW() WHERE id = $1'</span>,
    [id]
  );
}
</code></pre>
<p>Key features of the employee model:</p>
<ul>
<li><p>Auto-generates sequential employee IDs if not provided</p>
</li>
<li><p>Validates employee ID uniqueness</p>
</li>
<li><p>Supports soft deletion to preserve historical payroll records</p>
</li>
<li><p>Provides methods for finding employees by database ID or employee identifier</p>
</li>
</ul>
<h3 id="heading-payroll-model">Payroll Model</h3>
<p>The <code>PayrollModel</code> manages payroll batches and individual payroll items. A payroll represents a single payment cycle (for example, "December 2024"), while payroll items represent individual employee payments within that cycle. This separation allows us to track the status of each payment independently.</p>
<p>Key features:</p>
<ul>
<li><p>Creates payroll batches with automatic calculation of totals</p>
</li>
<li><p>Supports filtering employees for selective payroll runs</p>
</li>
<li><p>Tracks status at both batch and item levels</p>
</li>
<li><p>Provides methods for reconciliation and status updates</p>
</li>
</ul>
<p>Let's implement the Payroll Model.</p>
<p>We’ll begin by creating a new file at <code>src/models/payroll.ts</code>, where we’ll implement the payroll models that encapsulate payroll batch creation, employee payment tracking, and payroll status management.</p>
<p>First, import a shared database query helper to execute parameterized SQL safely.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { query } <span class="hljs-keyword">from</span> <span class="hljs-string">'../config/database'</span>;
</code></pre>
<p>This keeps raw SQL isolated from controllers and ensures protection against SQL injection.</p>
<h3 id="heading-payroll-status-lifecycle">Payroll Status Lifecycle</h3>
<p>Next, we’ll define the <code>PayrollStatus</code> enum.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-built_in">enum</span> PayrollStatus {
 PENDING = <span class="hljs-string">'pending'</span>,
 PROCESSING = <span class="hljs-string">'processing'</span>,
 COMPLETED = <span class="hljs-string">'completed'</span>,
 FAILED = <span class="hljs-string">'failed'</span>,
 PARTIALLY_COMPLETED = <span class="hljs-string">'partially_completed'</span>,
}
</code></pre>
<p>The <code>PayrollStatus</code> enum defines all possible states for both payroll batches and individual payroll items:</p>
<ul>
<li><p><strong>PENDING</strong> – Created but not yet processed</p>
</li>
<li><p><strong>PROCESSING</strong> – Currently being processed by background workers</p>
</li>
<li><p><strong>COMPLETED</strong> – Successfully processed</p>
</li>
<li><p><strong>FAILED</strong> – Processing failed</p>
</li>
<li><p><strong>PARTIALLY_COMPLETED</strong> – Some items succeeded while others failed</p>
</li>
</ul>
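<p>One plausible way to derive the final batch status from the per-item counts can be sketched as a hypothetical helper (the enum is repeated so the snippet stands alone; this is not the model’s actual code):</p>
<pre><code class="lang-typescript">enum PayrollStatus {
  PENDING = 'pending',
  PROCESSING = 'processing',
  COMPLETED = 'completed',
  FAILED = 'failed',
  PARTIALLY_COMPLETED = 'partially_completed',
}

// Hypothetical helper: settle the batch status once every item has
// been processed, based on how many succeeded and how many failed.
function finalBatchStatus(processedCount: number, failedCount: number): PayrollStatus {
  if (failedCount === 0) {
    return PayrollStatus.COMPLETED;
  }
  if (processedCount === 0) {
    return PayrollStatus.FAILED;
  }
  return PayrollStatus.PARTIALLY_COMPLETED;
}

console.log(finalBatchStatus(10, 0)); // completed
console.log(finalBatchStatus(0, 10)); // failed
console.log(finalBatchStatus(7, 3));  // partially_completed
</code></pre>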
<h3 id="heading-payroll-entity">Payroll Entity</h3>
<p>With the payroll status lifecycle defined, we can now define the payroll entity.</p>
<p>The <code>Payroll</code> interface represents a single payroll run, such as a monthly salary payout. It stores aggregate and audit information including the payroll period, total salary amount, total number of employees, processing status, counts of successful and failed payments, and timestamps for creation, updates, and completion.</p>
<p>Add the following interface to <code>src/models/payroll.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> Payroll {
 id: <span class="hljs-built_in">number</span>;
 payroll_period: <span class="hljs-built_in">string</span>;
 total_amount: <span class="hljs-built_in">number</span>;
 total_employees: <span class="hljs-built_in">number</span>;
 status: PayrollStatus;
 processed_count: <span class="hljs-built_in">number</span>;
 failed_count: <span class="hljs-built_in">number</span>;
 created_at: <span class="hljs-built_in">Date</span>;
 updated_at: <span class="hljs-built_in">Date</span>;
 processed_at?: <span class="hljs-built_in">Date</span>;
}
</code></pre>
<p>This entity acts as the parent record for all employee payments within a payroll cycle and is used to track overall payroll progress and outcomes.</p>
<h3 id="heading-payroll-item-entity">Payroll Item Entity</h3>
<p>Next, we’ll define the payroll item entity, which represents an individual employee payment within a payroll.</p>
<p>The <code>PayrollItem</code> tracks the employee being paid, the payment amount, its processing status, any transaction reference returned by the payment provider, error messages in case of failure, and relevant timestamps.</p>
<p>Add the following interface just below the <code>Payroll</code> interface:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> PayrollItem {
  id: <span class="hljs-built_in">number</span>;
  payroll_id: <span class="hljs-built_in">number</span>;
  employee_id: <span class="hljs-built_in">number</span>;
  amount: <span class="hljs-built_in">number</span>;
  status: PayrollStatus;
  transaction_reference?: <span class="hljs-built_in">string</span>;
  error_message?: <span class="hljs-built_in">string</span>;
  processed_at?: <span class="hljs-built_in">Date</span>;
  created_at: <span class="hljs-built_in">Date</span>;
  updated_at: <span class="hljs-built_in">Date</span>;
}
</code></pre>
<p>This structure allows individual employee payments to be retried, audited, or reconciled independently without affecting the rest of the payroll batch.</p>
<h3 id="heading-creating-a-payroll-payrollmodelcreate">Creating a Payroll (<code>PayrollModel.create</code>)</h3>
<p>Now that we’ve defined the <code>Payroll</code> and <code>PayrollItem</code> entities, we can move on to creating a payroll batch.</p>
<p>To keep our business logic organized, we’ll introduce a <code>PayrollModel</code> class. This class will be responsible for creating payroll records, calculating aggregates, and generating individual payroll items for each employee.</p>
<p>Before writing the model itself, let’s define the input required to create a payroll.</p>
<p>Add the following interface below the <code>PayrollItem</code> interface:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> CreatePayrollInput {
  payroll_period: <span class="hljs-built_in">string</span>;
  employee_ids?: <span class="hljs-built_in">number</span>[];
}
</code></pre>
<ul>
<li><p><code>payroll_period</code> identifies the payroll run (for example, <code>2025-01</code>)</p>
</li>
<li><p><code>employee_ids</code> is optional and allows us to run payroll for a subset of employees, enabling selective payouts or retries</p>
</li>
</ul>
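<p>To make the two modes concrete, here are two example inputs – a full run and a selective retry. The interface is repeated so the snippet stands alone; the employee IDs are hypothetical:</p>

```typescript
// Mirrors the CreatePayrollInput interface defined above
interface CreatePayrollInput {
  payroll_period: string;
  employee_ids?: number[];
}

// A full run pays every active employee for the period
const fullRun: CreatePayrollInput = { payroll_period: '2025-01' };

// A selective run targets only the listed (hypothetical) employee IDs,
// e.g. to retry two failed payouts from the same period
const selectiveRun: CreatePayrollInput = {
  payroll_period: '2025-01',
  employee_ids: [3, 7],
};
```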
<p>Next, create the <code>PayrollModel</code> class. This class will encapsulate all payroll-related database operations.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> PayrollModel {
<span class="hljs-comment">// Payroll model class methods will go here</span>
}
</code></pre>
<p>We’ll start by implementing the <code>create</code> method, which is responsible for creating a new payroll batch.</p>
<p>The method performs the following steps:</p>
<ol>
<li><p>Optionally filters employees if specific employee IDs are provided</p>
</li>
<li><p>Calculates aggregate payroll statistics from the employees table</p>
</li>
<li><p>Creates a payroll record with a <code>PENDING</code> status</p>
</li>
<li><p>Creates a payroll item for each eligible employee</p>
</li>
</ol>
<p>Here’s the implementation:</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> create(data: CreatePayrollInput): <span class="hljs-built_in">Promise</span>&lt;Payroll&gt; {
    <span class="hljs-keyword">let</span> employeeFilter = <span class="hljs-string">''</span>;
    <span class="hljs-keyword">let</span> queryParams: <span class="hljs-built_in">any</span>[] = [];

    <span class="hljs-comment">// Build filter for selective employee payrolls</span>
    <span class="hljs-keyword">if</span> (data.employee_ids &amp;&amp; data.employee_ids.length &gt; <span class="hljs-number">0</span>) {
      employeeFilter = <span class="hljs-string">`AND id = ANY($1::int[])`</span>;
      queryParams = [data.employee_ids];
    }

    <span class="hljs-comment">// Calculate aggregate statistics from employees table</span>
    <span class="hljs-keyword">const</span> employeeStats = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`SELECT COUNT(*) as count, COALESCE(SUM(salary), 0) as total
       FROM employees
       WHERE is_active = true <span class="hljs-subst">${employeeFilter}</span>`</span>,
      queryParams
    );

    <span class="hljs-keyword">const</span> totalEmployees = <span class="hljs-built_in">parseInt</span>(employeeStats.rows[<span class="hljs-number">0</span>].count);
    <span class="hljs-keyword">const</span> totalAmount = <span class="hljs-built_in">parseFloat</span>(employeeStats.rows[<span class="hljs-number">0</span>].total);

    <span class="hljs-comment">// Create the payroll record</span>
    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`INSERT INTO payrolls (payroll_period, total_amount, total_employees, status)
       VALUES ($1, $2, $3, $4)
       RETURNING *`</span>,
      [data.payroll_period, totalAmount, totalEmployees, PayrollStatus.PENDING]
    );

    <span class="hljs-keyword">const</span> payroll = result.rows[<span class="hljs-number">0</span>];

    <span class="hljs-comment">// Create payroll items for each employee</span>
    <span class="hljs-comment">// Each item starts with PENDING status and will be processed asynchronously</span>
    <span class="hljs-keyword">const</span> employees = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`SELECT id, salary FROM employees WHERE is_active = true <span class="hljs-subst">${employeeFilter}</span>`</span>,
      queryParams
    );

    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> employee <span class="hljs-keyword">of</span> employees.rows) {
      <span class="hljs-keyword">await</span> query(
        <span class="hljs-string">`INSERT INTO payroll_items (payroll_id, employee_id, amount, status)
         VALUES ($1, $2, $3, $4)`</span>,
        [payroll.id, employee.id, employee.salary, PayrollStatus.PENDING]
      );
    }

    <span class="hljs-keyword">return</span> payroll;
  }
</code></pre>
<p>The payroll creation process begins by determining which employees should be included. If specific employee IDs are provided, only those employees are selected – otherwise, all active employees are included. This allows the system to support both full payroll runs and selective payouts.</p>
<p>Next, the system calculates aggregate payroll statistics directly from the employees table by counting eligible employees and summing their salaries. These values are stored in a new payroll record created with a <code>PENDING</code> status.</p>
<p>Finally, a payroll item is generated for each eligible employee, with each item also initialized in a <code>PENDING</code> state. This design separates payroll setup from payment execution, allowing employee payments to be processed asynchronously and in parallel in later stages of the system.</p>
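<p>One hedged refinement worth noting: the per-employee loop above issues one <code>INSERT</code> round trip per employee. Under the same schema, the loop could be collapsed into a single set-based statement – a sketch, not part of the tutorial's implementation:</p>

```typescript
// Sketch: replace the per-employee INSERT loop with one set-based statement.
// $1 is the new payroll's id and $2 the initial status ('PENDING');
// the optional employee filter can be appended just as in create().
const bulkItemInsertSql = `
  INSERT INTO payroll_items (payroll_id, employee_id, amount, status)
  SELECT $1, id, salary, $2
  FROM employees
  WHERE is_active = true`;

// With node-postgres this would be executed as, e.g.:
// await query(bulkItemInsertSql, [payroll.id, 'PENDING']);
```

This trades a little flexibility for far fewer round trips on large batches, and it keeps item creation atomic within a single statement.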
<h3 id="heading-fetching-payroll-records">Fetching Payroll Records</h3>
<p>After creating payrolls, we often need to retrieve them for administrative dashboards, reporting, and audit trails.</p>
<p>The <code>PayrollModel</code> provides two simple methods:</p>
<ol>
<li><p><code>findById</code> – Retrieves a single payroll by its unique identifier</p>
</li>
<li><p><code>findAll</code> – Retrieves all payroll records, ordered by creation date (newest first)</p>
</li>
</ol>
<p>These methods should be added <strong>below</strong> the <code>create</code> method in the <code>PayrollModel</code> class:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findById(id: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;Payroll | <span class="hljs-literal">null</span>&gt; {
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(<span class="hljs-string">'SELECT * FROM payrolls WHERE id = $1'</span>, [id]);
  <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>] || <span class="hljs-literal">null</span>;
}

<span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findAll(): <span class="hljs-built_in">Promise</span>&lt;Payroll[]&gt; {
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
    <span class="hljs-string">'SELECT * FROM payrolls ORDER BY created_at DESC'</span>
  );
  <span class="hljs-keyword">return</span> result.rows;
}
</code></pre>
<p>The <code>findById</code> method retrieves a single payroll by its identifier, while <code>findAll</code> returns all payroll records ordered by creation date.</p>
<h3 id="heading-updating-payroll-status-payrollmodelupdatestatus">Updating Payroll Status (<code>PayrollModel.updateStatus</code>)</h3>
<p>Once payroll processing begins, we need a way to track the overall status of a payroll batch. The <code>updateStatus</code> method updates the payroll record with:</p>
<ul>
<li><p>The current status (<code>PENDING</code>, <code>PROCESSING</code>, <code>COMPLETED</code>, and so on)</p>
</li>
<li><p>Optional counts of processed and failed payments</p>
</li>
<li><p>A <code>processed_at</code> timestamp automatically set for terminal states (<code>COMPLETED</code> or <code>PARTIALLY_COMPLETED</code>)</p>
</li>
</ul>
<p>Add the following method below the fetch methods in your <code>PayrollModel</code> class:</p>
<pre><code class="lang-typescript">
  <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> updateStatus(
    id: <span class="hljs-built_in">number</span>,
    status: PayrollStatus,
    processedCount?: <span class="hljs-built_in">number</span>,
    failedCount?: <span class="hljs-built_in">number</span>
  ): <span class="hljs-built_in">Promise</span>&lt;Payroll&gt; {
    <span class="hljs-keyword">const</span> updates: <span class="hljs-built_in">string</span>[] = [<span class="hljs-string">'status = $2'</span>, <span class="hljs-string">'updated_at = NOW()'</span>];
    <span class="hljs-keyword">const</span> values: <span class="hljs-built_in">any</span>[] = [id, status];

    <span class="hljs-comment">// Dynamically add processed_count if provided</span>
    <span class="hljs-keyword">if</span> (processedCount !== <span class="hljs-literal">undefined</span>) {
      updates.push(<span class="hljs-string">`processed_count = $<span class="hljs-subst">${values.length + <span class="hljs-number">1</span>}</span>`</span>);
      values.push(processedCount);
    }

    <span class="hljs-comment">// Dynamically add failed_count if provided</span>
    <span class="hljs-keyword">if</span> (failedCount !== <span class="hljs-literal">undefined</span>) {
      updates.push(<span class="hljs-string">`failed_count = $<span class="hljs-subst">${values.length + <span class="hljs-number">1</span>}</span>`</span>);
      values.push(failedCount);
    }

    <span class="hljs-comment">// Set processed_at timestamp for terminal states</span>
    <span class="hljs-keyword">if</span> (
      status === PayrollStatus.COMPLETED ||
      status === PayrollStatus.PARTIALLY_COMPLETED
    ) {
      updates.push(<span class="hljs-string">`processed_at = NOW()`</span>);
    }

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
      <span class="hljs-string">`UPDATE payrolls SET <span class="hljs-subst">${updates.join(<span class="hljs-string">', '</span>)}</span> WHERE id = $1 RETURNING *`</span>,
      values
    );
    <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>];
  }
}
</code></pre>
<p>As payroll processing progresses, this method updates the overall payroll status along with optional counts of processed and failed payments. When a payroll reaches a terminal state such as <code>COMPLETED</code> or <code>PARTIALLY_COMPLETED</code>, the system automatically records a completion timestamp. This ensures accurate tracking of payroll execution and supports reconciliation workflows.</p>
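<p>The placeholder numbering in the dynamic <code>SET</code> clause is the easiest part to get wrong, so here is the same construction pulled out as a standalone, hypothetical helper that can be checked in isolation:</p>

```typescript
// Sketch of updateStatus's dynamic SET-clause construction as a pure function.
// $1 is reserved for the payroll id and $2 for the status; each optional count
// takes the next free placeholder, so numbering stays in sync with the values.
function buildStatusUpdate(
  id: number,
  status: string,
  processedCount?: number,
  failedCount?: number
): { setClause: string; values: any[] } {
  const updates: string[] = ['status = $2', 'updated_at = NOW()'];
  const values: any[] = [id, status];

  if (processedCount !== undefined) {
    updates.push(`processed_count = $${values.length + 1}`);
    values.push(processedCount);
  }
  if (failedCount !== undefined) {
    updates.push(`failed_count = $${values.length + 1}`);
    values.push(failedCount);
  }

  return { setClause: updates.join(', '), values };
}
```

For example, <code>buildStatusUpdate(1, 'COMPLETED', 10, 2)</code> produces <code>status = $2, updated_at = NOW(), processed_count = $3, failed_count = $4</code> with four bound values, and omitting <code>processedCount</code> correctly shifts <code>failed_count</code> down to <code>$3</code>.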
<h2 id="heading-payrollitemmodel">PayrollItemModel</h2>
<p>After handling payroll batches with <code>PayrollModel</code>, we need a way to manage individual employee payments. This is where the <code>PayrollItemModel</code> comes in. It encapsulates the database operations for payroll items: fetching records enriched with employee details and updating their status.</p>
<p>Start by adding a new class <strong>below</strong> <code>PayrollModel</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> PayrollItemModel {
  <span class="hljs-comment">// Methods will go here</span>
}
</code></pre>
<h3 id="heading-fetching-payroll-items-payrollitemmodelfindbypayrollid">Fetching Payroll Items (<code>PayrollItemModel.findByPayrollId</code>)</h3>
<p>Often, we need to fetch all payroll items for a specific payroll batch – for example, to display them on a dashboard or to process them in a background worker.</p>
<p>The <code>findByPayrollId</code> method does exactly that. It retrieves all payroll items associated with a specific payroll and enriches them with employee details such as name, bank account number, and bank information through a database join.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findByPayrollId(payrollId: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;PayrollItem[]&gt; {
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
    <span class="hljs-string">`SELECT
       pi.id, pi.payroll_id, pi.employee_id, pi.amount, pi.status,
       pi.transaction_reference, pi.error_message, pi.processed_at,
       pi.created_at, pi.updated_at,
       e.name as employee_name, e.employee_id as employee_identifier,
       e.account_number, e.bank_code, e.bank_name
     FROM payroll_items pi
     JOIN employees e ON pi.employee_id = e.id
     WHERE pi.payroll_id = $1
     ORDER BY pi.created_at`</span>,
      [payrollId]
    );
    <span class="hljs-comment">// Normalize numeric fields from PostgreSQL (which returns them as strings)</span>
    <span class="hljs-keyword">return</span> result.rows.map(<span class="hljs-function">(<span class="hljs-params">row</span>) =&gt;</span> ({
      ...row,
      employee_id: <span class="hljs-built_in">parseInt</span>(row.employee_id, <span class="hljs-number">10</span>),
      id: <span class="hljs-built_in">parseInt</span>(row.id, <span class="hljs-number">10</span>),
      payroll_id: <span class="hljs-built_in">parseInt</span>(row.payroll_id, <span class="hljs-number">10</span>),
      amount: <span class="hljs-built_in">parseFloat</span>(row.amount),
    }));
  }
</code></pre>
<p>Here’s what’s happening in the code:</p>
<ol>
<li><p>We use a JOIN with the <code>employees</code> table so each payroll item includes the employee’s name, account number, and bank information.</p>
</li>
<li><p>The PostgreSQL driver returns <code>BIGINT</code> and <code>NUMERIC</code> columns as strings (to avoid precision loss), so we convert them to proper JavaScript numbers (<code>parseInt</code> / <code>parseFloat</code>) for accurate calculations and display.</p>
</li>
<li><p>The results are ordered by creation date, which helps when rendering items in a UI or processing them sequentially.</p>
</li>
</ol>
<p>This method makes it easy to work with all items in a payroll batch while keeping the data enriched and consistent.</p>
<h3 id="heading-fetching-a-single-payroll-item-payrollitemmodelfindbyid">Fetching a Single Payroll Item (<code>PayrollItemModel.findById</code>)</h3>
<p>Sometimes, you need to look at one specific employee’s payment (for example, to retry a failed transaction or investigate an issue). The <code>findById</code> method fetches a single payroll item along with the employee’s details, so you have everything you need in one place.</p>
<p>Here’s how we implement it:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> findById(id: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;PayrollItem | <span class="hljs-literal">null</span>&gt; {
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
    <span class="hljs-string">`SELECT
       pi.id, pi.payroll_id, pi.employee_id, pi.amount, pi.status,
       pi.transaction_reference, pi.error_message, pi.processed_at,
       pi.created_at, pi.updated_at,
       e.name as employee_name, e.employee_id as employee_identifier,
       e.account_number, e.bank_code, e.bank_name
     FROM payroll_items pi
     JOIN employees e ON pi.employee_id = e.id
     WHERE pi.id = $1`</span>,
    [id]
  );

  <span class="hljs-keyword">if</span> (result.rows.length === <span class="hljs-number">0</span>) <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;

  <span class="hljs-keyword">const</span> row = result.rows[<span class="hljs-number">0</span>];

  <span class="hljs-comment">// Convert numeric fields to proper JavaScript numbers for easier calculations and display</span>
  <span class="hljs-keyword">return</span> {
    ...row,
    employee_id: <span class="hljs-built_in">parseInt</span>(row.employee_id, <span class="hljs-number">10</span>),
    id: <span class="hljs-built_in">parseInt</span>(row.id, <span class="hljs-number">10</span>),
    payroll_id: <span class="hljs-built_in">parseInt</span>(row.payroll_id, <span class="hljs-number">10</span>),
    amount: <span class="hljs-built_in">parseFloat</span>(row.amount),
  };
}
</code></pre>
<p>Here’s what’s happening in the code:</p>
<ul>
<li><p>We use a JOIN with the <code>employees</code> table to include employee info such as name, account number, and bank details.</p>
</li>
<li><p>If the ID doesn’t exist, the method returns <code>null</code> so you can handle missing records gracefully.</p>
</li>
<li><p>Numeric fields are converted to JavaScript numbers, making it easy to calculate totals or display amounts in the UI.</p>
</li>
</ul>
<p>This method ensures that whenever you need a single payroll item, you get a complete, ready-to-use record.</p>
<h3 id="heading-updating-payroll-item-status-payrollitemmodelupdatestatus">Updating Payroll Item Status (<code>PayrollItemModel.updateStatus</code>)</h3>
<p>As individual employee payments are processed, this method updates the payroll item’s status, stores transaction references from external payment providers, captures error messages on failure, and timestamps completion or failure events. This fine-grained tracking enables reliable retries, audits, and reconciliation with external payment systems.</p>
<p>Here’s the implementation:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> updateStatus(
  id: <span class="hljs-built_in">number</span>,
  status: PayrollStatus,
  transactionReference?: <span class="hljs-built_in">string</span>,
  errorMessage?: <span class="hljs-built_in">string</span>
): <span class="hljs-built_in">Promise</span>&lt;PayrollItem&gt; {
  <span class="hljs-keyword">const</span> updates: <span class="hljs-built_in">string</span>[] = [<span class="hljs-string">'status = $2'</span>, <span class="hljs-string">'updated_at = NOW()'</span>];
  <span class="hljs-keyword">const</span> values: <span class="hljs-built_in">any</span>[] = [id, status];

  <span class="hljs-comment">// Add transaction reference if provided (from Monnify API response)</span>
  <span class="hljs-keyword">if</span> (transactionReference) {
    updates.push(<span class="hljs-string">`transaction_reference = $<span class="hljs-subst">${values.length + <span class="hljs-number">1</span>}</span>`</span>);
    values.push(transactionReference);
  }

  <span class="hljs-comment">// Add error message if provided (from failed payment)</span>
  <span class="hljs-keyword">if</span> (errorMessage) {
    updates.push(<span class="hljs-string">`error_message = $<span class="hljs-subst">${values.length + <span class="hljs-number">1</span>}</span>`</span>);
    values.push(errorMessage);
  }

  <span class="hljs-comment">// Set processed_at timestamp for terminal states</span>
  <span class="hljs-keyword">if</span> (status === PayrollStatus.COMPLETED || status === PayrollStatus.FAILED) {
    updates.push(<span class="hljs-string">`processed_at = NOW()`</span>);
  }

  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> query(
    <span class="hljs-string">`UPDATE payroll_items SET <span class="hljs-subst">${updates.join(
      <span class="hljs-string">', '</span>
     )}</span> WHERE id = $1 RETURNING *`</span>,
     values
   );
   <span class="hljs-keyword">return</span> result.rows[<span class="hljs-number">0</span>];
 }
}
</code></pre>
<p>Here’s what’s happening in the code:</p>
<ul>
<li><p>We build a dynamic SET clause to update only the fields provided – status is required, while transaction reference and error message are optional.</p>
</li>
<li><p>Terminal states (<code>COMPLETED</code> or <code>FAILED</code>) trigger an automatic timestamp on <code>processed_at</code>, so we always know when a payment finished.</p>
</li>
<li><p>The method returns the updated payroll item, ready for further processing, logging, or UI display.</p>
</li>
</ul>
<p>This ensures each payroll item is tracked accurately throughout its lifecycle, enabling reliable retries and complete audit trails.</p>
<h3 id="heading-overall-payroll-flow">Overall Payroll Flow</h3>
<p>In this payroll flow, an administrator creates a payroll batch, which generates individual payroll items for each employee. The payroll is then handed off to background workers that process each payroll item independently via an external payment service.</p>
<p>As each payment succeeds or fails, payroll items are updated accordingly. Once processing concludes, the payroll batch status is updated to reflect the overall outcome, whether fully successful, partially successful, or failed.</p>
<p>This architecture provides scalability, resilience, and strong auditability for real-world payroll systems.</p>
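<p>The batch outcome described above – fully successful, partially successful, or failed – can be captured as a small pure function. This is a hypothetical helper, not part of the tutorial's code, using the same status names as earlier in this section:</p>

```typescript
// Sketch: derive the final payroll batch status from item outcomes.
type BatchStatus = 'COMPLETED' | 'PARTIALLY_COMPLETED' | 'FAILED';

function finalBatchStatus(total: number, succeeded: number, failed: number): BatchStatus {
  if (failed === 0 && succeeded === total) return 'COMPLETED'; // every payment went through
  if (succeeded === 0) return 'FAILED';                        // nothing went through
  return 'PARTIALLY_COMPLETED';                                // mixed outcome
}
```

A worker could call this once all items in a batch have reached a terminal state, then pass the result to <code>PayrollModel.updateStatus</code> along with the counts.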
<h2 id="heading-building-the-monnify-client">Building the Monnify Client</h2>
<p>The Monnify client is the bridge between our application and Monnify's payment API. In this section, we'll build a reusable client that handles authentication, bulk payroll disbursements, transaction tracking, and balance checks. The client automatically manages API tokens, retries failed requests, and provides a clean interface for the rest of our application.</p>
<p>All of this Monnify-specific logic is abstracted behind a single class, making it easy to integrate into background jobs, payroll processors, or service layers.</p>
<p>We’ll begin by creating a new file at <code>src/config/monnify.ts</code> where we’ll implement the Monnify client.</p>
<h3 id="heading-configuration-and-environment-setup">Configuration and Environment Setup</h3>
<p>Start by loading the configuration from environment variables using <code>dotenv</code>, ensuring that sensitive credentials are never hardcoded. These include the Monnify API key, secret key, base URL, and contract code (wallet account number). This setup allows the same client to be safely used across development, staging, and production environments.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> axios, { AxiosInstance } <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;
<span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;

dotenv.config();

<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> MonnifyConfig {
  apiKey: <span class="hljs-built_in">string</span>;
  secretKey: <span class="hljs-built_in">string</span>;
  baseUrl: <span class="hljs-built_in">string</span>;
  contractCode: <span class="hljs-built_in">string</span>;
}
</code></pre>
<h3 id="heading-create-the-monnifyclient-class">Create the <code>MonnifyClient</code> Class</h3>
<p>Next, we’ll define the <code>MonnifyClient</code> class. This class encapsulates all communication with the Monnify API. It internally manages the API credentials, an Axios HTTP client, an access token, and token expiry tracking.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> MonnifyClient {
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> apiKey: <span class="hljs-built_in">string</span>;
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> secretKey: <span class="hljs-built_in">string</span>;
  <span class="hljs-keyword">private</span> baseUrl: <span class="hljs-built_in">string</span>;
  <span class="hljs-keyword">private</span> contractCode: <span class="hljs-built_in">string</span>;
  <span class="hljs-keyword">private</span> client: AxiosInstance;

  <span class="hljs-keyword">private</span> accessToken: <span class="hljs-built_in">string</span> | <span class="hljs-literal">null</span> = <span class="hljs-literal">null</span>;
  <span class="hljs-keyword">private</span> tokenExpiry: <span class="hljs-built_in">number</span> = <span class="hljs-number">0</span>;
</code></pre>
<p>This design ensures authentication is handled transparently and automatically for every request.</p>
<h3 id="heading-axios-client-and-request-interceptor">Axios Client and Request Interceptor</h3>
<p>Inside the constructor, initialize the Monnify client with credentials from environment variables. The Axios instance is created with the Monnify base URL and JSON headers.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">constructor</span>(<span class="hljs-params"></span>) {
    <span class="hljs-built_in">this</span>.apiKey = process.env.MONNIFY_API_KEY || <span class="hljs-string">''</span>;
    <span class="hljs-built_in">this</span>.secretKey = process.env.MONNIFY_SECRET_KEY || <span class="hljs-string">''</span>;
    <span class="hljs-built_in">this</span>.baseUrl = process.env.MONNIFY_BASE_URL || <span class="hljs-string">'https://api.monnify.com'</span>;
    <span class="hljs-built_in">this</span>.contractCode = process.env.MONNIFY_CONTRACT_CODE || <span class="hljs-string">''</span>;

    <span class="hljs-built_in">this</span>.client = axios.create({
      baseURL: <span class="hljs-built_in">this</span>.baseUrl,
      headers: {
        <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json'</span>,
      },
    });
</code></pre>
<p>We attach the request interceptor to this client to automatically inject a valid Bearer token into every outgoing request (except the authentication endpoint). Before each request, the interceptor ensures the client is authenticated, preventing unauthorized requests and eliminating token-related boilerplate across the codebase.</p>
<pre><code class="lang-typescript">    <span class="hljs-built_in">this</span>.client.interceptors.request.use(<span class="hljs-keyword">async</span> (config: <span class="hljs-built_in">any</span>) =&gt; {
      <span class="hljs-comment">// Skip auth for the login endpoint itself</span>
      <span class="hljs-keyword">if</span> (config.url?.includes(<span class="hljs-string">'/auth/login'</span>)) {
        <span class="hljs-keyword">return</span> config;
      }

      <span class="hljs-comment">// Ensure a valid token exists before every request</span>
      <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ensureAuthenticated();

      <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.accessToken) {
        config.headers.Authorization = <span class="hljs-string">`Bearer <span class="hljs-subst">${<span class="hljs-built_in">this</span>.accessToken}</span>`</span>;
      }

      <span class="hljs-keyword">return</span> config;
    });
  }
</code></pre>
<h3 id="heading-authenticate-with-monnify">Authenticate with Monnify</h3>
<p>Authentication is handled using Monnify’s Basic Auth mechanism, where the API key and secret key are base64-encoded and sent to the <code>/auth/login</code> endpoint. Upon successful authentication, the client stores the returned access token and sets an internal expiry timestamp slightly below the official token lifetime to avoid edge-case expirations. Any authentication failure is logged and surfaced as a controlled error to prevent silent failures.</p>
<pre><code class="lang-typescript">
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> authenticate(): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-comment">// Encode credentials as Base64 for Basic Auth</span>
      <span class="hljs-keyword">const</span> credentials = Buffer.from(
        <span class="hljs-string">`<span class="hljs-subst">${<span class="hljs-built_in">this</span>.apiKey}</span>:<span class="hljs-subst">${<span class="hljs-built_in">this</span>.secretKey}</span>`</span>
      ).toString(<span class="hljs-string">'base64'</span>);

      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> axios.post(
        <span class="hljs-string">`<span class="hljs-subst">${<span class="hljs-built_in">this</span>.baseUrl}</span>/api/v1/auth/login`</span>,
        {},
        {
          headers: {
            Authorization: <span class="hljs-string">`Basic <span class="hljs-subst">${credentials}</span>`</span>,
            <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json'</span>,
          },
        }
      );

      <span class="hljs-built_in">this</span>.accessToken = response.data.responseBody.accessToken;
      <span class="hljs-comment">// Set expiry to 23 hours (Monnify tokens typically last 24 hours)</span>
      <span class="hljs-comment">// This prevents edge cases where token expires mid-request</span>
      <span class="hljs-built_in">this</span>.tokenExpiry = <span class="hljs-built_in">Date</span>.now() + <span class="hljs-number">23</span> * <span class="hljs-number">60</span> * <span class="hljs-number">60</span> * <span class="hljs-number">1000</span>;
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(
        <span class="hljs-string">'Monnify authentication error:'</span>,
        error.response?.data || error.message
      );
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Failed to authenticate with Monnify'</span>);
    }
  }
</code></pre>
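<p>The Basic Auth header is simply the base64 encoding of <code>apiKey:secretKey</code>. A self-contained sketch of that encoding step, with placeholder credentials:</p>

```typescript
// Sketch: the Basic Auth credential string sent to /api/v1/auth/login.
// The key and secret below are placeholders, never real credentials.
function basicCredentials(apiKey: string, secretKey: string): string {
  return Buffer.from(`${apiKey}:${secretKey}`).toString('base64');
}

const encoded = basicCredentials('MK_TEST_KEY', 'TEST_SECRET');
// The header would then be sent as: `Basic ${encoded}`
```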
<h3 id="heading-automatic-token-refresh-ensureauthenticated">Automatic Token Refresh (<code>ensureAuthenticated</code>)</h3>
<p>Before any API call, the client verifies whether a valid access token exists or if the token has expired. If so, it transparently re-authenticates.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> ensureAuthenticated(): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">if</span> (!<span class="hljs-built_in">this</span>.accessToken || <span class="hljs-built_in">Date</span>.now() &gt;= <span class="hljs-built_in">this</span>.tokenExpiry) {
      <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.authenticate();
    }
  }
</code></pre>
<p>This ensures that long-running processes such as payroll queues or background workers can safely make Monnify requests without manual token handling.</p>
<h3 id="heading-initiating-bulk-transfers">Initiating Bulk Transfers</h3>
<p>The <code>initiateBulkTransfer</code> method handles the creation of a bulk disbursement batch, typically used for payroll payments. It validates input transfers to ensure each payment has a valid amount, destination account number, and bank code.</p>
<p>A structured batch request is then constructed, including a unique batch reference, source account (contract code), narration, and a list of transactions. The request is logged for traceability and sent to Monnify’s batch disbursement endpoint. Any API error is normalized and returned with meaningful messaging to aid debugging and retries.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">async</span> initiateBulkTransfer(
    transfers: <span class="hljs-built_in">Array</span>&lt;{
      amount: <span class="hljs-built_in">number</span>;
      recipientAccountNumber: <span class="hljs-built_in">string</span>;
      recipientBankCode: <span class="hljs-built_in">string</span>;
      recipientName: <span class="hljs-built_in">string</span>;
      narration: <span class="hljs-built_in">string</span>;
      reference: <span class="hljs-built_in">string</span>;
    }&gt;
  ): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">any</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ensureAuthenticated();
</code></pre>
<p>We validate inputs early to fail fast:</p>
<pre><code class="lang-typescript">    <span class="hljs-keyword">if</span> (!transfers || transfers.length === <span class="hljs-number">0</span>) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'No transfers provided'</span>);
    }

    <span class="hljs-keyword">if</span> (!<span class="hljs-built_in">this</span>.contractCode) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Monnify contract code is not configured'</span>);
    }
</code></pre>
<p>Each transfer is validated individually:</p>
<pre><code class="lang-typescript">    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> transfer <span class="hljs-keyword">of</span> transfers) {
      <span class="hljs-keyword">if</span> (!transfer.amount || transfer.amount &lt;= <span class="hljs-number">0</span>) {
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`Invalid amount for transfer: <span class="hljs-subst">${transfer.reference}</span>`</span>);
      }
      <span class="hljs-keyword">if</span> (!transfer.recipientAccountNumber) {
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`Missing account number for transfer: <span class="hljs-subst">${transfer.reference}</span>`</span>);
      }
      <span class="hljs-keyword">if</span> (!transfer.recipientBankCode) {
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`Missing bank code for transfer: <span class="hljs-subst">${transfer.reference}</span>`</span>);
      }
    }
</code></pre>
<p>We then construct the batch payload:</p>
<pre><code class="lang-typescript">    <span class="hljs-keyword">const</span> requestBody = {
      title: <span class="hljs-string">'Bulk Payroll Transfers'</span>,
      batchReference: <span class="hljs-string">`BATCH_<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
      narration: <span class="hljs-string">'Payroll batch disbursement'</span>,
      sourceAccountNumber: <span class="hljs-built_in">this</span>.contractCode,
      onValidationFailure: <span class="hljs-string">'CONTINUE'</span>,
      notificationInterval: <span class="hljs-number">50</span>,
      transactionList: transfers.map(<span class="hljs-function">(<span class="hljs-params">t</span>) =&gt;</span> ({
        amount: t.amount,
        reference: t.reference,
        narration: t.narration,
        destinationBankCode: t.recipientBankCode,
        destinationAccountNumber: t.recipientAccountNumber,
        currency: <span class="hljs-string">'NGN'</span>,
      })),
    };
</code></pre>
<p>Finally, we send the request and normalize errors:</p>
<pre><code class="lang-typescript">    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.client.post(
        <span class="hljs-string">'/api/v2/disbursements/batch'</span>,
        requestBody
      );
      <span class="hljs-keyword">return</span> response.data;
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-keyword">const</span> errorData = error.response?.data;
      <span class="hljs-keyword">const</span> message =
        errorData?.responseMessage ||
        errorData?.message ||
        <span class="hljs-string">`Monnify API error (<span class="hljs-subst">${error.response?.status}</span>)`</span>;
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(message);
    }
  }
</code></pre>
<h3 id="heading-authorizing-bulk-transfers-otp-validation">Authorizing Bulk Transfers (OTP Validation)</h3>
<p>Some bulk transfers require OTP authorization. The <code>authorizeBulkTransfer</code> method validates the presence of a batch reference and authorization code before submitting them to Monnify’s OTP validation endpoint. This step finalizes the batch disbursement and allows processing to continue. Errors are logged and surfaced clearly for operational visibility.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> authorizeBulkTransfer(
  reference: <span class="hljs-built_in">string</span>,
  authorizationCode: <span class="hljs-built_in">string</span>
): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">any</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ensureAuthenticated();
    <span class="hljs-keyword">if</span> (!reference) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Batch reference is required'</span>);
    }

    <span class="hljs-keyword">if</span> (!authorizationCode) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Authorization code (OTP) is required'</span>);
    }

    <span class="hljs-keyword">const</span> requestBody = {
      reference,
      authorizationCode,
    };

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.client.post(
        <span class="hljs-string">'/api/v2/disbursements/batch/validate-otp'</span>,
        requestBody
      );

      <span class="hljs-keyword">return</span> response.data;
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-keyword">const</span> errorDetails = error.response?.data || error.message;
      <span class="hljs-built_in">console</span>.error(
        <span class="hljs-string">'Monnify authorization error:'</span>,
        <span class="hljs-built_in">JSON</span>.stringify(errorDetails, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>)
      );

      <span class="hljs-keyword">if</span> (error.response) {
        <span class="hljs-keyword">const</span> errorData = error.response.data;
        <span class="hljs-keyword">const</span> errorMessage =
          errorData?.responseMessage ||
          errorData?.message ||
          <span class="hljs-string">`Monnify API error (<span class="hljs-subst">${error.response.status}</span>)`</span>;
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(errorMessage);
      }
      <span class="hljs-keyword">throw</span> error;
    }
}
</code></pre>
<h3 id="heading-transaction-status-lookup">Transaction Status Lookup</h3>
<p>The <code>getTransactionStatus</code> method retrieves the real-time status of an individual transaction using its reference.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> getTransactionStatus(transactionReference: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">any</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ensureAuthenticated();
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.client.get(
        <span class="hljs-string">`/api/v2/disbursements/<span class="hljs-subst">${transactionReference}</span>/status`</span>
      );
      <span class="hljs-keyword">return</span> response.data;
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(
        <span class="hljs-string">'Monnify status check error:'</span>,
        error.response?.data || error.message
      );
      <span class="hljs-keyword">throw</span> error;
    }
}
</code></pre>
<p>This is useful for reconciliation, webhook fallbacks, or manual verification of disbursement outcomes.</p>
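<p>For example, a webhook fallback can translate the gateway's reported status into the internal payroll status before persisting it. A minimal sketch (the gateway status strings here are assumptions; verify them against the responses you actually receive from Monnify):</p>

```typescript
// Hypothetical mapping from a gateway status string to our internal states.
function mapGatewayStatus(
  gatewayStatus: string
): 'COMPLETED' | 'FAILED' | 'PROCESSING' {
  switch (gatewayStatus) {
    case 'SUCCESS':
      return 'COMPLETED';
    case 'FAILED':
    case 'REVERSED':
      return 'FAILED';
    default:
      // PENDING, IN_PROGRESS, or anything unrecognized stays in flight.
      return 'PROCESSING';
  }
}
```

<p>Keeping this mapping in one place means the webhook handler and the polling fallback cannot drift apart in how they interpret gateway responses.</p>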
<h3 id="heading-batch-details-retrieval">Batch Details Retrieval</h3>
<p>The <code>getBatchDetails</code> method fetches detailed information about an entire disbursement batch, including the state of individual transactions.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> getBatchDetails(batchReference: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">any</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ensureAuthenticated();
    <span class="hljs-keyword">if</span> (!batchReference) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Batch reference is required'</span>);
    }

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.client.get(
        <span class="hljs-string">`/api/v2/disbursements/batch/<span class="hljs-subst">${batchReference}</span>`</span>
      );
      <span class="hljs-keyword">return</span> response.data;
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(
        <span class="hljs-string">'Monnify batch details error:'</span>,
        error.response?.data || error.message
      );
      <span class="hljs-keyword">throw</span> error;
    }
}
</code></pre>
<p>This is particularly useful when reconciling payroll runs or recovering from partial failures.</p>
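<p>For instance, when recovering from a partial failure, the batch details response can be reduced to the set of references that still need attention. A sketch, assuming each entry in the response's transaction list carries a <code>reference</code> and a <code>status</code> field:</p>

```typescript
type BatchTransaction = { reference: string; status: string };

// Collect the references of transactions that did not go through,
// so only those payroll items are retried.
function failedReferences(transactions: BatchTransaction[]): string[] {
  return transactions
    .filter((t) => t.status === 'FAILED')
    .map((t) => t.reference);
}
```

<p>The resulting references can then be matched back to payroll items before re-enqueueing them.</p>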
<h3 id="heading-wallet-balance-check">Wallet Balance Check</h3>
<p>Finally, we can query the available balance of the Monnify wallet.</p>
<p>The <code>getAccountBalance</code> method retrieves the available balance of the configured Monnify wallet (contract account). Add it to the <code>MonnifyClient</code> class in <code>src/config/monnify.ts</code>, then export a shared client instance at the bottom of the file:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> getAccountBalance(): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">any</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ensureAuthenticated();

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.client.get(
        <span class="hljs-string">`/api/v2/disbursements/wallet-balance?accountNumber=<span class="hljs-subst">${<span class="hljs-built_in">this</span>.contractCode}</span>`</span>
      );
      <span class="hljs-keyword">return</span> response.data;
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(
        <span class="hljs-string">'Monnify balance check error:'</span>,
        error.response?.data || error.message
      );
      <span class="hljs-keyword">throw</span> error;
    }
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> monnifyClient = <span class="hljs-keyword">new</span> MonnifyClient();
</code></pre>
<p>Key features of this client:</p>
<ol>
<li><p><strong>Automatic token management</strong>: The client automatically handles authentication and refreshes tokens before they expire.</p>
</li>
<li><p><strong>Request interceptor</strong>: Every API request automatically includes the authentication token.</p>
</li>
<li><p><strong>Bulk transfers</strong>: Uses Monnify's batch disbursement API for efficient payroll processing.</p>
</li>
<li><p><strong>Error handling</strong>: Comprehensive error handling with meaningful error messages.</p>
</li>
</ol>
<h2 id="heading-implementing-background-job-processing">Implementing Background Job Processing</h2>
<p>To avoid blocking HTTP requests and to ensure reliable retries, payroll execution is handled asynchronously using a background job processor. This worker is responsible for orchestrating bulk payroll disbursements, coordinating with Monnify, updating payroll and payroll item states, and handling retries safely.</p>
<p>Begin by creating a new file at <code>src/jobs/payroll.processor.ts</code>. All background payroll execution logic will live in this file.</p>
<h3 id="heading-set-up-the-payroll-processing-queue">Set Up the Payroll Processing Queue</h3>
<p>We’ll create a Bull queue named <code>payroll-processing</code>, backed by Redis. Redis connection details are loaded from environment variables, allowing flexibility across environments.</p>
<p>Default job options are configured to retry failed jobs up to three times using an exponential backoff strategy. This ensures resilience against transient failures such as network issues or temporary payment gateway downtime. Completed jobs are automatically removed from the queue to keep Redis storage clean.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> Queue <span class="hljs-keyword">from</span> <span class="hljs-string">'bull'</span>;
<span class="hljs-keyword">import</span> { monnifyClient } <span class="hljs-keyword">from</span> <span class="hljs-string">'../config/monnify'</span>;
<span class="hljs-keyword">import</span> {
  PayrollItemModel,
  PayrollModel,
  PayrollStatus,
} <span class="hljs-keyword">from</span> <span class="hljs-string">'../models/payroll'</span>;
<span class="hljs-keyword">import</span> { EmployeeModel } <span class="hljs-keyword">from</span> <span class="hljs-string">'../models/employee'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> payrollQueue = <span class="hljs-keyword">new</span> Queue(<span class="hljs-string">'payroll-processing'</span>, {
  redis: {
    host: process.env.REDIS_HOST || <span class="hljs-string">'localhost'</span>,
    port: <span class="hljs-built_in">Number</span>(process.env.REDIS_PORT || <span class="hljs-number">6379</span>),
  },
  defaultJobOptions: {
    attempts: <span class="hljs-number">3</span>,
    backoff: {
      <span class="hljs-keyword">type</span>: <span class="hljs-string">'exponential'</span>,
      delay: <span class="hljs-number">2000</span>,
    },
    removeOnComplete: <span class="hljs-literal">true</span>,
  },
});
</code></pre>
<h3 id="heading-queue-processor-registration">Queue Processor Registration</h3>
<p>The queue registers a processor function using <code>payrollQueue.process</code>, which receives jobs containing a <code>payrollId</code>. Each job triggers the <code>processBulkPayroll</code> function, making the queue responsible for executing one payroll batch at a time.</p>
<pre><code class="lang-typescript">payrollQueue.process(<span class="hljs-keyword">async</span> (job) =&gt; {
  <span class="hljs-keyword">return</span> processBulkPayroll(job.data.payrollId);
});
</code></pre>
<p>This design decouples payroll execution from HTTP requests and allows processing to happen asynchronously in background workers.</p>
<h3 id="heading-bulk-payroll-processing-flow-processbulkpayroll">Bulk Payroll Processing Flow (<code>processBulkPayroll</code>)</h3>
<p>When a payroll job is picked up, the system first fetches all payroll items associated with the given payroll ID. It then keeps only the items that are eligible for processing: those still in a <code>PENDING</code> state, or previously marked as <code>PROCESSING</code> but missing a transaction reference.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">processBulkPayroll</span>(<span class="hljs-params">payrollId: <span class="hljs-built_in">number</span></span>) </span>{
  <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(payrollId);
</code></pre>
<p>Next, it filters the payroll items down to those that still require processing. This prevents duplicate payments when jobs are retried.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> payable = items.filter(
  <span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span>
    i.status === PayrollStatus.PENDING ||
    (i.status === PayrollStatus.PROCESSING &amp;&amp; !i.transaction_reference)
);

<span class="hljs-keyword">if</span> (payable.length === <span class="hljs-number">0</span>) <span class="hljs-keyword">return</span>;
</code></pre>
<p>If no payable items remain, the function exits early, avoiding unnecessary API calls.</p>
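<p>The eligibility rule is small enough to factor into a predicate, which makes the retry-safety property easy to unit-test. A sketch (the original inlines the condition, and the string statuses stand in for the <code>PayrollStatus</code> enum):</p>

```typescript
type PayableCheck = { status: string; transaction_reference?: string | null };

// An item is payable if it was never started, or was started but no
// transaction reference was recorded (i.e. the transfer never left us).
function isPayable(item: PayableCheck): boolean {
  return (
    item.status === 'PENDING' ||
    (item.status === 'PROCESSING' && !item.transaction_reference)
  );
}
```

<p>An item that is <code>PROCESSING</code> <em>with</em> a transaction reference is deliberately excluded: a transfer may already be in flight at Monnify, and paying it again would double-pay the employee.</p>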
<p>Once we confirm there are payable items, we update the overall payroll status:</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">await</span> PayrollModel.updateStatus(payrollId, PayrollStatus.PROCESSING);
</code></pre>
<p>This provides immediate visibility that disbursement is underway.</p>
<h3 id="heading-building-the-bulk-transfer-payload">Building the Bulk Transfer Payload</h3>
<p>Create a variable to store the transfer list that will be sent to Monnify.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">const</span> transfers = [];
</code></pre>
<p>For each payable payroll item, the corresponding employee record is fetched to retrieve bank and account details. A unique payment reference is generated using the payroll ID and payroll item ID, ensuring traceability across systems. Each payroll item is immediately marked as <code>PROCESSING</code> before initiating payment to prevent concurrent workers from attempting to process the same item.</p>
<p>A transfer object is then constructed containing the payment amount, recipient bank details, narration, and unique reference. These transfer objects are accumulated into a single batch request.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> payable) {
    <span class="hljs-keyword">const</span> employee = <span class="hljs-keyword">await</span> EmployeeModel.findById(item.employee_id);
    <span class="hljs-keyword">if</span> (!employee) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Employee not found'</span>);

    <span class="hljs-keyword">const</span> reference = <span class="hljs-string">`PAYROLL_<span class="hljs-subst">${payrollId}</span>_<span class="hljs-subst">${item.id}</span>`</span>;

    <span class="hljs-keyword">await</span> PayrollItemModel.updateStatus(item.id, PayrollStatus.PROCESSING);

    transfers.push({
      amount: <span class="hljs-built_in">Number</span>(item.amount),
      reference,
      recipientAccountNumber: employee.account_number,
      recipientBankCode: employee.bank_code,
      recipientName: employee.name,
      narration: <span class="hljs-string">`Payroll payment`</span>,
    });

}
</code></pre>
<h3 id="heading-initiating-bulk-disbursement-via-monnify">Initiating Bulk Disbursement via Monnify</h3>
<p>Once all transfers are prepared, the system initiates a bulk transfer through the Monnify client.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> monnifyClient.initiateBulkTransfer(transfers);

<span class="hljs-keyword">if</span> (!response?.requestSuccessful) {
  <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Bulk transfer initiation failed'</span>);
}
</code></pre>
<p>If Monnify doesn’t confirm successful initiation, the job throws an error, allowing Bull’s retry mechanism to take over. This ensures failed initiation attempts are retried safely without manual intervention.</p>
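<p>Under the queue's configuration (<code>attempts: 3</code> with an exponential backoff and a 2000&nbsp;ms base delay), the wait before each retry doubles: 2&nbsp;s, then 4&nbsp;s, then 8&nbsp;s. A sketch of that schedule, treating Bull's exact rounding as an internal detail:</p>

```typescript
// Delay (in ms) before retry number n under exponential backoff:
// baseDelayMs * 2^(n - 1), i.e. 2000, 4000, 8000, ...
function retryDelayMs(retryNumber: number, baseDelayMs = 2000): number {
  return baseDelayMs * Math.pow(2, retryNumber - 1);
}
```

<p>Doubling the delay gives transient failures (network blips, brief gateway downtime) progressively more room to clear before the next attempt.</p>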
<h3 id="heading-storing-transaction-references">Storing Transaction References</h3>
<p>After a successful bulk transfer initiation, Monnify returns a list of transactions containing unique transaction references. The system matches each response entry to its corresponding payroll item using the generated reference and updates the payroll item record with the Monnify transaction reference while keeping its status as <code>PROCESSING</code>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> results = response.responseBody?.transactionList || [];

<span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> payable) {
    <span class="hljs-keyword">const</span> ref = <span class="hljs-string">`PAYROLL_<span class="hljs-subst">${payrollId}</span>_<span class="hljs-subst">${item.id}</span>`</span>;
    <span class="hljs-keyword">const</span> match = results.find(<span class="hljs-function">(<span class="hljs-params">r: <span class="hljs-built_in">any</span></span>) =&gt;</span> r.reference === ref);

    <span class="hljs-keyword">if</span> (match?.transactionReference) {
      <span class="hljs-keyword">await</span> PayrollItemModel.updateStatus(
        item.id,
        PayrollStatus.PROCESSING,
        match.transactionReference
      );
    }

}

<span class="hljs-keyword">await</span> updatePayrollStats(payrollId);
}
</code></pre>
<p>This step is critical for later reconciliation through webhooks or status polling.</p>
<h3 id="heading-payroll-statistics-reconciliation-updatepayrollstats">Payroll Statistics Reconciliation (<code>updatePayrollStats</code>)</h3>
<p>After initiating payments, the system recalculates payroll-level statistics by refetching all payroll items.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updatePayrollStats</span>(<span class="hljs-params">payrollId: <span class="hljs-built_in">number</span></span>) </span>{
  <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(payrollId);

  <span class="hljs-keyword">const</span> completed = items.filter(
    <span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.COMPLETED
  ).length;
</code></pre>
<p>The overall payroll status is derived from these counts:</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">const</span> failed = items.filter(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.FAILED).length;

  <span class="hljs-keyword">let</span> status = PayrollStatus.PROCESSING;

  <span class="hljs-keyword">if</span> (completed === items.length) {
    status = PayrollStatus.COMPLETED;
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (failed === items.length) {
    status = PayrollStatus.FAILED;
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (completed &gt; <span class="hljs-number">0</span>) {
    status = PayrollStatus.PARTIALLY_COMPLETED;
  }

  <span class="hljs-keyword">await</span> PayrollModel.updateStatus(payrollId, status, completed, failed);
}
</code></pre>
<p>If all items are completed, the payroll is marked as <code>COMPLETED</code>. If all failed, it’s marked as <code>FAILED</code>. If some succeeded and some failed, it’s marked as <code>PARTIALLY_COMPLETED</code>. Otherwise, it remains in <code>PROCESSING</code>. The payroll record is then updated with the new status and aggregate counts, providing an accurate real-time snapshot of payroll execution.</p>
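<p>Because this derivation is pure, it can be lifted into a standalone function and unit-tested in isolation. A sketch (the original keeps the logic inline, and the string statuses stand in for the <code>PayrollStatus</code> enum):</p>

```typescript
type ItemStatus = 'PENDING' | 'PROCESSING' | 'COMPLETED' | 'FAILED';

// Derive the aggregate payroll status from its items' statuses.
// Assumes the payroll has at least one item.
function derivePayrollStatus(statuses: ItemStatus[]): string {
  const completed = statuses.filter((s) => s === 'COMPLETED').length;
  const failed = statuses.filter((s) => s === 'FAILED').length;

  if (completed === statuses.length) return 'COMPLETED';
  if (failed === statuses.length) return 'FAILED';
  if (completed > 0) return 'PARTIALLY_COMPLETED';
  return 'PROCESSING';
}
```

<p>Testing the four branches directly is much cheaper than driving a full payroll run through the queue to observe each outcome.</p>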
<h3 id="heading-queue-entry-point-processpayrollitems">Queue Entry Point (<code>processPayrollItems</code>)</h3>
<p>The <code>processPayrollItems</code> function serves as the public entry point for triggering payroll execution.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">processPayrollItems</span>(<span class="hljs-params">payrollId: <span class="hljs-built_in">number</span></span>) </span>{
  <span class="hljs-keyword">await</span> payrollQueue.add({ payrollId, <span class="hljs-keyword">type</span>: <span class="hljs-string">'bulk'</span> });
}
</code></pre>
<p>It simply enqueues a payroll job with the relevant payroll ID, allowing controllers or services to initiate payroll processing without coupling themselves to queue logic or payment execution details.</p>
<h3 id="heading-role-in-the-overall-payroll-architecture">Role in the Overall Payroll Architecture</h3>
<p>This queue worker acts as the execution engine of the payroll system. It:</p>
<ul>
<li><p>Bridges payroll domain models with the Monnify payment gateway</p>
</li>
<li><p>Ensures safe retries through Bull’s job management and maintains idempotency</p>
</li>
<li><p>Continuously synchronizes payroll and payroll item states</p>
</li>
</ul>
<p>By offloading payment execution to background workers, the system achieves scalability, reliability, and operational resilience required for real-world payroll processing.</p>
<p>Key features of the job processor:</p>
<ol>
<li><p><strong>Exponential backoff</strong>: Failed jobs are retried with increasing delays (2s, 4s, 8s).</p>
</li>
<li><p><strong>Bulk processing</strong>: All payroll items are processed as a single batch transfer.</p>
</li>
<li><p><strong>Status tracking</strong>: Each item's status is updated throughout the process.</p>
</li>
<li><p><strong>Automatic cleanup</strong>: Completed jobs are automatically removed from the queue.</p>
</li>
</ol>
<h2 id="heading-creating-the-api-controllers">Creating the API Controllers</h2>
<p>Next, we’ll build the HTTP controller layer for managing employees in the payroll system using Express.js. It exposes RESTful API endpoints that handle incoming requests, perform validation, interact with the employee data model, and return appropriate HTTP responses.</p>
<p>The controller acts as the bridge between client-facing APIs and the underlying business logic encapsulated in the <code>EmployeeModel</code>.</p>
<h3 id="heading-controller-responsibilities">Controller Responsibilities</h3>
<p>The <code>EmployeeController</code> is responsible for:</p>
<ul>
<li><p>Validating incoming request data</p>
</li>
<li><p>Calling the appropriate model methods</p>
</li>
<li><p>Handling errors gracefully</p>
</li>
<li><p>Returning meaningful HTTP status codes and JSON responses</p>
</li>
</ul>
<p>Each method follows a consistent structure using <code>try–catch</code> blocks to ensure reliability and simplify error handling.</p>
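<p>If that repetition ever becomes a burden, the pattern can be extracted into a wrapper. A sketch of one way to do it (not part of the tutorial's code; the controller below keeps explicit <code>try–catch</code> blocks for clarity):</p>

```typescript
type Handler = (req: any, res: any) => Promise<void>;

// Wrap a handler so any thrown error becomes a logged 500 response.
function withErrorHandling(fn: Handler): Handler {
  return async (req, res) => {
    try {
      await fn(req, res);
    } catch (error: any) {
      console.error('Unhandled controller error:', error);
      res.status(500).json({ error: error.message || 'Internal server error' });
    }
  };
}
```

<p>Each route would then register <code>withErrorHandling(EmployeeController.createEmployee)</code> instead of the bare method, and individual handlers could drop their outer <code>try–catch</code>.</p>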
<p>Start by creating a new file at <code>src/controllers/employee.controller.ts</code>. This file will contain all the endpoints needed to manage employees in the payroll system.</p>
<p>At the top of the file, import the required Express types and the employee model:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Request, Response } <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> { EmployeeModel, CreateEmployeeInput } <span class="hljs-keyword">from</span> <span class="hljs-string">'../models/employee'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> EmployeeController {
  <span class="hljs-comment">// Controller methods will go here</span>
}
</code></pre>
<p>Each method inside this class will map to a specific API endpoint.</p>
<h3 id="heading-creating-an-employee-createemployee">Creating an Employee (<code>createEmployee</code>)</h3>
<p>We’ll start with an endpoint for creating a new employee.</p>
<p>This endpoint handles the creation of a new employee record. It extracts the request body and validates the presence of required fields such as name, email, salary, bank account number, and bank code.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> createEmployee(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> data: CreateEmployeeInput = req.body;

      <span class="hljs-keyword">if</span> (
        !data.name ||
        !data.email ||
        !data.salary ||
        !data.account_number ||
        !data.bank_code
      ) {
        res.status(<span class="hljs-number">400</span>).json({
          error:
            <span class="hljs-string">'Missing required fields: name, email, salary, account_number, bank_code'</span>,
        });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> employee = <span class="hljs-keyword">await</span> EmployeeModel.create(data);
      res.status(<span class="hljs-number">201</span>).json({
        message: <span class="hljs-string">'Employee created successfully'</span>,
        data: employee,
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error creating employee:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to create employee'</span> });
    }
}
</code></pre>
<p>If any required field is missing, the request is rejected with a <code>400 Bad Request</code> response.</p>
<p>Upon successful validation, the controller delegates employee creation to the <code>EmployeeModel.create</code> method and returns a <code>201 Created</code> response containing the newly created employee. Any unexpected error during the process results in a <code>500 Internal Server Error</code>.</p>
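<p>The required-field check also lends itself to a small helper that reports exactly which fields are absent, which makes the <code>400</code> response more actionable. A sketch (the controller above uses a single combined message instead):</p>

```typescript
const REQUIRED_EMPLOYEE_FIELDS = [
  'name',
  'email',
  'salary',
  'account_number',
  'bank_code',
] as const;

// Return the names of required fields that are missing or falsy.
function missingEmployeeFields(body: Record<string, unknown>): string[] {
  return REQUIRED_EMPLOYEE_FIELDS.filter((field) => !body[field]);
}
```

<p>The handler could then respond with <code>{ error: 'Missing required fields', fields: missingEmployeeFields(req.body) }</code>, telling the client precisely what to fix.</p>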
<h3 id="heading-fetching-all-employees-getallemployees">Fetching All Employees (<code>getAllEmployees</code>)</h3>
<p>Next, we’ll add an endpoint for retrieving all employee records from the system.</p>
<p>This endpoint simply calls <code>EmployeeModel.findAll</code> and returns the result as a JSON response. This API is typically used for administrative dashboards, payroll preparation, or reporting purposes.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> getAllEmployees(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> employees = <span class="hljs-keyword">await</span> EmployeeModel.findAll();
    res.json({ data: employees });
  } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching employees:'</span>, error);
    res
      .status(<span class="hljs-number">500</span>)
      .json({ error: error.message || <span class="hljs-string">'Failed to fetch employees'</span> });
  }
}
</code></pre>
<p>If the retrieval is successful, the controller responds with the full list of employees. If something goes wrong, such as a database failure or an unexpected runtime error, the error is logged and a <code>500 Internal Server Error</code> is returned to the client.</p>
<h3 id="heading-fetching-a-single-employee-getemployeebyid">Fetching a Single Employee (<code>getEmployeeById</code>)</h3>
<p>After listing all employees, the next logical step is being able to fetch a single employee by their ID.</p>
<p>This endpoint retrieves a specific employee by ID, which is parsed from the URL parameters. If the employee doesn’t exist, the controller responds with a <code>404 Not Found</code>. Otherwise, the employee data is returned in a successful response. This endpoint is useful for viewing or editing individual employee details.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> getEmployeeById(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> { id } = req.params;
      <span class="hljs-keyword">const</span> employee = <span class="hljs-keyword">await</span> EmployeeModel.findById(<span class="hljs-built_in">parseInt</span>(id));

      <span class="hljs-keyword">if</span> (!employee) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Employee not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      res.json({ data: employee });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching employee:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to fetch employee'</span> });
    }
  }
</code></pre>
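<p>One detail worth flagging in the snippet above: <code>parseInt</code> returns <code>NaN</code> for non-numeric input, so a request like <code>GET /employees/abc</code> would pass <code>NaN</code> straight into <code>EmployeeModel.findById</code>. A small guard (our own sketch, not part of the original controller) can turn that into a clean <code>400</code> instead:</p>

```typescript
// Hypothetical helper: validate the :id route parameter before querying.
// parseInt('abc') yields NaN, which would otherwise reach the database layer.
function parseId(raw: string): number | null {
  const id = Number.parseInt(raw, 10);
  return Number.isNaN(id) || id <= 0 ? null : id;
}

// Usage sketch inside a controller method:
// const id = parseId(req.params.id);
// if (id === null) {
//   res.status(400).json({ error: 'Invalid employee ID' });
//   return;
// }
```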
<h3 id="heading-updating-an-employee-updateemployee">Updating an Employee (<code>updateEmployee</code>)</h3>
<p>Now that we can retrieve individual employees, the next step is allowing their details to be updated.</p>
<p>This endpoint allows partial updates to an existing employee record. It first checks whether the employee exists before attempting an update.</p>
<p>If the employee isn’t found, a <code>404 Not Found</code> response is returned. If the employee exists, the controller forwards the update payload to <code>EmployeeModel.update</code> and returns the updated employee record. This approach ensures data integrity and prevents silent failures.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> updateEmployee(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> { id } = req.params;
      <span class="hljs-keyword">const</span> data: Partial&lt;CreateEmployeeInput&gt; = req.body;

      <span class="hljs-keyword">const</span> employee = <span class="hljs-keyword">await</span> EmployeeModel.findById(<span class="hljs-built_in">parseInt</span>(id));
      <span class="hljs-keyword">if</span> (!employee) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Employee not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> updated = <span class="hljs-keyword">await</span> EmployeeModel.update(<span class="hljs-built_in">parseInt</span>(id), data);
      res.json({
        message: <span class="hljs-string">'Employee updated successfully'</span>,
        data: updated,
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error updating employee:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to update employee'</span> });
    }
  }
</code></pre>
<h3 id="heading-deleting-an-employee-deleteemployee">Deleting an Employee (<code>deleteEmployee</code>)</h3>
<p>Finally, the last endpoint in the <code>EmployeeController</code> handles employee deletion.</p>
<p>Before deleting, it verifies that the employee exists to avoid invalid delete operations. If found, the employee record is removed using <code>EmployeeModel.delete</code>, and a success message is returned. If the employee doesn’t exist, the controller responds with a <code>404 Not Found</code>.</p>
<pre><code class="lang-typescript"> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> deleteEmployee(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> { id } = req.params;

      <span class="hljs-keyword">const</span> employee = <span class="hljs-keyword">await</span> EmployeeModel.findById(<span class="hljs-built_in">parseInt</span>(id));
      <span class="hljs-keyword">if</span> (!employee) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Employee not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">await</span> EmployeeModel.delete(<span class="hljs-built_in">parseInt</span>(id));
      res.json({ message: <span class="hljs-string">'Employee deleted successfully'</span> });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error deleting employee:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to delete employee'</span> });
    }
  }
</code></pre>
<h3 id="heading-error-handling-strategy">Error Handling Strategy</h3>
<p>All controller methods use structured error handling to log errors internally while returning clean and user-friendly error messages to API consumers. This separation ensures sensitive implementation details are not leaked while still providing useful feedback for debugging and client-side handling.</p>
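<p>Because every <code>catch</code> block in this controller follows the same shape, the pattern can be factored into a small helper. The sketch below is purely illustrative (the <code>toErrorResponse</code> name does not appear in the codebase); it mirrors the behavior above: log the full error internally, return only the message or a generic fallback to the client.</p>

```typescript
// Illustrative helper mirroring the catch blocks in this controller:
// log the raw error server-side, expose only a message to the API consumer.
function toErrorResponse(error: unknown, fallback: string): { error: string } {
  console.error(fallback + ':', error);
  const message =
    error instanceof Error && error.message ? error.message : fallback;
  return { error: message };
}

// Usage sketch:
// res.status(500).json(toErrorResponse(error, 'Failed to fetch employees'));
```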
<h3 id="heading-role-in-the-overall-payroll-system">Role in the Overall Payroll System</h3>
<p>The <code>EmployeeController</code> provides the foundational APIs required for managing employee records, which are essential inputs for payroll processing. By cleanly separating HTTP concerns from business logic and persistence layers, this controller supports maintainability, scalability, and clear system boundaries within the payroll architecture.</p>
<h3 id="heading-payroll-controller">Payroll Controller</h3>
<p>This module defines the PayrollController, which serves as the primary HTTP-facing orchestration layer for payroll operations in the system. It exposes RESTful APIs that allow clients to create payrolls, retrieve payroll data, trigger payroll processing, reconcile payment results, authorize bulk transfers, and monitor transaction and account statuses.</p>
<h3 id="heading-controller-responsibilities-1">Controller Responsibilities</h3>
<p>The <code>PayrollController</code> is responsible for:</p>
<ul>
<li><p>Accepting and validating client requests related to payrolls</p>
</li>
<li><p>Managing payroll lifecycle transitions (creation → processing → completion)</p>
</li>
<li><p>Triggering background job execution for bulk payroll disbursement</p>
</li>
<li><p>Reconciling payment results with Monnify</p>
</li>
<li><p>Providing real-time payroll and transaction status visibility</p>
</li>
<li><p>Acting as a safe boundary between external clients and internal services</p>
</li>
</ul>
<p>To get started, create a new file <code>src/controllers/payroll.controller.ts</code>. This is where we’ll define all payroll-related endpoints.</p>
<p>At the top of <code>src/controllers/payroll.controller.ts</code>, we start with the following imports:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Request, Response } <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> {
  PayrollModel,
  PayrollItemModel,
  PayrollStatus,
} <span class="hljs-keyword">from</span> <span class="hljs-string">'../models/payroll'</span>;
<span class="hljs-keyword">import</span> { processPayrollItems } <span class="hljs-keyword">from</span> <span class="hljs-string">'../jobs/payroll.processor'</span>;
<span class="hljs-keyword">import</span> { monnifyClient } <span class="hljs-keyword">from</span> <span class="hljs-string">'../config/monnify'</span>;
</code></pre>
<p>Here’s what each of these is responsible for:</p>
<ul>
<li><p><code>Request</code> and <code>Response</code> (from Express): These types give us strongly typed access to incoming HTTP requests and outgoing responses.</p>
</li>
<li><p><code>PayrollModel</code>: This model handles payroll batch operations such as creating payrolls, fetching them, and updating their overall status.</p>
</li>
<li><p><code>PayrollItemModel</code>: Each payroll batch is made up of individual payroll items, one per employee. This model lets us fetch and update those items, especially during processing and reconciliation.</p>
</li>
<li><p><code>PayrollStatus</code>: This is an enum that defines the valid states of a payroll or payroll item (for example: <code>PENDING</code>, <code>PROCESSING</code>, <code>COMPLETED</code>, <code>FAILED</code>). Using an enum helps keep state transitions explicit and consistent across the system.</p>
</li>
<li><p><code>processPayrollItems</code>: This function is responsible for handing off payroll processing to background workers. Instead of processing payrolls synchronously in the HTTP request, we queue the work and let workers handle it asynchronously.</p>
</li>
<li><p><code>monnifyClient</code>: This is our gateway to the external payment service. We use it to authorize bulk transfers, check transaction statuses, reconcile payments, and fetch account balances.</p>
</li>
</ul>
<p>Together, these imports give the controller everything it needs to process payroll operations.</p>
<p>With our imports in place, we can now define the controller class itself. This class will serve as the single home for all payroll-related endpoints.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> PayrollController {
  <span class="hljs-comment">// Payroll endpoints will live here</span>
}
</code></pre>
<h3 id="heading-creating-a-payroll-createpayroll">Creating a Payroll (<code>createPayroll</code>)</h3>
<p>With the controller in place, we’ll begin by implementing the <code>createPayroll</code> endpoint. It initializes a new payroll batch, letting us run payroll for either all employees or a subset selected by their IDs.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> createPayroll(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { payroll_period, employee_ids } = req.body;

      <span class="hljs-keyword">if</span> (!payroll_period) {
        res.status(<span class="hljs-number">400</span>).json({ error: <span class="hljs-string">'payroll_period is required'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> processedEmployeeIds = employee_ids
        ? employee_ids
            .map(<span class="hljs-function">(<span class="hljs-params">id: <span class="hljs-built_in">any</span></span>) =&gt;</span> <span class="hljs-built_in">parseInt</span>(id, <span class="hljs-number">10</span>))
            .filter(<span class="hljs-function">(<span class="hljs-params">id: <span class="hljs-built_in">number</span></span>) =&gt;</span> !<span class="hljs-built_in">isNaN</span>(id))
        : <span class="hljs-literal">undefined</span>;

      <span class="hljs-keyword">const</span> payroll = <span class="hljs-keyword">await</span> PayrollModel.create({
        payroll_period,
        employee_ids: processedEmployeeIds,
      });

      res.status(<span class="hljs-number">201</span>).json({
        message: <span class="hljs-string">'Payroll created successfully'</span>,
        data: payroll,
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error creating payroll:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to create payroll'</span> });
    }
  }
</code></pre>
<p>Here’s what’s happening in the code:</p>
<ul>
<li><p>The endpoint requires a <code>payroll_period</code> and optionally accepts a list of employee IDs to support partial payroll runs.</p>
</li>
<li><p>Incoming employee IDs are normalized and validated to ensure they are valid integers before being passed to the payroll model.</p>
</li>
<li><p>The controller delegates the actual creation logic to <code>PayrollModel.create</code>, which computes totals and creates payroll items.</p>
</li>
<li><p>On success, the API responds with a <code>201 Created</code> status and the newly created payroll record.</p>
</li>
</ul>
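<p>The ID normalization step is easy to see in isolation. Here it is extracted as a pure function (our refactoring for illustration, not code from the project):</p>

```typescript
// Coerce each incoming value to a base-10 integer and drop anything
// non-numeric, matching the map/filter chain in createPayroll.
function normalizeEmployeeIds(raw: unknown[] | undefined): number[] | undefined {
  if (!raw) return undefined; // no filter supplied: run payroll for everyone
  return raw
    .map((id) => Number.parseInt(String(id), 10))
    .filter((id) => !Number.isNaN(id));
}
```

<p>So a payload like <code>["1", "2", "abc"]</code> becomes <code>[1, 2]</code>, while omitting <code>employee_ids</code> entirely leaves the filter undefined.</p>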
<h3 id="heading-fetching-all-payrolls-getallpayrolls">Fetching All Payrolls (<code>getAllPayrolls</code>)</h3>
<p>This endpoint retrieves all payroll batches in the system. It’s typically used for administrative dashboards and payroll history views. The controller simply delegates to <code>PayrollModel.findAll</code> and returns the results in a structured JSON response.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> getAllPayrolls(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> payrolls = <span class="hljs-keyword">await</span> PayrollModel.findAll();
res.json({ data: payrolls });
} <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
<span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching payrolls:'</span>, error);
res
.status(<span class="hljs-number">500</span>)
.json({ error: error.message || <span class="hljs-string">'Failed to fetch payrolls'</span> });
}
}

</code></pre>
<h3 id="heading-fetching-a-payroll-with-items-getpayrollbyid">Fetching a Payroll with Items (<code>getPayrollById</code>)</h3>
<p>Next, we’ll implement an endpoint to retrieve a single payroll by its ID along with all associated payroll items. This is the endpoint that detail views and payroll drill-downs rely on.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> getPayrollById(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { id } = req.params;
<span class="hljs-keyword">const</span> payroll = <span class="hljs-keyword">await</span> PayrollModel.findById(<span class="hljs-built_in">parseInt</span>(id));

      <span class="hljs-keyword">if</span> (!payroll) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Payroll not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(payroll.id);

      res.json({
        data: {
          ...payroll,
          items,
        },
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching payroll:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to fetch payroll'</span> });
    }
  }
</code></pre>
<p>In the code, we read the <code>id</code> parameter from the URL and convert it to an integer.</p>
<p>If the payroll does not exist, a <code>404 Not Found</code> response is returned. When found, the controller aggregates payroll metadata and its child payroll items into a single response object, making it convenient for detailed payroll inspection and UI rendering.</p>
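<p>The merge itself is plain object spread: payroll fields stay at the top level and the child records are nested under <code>items</code>. A minimal standalone illustration (the field values here are invented, not real records):</p>

```typescript
// Illustrative data only; real records come from PayrollModel and PayrollItemModel.
const payroll = { id: 7, payroll_period: '2026-05', status: 'PENDING' };
const items = [{ id: 1, payroll_id: 7, amount: 250000 }];

// Same shape the handler returns under `data`:
// payroll fields at the top level, child items nested alongside them.
const data = { ...payroll, items };
```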
<h3 id="heading-processing-a-payroll-processpayroll">Processing a Payroll (<code>processPayroll</code>)</h3>
<p>Next, we implement the <code>processPayroll</code> endpoint. This endpoint initiates payroll execution. Before queuing the payroll for processing, the controller enforces important state checks to prevent duplicate or invalid execution, ensuring payrolls that are already <code>PROCESSING</code> or <code>COMPLETED</code> cannot be reprocessed.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> processPayroll(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { id } = req.params;

      <span class="hljs-keyword">const</span> payroll = <span class="hljs-keyword">await</span> PayrollModel.findById(<span class="hljs-built_in">Number</span>(id));

      <span class="hljs-keyword">if</span> (!payroll) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Payroll not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">if</span> (
        payroll.status === PayrollStatus.COMPLETED ||
        payroll.status === PayrollStatus.PROCESSING
      ) {
        res.status(<span class="hljs-number">400</span>).json({
          error: <span class="hljs-string">`Payroll already <span class="hljs-subst">${payroll.status}</span>`</span>,
        });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-comment">// Queue the payroll for processing</span>
      <span class="hljs-keyword">await</span> processPayrollItems(payroll.id);

      res.json({
        message: <span class="hljs-string">'Payroll queued for bulk processing'</span>,
        data: {
          payroll_id: payroll.id,
          processing_mode: <span class="hljs-string">'bulk'</span>,
        },
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error processing payroll:'</span>, error);
      res.status(<span class="hljs-number">500</span>).json({
        error: error.message || <span class="hljs-string">'Failed to process payroll'</span>,
      });
    }
  }
</code></pre>
<p>Here’s what’s happening in the code:</p>
<ul>
<li><p>We get the <code>id</code> parameter from the URL and convert it to a number.</p>
</li>
<li><p>If no payroll is found with the given ID, we return a <code>404 Not Found</code> response.</p>
</li>
<li><p>Before queuing, we check the payroll’s current status. Payrolls that are already <code>PROCESSING</code> or <code>COMPLETED</code> cannot be reprocessed.</p>
</li>
<li><p>Valid payrolls are handed off to <code>processPayrollItems</code>, which runs the bulk execution in background workers (Bull jobs).</p>
</li>
<li><p>Once queued, we respond with a JSON object confirming the payroll is ready for bulk processing.</p>
</li>
</ul>
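<p>The guard condition reads like a small state machine rule: only payrolls that are not already in flight or finished may be queued. As a pure predicate (our sketch, using the status names from the <code>PayrollStatus</code> enum):</p>

```typescript
type Status = 'PENDING' | 'PROCESSING' | 'COMPLETED' | 'FAILED' | 'PARTIALLY_COMPLETED';

// Mirrors the check in processPayroll: block in-flight and finished payrolls,
// but allow FAILED (and PARTIALLY_COMPLETED) runs to be retried.
function canProcess(status: Status): boolean {
  return status !== 'PROCESSING' && status !== 'COMPLETED';
}
```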
<h3 id="heading-reconciling-payroll-payments-reconcilepayroll">Reconciling Payroll Payments (<code>reconcilePayroll</code>)</h3>
<p>Next, we’ll implement the endpoint that reconciles payroll payments. This ensures that the statuses of payroll items in our system match the actual payment outcomes from Monnify.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> reconcilePayroll(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { id } = req.params;

      <span class="hljs-keyword">const</span> payroll = <span class="hljs-keyword">await</span> PayrollModel.findById(<span class="hljs-built_in">Number</span>(id));
      <span class="hljs-keyword">if</span> (!payroll) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Payroll not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(<span class="hljs-built_in">Number</span>(id));

      <span class="hljs-keyword">const</span> itemsToReconcile = items.filter(
        <span class="hljs-function">(<span class="hljs-params">item</span>) =&gt;</span> item.transaction_reference
      );

      <span class="hljs-keyword">if</span> (itemsToReconcile.length === <span class="hljs-number">0</span>) {
        res.json({
          message: <span class="hljs-string">'No items to reconcile (no transaction references found)'</span>,
          reconciled: <span class="hljs-number">0</span>,
        });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">let</span> updated = <span class="hljs-number">0</span>;
      <span class="hljs-keyword">let</span> errors = <span class="hljs-number">0</span>;

      <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> itemsToReconcile) {
        <span class="hljs-keyword">try</span> {
          <span class="hljs-keyword">const</span> txStatus = <span class="hljs-keyword">await</span> monnifyClient.getTransactionStatus(
            item.transaction_reference!
          );

          <span class="hljs-keyword">const</span> responseBody = txStatus.responseBody || txStatus;
          <span class="hljs-keyword">const</span> paymentStatus =
            responseBody.paymentStatus || responseBody.status;

          <span class="hljs-keyword">if</span> (
            paymentStatus === <span class="hljs-string">'PAID'</span> &amp;&amp;
            item.status !== PayrollStatus.COMPLETED
          ) {
            <span class="hljs-keyword">await</span> PayrollItemModel.updateStatus(
              item.id,
              PayrollStatus.COMPLETED,
              item.transaction_reference
            );
            updated++;
          } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (
            paymentStatus === <span class="hljs-string">'FAILED'</span> &amp;&amp;
            item.status !== PayrollStatus.FAILED
          ) {
            <span class="hljs-keyword">const</span> errorMessage =
              responseBody.paymentDescription ||
              responseBody.failureReason ||
              <span class="hljs-string">'Transaction failed'</span>;
            <span class="hljs-keyword">await</span> PayrollItemModel.updateStatus(
              item.id,
              PayrollStatus.FAILED,
              item.transaction_reference,
              errorMessage
            );
            updated++;
          }
        } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
          errors++;
          <span class="hljs-built_in">console</span>.error(<span class="hljs-string">`Error reconciling item <span class="hljs-subst">${item.id}</span>:`</span>, error.message);
        }
      }

      <span class="hljs-comment">// Update payroll stats</span>
      <span class="hljs-keyword">await</span> PayrollController.updatePayrollStats(<span class="hljs-built_in">Number</span>(id));

      res.json({
        message: <span class="hljs-string">'Payroll reconciled successfully'</span>,
        reconciled: updated,
        errors,
        total: itemsToReconcile.length,
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error reconciling payroll:'</span>, error);
      res.status(<span class="hljs-number">500</span>).json({
        error: error.message || <span class="hljs-string">'Failed to reconcile payroll'</span>,
      });
    }
  }
</code></pre>
<p>The endpoint retrieves all payroll items with transaction references and queries Monnify for each transaction’s status. Based on the response, payroll items are updated to either <code>COMPLETED</code> or <code>FAILED</code>, with failure reasons captured where applicable.</p>
<p>Errors during reconciliation are tracked and logged without aborting the entire reconciliation process. After reconciliation, payroll-level statistics are recalculated to ensure consistency between item-level and batch-level states.</p>
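<p>Stripped of the I/O, the per-item decision in the loop reduces to a small mapping: given Monnify’s reported status and the status we currently have stored, which update (if any) should be applied? Extracted as a pure function for clarity (our sketch, mirroring the branches above):</p>

```typescript
// Returns the status to store for a payroll item, or null if no update is needed.
function reconcileDecision(
  paymentStatus: string,
  currentStatus: string
): 'COMPLETED' | 'FAILED' | null {
  if (paymentStatus === 'PAID' && currentStatus !== 'COMPLETED') return 'COMPLETED';
  if (paymentStatus === 'FAILED' && currentStatus !== 'FAILED') return 'FAILED';
  return null; // still pending at Monnify, or already in sync
}
```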
<h3 id="heading-payroll-statistics-update-internal-helper">Payroll Statistics Update (Internal Helper)</h3>
<p>The private <code>updatePayrollStats</code> method recalculates payroll status based on the aggregate states of its payroll items.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> updatePayrollStats(payrollId: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(payrollId);

    <span class="hljs-keyword">const</span> completed = items.filter(
      <span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.COMPLETED
    ).length;
    <span class="hljs-keyword">const</span> failed = items.filter(
      <span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.FAILED
    ).length;
    <span class="hljs-keyword">const</span> total = items.length;

    <span class="hljs-keyword">let</span> status: PayrollStatus;
    <span class="hljs-keyword">if</span> (completed === total) {
      status = PayrollStatus.COMPLETED;
    } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (failed === total) {
      status = PayrollStatus.FAILED;
    } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (completed &gt; <span class="hljs-number">0</span>) {
      status = PayrollStatus.PARTIALLY_COMPLETED;
    } <span class="hljs-keyword">else</span> {
      status = PayrollStatus.PROCESSING;
    }

    <span class="hljs-keyword">await</span> PayrollModel.updateStatus(payrollId, status, completed, failed);
  }
</code></pre>
<p>This helper determines whether a payroll is fully completed, fully failed, partially completed, or still processing, and updates the payroll record accordingly.</p>
<p>This logic guarantees that the payroll’s summary status always reflects the true execution state of its underlying payments.</p>
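<p>The decision table is compact enough to state as a pure function of the three counts (our extraction; note that, as in the original branch order, an empty payroll with zero items reports <code>COMPLETED</code>, since <code>0 === 0</code>):</p>

```typescript
// Derive the batch-level status from item-level counts,
// mirroring the branch order in updatePayrollStats.
function deriveStatus(completed: number, failed: number, total: number): string {
  if (completed === total) return 'COMPLETED';
  if (failed === total) return 'FAILED';
  if (completed > 0) return 'PARTIALLY_COMPLETED';
  return 'PROCESSING';
}
```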
<h3 id="heading-fetching-payroll-status-summary-getpayrollstatus">Fetching Payroll Status Summary (<code>getPayrollStatus</code>)</h3>
<p>Next, we’ll implement the <code>getPayrollStatus</code> endpoint. This endpoint provides a comprehensive status snapshot of a payroll. In addition to returning payroll metadata and items, it computes a summary breakdown of completed, failed, pending, and processing items. This endpoint is particularly useful for real-time dashboards, monitoring tools, and operational visibility.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> getPayrollStatus(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { id } = req.params;
<span class="hljs-keyword">const</span> payroll = <span class="hljs-keyword">await</span> PayrollModel.findById(<span class="hljs-built_in">parseInt</span>(id));

      <span class="hljs-keyword">if</span> (!payroll) {
        res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Payroll not found'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(payroll.id);

      res.json({
        data: {
          ...payroll,
          items,
          summary: {
            total: items.length,
            completed: items.filter(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.COMPLETED)
              .length,
            failed: items.filter(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.FAILED)
              .length,
            pending: items.filter(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.PENDING)
              .length,
            processing: items.filter(
              <span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.PROCESSING
            ).length,
          },
        },
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching payroll status:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to fetch payroll status'</span> });
    }
  }
</code></pre>
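<p>A small design note on the summary: the four <code>filter</code> calls each walk the items array, which is perfectly fine at typical payroll sizes. If you prefer a single pass, an equivalent loop looks like this (a style alternative, not a correction):</p>

```typescript
type ItemStatus = 'COMPLETED' | 'FAILED' | 'PENDING' | 'PROCESSING';

// Count every status bucket in one pass over the items.
function summarize(items: { status: ItemStatus }[]) {
  const summary = { total: items.length, completed: 0, failed: 0, pending: 0, processing: 0 };
  for (const item of items) {
    summary[item.status.toLowerCase() as 'completed' | 'failed' | 'pending' | 'processing']++;
  }
  return summary;
}
```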
<h3 id="heading-authorizing-bulk-transfers-authorizebulktransfer">Authorizing Bulk Transfers (<code>authorizeBulkTransfer</code>)</h3>
<p>Next, we’ll implement the <code>authorizeBulkTransfer</code> endpoint. Some bulk disbursements require OTP authorization from Monnify. This endpoint accepts a batch reference and authorization code, validates their presence, and forwards them to the Monnify client for verification. Successful authorization allows the bulk transfer to proceed, while failures are clearly reported to the client.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> authorizeBulkTransfer(
req: Request,
res: Response
): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { reference, authorizationCode, payrollId } = req.body;

      <span class="hljs-keyword">if</span> (!reference) {
        res.status(<span class="hljs-number">400</span>).json({ error: <span class="hljs-string">'Batch reference is required'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">if</span> (!authorizationCode) {
        res.status(<span class="hljs-number">400</span>).json({ error: <span class="hljs-string">'Authorization code (OTP) is required'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> monnifyClient.authorizeBulkTransfer(
        reference,
        authorizationCode
      );

      res.json({
        message: <span class="hljs-string">'Bulk transfer authorized successfully'</span>,
        data: result,
      });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error authorizing bulk transfer:'</span>, error);
      res.status(<span class="hljs-number">500</span>).json({
        error: error.message || <span class="hljs-string">'Failed to authorize bulk transfer'</span>,
      });
    }
}
</code></pre>
<p>Here is what’s happening in the code:</p>
<ul>
<li><p>Firstly, we get the batch reference, OTP, and optional payroll ID from the request body.</p>
</li>
<li><p>We return a <code>400 Bad Request</code> if the reference or OTP is missing.</p>
</li>
<li><p>Next, we send the reference and OTP to Monnify to approve the bulk transfer.</p>
</li>
<li><p>Finally, if authorization succeeds, we return a JSON confirmation containing Monnify’s response.</p>
</li>
</ul>
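<p>The two presence checks above follow a pattern that repeats across the controller. As an illustrative sketch (the helper name <code>firstMissingField</code> is our own, not part of the codebase), they could be factored out:</p>

```typescript
// Hypothetical helper, not part of the original controller: returns the
// first missing field name from a request body, or null if all are present.
function firstMissingField(body: any, fields: string[]): string | null {
  for (const field of fields) {
    const value = body ? body[field] : undefined;
    if (value === undefined || value === null || value === '') {
      return field;
    }
  }
  return null;
}

// Usage inside a handler would replace the repeated if/return blocks:
//   const missing = firstMissingField(req.body, ['reference', 'authorizationCode']);
//   if (missing) { res.status(400).json({ error: missing + ' is required' }); return; }
```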
<h3 id="heading-checking-transaction-status-checktransactionstatus">Checking Transaction Status (<code>checkTransactionStatus</code>)</h3>
<p>This endpoint allows clients or administrators to query the status of an individual transaction using its reference. It delegates the lookup to the Monnify client and returns the raw response, making it useful for debugging, audits, or manual verification workflows.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> checkTransactionStatus(
req: Request,
res: Response
): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-keyword">const</span> { reference } = req.params;

      <span class="hljs-keyword">if</span> (!reference) {
        res.status(<span class="hljs-number">400</span>).json({ error: <span class="hljs-string">'Transaction reference is required'</span> });
        <span class="hljs-keyword">return</span>;
      }

      <span class="hljs-keyword">const</span> status = <span class="hljs-keyword">await</span> monnifyClient.getTransactionStatus(reference);
      res.json({ data: status });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error checking transaction status:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to check transaction status'</span> });
    }
}
</code></pre>
<h3 id="heading-checking-wallet-balance-getaccountbalance">Checking Wallet Balance (<code>getAccountBalance</code>)</h3>
<p>This endpoint retrieves the current balance of the Monnify wallet associated with the payroll contract code. It’s typically used for pre-disbursement checks, monitoring available funds, or administrative reporting.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> getAccountBalance(req: Request, res: Response): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> balance = <span class="hljs-keyword">await</span> monnifyClient.getAccountBalance();
      res.json({ data: balance });
    } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching account balance:'</span>, error);
      res
        .status(<span class="hljs-number">500</span>)
        .json({ error: error.message || <span class="hljs-string">'Failed to fetch account balance'</span> });
    }
  }
</code></pre>
<h3 id="heading-error-handling-and-resilience">Error Handling and Resilience</h3>
<p>All controller methods use structured <code>try–catch</code> blocks to ensure unexpected failures are logged and surfaced as controlled HTTP error responses. This approach prevents sensitive internal errors from leaking while maintaining clarity and debuggability for API consumers.</p>
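<p>One way to reduce that repetition is a shared error responder. This is a sketch of the pattern, not code from the project; the name <code>handleControllerError</code> is illustrative:</p>

```typescript
// Illustrative sketch mirroring each catch block above: log the error,
// then send a controlled 500 with the error message or a safe fallback.
function handleControllerError(res: any, error: any, fallback: string): void {
  console.error(fallback + ':', error);
  const message = error ? error.message : null;
  res.status(500).json({ error: message || fallback });
}

// A catch block then collapses to a single call:
//   } catch (error: any) {
//     handleControllerError(res, error, 'Failed to fetch payroll status');
//   }
```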
<h3 id="heading-role-in-the-overall-payroll-architecture-1">Role in the Overall Payroll Architecture</h3>
<p>The <code>PayrollController</code> acts as the central coordinator of the payroll system. It bridges client requests, domain models, background job processing, and external payment services into a cohesive workflow.</p>
<p>By enforcing state transitions, delegating heavy processing to background workers, and providing reconciliation and monitoring capabilities, this controller ensures payroll execution remains reliable, auditable, and scalable in real-world production environments.</p>
<h2 id="heading-setting-up-webhook-handlers">Setting Up Webhook Handlers</h2>
<p>Webhooks are essential for receiving real-time payment status updates from Monnify. When a payment completes or fails, Monnify sends a notification to your webhook endpoint.</p>
<p>Start by creating a new file <code>src/routes/monnify.webhook.ts</code>. This file will contain everything related to handling Monnify webhook events.</p>
<pre><code class="lang-typescript">
<span class="hljs-keyword">import</span> { Router, Request, Response } <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> crypto <span class="hljs-keyword">from</span> <span class="hljs-string">'crypto'</span>;
<span class="hljs-keyword">import</span> {
PayrollItemModel,
PayrollModel,
PayrollStatus,
} <span class="hljs-keyword">from</span> <span class="hljs-string">'../models/payroll'</span>;

<span class="hljs-keyword">const</span> router = Router();

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">verifySignature</span>(<span class="hljs-params">req: Request</span>): <span class="hljs-title">boolean</span> </span>{
<span class="hljs-keyword">const</span> signature = req.headers[<span class="hljs-string">'monnify-signature'</span>] <span class="hljs-keyword">as</span> <span class="hljs-built_in">string</span>;
<span class="hljs-keyword">if</span> (!signature) <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;

<span class="hljs-keyword">const</span> secret = process.env.MONNIFY_WEBHOOK_SECRET!;
<span class="hljs-comment">// Note: hashing the re-serialized body only matches Monnify's signature if</span>
<span class="hljs-comment">// key order is preserved; in production, compute the HMAC over the raw body.</span>
<span class="hljs-keyword">const</span> hash = crypto
.createHmac(<span class="hljs-string">'sha512'</span>, secret)
.update(<span class="hljs-built_in">JSON</span>.stringify(req.body))
.digest(<span class="hljs-string">'hex'</span>);

<span class="hljs-keyword">return</span> hash === signature;
}

router.post(<span class="hljs-string">'/monnify/webhook'</span>, <span class="hljs-keyword">async</span> (req: Request, res: Response) =&gt; {
<span class="hljs-keyword">try</span> {
<span class="hljs-comment">// Reject anything that doesn't carry a valid Monnify signature</span>
<span class="hljs-keyword">if</span> (!verifySignature(req)) {
  <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">401</span>).send(<span class="hljs-string">'Invalid signature'</span>);
}

<span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Monnify Webhook:'</span>, <span class="hljs-built_in">JSON</span>.stringify(req.body, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>));

    <span class="hljs-keyword">const</span> { eventType, eventData } = req.body;

    <span class="hljs-keyword">if</span> (!eventData?.reference) {
      <span class="hljs-built_in">console</span>.warn(<span class="hljs-string">'Missing reference, ignoring webhook'</span>);
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'Ignored'</span>);
    }

    <span class="hljs-keyword">const</span> paymentReference = eventData.reference;
    <span class="hljs-keyword">const</span> transactionReference = eventData.transactionReference;
    <span class="hljs-keyword">const</span> description = eventData.transactionDescription || <span class="hljs-string">''</span>;

    <span class="hljs-comment">// Parse our reference format: PAYROLL_{payrollId}_{itemId}</span>
    <span class="hljs-keyword">const</span> [prefix, payrollIdStr, itemIdStr] = paymentReference.split(<span class="hljs-string">'_'</span>);

    <span class="hljs-keyword">if</span> (prefix !== <span class="hljs-string">'PAYROLL'</span>) {
      <span class="hljs-built_in">console</span>.warn(<span class="hljs-string">'Invalid payment reference format:'</span>, paymentReference);
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'Ignored'</span>);
    }

    <span class="hljs-keyword">const</span> payrollId = <span class="hljs-built_in">Number</span>(payrollIdStr);
    <span class="hljs-keyword">const</span> itemId = <span class="hljs-built_in">Number</span>(itemIdStr);

    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">isNaN</span>(payrollId) || <span class="hljs-built_in">isNaN</span>(itemId)) {
      <span class="hljs-built_in">console</span>.warn(<span class="hljs-string">'Invalid payroll/item IDs:'</span>, paymentReference);
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'Ignored'</span>);
    }

    <span class="hljs-keyword">const</span> item = <span class="hljs-keyword">await</span> PayrollItemModel.findById(itemId);

    <span class="hljs-keyword">if</span> (!item) {
      <span class="hljs-built_in">console</span>.warn(<span class="hljs-string">'Payroll item not found:'</span>, itemId);
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'Ignored'</span>);
    }

    <span class="hljs-comment">// Idempotency check - don't process already finalized items</span>
    <span class="hljs-keyword">if</span> (
      item.status === PayrollStatus.COMPLETED ||
      item.status === PayrollStatus.FAILED
    ) {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Item <span class="hljs-subst">${itemId}</span> already finalized (<span class="hljs-subst">${item.status}</span>)`</span>);
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'Already processed'</span>);
    }

    <span class="hljs-comment">// Update status based on event type</span>
    <span class="hljs-keyword">if</span> (
      eventType === <span class="hljs-string">'SUCCESSFUL_DISBURSEMENT'</span> ||
      eventData.status === <span class="hljs-string">'SUCCESS'</span>
    ) {
      <span class="hljs-keyword">await</span> PayrollItemModel.updateStatus(
        itemId,
        PayrollStatus.COMPLETED,
        transactionReference
      );
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`✅ Payroll item <span class="hljs-subst">${itemId}</span> COMPLETED`</span>);
    } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (
      eventType === <span class="hljs-string">'FAILED_DISBURSEMENT'</span> ||
      eventType === <span class="hljs-string">'REVERSED_DISBURSEMENT'</span> ||
      eventData.status === <span class="hljs-string">'FAILED'</span>
    ) {
      <span class="hljs-keyword">await</span> PayrollItemModel.updateStatus(
        itemId,
        PayrollStatus.FAILED,
        transactionReference,
        description
      );
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Payroll item <span class="hljs-subst">${itemId}</span> FAILED`</span>);
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Unhandled Monnify eventType: <span class="hljs-subst">${eventType}</span>`</span>);
    }

    <span class="hljs-comment">// Update overall payroll stats</span>
    <span class="hljs-keyword">await</span> updatePayrollStats(payrollId);

    <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'OK'</span>);

} <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
<span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Monnify webhook error:'</span>, error.message);
<span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">'OK'</span>); <span class="hljs-comment">// Always return 200 to prevent retries</span>
}
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> router;

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updatePayrollStats</span>(<span class="hljs-params">payrollId: <span class="hljs-built_in">number</span></span>) </span>{
<span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> PayrollItemModel.findByPayrollId(payrollId);

<span class="hljs-keyword">const</span> completed = items.filter(
<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.COMPLETED
).length;

<span class="hljs-keyword">const</span> failed = items.filter(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.status === PayrollStatus.FAILED).length;

<span class="hljs-keyword">let</span> status = PayrollStatus.PROCESSING;

<span class="hljs-keyword">if</span> (completed === items.length) {
status = PayrollStatus.COMPLETED;
} <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (failed === items.length) {
status = PayrollStatus.FAILED;
} <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (completed &gt; <span class="hljs-number">0</span>) {
status = PayrollStatus.PARTIALLY_COMPLETED;
}

<span class="hljs-keyword">await</span> PayrollModel.updateStatus(payrollId, status, completed, failed);
}
</code></pre>
<p>Key webhook implementation details:</p>
<ol>
<li><p><strong>Signature verification</strong>: The <code>verifySignature</code> function validates that webhooks actually come from Monnify.</p>
</li>
<li><p><strong>Idempotency</strong>: The handler checks if an item is already finalized before processing.</p>
</li>
<li><p><strong>Always return 200</strong>: Even on errors, return 200 to prevent Monnify from retrying indefinitely.</p>
</li>
<li><p><strong>Reference parsing</strong>: Our reference format <code>PAYROLL_{payrollId}_{itemId}</code> lets us identify which payment item to update.</p>
</li>
</ol>
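<p>The reference parsing described in point 4 is easy to isolate and test as a pure function. Here is a sketch of that inline logic extracted from the handler (the function name is ours):</p>

```typescript
// Sketch of the webhook handler's inline parsing logic, extracted into a
// pure function. Returns null for anything that is not a valid
// PAYROLL_{payrollId}_{itemId} reference.
function parsePayrollReference(
  reference: string
): { payrollId: number; itemId: number } | null {
  const [prefix, payrollIdStr, itemIdStr] = reference.split('_');
  if (prefix !== 'PAYROLL') return null;

  const payrollId = Number(payrollIdStr);
  const itemId = Number(itemIdStr);
  if (isNaN(payrollId) || isNaN(itemId)) return null;

  return { payrollId, itemId };
}
```

Keeping this logic pure means malformed references from Monnify can be exercised in unit tests without standing up an Express server.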
<h2 id="heading-wiring-up-routes">Wiring Up Routes</h2>
<h3 id="heading-employee-routes">Employee Routes</h3>
<p>We’ll start by defining routes for employee management. These routes expose CRUD operations for employees and simply delegate the actual logic to the <code>EmployeeController</code>.</p>
<p>Create the file <code>src/routes/employee.routes.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Router } <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> { EmployeeController } <span class="hljs-keyword">from</span> <span class="hljs-string">'../controllers/employee.controller'</span>;

<span class="hljs-keyword">const</span> router = Router();

router.post(<span class="hljs-string">'/'</span>, EmployeeController.createEmployee);
router.get(<span class="hljs-string">'/'</span>, EmployeeController.getAllEmployees);
router.get(<span class="hljs-string">'/:id'</span>, EmployeeController.getEmployeeById);
router.put(<span class="hljs-string">'/:id'</span>, EmployeeController.updateEmployee);
router.delete(<span class="hljs-string">'/:id'</span>, EmployeeController.deleteEmployee);

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> router;
</code></pre>
<p>What this gives us:</p>
<ul>
<li><p>A clean <code>/api/employees</code> entry point for all employee-related operations</p>
</li>
<li><p>Clear separation between routing (URLs) and business logic (controllers)</p>
</li>
<li><p>A predictable REST structure that’s easy to extend later</p>
</li>
</ul>
<h3 id="heading-payroll-routes">Payroll Routes</h3>
<p>Next, we define routes for payroll operations. Payroll is more complex than employees, so this router exposes endpoints for creation, processing, reconciliation, authorization, and monitoring.</p>
<p>Create the file <code>src/routes/payroll.routes.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Router } <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> { PayrollController } <span class="hljs-keyword">from</span> <span class="hljs-string">'../controllers/payroll.controller'</span>;

<span class="hljs-keyword">const</span> router = Router();

router.post(<span class="hljs-string">'/'</span>, PayrollController.createPayroll);
router.get(<span class="hljs-string">'/'</span>, PayrollController.getAllPayrolls);
router.get(<span class="hljs-string">'/:id'</span>, PayrollController.getPayrollById);
router.post(<span class="hljs-string">'/:id/process'</span>, PayrollController.processPayroll);
router.post(<span class="hljs-string">'/batch/authorize'</span>, PayrollController.authorizeBulkTransfer);
router.get(<span class="hljs-string">'/:id/status'</span>, PayrollController.getPayrollStatus);
router.get(
  <span class="hljs-string">'/transaction/:reference/status'</span>,
  PayrollController.checkTransactionStatus
);
router.get(<span class="hljs-string">'/account/balance'</span>, PayrollController.getAccountBalance);
router.post(<span class="hljs-string">'/:id/reconcile'</span>, PayrollController.reconcilePayroll);

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> router;
</code></pre>
<p>What’s happening here:</p>
<ul>
<li><p>Each route maps directly to a well-defined payroll operation</p>
</li>
<li><p>Long-running or sensitive actions (processing, reconciliation, authorization) are clearly separated</p>
</li>
<li><p>Monitoring and operational endpoints (status, transaction lookup, balance checks) are first-class citizens</p>
</li>
</ul>
<h3 id="heading-main-application-entry-point">Main Application Entry Point</h3>
<p>With all routes defined, we now bring everything together in the main application file. This is where we configure middleware, register routes, and start the server.</p>
<p>Create the file <code>src/index.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> express, { Application, Request, Response } <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> cors <span class="hljs-keyword">from</span> <span class="hljs-string">'cors'</span>;
<span class="hljs-keyword">import</span> helmet <span class="hljs-keyword">from</span> <span class="hljs-string">'helmet'</span>;
<span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;
<span class="hljs-keyword">import</span> path <span class="hljs-keyword">from</span> <span class="hljs-string">'path'</span>;
<span class="hljs-keyword">import</span> { pool } <span class="hljs-keyword">from</span> <span class="hljs-string">'./config/database'</span>;
<span class="hljs-keyword">import</span> employeeRoutes <span class="hljs-keyword">from</span> <span class="hljs-string">'./routes/employee.routes'</span>;
<span class="hljs-keyword">import</span> payrollRoutes <span class="hljs-keyword">from</span> <span class="hljs-string">'./routes/payroll.routes'</span>;
<span class="hljs-keyword">import</span> monnifyWebhookRoutes <span class="hljs-keyword">from</span> <span class="hljs-string">'./routes/monnify.webhook'</span>;

dotenv.config();

<span class="hljs-keyword">const</span> app: Application = express();
<span class="hljs-keyword">const</span> PORT = process.env.PORT || <span class="hljs-number">3008</span>;

<span class="hljs-comment">// Middleware</span>
app.use(
  helmet({
    contentSecurityPolicy: <span class="hljs-literal">false</span>,
  })
);
app.use(
  cors({
    origin: <span class="hljs-string">'*'</span>,
    methods: [<span class="hljs-string">'GET'</span>, <span class="hljs-string">'POST'</span>, <span class="hljs-string">'PUT'</span>, <span class="hljs-string">'DELETE'</span>, <span class="hljs-string">'OPTIONS'</span>],
    allowedHeaders: [<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'Authorization'</span>],
  })
);
app.use(express.json());
app.use(express.urlencoded({ extended: <span class="hljs-literal">true</span> }));

<span class="hljs-comment">// Health check endpoint</span>
app.get(<span class="hljs-string">'/health'</span>, <span class="hljs-keyword">async</span> (req: Request, res: Response) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">await</span> pool.query(<span class="hljs-string">'SELECT 1'</span>);
    res.json({ status: <span class="hljs-string">'healthy'</span>, database: <span class="hljs-string">'connected'</span> });
  } <span class="hljs-keyword">catch</span> (error) {
    res.status(<span class="hljs-number">500</span>).json({ status: <span class="hljs-string">'unhealthy'</span>, database: <span class="hljs-string">'disconnected'</span> });
  }
});

<span class="hljs-comment">// Routes</span>
app.use(<span class="hljs-string">'/api/employees'</span>, employeeRoutes);
app.use(<span class="hljs-string">'/api/payrolls'</span>, payrollRoutes);
app.use(<span class="hljs-string">'/api'</span>, monnifyWebhookRoutes);

<span class="hljs-comment">// 404 handler</span>
app.use(<span class="hljs-function">(<span class="hljs-params">req: Request, res: Response</span>) =&gt;</span> {
  res.status(<span class="hljs-number">404</span>).json({ error: <span class="hljs-string">'Route not found'</span> });
});

<span class="hljs-comment">// Error handler</span>
app.use(<span class="hljs-function">(<span class="hljs-params">err: <span class="hljs-built_in">any</span>, req: Request, res: Response, next: <span class="hljs-built_in">any</span></span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error:'</span>, err);
  res.status(err.status || <span class="hljs-number">500</span>).json({
    error: err.message || <span class="hljs-string">'Internal server error'</span>,
  });
});

app.listen(PORT, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server is running on port <span class="hljs-subst">${PORT}</span>`</span>);
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Environment: <span class="hljs-subst">${process.env.NODE_ENV || <span class="hljs-string">'development'</span>}</span>`</span>);
});

<span class="hljs-comment">// Graceful shutdown</span>
process.on(<span class="hljs-string">'SIGTERM'</span>, <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGTERM signal received: closing HTTP server'</span>);
  <span class="hljs-keyword">await</span> pool.end();
  process.exit(<span class="hljs-number">0</span>);
});

process.on(<span class="hljs-string">'SIGINT'</span>, <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGINT signal received: closing HTTP server'</span>);
  <span class="hljs-keyword">await</span> pool.end();
  process.exit(<span class="hljs-number">0</span>);
});
</code></pre>
<h2 id="heading-testing-the-system">Testing the System</h2>
<p>Now let's test the complete payroll flow.</p>
<p>Start the application:</p>
<pre><code class="lang-bash">docker-compose up -d
npm run dev
</code></pre>
<p>Create employees:</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3008/api/employees \
  -H <span class="hljs-string">"Content-Type: application/json"</span> \
  -d <span class="hljs-string">'{
    "name": "John Doe",
    "email": "john.doe@company.com",
    "salary": 50000,
    "account_number": "0123456789",
    "bank_code": "058",
    "bank_name": "GTBank"
  }'</span>
</code></pre>
<p>Create a few more employees with different salaries so you can see how a batch with varying amounts is handled.</p>
<p>Create a payroll:</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3008/api/payrolls \
  -H <span class="hljs-string">"Content-Type: application/json"</span> \
  -d <span class="hljs-string">'{
    "payroll_period": "2024-12"
  }'</span>
</code></pre>
<p>This creates a payroll with all active employees.</p>
<p>Process the payroll:</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3008/api/payrolls/1/process
</code></pre>
<p>This queues the payroll for background processing. The system will:</p>
<ol>
<li><p>Create a bulk transfer request to Monnify</p>
</li>
<li><p>Update each payroll item with a transaction reference</p>
</li>
<li><p>Wait for webhooks to update final status</p>
</li>
</ol>
<p>Authorize the bulk transfer (if OTP is required):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766392280287/4d8ae61f-4ccf-4d63-86a1-a6f72d7286e1.png" alt="Monnify payroll authorization OTP email" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>After processing, Monnify sends an OTP to your registered email. Use it to authorize:</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3008/api/payrolls/batch/authorize \
  -H <span class="hljs-string">"Content-Type: application/json"</span> \
  -d <span class="hljs-string">'{
    "reference": "BATCH_1702123456789",
    "authorizationCode": "123456",
    "payrollId": 1
  }'</span>
</code></pre>
<p>Check the payroll status:</p>
<pre><code class="lang-bash">curl http://localhost:3008/api/payrolls/1/status
</code></pre>
<p>This returns detailed status including a summary of completed, failed, and pending items.</p>
<p>Now, reconcile if needed – if webhooks were missed or you need to sync status:</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3008/api/payrolls/1/reconcile
</code></pre>
<h2 id="heading-setting-up-webhooks-for-production">Setting Up Webhooks for Production</h2>
<p>For Monnify to send webhooks to your local development environment, you'll need to expose your local server. You can use ngrok:</p>
<pre><code class="lang-bash">ngrok http 3008
</code></pre>
<p>Then configure the webhook URL in your <a target="_blank" href="https://app.monnify.com/developer#webhook-urls">Monnify dashboard</a>:</p>
<pre><code class="lang-plaintext">https://your-ngrok-url.ngrok.io/api/monnify/webhook
</code></pre>
<p>For production, use your actual server URL and ensure HTTPS is enabled.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766392444369/440bc1a9-7c70-42b0-9157-892f1ef07861.png" alt="Monnify webhook URL configuration" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Successful transactions will then appear on the Monnify dashboard, alongside any that failed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766392958199/e8abaa75-5a2f-44fd-b322-b110cf71e92d.png" alt="Monnify dashboard with payroll transaction status" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You've built a complete payroll system that:</p>
<ul>
<li><p>Manages employees with their bank account details</p>
</li>
<li><p>Creates payroll batches with automatic amount calculation</p>
</li>
<li><p>Processes bulk payments using Monnify's disbursement API</p>
</li>
<li><p>Uses background jobs to prevent request timeouts</p>
</li>
<li><p>Handles webhooks for real-time status updates</p>
</li>
<li><p>Supports reconciliation to ensure data consistency</p>
</li>
</ul>
<h3 id="heading-key-takeaways">Key Takeaways</h3>
<ol>
<li><p><strong>Background jobs are essential</strong>: Processing payments synchronously would timeout for large payrolls. Bull and Redis provide reliable async processing.</p>
</li>
<li><p><strong>Idempotency matters</strong>: Both the webhook handler and reconciliation process check current status before updating, preventing duplicate processing.</p>
</li>
<li><p><strong>Bulk transfers save time</strong>: Monnify's batch API lets you process hundreds of payments with a single OTP authorization.</p>
</li>
<li><p><strong>Status tracking is critical</strong>: The system tracks status at both the payroll and individual item level, making it easy to identify and handle failures.</p>
</li>
<li><p><strong>Reconciliation is your safety net</strong>: When webhooks fail or get delayed, the reconciliation endpoint ensures your database stays in sync with actual payment status.</p>
</li>
</ol>
<h3 id="heading-references">References:</h3>
<ul>
<li><a target="_blank" href="https://developers.monnify.com/">Monnify Docs</a></li>
</ul>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Use Docker with Node.js: A Handbook for Developers ]]>
                </title>
                <description>
                    <![CDATA[ In this handbook, you’ll learn what Docker is and why it’s a must-have skill for backend and full-stack developers. And, most importantly, you’ll learn how to use it in real-world projects from start to finish. We will go far beyond the usual “Hello ... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-use-to-docker-with-nodejs-handbook/</link>
                <guid isPermaLink="false">691cf09fea147a95b92d3551</guid>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Devops ]]>
                    </category>
                
                    <category>
                        <![CDATA[ ci-cd ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Technical writing  ]]>
                    </category>
                
                    <category>
                        <![CDATA[ handbook ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Oghenekparobo Stephen ]]>
                </dc:creator>
                <pubDate>Tue, 18 Nov 2025 22:18:07 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763502750050/74610cbc-124b-48aa-9cb6-7ed861123511.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In this handbook, you’ll learn what Docker is and why it’s a must-have skill for backend and full-stack developers. And, most importantly, you’ll learn how to use it in real-world projects from start to finish.</p>
<p>We will go far beyond the usual “Hello World” examples and walk you through containerizing a complete full-stack JavaScript application (Node.js + Express backend, HTML/CSS/JS frontend, MongoDB database, and Mongo Express admin UI).</p>
<p>You’ll learn about networking multiple containers, orchestrating everything with Docker Compose, building and versioning your own images, persisting data with volumes, and securely pushing your images to a private AWS ECR repository for sharing and production deployment.</p>
<p>By the end, you’ll be able to eliminate “it works on my machine” issues, confidently manage multi-service applications, deploy consistent environments anywhere, and integrate Docker into your daily workflow and CI/CD pipelines like a pro.</p>
<p>Since Docker is such a key skill for backend developers, we’ll start by covering its basic concepts.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>This technical handbook is designed for developers who have some practical, hands-on experience in full-stack development. You should be comfortable deploying applications and have a basic understanding of CI/CD pipelines.</p>
<p>While we’ll cover Docker from the ground up, this guide is not for absolute beginner developers. I assume you have real-world development experience and want to level up your workflow with Docker.</p>
<p>Finally, a basic familiarity with AWS and general deployment concepts will also be useful, though you don’t need to be an expert. This handbook is ideal for developers looking to enhance their production-grade skills and confidently integrate Docker into their projects.</p>
<h2 id="heading-table-of-contents">Table of Contents:</h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-what-is-a-container">What is a Container?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-docker-vs-virtual-machines">Docker vs Virtual Machines</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-docker-installation">Docker Installation</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-basic-docker-commands">Basic Docker Commands</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-practice-with-javascript">Practice with JavaScript</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-how-to-pull-the-mongodb-image">How to Pull the MongoDB Image</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-pull-the-mongo-express-image">How to Pull the Mongo Express Image</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-docker-network">Docker Network</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-run-the-mongo-container">How to Run the Mongo Container</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-run-the-mongo-express-container">How to Run the Mongo Express Container</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-connect-nodejs-to-mongodb">How to Connect Node.js to MongoDB</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-use-docker-compose">How to Use Docker Compose</a></p>
<ul>
<li><a class="post-section-overview" href="#heading-why-use-docker-compose">Why Use Docker Compose?</a></li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-build-our-own-docker-image">How to Build Our Own Docker Image</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-the-solution">The Solution</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-why-mongodb-works">Why MongoDB Works</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-add-your-app-to-docker-compose">Add Your App to Docker Compose</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-start-all-services">Start All Services</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-verify-everything-works">Verify Everything Works</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-changed-and-why-it-works">What Changed and Why It Works</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-manage-your-containers">How to Manage Your Containers</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-create-a-private-docker-repository">How to Create a Private Docker Repository</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-step-1-get-your-aws-access-keys">Step 1: Get Your AWS Access Keys</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-2-check-if-aws-cli-is-installed">Step 2: Check if AWS CLI is Installed</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-3-configure-aws-cli">Step 3: Configure AWS CLI</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-4-test-your-aws-configuration">Step 4: Test Your AWS Configuration</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-5-login-to-ecr-docker-registry">Step 5: Login to ECR (Docker Registry)</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-image-naming-in-docker-repositories">Understanding Image Naming in Docker Repositories</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-step-6-build-tag-and-push-your-image">Step 6: Build, Tag, and Push Your Image</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-assignment-create-and-push-a-new-version">Assignment: Create and Push a New Version</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-deploying-our-image">Deploying Our Image</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-why-must-we-use-the-full-image-url-for-ecr">Why Must We Use the Full Image URL for ECR</a>?</p>
</li>
<li><p><a class="post-section-overview" href="#heading-deploy-your-app-using-docker-compose">Deploy Your App Using Docker Compose</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-sharing-our-private-docker-image">Sharing Our Private Docker Image</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-docker-volumes">Docker Volumes</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-how-docker-volumes-work">How Docker Volumes Work</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-types-of-docker-volumes">Types of Docker Volumes</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-example-docker-compose-file-using-volumes">Example Docker Compose File Using Volumes</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-start-your-application">Start Your Application</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-what-is-a-container">What is a Container?</h2>
<p>A container is a way to package an application together with everything it needs, including its dependencies, libraries, and configuration files.</p>
<p>Because containers are portable, they can be shared across teams and deployed on any machine without worrying about compatibility.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762863191484/827d0731-a392-419f-b17b-9a3611a4f3b4.jpeg" alt="pictures of stack containers, to portrait or give an idea what containers are or a vivid pictureof containers aliking to containers in docker" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-where-do-containers-live">Where Do Containers Live?</h3>
<p>Since containers are portable and can be shared across teams and systems, they need a place to live. That’s where container repositories come in – special storage locations for containers. Organizations can have private repositories for internal use, while public ones like <a target="_blank" href="https://hub.docker.com/">Docker Hub</a> let anyone browse and use shared containers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762863680430/caddd581-08e1-45c7-a676-818ad364f56b.png" alt="an image of docker hub, showing a catalogue of images" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If you visit the catalog page on Docker Hub, you will see a variety of container repositories, both official and community-made, from developers and teams like Redis, Jenkins, and many others.</p>
<p>In the past, when multiple developers worked on different projects, each had to manually install services on their own systems. Since different developers often use different operating systems like Linux, macOS, and Windows, the setup process was never the same. It took a lot of time, led to plenty of errors, and made setting up new environments a real headache, especially when you had to repeat it for multiple services.</p>
<p>Docker changed the game for developers and teams. Instead of manually installing every service and dependency, you can just run a single Docker command to start a container. Each container has its own isolated environment with everything it needs, so it runs the same on any machine, no matter if it’s Windows, macOS, or Linux. This makes collaboration smoother and eliminates all the bottlenecks that come from different setups, missing dependencies, or version mismatches.</p>
<p>In short, Docker is a platform that packages your app and its dependencies into a single, portable container, so it runs the same way everywhere.</p>
<h2 id="heading-docker-vs-virtual-machines">Docker vs Virtual Machines</h2>
<p>Docker and virtual machines (VMs) are both ways to run apps in a “virtual” environment, but they work differently. To understand the differences, it helps to know a bit about how computers run software.</p>
<p>A quick look at the layers:</p>
<ul>
<li><p><strong>Kernel:</strong> This is the part of the operating system that talks to your computer’s hardware, like the CPU, memory, and disk. Think of it as the middleman between your apps and your computer.</p>
</li>
<li><p><strong>Application layer:</strong> This is where programs and apps run. It sits on top of the kernel and uses it to access hardware resources.</p>
</li>
</ul>
<p>So, now let’s get into a bit more detail about Virtual Machines. A VM virtualizes the <strong>entire operating system</strong>, which means it comes with its own kernel and its own application layer. When you download a VM, you are basically getting a full OS inside your computer, often several gigabytes in size.</p>
<p>Because it has to boot its own OS, a VM starts slowly. On the other hand, VMs are highly compatible: they can run on almost any host because they bring everything they need with them.</p>
<p>Docker, on the other hand, only virtualizes the <strong>application layer</strong>, not the full OS. Containers share the host system’s kernel but include everything the app needs: dependencies, libraries, and configuration.</p>
<p>Docker images are small, often just a few megabytes. Containers start almost instantly because they don’t boot a full OS. A Docker container can run anywhere Docker is installed, no matter what operating system your computer uses.</p>
<p>In simple terms, to summarize:</p>
<ul>
<li><p>A VM is like running a whole computer inside your computer – big, heavy, and slow.</p>
</li>
<li><p>A Docker container is like a self-contained app package – small, fast, and portable.</p>
</li>
</ul>
<p>Here’s a quick comparison:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Virtual Machine</td><td>Docker Container</td></tr>
</thead>
<tbody>
<tr>
<td>Size</td><td>GBs (large)</td><td>MBs (small)</td></tr>
<tr>
<td>Startup Speed</td><td>Slow</td><td>Fast</td></tr>
<tr>
<td>OS Layer</td><td>Full OS + kernel</td><td>Shares host kernel</td></tr>
<tr>
<td>Portability</td><td>Runs on compatible host</td><td>Runs anywhere Docker is installed</td></tr>
</tbody>
</table>
</div><h2 id="heading-docker-installation">Docker Installation</h2>
<p>Alright, now that you know what Docker is, let’s get it running on your own machine.</p>
<p>Docker works on Windows, macOS, and Linux, but each system has slightly different steps. The official Docker <a target="_blank" href="https://docs.docker.com/get-started/introduction/">documentation</a> has clear instructions for all operating systems under Docker Docs: Install Docker.</p>
<p>If you are more of a visual learner, this YouTube video walks you through installing Docker on Windows and Linux step by step: <a target="_blank" href="https://www.youtube.com/watch?v=BuGEGM_elXY">Watch here</a>.</p>
<p>Here is a simple roadmap:</p>
<p>First, check your system requirements. Docker won’t run on every computer, so make sure your OS version is supported (the official <a target="_blank" href="https://docs.docker.com/engine/install/">docs</a> have a checklist).</p>
<ol>
<li><p>Windows and macOS users:</p>
<ul>
<li><p><strong>Newer systems:</strong> Download and install <a target="_blank" href="https://docs.docker.com/desktop/"><strong>Docker Desktop</strong></a><strong>.</strong> It’s the easiest way to get started.</p>
</li>
<li><p><strong>Older systems:</strong> If your computer doesn’t support Docker Desktop (for example, missing Hyper-V or older OS versions), you can use <a target="_blank" href="https://docker-docs.uclv.cu/toolbox/toolbox_install_windows/"><strong>Docker Toolbox</strong></a>. Toolbox installs Docker using a lightweight virtual machine, so you can still run containers even on older machines.</p>
</li>
</ul>
</li>
<li><p>Linux users: You will usually install Docker through your package manager (<code>apt</code> for Ubuntu/Debian, <code>yum</code> for CentOS/Fedora, etc.). The official <a target="_blank" href="https://docs.docker.com/desktop/setup/install/linux/">docs</a> show the commands for your distro.</p>
</li>
</ol>
<p>Then verify your installation: Open a terminal or command prompt and type:</p>
<pre><code class="lang-bash">docker --version
</code></pre>
<p>If you see the Docker version displayed, congratulations! Docker is ready to go.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762871221981/6b01cf18-a8b5-4aa9-b213-38cffd4ae5f4.png" alt="docker version displayed on cli" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Once Docker is installed, you’ll be ready to start running containers, pulling images, and experimenting with your apps in a safe, isolated environment.</p>
<p><strong>Tip for beginners:</strong></p>
<p>If you’re on an older machine and using Docker Toolbox, commands are mostly the same, but you will run them inside the <strong>Docker Quickstart Terminal</strong>, which sets up the virtual machine for you.</p>
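<p>As an optional extra check beyond <code>docker --version</code>, you can run Docker’s tiny <code>hello-world</code> image. If the engine can pull the image and start a container from it, your installation works end to end:</p>
<pre><code class="lang-bash"># Pulls a minimal test image and runs it; prints a welcome
# message confirming the Docker engine is working correctly.
docker run hello-world
</code></pre>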
<h2 id="heading-basic-docker-commands">Basic Docker Commands</h2>
<p>So far, we have been throwing around terms like images and containers, sometimes even interchangeably. But there is an important difference:</p>
<ul>
<li><p><strong>Docker image:</strong> Think of an image as a <strong>blueprint</strong> or a package. It contains everything your app needs: the code, libraries, dependencies, and configuration, but it’s not running yet.</p>
</li>
<li><p><strong>Docker container:</strong> A container is a <strong>running instance of an image</strong>. When you start a container, Docker takes the image and runs it in its own isolated environment.</p>
</li>
</ul>
<p>A helpful way to remember it is this: the image is the recipe, while the container is the cake. You can have one recipe (image) and make multiple cakes (containers) from it.</p>
<p><strong>Important note:</strong> Docker Hub stores images, not containers. So when you pull something from Docker Hub, you’re downloading an image. For example:</p>
<pre><code class="lang-bash">docker pull redis
</code></pre>
<p>Here’s what you’ll see:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762872367018/39039261-9617-4e5f-8156-9529697d0667.png" alt="docker run redis shown on cli" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This command downloads the Redis image to your machine. Once the download is complete, you can see all the images you have locally with:</p>
<pre><code class="lang-bash">docker images
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762872440545/bfdb401f-7dd6-4f72-920b-545fbf5193e1.png" alt="running docker images on cli" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>From there, you can start a container from an image whenever you need it:</p>
<pre><code class="lang-bash">docker run -d --name my-redis redis
</code></pre>
<p>This command starts a container, <code>my-redis</code>, from the <code>redis</code> image you just pulled.</p>
<ul>
<li><p><code>docker run</code> tells Docker to start a new container from an image.</p>
</li>
<li><p><code>-d</code> stands for “detached mode.” It means the container runs in the background so you can keep using your terminal.</p>
</li>
<li><p><code>--name my-redis</code> gives your container a friendly name (<code>my-redis</code>) instead of letting Docker assign a random one. It makes it easier to manage later.</p>
</li>
<li><p><code>redis</code> is the image you are using to start the container.</p>
</li>
</ul>
<p>To see all containers that are currently running, you can use:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762873018123/2e184c25-e4f1-445c-b182-81987929c014.png" alt="ran docker ps in the terminal to list all running containers" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>This will list containers with details like:</p>
<ul>
<li><p>Container ID</p>
</li>
<li><p>Name</p>
</li>
<li><p>Status (running or stopped)</p>
</li>
<li><p>The image it’s running from</p>
</li>
</ul>
<p>If you want to see all containers, even ones that aren’t running, you can add the <code>-a</code> flag:</p>
<pre><code class="lang-bash">docker ps -a
</code></pre>
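<p>If that list gets long, <code>docker ps</code> supports filtering and formatting flags (part of the standard Docker CLI) that make the output easier to scan:</p>
<pre><code class="lang-bash"># Show only containers that have exited:
docker ps -a --filter "status=exited"

# Trim the output to just the columns you care about:
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"
</code></pre>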
<h3 id="heading-how-to-specify-a-version-of-an-image">How to Specify a Version of an Image:</h3>
<p>By default, Docker pulls the <strong>latest version</strong> of an image. But sometimes you might need a specific version. You can do this using a colon (<code>:</code>) followed by the version tag. For example:</p>
<pre><code class="lang-bash">docker pull redis:7.2
docker run -d --name my-redis redis:7.2
</code></pre>
<p>To know which versions are available, you can visit <a target="_blank" href="https://hub.docker.com/repositories"><strong>Docker Hub</strong></a> or check the image tags online. Also, running <code>docker images</code> on your machine will show you all downloaded images and their versions.</p>
<h3 id="heading-how-to-stop-start-and-remove-a-container">How to Stop, Start, and Remove a Container</h3>
<p>If you want to stop a running container, run this:</p>
<pre><code class="lang-bash">docker stop my-redis
</code></pre>
<p>To start it again:</p>
<pre><code class="lang-bash">docker start my-redis
</code></pre>
<p>You can also <strong>remove a container</strong> if you no longer need it:</p>
<pre><code class="lang-bash">docker rm my-redis
</code></pre>
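<p>Note that <code>docker rm</code> only removes <em>stopped</em> containers. To get rid of a running container, stop it first, or pass the <code>-f</code> flag to force removal in one step:</p>
<pre><code class="lang-bash"># Stop, then remove:
docker stop my-redis &amp;&amp; docker rm my-redis

# Or force-remove a running container in one step:
docker rm -f my-redis
</code></pre>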
<h3 id="heading-how-to-restart-a-container">How to Restart a Container</h3>
<p>You can restart a container using its <strong>container ID</strong> (or name) if something crashes, needs a refresh, or you just want to apply changes.</p>
<p>For example:</p>
<pre><code class="lang-bash">docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS      NAMES
c002bed0ae9a   redis     <span class="hljs-string">"docker-entrypoint.s…"</span>   3 minutes ago   Up 3 minutes   6379/tcp   my-redis
</code></pre>
<p>Restart it like this:</p>
<pre><code class="lang-bash">docker restart c002bed0ae9a
</code></pre>
<p>or by name:</p>
<pre><code class="lang-bash">docker restart my-redis
</code></pre>
<p>Other handy ways:</p>
<ul>
<li><p><strong>Stop then start</strong></p>
<pre><code class="lang-bash">  docker stop c002bed0ae9a
  docker start c002bed0ae9a
</code></pre>
</li>
<li><p><strong>Start with logs</strong></p>
<pre><code class="lang-bash">  docker start c002bed0ae9a &amp;&amp; docker logs -f c002bed0ae9a
</code></pre>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762873445952/ffb56b5d-f850-4b53-998d-467ed431a191.png" alt="starting a docker container with logs" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-how-to-run-multiple-redis-containers-and-understanding-ports">How to Run Multiple Redis Containers and Understanding Ports</h3>
<p>Right now, you have a Redis container running:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>It shows something like this:</p>
<pre><code class="lang-bash">CONTAINER ID   IMAGE     COMMAND                  STATUS          PORTS      NAMES
c002bed0ae9a   redis     <span class="hljs-string">"docker-entrypoint.s…"</span>   Up 20 minutes   6379/tcp   my-redis
</code></pre>
<p>Notice the <strong>PORTS</strong> column: <code>6379/tcp</code>. This means the container is running Redis on its internal port 6379. By default, this port is inside the container and is not automatically exposed to your computer (the host). Docker maps it only if you specify it.</p>
<h4 id="heading-trying-to-run-another-redis-container-on-the-same-port">Trying to Run Another Redis Container on the Same Port</h4>
<p>If you try:</p>
<pre><code class="lang-bash">docker run -d --name my-redis2 redis:7.4.7-alpine
</code></pre>
<p>Perhaps surprisingly, this command succeeds: each container has its own network namespace, so both can listen on 6379 internally without clashing. But run this way, with no port mapping, the new container isn’t reachable from your host at all. And if two containers both tried to publish the <em>same</em> host port (say, both with <code>-p 6379:6379</code>), the second would fail, because a host port can only be bound once. This is where port binding comes in. (If you ran the command above, remove that container with <code>docker rm -f my-redis2</code> before continuing, since container names must be unique.)</p>
<h4 id="heading-what-is-port-binding">What is Port Binding?</h4>
<p>Port binding (also called port mapping) is the mechanism Docker uses to connect a port inside a container to a port on your host machine (your laptop/desktop/server).</p>
<p>Without port binding, any service running inside a container is completely isolated: it can listen on its internal ports (for example, Redis on 6379, a Node.js app on 3000, MongoDB on 27017), but nothing outside the container, including your browser, another app on your computer, or even another container on a different network, can reach it.</p>
<ul>
<li><p><strong>Container Port</strong>: The port inside the container where the app is running (Redis defaults to <code>6379</code>).</p>
</li>
<li><p><strong>Host Port</strong>: The port on your computer that you want to use to access that container.</p>
</li>
</ul>
<p>Docker lets you map a container port to a different host port using the <code>-p</code> flag.</p>
<h4 id="heading-running-a-second-redis-container-on-a-different-host-port">Running a Second Redis Container on a Different Host Port</h4>
<pre><code class="lang-bash">docker run -d --name my-redis2 -p 6380:6379 redis:7.4.7-alpine
</code></pre>
<p><code>-p 6380:6379</code> maps host port 6380 to container port 6379.</p>
<ul>
<li><p>Now you can connect to Redis in the second container using <code>localhost:6380</code>.</p>
</li>
<li><p>Inside the container, Redis still runs on port 6379.</p>
</li>
</ul>
<p>Check both containers:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>Output will look like this:</p>
<pre><code class="lang-bash">CONTAINER ID   IMAGE     STATUS          PORTS             NAMES
c002bed0ae9a   redis     Up 20 minutes   6379/tcp          my-redis
d123abcd5678   redis     Up 1 minute     0.0.0.0:6380-&gt;6379/tcp   my-redis2
</code></pre>
<p>The first container is running internally on 6379 (host port not exposed), while the second container is mapped so host port 6380 forwards traffic to container port 6379.</p>
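<p>To sanity-check the mapping, you can ping Redis through both paths. Inside the container it always answers on its internal port 6379; from the host you go through the mapped port (the host-side check assumes you have <code>redis-cli</code> installed locally):</p>
<pre><code class="lang-bash"># From inside the container, Redis listens on its default port 6379:
docker exec my-redis2 redis-cli ping

# From the host, the same server is reached via the mapped port 6380:
redis-cli -p 6380 ping
</code></pre>
<p>Both commands print <code>PONG</code> when the server is reachable.</p>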
<p>Think of each container as a room with a phone line (container port).</p>
<ul>
<li><p>You want to call that room from the outside (host).</p>
</li>
<li><p>You can’t use the same external phone line for two rooms at the same time.</p>
</li>
<li><p>With <strong>port binding</strong>, you assign a different external line for each room, even if the internal phone number is the same.</p>
</li>
</ul>
<h4 id="heading-why-port-binding-exists">Why Port Binding Exists</h4>
<ol>
<li><p><strong>Avoid port conflicts on the host:</strong> Only one process on your computer can use a given port at a time. If you already have one Redis container using host port 6379, a second container cannot also bind to the same host port. Port binding lets you run many identical containers side-by-side by mapping each one to a different host port (6379 → 6380, 6381, etc.).</p>
</li>
<li><p><strong>Access containerized services from your host:</strong> Your browser, Postman, MongoDB Compass, redis-cli, curl, and so on all run on the host. Without <code>-p</code>, they have no way to talk to services inside containers.</p>
</li>
<li><p><strong>Selective exposure:</strong> You don’t have to expose every port a container uses. Only map the ports you actually need externally, keeping the rest private and secure.</p>
</li>
</ol>
<p>It also gives you more flexibility in development and production. In development, you might map container 3000 to host 3000. But in production (for example, behind a reverse proxy), you might map container 3000 to host 80 or 443, or not expose it at all and let another container talk to it over Docker’s internal network.</p>
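<p>As a rough sketch of that flexibility (the image name <code>my-app</code> here is hypothetical):</p>
<pre><code class="lang-bash"># Development: map the app's port straight through.
docker run -d --name app-dev -p 3000:3000 my-app

# Production behind a reverse proxy: publish container port 3000
# on host port 80 instead.
docker run -d --name app-prod -p 80:3000 my-app

# Internal-only: no -p flag at all; only containers on the same
# Docker network can reach the app on port 3000.
docker run -d --name app-internal my-app
</code></pre>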
<h3 id="heading-how-to-explore-a-container">How to Explore a Container</h3>
<p>To explore a container, run:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it my-redis2 /bin/sh
</code></pre>
<ul>
<li><p><code>docker exec</code> runs a command in the container.</p>
</li>
<li><p><code>-it</code> allocates an interactive terminal (lets you type and see output).</p>
</li>
<li><p><code>/bin/sh</code> starts a shell inside the container.</p>
</li>
</ul>
<p>Once inside, your prompt changes to something like:</p>
<pre><code class="lang-bash">/data <span class="hljs-comment">#</span>
</code></pre>
<p>Now you can <strong>list files</strong>, navigate directories, or run programs, all inside the container, without affecting your host machine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762876729378/d16e8f00-ab9c-447b-b274-76d613b30ce3.png" alt="result of running docker exec -it my-redis2 /bin/sh" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
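<p>A few things you might try from that prompt (ordinary Unix commands work inside the container, and exiting the shell does not stop the container):</p>
<pre><code class="lang-bash">ls -la    # list files in the current directory (/data for Redis)
env       # inspect the container's environment variables
exit      # leave the shell; the container keeps running
</code></pre>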
<h3 id="heading-docker-run-vs-docker-start"><code>docker run</code> vs <code>docker start</code></h3>
<p>We have been using <code>docker run</code> and <code>docker start</code> throughout this article, but here’s why the difference is important:</p>
<ul>
<li><p><strong>Avoid accidental duplicates:</strong> Using <code>docker run</code> every time creates a new container. If you just want to restart something you already set up, <code>docker start</code> is faster and safer.</p>
</li>
<li><p><strong>Maintain configuration:</strong> <code>docker start</code> preserves the container’s original settings, ports, volumes, and names so you don’t risk breaking anything by changing options.</p>
</li>
<li><p><strong>Work efficiently with multiple containers:</strong> When running multiple services or different versions of the same app, knowing when to <code>run</code> vs <code>start</code> helps you manage resources, avoid port conflicts, and keep your workflow smooth.</p>
</li>
<li><p><strong>Speed up your workflow:</strong> Starting existing containers is almost instant, while creating a new one takes slightly longer.</p>
</li>
</ul>
<p><strong>Bottom line:</strong> <code>docker run</code> = create something new, while <code>docker start</code> = resume what you already have.</p>
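<p>A quick sketch of the difference in practice:</p>
<pre><code class="lang-bash">docker run -d --name my-redis redis   # creates AND starts a new container
docker stop my-redis                  # the container still exists, just stopped
docker start my-redis                 # resumes that same container, same config

# Repeating the docker run command at this point would fail:
# the name "my-redis" is already taken by the existing container.
</code></pre>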
<h2 id="heading-practice-with-javascript">Practice with JavaScript</h2>
<p>Now that we have covered the core Docker concepts, let’s put them into action. In this section, we’ll containerize a simple JavaScript project that consists of:</p>
<ul>
<li><p><strong>A frontend:</strong> Built with HTML, CSS, and JavaScript</p>
</li>
<li><p><strong>A backend:</strong> A simple Node.js server (<code>server.js</code>)</p>
</li>
<li><p><strong>A database:</strong> A MongoDB instance pulled directly from Docker Hub</p>
</li>
<li><p><strong>A UI for MongoDB:</strong> Using <strong>Mongo Express</strong> to visualize and manage our database</p>
</li>
</ul>
<p>This example demonstrates how Docker can manage multiple components of an application, including code, dependencies, and services in isolated, consistent environments.</p>
<p>You can <a target="_blank" href="https://github.com/Oghenekparobo/docker_tut_js">pull the starter project from GitHub here</a>.</p>
<p>Or clone it directly using your terminal:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/Oghenekparobo/docker_tut_js.git
<span class="hljs-built_in">cd</span> docker_tut_js
</code></pre>
<p>This contains the basic HTML and JavaScript files along with the Node.js backend.</p>
<p>Next, we will prepare to set up our database. Head over to <a target="_blank" href="https://hub.docker.com/">Docker Hub</a> and type <strong>“mongo”</strong> in the search box. You will see the official MongoDB image published by Docker.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762950041097/89f54e21-f607-488a-8d98-d688733270c4.png" alt="official mongo db database in dockerhub" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-how-to-pull-the-mongodb-image">How to Pull the MongoDB Image</h3>
<p>Now that you have explored the official MongoDB image on Docker Hub, let’s actually pull it into your local environment.</p>
<p>Open your terminal, navigate to your project directory (for example, <code>docker_tut_js</code>), and run:</p>
<pre><code class="lang-bash">docker pull mongo
</code></pre>
<p>This command tells Docker to download the latest version of the MongoDB image from Docker Hub.</p>
<p>You will see output similar to this:</p>
<pre><code class="lang-bash">Using default tag: latest
latest: Pulling from library/mongo
b8a35db46e38: Already exists 
a637dbfff7e5: Pull complete 
0c9047ace63c: Pull complete 
02cd4cf70021: Pull complete 
dfb5d357a025: Pull complete 
007bf0024f67: Pull complete 
67fd8af3998d: Pull complete 
d702312e8109: Pull complete 
Digest: sha256:7d1a1a613b41523172dc2b1b02c706bc56cee64144ccd6205b1b38703c85bf61
Status: Downloaded newer image <span class="hljs-keyword">for</span> mongo:latest
docker.io/library/mongo:latest
</code></pre>
<p>Here’s what’s happening:</p>
<ul>
<li><p><strong>“Using default tag: latest”</strong>: Docker pulls the most recent version of MongoDB since no specific version was provided.</p>
</li>
<li><p><strong>“Pulling from library/mongo”</strong>: It’s downloading from Docker’s official image library.</p>
</li>
<li><p><strong>“Pull complete”</strong>: Each line represents a layer of the image being successfully downloaded.</p>
</li>
<li><p><strong>“Downloaded newer image for mongo:latest”</strong>: Confirms that the MongoDB image is now stored locally on your system.</p>
</li>
</ul>
<p>You can confirm that it’s available by running:</p>
<pre><code class="lang-bash">docker images
</code></pre>
<p>You should see <strong>mongo</strong> listed in the repository column.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762950246712/343d4d23-9c61-4480-956c-a5c2cd391889.png" alt="mongo db listed in the repository column after running docker images" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h3 id="heading-how-to-pull-the-mongo-express-image">How to Pull the Mongo Express Image</h3>
<p>Now that the MongoDB image is ready, let’s pull the <strong>Mongo Express</strong> image.</p>
<p>Mongo Express is a lightweight web-based interface that lets you view and manage your MongoDB collections through a browser, similar to how phpMyAdmin works for MySQL.</p>
<p>Open your terminal (still in your project directory) and run:</p>
<pre><code class="lang-bash">docker pull mongo-express
</code></pre>
<p>You’ll see output similar to this:</p>
<pre><code class="lang-bash">Using default tag: latest
latest: Pulling from library/mongo-express
b8a35db46e38: Already exists
a637dbfff7e5: Pull complete
4e0e0977e9c3: Pull complete
02cd4cf70021: Pull complete
Digest: sha256:3d6dbac587ad91d0e2eab83f09a5b31a1c8f9d91a8825ddaa6c7453c25cb4812
Status: Downloaded newer image <span class="hljs-keyword">for</span> mongo-express:latest
docker.io/library/mongo-express:latest
</code></pre>
<p>Here’s what this means:</p>
<ul>
<li><p><code>docker pull mongo-express</code> downloads the official Mongo Express image from Docker Hub.</p>
</li>
<li><p>Each <strong>“Pull complete”</strong> line represents a successfully downloaded layer of the image.</p>
</li>
<li><p><code>mongo-express:latest</code> confirms that the latest version is now stored locally.</p>
</li>
</ul>
<p>To verify that both images are available, run:</p>
<pre><code class="lang-bash">docker images
</code></pre>
<p>You should see <code>mongo</code> and <code>mongo-express</code> listed in the output.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762951088081/06345eb1-80c9-4fcd-8585-5ff309ed2779.png" alt="docker images command showing both mongo db database and mongo express images verifying they have been installed by docker on your project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now that both images are downloaded, the next step is to run the containers to make sure MongoDB is up and accessible, and then connect it to Mongo Express so we can manage it through the browser.</p>
<p>Before we do that, let’s briefly look at how these two containers will communicate.</p>
<h3 id="heading-docker-network">Docker Network</h3>
<p>When MongoDB and Mongo Express run in separate containers, they need a way to talk to each other. Docker handles this using something called a <strong>Docker network</strong> – a virtual bridge that lets containers communicate securely without exposing internal ports to the outside world.</p>
<p>When you run containers in Docker, it automatically creates an isolated network for them. Think of it like a private space where your containers can talk to each other safely without exposing everything to the outside world.</p>
<p>For example, if our MongoDB container and Mongo Express container are on the same Docker network, they can communicate just by using their container names (like <code>mongo</code> or <code>mongo-express</code>). You don’t need <code>localhost</code> or the published host ports – containers reach each other by name on the container’s own port (for MongoDB, the default <code>27017</code>), and Docker handles the routing internally.</p>
<p>But anything outside the Docker network (like your host machine or a Node.js app) connects through the exposed ports.</p>
<p>So later, when we package our entire application, the Node.js backend, MongoDB, Mongo Express, and even the frontend (<code>index.html</code>) into Docker, all these containers will interact smoothly through the Docker network. The browser on your computer will then connect to your Node.js app using the host address and port we have exposed.</p>
<p>By default, Docker already provides a few built-in networks. You can see them by running:</p>
<pre><code class="lang-bash">docker network ls
</code></pre>
<p>You will get something like this:</p>
<pre><code class="lang-bash">NETWORK ID     NAME      DRIVER    SCOPE
712a7144f1a0   bridge    bridge    <span class="hljs-built_in">local</span>
4ae27eedea5b   host      host      <span class="hljs-built_in">local</span>
4806000201ce   none      null      <span class="hljs-built_in">local</span>
</code></pre>
<p>These are automatically created by Docker. You don’t need to worry too much about them right now – we will just focus on creating our own custom network.</p>
<p>For our setup, we will create a separate network that both MongoDB and Mongo Express can share. Let’s call it <code>mongo-network</code>:</p>
<pre><code class="lang-bash">docker network create mongo-network
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762953968310/bdf51a4e-1986-48a4-922b-6f312ff99414.png" alt="mongo-network created with docker network create mongo-network then to see it in the list run docker network ls" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
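<p>Once the network exists, you can inspect it at any time. The commands below are standard Docker CLI; after the containers join the network in the next steps, <code>docker network inspect</code> will list them under the <code>Containers</code> key:</p>

```shell
# List all networks; mongo-network should now appear alongside the defaults
docker network ls

# Show details of our custom network, including any attached containers
docker network inspect mongo-network
```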
<h2 id="heading-how-to-run-the-mongo-container">How to Run the Mongo Container</h2>
<p>To make sure our MongoDB and Mongo Express containers can communicate, we need to run them inside the same Docker network. That’s why we created <code>mongo-network</code> earlier.</p>
<p>Let’s start with MongoDB. Remember, the <code>docker run</code> command is used to start a container from an image. In this case, we will run the official MongoDB image and attach it to our network.</p>
<p>We will also expose the default MongoDB port 27017 so it’s accessible from outside the container, and set up environment variables for the root username and password.</p>
<p>Here is the command:</p>
<pre><code class="lang-bash">docker run -p 27017:27017 -d \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --name mongo \
  --network mongo-network \
  mongo
</code></pre>
<p>Here’s what each part does:</p>
<ul>
<li><p><code>-p 27017:27017</code> maps the container’s MongoDB port to your host machine.</p>
</li>
<li><p><code>-d</code> runs the container in detached mode (in the background).</p>
</li>
<li><p><code>-e</code> sets environment variables for the database’s root credentials.</p>
</li>
<li><p><code>--name mongo</code> gives the container a custom name for easier reference.</p>
</li>
<li><p><code>--network mongo-network</code> connects the container to the network we created.</p>
</li>
</ul>
<p>Once it runs successfully, your MongoDB instance will be up and running inside the Docker network, ready for other containers like Mongo Express to connect to it.</p>
<p>After creating your MongoDB container, you can easily check if it’s running and healthy.</p>
<p>First, run <code>docker ps</code> to see all active containers. You should see your MongoDB container (<code>mongo</code>) listed with its port <code>27017</code> exposed. To see what’s happening inside the container, check its logs with <code>docker logs mongo</code> or, if you prefer, its container ID (for example <code>docker logs 7abb38175ae28</code>). The logs show MongoDB’s startup messages – look for lines indicating that the database started successfully and is ready to accept connections.</p>
<p>This is a quick way to verify that everything is working correctly before connecting other services, like Mongo Express, to it.</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>This will list all <strong>running containers</strong>. You should see your MongoDB container (<code>mongo</code>) with its port <code>27017</code> exposed.</p>
<pre><code class="lang-bash">docker logs mongo or the id of the container e.g docker logs 7abb38175ae283429354609866c8d97521f37b535c475ae448295f8fc0ed947f
</code></pre>
<p>This will show startup messages. Look for lines indicating MongoDB started successfully and is ready to accept connections.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762956236708/44dbe331-b736-4526-8dae-019150b618d8.png" alt="checking if the mongo container is running" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
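<p>If you want to poke at the database directly, you can also open an interactive MongoDB shell inside the running container. Recent <code>mongo</code> images ship with <code>mongosh</code> (very old ones use the legacy <code>mongo</code> shell instead):</p>

```shell
# Open a MongoDB shell inside the container, authenticating with the
# root credentials we set via the -e flags earlier
docker exec -it mongo mongosh -u admin -p password

# Inside the shell you can then run, for example:
#   show dbs
#   db.runCommand({ ping: 1 })
```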
<h2 id="heading-how-to-run-the-mongo-express-container">How to Run the Mongo Express Container</h2>
<p>Now that MongoDB is up and running, we can run Mongo Express, which is a web-based interface to manage and view your MongoDB databases. We will connect it to the same network (<code>mongo-network</code>) so it can communicate with MongoDB.</p>
<p>Here’s the command:</p>
<pre><code class="lang-bash">docker run -d \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongo \
  --name mongo-express \
  --network mongo-network \
  -p 8081:8081 \
  mongo-express
</code></pre>
<p>Here’s what each part does:</p>
<ul>
<li><p><code>-d</code> runs the container in detached mode (in the background).</p>
</li>
<li><p><code>-e ME_CONFIG_MONGODB_ADMINUSERNAME=admin</code> sets the MongoDB admin username for Mongo Express to use.</p>
</li>
<li><p><code>-e ME_CONFIG_MONGODB_ADMINPASSWORD=password</code> sets the corresponding MongoDB password.</p>
</li>
<li><p><code>-e ME_CONFIG_MONGODB_SERVER=mongo</code> tells Mongo Express which MongoDB server to connect to. Here we use the container name <code>mongo</code> because both containers are on the same network.</p>
</li>
<li><p><code>--name mongo-express</code> gives the container a friendly name for easier reference.</p>
</li>
<li><p><code>--network mongo-network</code> connects the container to the same Docker network as MongoDB so they can talk to each other.</p>
</li>
<li><p><code>-p 8081:8081</code> exposes the Mongo Express web interface on port <code>8081</code> of your host machine.</p>
</li>
<li><p><code>mongo-express</code> is the name of the Docker image we’re running.</p>
</li>
</ul>
<p>Once the container is running, you can open your browser and visit <code>http://localhost:8081</code> to access Mongo Express and interact with your MongoDB instance.</p>
<p>For more details about the available environment variables and options, you can check the official Docker Hub page for Mongo Express <a target="_blank" href="https://hub.docker.com/_/mongo-express">here</a>.</p>
<p>Before opening your browser at <a target="_blank" href="http://localhost:8081"><code>http://localhost:8081</code></a>, it’s a good idea to check if the Mongo Express container is running properly. You can do this by viewing its logs:</p>
<pre><code class="lang-bash">docker logs &lt;container-id&gt;
<span class="hljs-comment"># or</span>
docker logs mongo-express
</code></pre>
<p>You should see output similar to this:</p>
<pre><code class="lang-bash">Waiting <span class="hljs-keyword">for</span> mongo:27017...
No custom config.js found, loading config.default.js
Welcome to mongo-express 1.0.2
------------------------
Mongo Express server listening at http://0.0.0.0:8081
Server is open to allow connections from anyone (0.0.0.0)
basicAuth credentials are <span class="hljs-string">"admin:pass"</span>, it is recommended you change this <span class="hljs-keyword">in</span> your config.js!
</code></pre>
<p>This confirms that Mongo Express is up and running and ready to connect to your MongoDB instance.</p>
<p>Take note of the basicAuth credentials shown in the logs (<code>admin:pass</code>). If these credentials are present, you’ll need to use them when accessing Mongo Express from your browser. Later, you can change them in a custom <code>config.js</code> file for better security.</p>
<p>Once everything looks good in the logs, you can safely visit <a target="_blank" href="http://localhost:8081"><code>http://localhost:8081</code></a> to access the Mongo Express interface.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762957766334/f64b1f06-87a8-4ffb-b905-47e1871cca64.png" alt="mongo-express interface from http://localhost:8081 " class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If your browser asks for a username and password when accessing Mongo Express, use the basicAuth credentials shown in the container logs:</p>
<pre><code class="lang-bash">Username: admin
Password: pass
</code></pre>
<p>These are the default credentials, and it’s <strong>strongly recommended</strong> to change them later in a custom <code>config.js</code> file for better security.</p>
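<p>As an alternative to a custom <code>config.js</code>, the mongo-express Docker Hub page also documents basic-auth environment variables. A hedged sketch of the same <code>docker run</code> command with them added (variable names taken from those docs – verify them against the image version you pulled):</p>

```shell
# Same command as before, plus two extra -e flags that override the
# web UI's basic-auth credentials (names per the mongo-express docs)
docker run -d \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongo \
  -e ME_CONFIG_BASICAUTH_USERNAME=myuser \
  -e ME_CONFIG_BASICAUTH_PASSWORD=mypassword \
  --name mongo-express \
  --network mongo-network \
  -p 8081:8081 \
  mongo-express
```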
<p>When you open Mongo Express, you will notice some default databases already created. For this project, we will create a new database called <code>todos</code>. Once it’s created, your Node.js application can connect to this database to store and retrieve data.</p>
<h2 id="heading-how-to-connect-nodejs-to-mongodb">How to Connect Node.js to MongoDB</h2>
<p>You already have MongoDB running inside a Docker container (<code>mongo</code>). The container exposes the default MongoDB port 27017 to the host, so any process on your laptop/desktop can reach it via <code>localhost:27017</code>.</p>
<p><strong>Important:</strong> The Node.js app is <strong>outside Docker</strong> (it’s just a regular <code>node server.js</code> process you start from your terminal).</p>
<p>Because the app is external, we <strong>must use</strong> <code>localhost</code> (or <code>127.0.0.1</code>) as the host name – <strong>not</strong> the container name <code>mongo</code>.</p>
<p>Once we later containerise the Node.js app and put it on the same Docker network, we’ll switch the host to <code>mongo</code>. For now, keep it <code>localhost</code>.</p>
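<p>One detail worth noting before we wire things up: if a MongoDB password contains special characters like <code>@</code> or <code>:</code>, they must be URL-encoded in the connection string, or the URL breaks. A quick sketch in plain Node (the <code>buildMongoUrl</code> helper is hypothetical, not part of the tutorial code):</p>

```javascript
// Hypothetical helper (not part of the tutorial code): build a MongoDB
// connection URL, URL-encoding the credentials so characters like '@'
// or ':' in a password don't corrupt the URL.
function buildMongoUrl(user, pass, host = "localhost", port = 27017) {
  return `mongodb://${encodeURIComponent(user)}:${encodeURIComponent(pass)}@${host}:${port}`;
}

console.log(buildMongoUrl("admin", "password"));
// mongodb://admin:password@localhost:27017

console.log(buildMongoUrl("admin", "p@ss:word"));
// mongodb://admin:p%40ss%3Aword@localhost:27017
```

<p>Our tutorial password (<code>password</code>) is plain ASCII, so the simple literal string below works as-is.</p>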
<h3 id="heading-nodejs-backend">Node.js Backend</h3>
<p>Here’s a version of our <code>server.js</code> using MongoDB:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> multer = <span class="hljs-built_in">require</span>(<span class="hljs-string">"multer"</span>);
<span class="hljs-keyword">const</span> path = <span class="hljs-built_in">require</span>(<span class="hljs-string">"path"</span>);
<span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">"fs"</span>);
<span class="hljs-keyword">const</span> { MongoClient, ObjectId } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"mongodb"</span>);

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> PORT = <span class="hljs-number">3000</span>;

<span class="hljs-comment">// Host = localhost  →  talks to the MongoDB container via the exposed port</span>
<span class="hljs-comment">// Port = 27017      →  default MongoDB port</span>
<span class="hljs-comment">// User / Pass       →  admin / password (the credentials you gave the container)</span>
<span class="hljs-keyword">const</span> mongoUrl = <span class="hljs-string">"mongodb://admin:password@localhost:27017"</span>;
<span class="hljs-keyword">const</span> dbName = <span class="hljs-string">"todos"</span>;
<span class="hljs-keyword">let</span> db;

MongoClient.connect(mongoUrl)
  .then(<span class="hljs-function">(<span class="hljs-params">client</span>) =&gt;</span> {
    db = client.db(dbName);
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Connected to MongoDB →"</span>, dbName);
  })
  .catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"MongoDB connection error:"</span>, err));

<span class="hljs-keyword">const</span> uploadDir = path.join(__dirname, <span class="hljs-string">"uploads"</span>);
<span class="hljs-keyword">if</span> (!fs.existsSync(uploadDir)) fs.mkdirSync(uploadDir);

<span class="hljs-keyword">const</span> storage = multer.diskStorage({
  <span class="hljs-attr">destination</span>: <span class="hljs-function">(<span class="hljs-params">req, file, cb</span>) =&gt;</span> cb(<span class="hljs-literal">null</span>, uploadDir),
  <span class="hljs-attr">filename</span>: <span class="hljs-function">(<span class="hljs-params">req, file, cb</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> unique = <span class="hljs-built_in">Date</span>.now() + <span class="hljs-string">"-"</span> + <span class="hljs-built_in">Math</span>.round(<span class="hljs-built_in">Math</span>.random() * <span class="hljs-number">1e9</span>);
    cb(<span class="hljs-literal">null</span>, <span class="hljs-string">"photo-"</span> + unique + path.extname(file.originalname));
  },
});
<span class="hljs-keyword">const</span> upload = multer({ storage });

app.use(express.static(__dirname));
app.use(<span class="hljs-string">"/uploads"</span>, express.static(uploadDir));
app.use(express.json());
app.use(express.urlencoded({ <span class="hljs-attr">extended</span>: <span class="hljs-literal">true</span> }));

app.get(<span class="hljs-string">"/todos"</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">const</span> todos = <span class="hljs-keyword">await</span> db.collection(<span class="hljs-string">"todos"</span>).find().toArray();
  res.json(todos);
});

app.post(<span class="hljs-string">"/todos"</span>, upload.single(<span class="hljs-string">"photo"</span>), <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">const</span> text = req.body.text?.trim();
  <span class="hljs-keyword">if</span> (!text) <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">"Text required"</span> });

  <span class="hljs-keyword">const</span> todo = {
    text,
    <span class="hljs-attr">image</span>: req.file ? <span class="hljs-string">`/uploads/<span class="hljs-subst">${req.file.filename}</span>`</span> : <span class="hljs-literal">null</span>,
    <span class="hljs-attr">createdAt</span>: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(),
  };

  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> db.collection(<span class="hljs-string">"todos"</span>).insertOne(todo);
  todo._id = result.insertedId;
  res.json(todo);
});

<span class="hljs-comment">// Start server</span>
app.listen(PORT, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server → http://localhost:<span class="hljs-subst">${PORT}</span>`</span>);
});
</code></pre>
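<p>One caveat about the code above: <code>db</code> is assigned asynchronously, so a request that arrives before the connection completes would hit an undefined <code>db</code> and crash the route. A hedged sketch of a guard you could add (the <code>requireDb</code> helper is illustrative, not part of the tutorial code):</p>

```javascript
// Illustrative guard (not part of the tutorial code): respond with 503
// until the database handle exists. getDb is a function so the middleware
// always reads the current value of the `db` variable, which is assigned
// asynchronously in MongoClient.connect()'s .then() callback.
function requireDb(getDb) {
  return (req, res, next) => {
    if (!getDb()) {
      return res.status(503).json({ error: "Database not ready yet" });
    }
    next();
  };
}
```

<p>In <code>server.js</code> you would register it before the routes, for example <code>app.use("/todos", requireDb(() =&gt; db));</code>.</p>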
<h3 id="heading-frontend"><strong>Frontend</strong></h3>
<p><code>index.html</code>:</p>
<pre><code class="lang-xml"><span class="hljs-meta">&lt;!DOCTYPE <span class="hljs-meta-keyword">html</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">html</span> <span class="hljs-attr">lang</span>=<span class="hljs-string">"en"</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">head</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">meta</span> <span class="hljs-attr">charset</span>=<span class="hljs-string">"UTF-8"</span> /&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span>Todo + Image<span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">style</span>&gt;</span><span class="css">
      <span class="hljs-selector-tag">body</span> {
        <span class="hljs-attribute">font-family</span>: sans-serif;
        <span class="hljs-attribute">margin</span>: <span class="hljs-number">2rem</span>;
        <span class="hljs-attribute">max-width</span>: <span class="hljs-number">800px</span>;
      }
      <span class="hljs-selector-class">.todo</span> {
        <span class="hljs-attribute">border</span>: <span class="hljs-number">1px</span> solid <span class="hljs-number">#ccc</span>;
        <span class="hljs-attribute">padding</span>: <span class="hljs-number">1rem</span>;
        <span class="hljs-attribute">margin-bottom</span>: <span class="hljs-number">1rem</span>;
        <span class="hljs-attribute">border-radius</span>: <span class="hljs-number">8px</span>;
      }
      <span class="hljs-selector-class">.todo</span> <span class="hljs-selector-tag">img</span> {
        <span class="hljs-attribute">max-height</span>: <span class="hljs-number">150px</span>;
        <span class="hljs-attribute">margin-top</span>: <span class="hljs-number">0.5rem</span>;
      }
      <span class="hljs-selector-class">.error</span> {
        <span class="hljs-attribute">color</span>: red;
      }
      <span class="hljs-selector-tag">input</span><span class="hljs-selector-attr">[type=<span class="hljs-string">"text"</span>]</span> {
        <span class="hljs-attribute">width</span>: <span class="hljs-number">100%</span>;
        <span class="hljs-attribute">padding</span>: <span class="hljs-number">0.5rem</span>;
        <span class="hljs-attribute">margin-bottom</span>: <span class="hljs-number">0.5rem</span>;
      }
      <span class="hljs-selector-id">#preview</span> {
        <span class="hljs-attribute">max-width</span>: <span class="hljs-number">300px</span>;
        <span class="hljs-attribute">margin-top</span>: <span class="hljs-number">0.5rem</span>;
        <span class="hljs-attribute">display</span>: none;
      }
    </span><span class="hljs-tag">&lt;/<span class="hljs-name">style</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">head</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Todo List with Images<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"addForm"</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">input</span> <span class="hljs-attr">type</span>=<span class="hljs-string">"text"</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"textInput"</span> <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"What needs to be done?"</span> /&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">input</span> <span class="hljs-attr">type</span>=<span class="hljs-string">"file"</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"imageInput"</span> <span class="hljs-attr">accept</span>=<span class="hljs-string">"image/*"</span> /&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">img</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"preview"</span> <span class="hljs-attr">alt</span>=<span class="hljs-string">"preview"</span> /&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"addBtn"</span>&gt;</span>Add Todo<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"status"</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">h2</span>&gt;</span>Todos<span class="hljs-tag">&lt;/<span class="hljs-name">h2</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"todos"</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">script</span>&gt;</span><span class="javascript">
      <span class="hljs-keyword">const</span> $ = <span class="hljs-built_in">document</span>.querySelector.bind(<span class="hljs-built_in">document</span>);

      <span class="hljs-keyword">const</span> textInput = $(<span class="hljs-string">"#textInput"</span>);
      <span class="hljs-keyword">const</span> imageInput = $(<span class="hljs-string">"#imageInput"</span>);
      <span class="hljs-keyword">const</span> preview = $(<span class="hljs-string">"#preview"</span>);
      <span class="hljs-keyword">const</span> addBtn = $(<span class="hljs-string">"#addBtn"</span>);
      <span class="hljs-keyword">const</span> status = $(<span class="hljs-string">"#status"</span>);
      <span class="hljs-keyword">const</span> todosDiv = $(<span class="hljs-string">"#todos"</span>);

      imageInput.addEventListener(<span class="hljs-string">"change"</span>, <span class="hljs-function">() =&gt;</span> {
        <span class="hljs-keyword">const</span> file = imageInput.files[<span class="hljs-number">0</span>];
        <span class="hljs-keyword">if</span> (!file) {
          preview.style.display = <span class="hljs-string">"none"</span>;
          <span class="hljs-keyword">return</span>;
        }
        <span class="hljs-keyword">const</span> reader = <span class="hljs-keyword">new</span> FileReader();
        reader.onload = <span class="hljs-function">(<span class="hljs-params">e</span>) =&gt;</span> {
          preview.src = e.target.result;
          preview.style.display = <span class="hljs-string">"block"</span>;
        };
        reader.readAsDataURL(file);
      });

      addBtn.addEventListener(<span class="hljs-string">"click"</span>, <span class="hljs-keyword">async</span> () =&gt; {
        <span class="hljs-keyword">const</span> text = textInput.value.trim();
        <span class="hljs-keyword">if</span> (!text) {
          status.textContent = <span class="hljs-string">"Please enter a todo text."</span>;
          status.className = <span class="hljs-string">"error"</span>;
          <span class="hljs-keyword">return</span>;
        }

        <span class="hljs-keyword">const</span> form = <span class="hljs-keyword">new</span> FormData();
        form.append(<span class="hljs-string">"text"</span>, text);
        <span class="hljs-keyword">if</span> (imageInput.files[<span class="hljs-number">0</span>]) form.append(<span class="hljs-string">"photo"</span>, imageInput.files[<span class="hljs-number">0</span>]);

        <span class="hljs-keyword">try</span> {
          <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"/todos"</span>, { <span class="hljs-attr">method</span>: <span class="hljs-string">"POST"</span>, <span class="hljs-attr">body</span>: form });
          <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> res.json();
          <span class="hljs-keyword">if</span> (!res.ok) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(data.error || <span class="hljs-string">"failed"</span>);
          status.textContent = <span class="hljs-string">"Todo added!"</span>;
          status.className = <span class="hljs-string">""</span>;
          textInput.value = <span class="hljs-string">""</span>;
          imageInput.value = <span class="hljs-string">""</span>;
          preview.style.display = <span class="hljs-string">"none"</span>;
          loadTodos(); <span class="hljs-comment">// refresh list</span>
        } <span class="hljs-keyword">catch</span> (err) {
          status.textContent = <span class="hljs-string">"Error: "</span> + err.message;
          status.className = <span class="hljs-string">"error"</span>;
        }
      });

      <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">loadTodos</span>(<span class="hljs-params"></span>) </span>{
        <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"/todos"</span>);
        <span class="hljs-keyword">const</span> todos = <span class="hljs-keyword">await</span> res.json();
        todosDiv.innerHTML = <span class="hljs-string">""</span>;
        todos.forEach(<span class="hljs-function">(<span class="hljs-params">t</span>) =&gt;</span> {
          <span class="hljs-keyword">const</span> div = <span class="hljs-built_in">document</span>.createElement(<span class="hljs-string">"div"</span>);
          div.className = <span class="hljs-string">"todo"</span>;
          div.innerHTML = <span class="hljs-string">`&lt;strong&gt;<span class="hljs-subst">${escapeHtml(t.text)}</span>&lt;/strong&gt;`</span>;
          <span class="hljs-keyword">if</span> (t.image) {
            div.innerHTML += <span class="hljs-string">`&lt;br&gt;&lt;img src="<span class="hljs-subst">${t.image}</span>" alt="todo image"&gt;`</span>;
          }
          todosDiv.appendChild(div);
        });
      }

      <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">escapeHtml</span>(<span class="hljs-params">s</span>) </span>{
        <span class="hljs-keyword">const</span> div = <span class="hljs-built_in">document</span>.createElement(<span class="hljs-string">"div"</span>);
        div.textContent = s;
        <span class="hljs-keyword">return</span> div.innerHTML;
      }

      loadTodos();
    </span><span class="hljs-tag">&lt;/<span class="hljs-name">script</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">html</span>&gt;</span>
</code></pre>
<p>Now your Node.js app can connect to the MongoDB container running in Docker. Since the app is running outside Docker for now, it connects through <code>localhost:27017</code> using the credentials you set (<code>admin</code> / <code>password</code>).</p>
<p>Once connected, your Node.js backend stores and retrieves todos directly from the <code>todos</code> database in MongoDB, replacing the in-memory array. Later, when we containerise the Node.js app and put it on the same Docker network as MongoDB, we’ll switch the host from <code>localhost</code> to the container name <code>mongo</code> – we’re getting there!</p>
<p>You can get the full backend and frontend code ready to run and tweak it for your setup here: <a target="_blank" href="https://github.com/Oghenekparobo/docker_tut_js/tree/mongodb-connection">GitHub repo</a>.</p>
<h2 id="heading-how-to-use-docker-compose">How to Use Docker Compose</h2>
<p>So we now have our Node.js app connected to MongoDB and Mongo Express, both running inside containers. We’ve created the network, started the containers, and everything is talking to each other perfectly.</p>
<p>But let’s be honest: typing out all those long <code>docker run</code> commands every time can get tedious. You probably want a simpler, cleaner way to spin everything up with just one command. That’s where <strong>Docker Compose</strong> comes in.</p>
<p>Docker Compose is a tool that lets you define and run multi-container applications with a single command. Instead of manually running multiple <code>docker run</code> commands, you describe your setup in a simple <code>docker-compose.yml</code> file, specifying each service (like your Node.js app, MongoDB, and Mongo Express), their configurations, environment variables, and shared networks.</p>
<p>Basically, it lets you manage multiple containers as one project, easy to start, stop, and maintain with a single file and a single command.</p>
<p>The standard naming convention is <code>docker-compose.yml</code> (or <code>docker-compose.yaml</code> – both work, but <code>.yml</code> is more common).</p>
<p>Docker automatically detects it when you run:</p>
<pre><code class="lang-bash">docker compose up
</code></pre>
<p>So yeah, stick with <code>docker-compose.yml</code> for convention.</p>
<p>As a reminder, here are the two commands we used earlier to run the MongoDB and Mongo Express containers:</p>
<pre><code class="lang-bash"># MongoDB container
docker run -p 27017:27017 -d \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --name mongo \
  --network mongo-network \
  mongo

# Mongo Express container
docker run -d \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongo \
  --name mongo-express \
  --network mongo-network \
  -p 8081:8081 \
  mongo-express
</code></pre>
<p>Now, instead of typing these long commands every time, we will combine them and run everything at once using a <strong>Docker Compose file</strong>.</p>
<p>The <code>docker-compose.yml</code> file will be located at the root of our Node.js project.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763032152636/7fc026ba-d593-4097-a34c-945b398f2aeb.png" alt="docker-composer.yml file in the root of the project" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Here’s how our <code>docker-compose.yml</code> file looks:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3.8"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">mongodb:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"27017:27017"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_USERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_PASSWORD:</span> <span class="hljs-string">password</span>

  <span class="hljs-attr">mongo-express:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8081:8081"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINUSERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINPASSWORD:</span> <span class="hljs-string">password</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_SERVER:</span> <span class="hljs-string">mongodb</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>
</code></pre>
<p>Let’s break down what’s going on here:</p>
<ul>
<li><p><code>version: "3.8"</code>: This defines the <strong>Compose file version</strong>. Each version has slightly different syntax rules and features. Note that modern Docker Compose (v2) treats this top-level field as obsolete and ignores it, but it’s harmless to keep for compatibility with older tooling.</p>
</li>
<li><p><code>services:</code>: All the containers we want to run are defined here. In our case, two services: <code>mongodb</code> and <code>mongo-express</code>.</p>
</li>
</ul>
<p><strong>MongoDB service:</strong></p>
<ul>
<li><p><code>image: mongo</code> pulls the official MongoDB image from Docker Hub.</p>
</li>
<li><p><code>container_name: mongo</code> gives the container a friendly name.</p>
</li>
<li><p><code>ports: "27017:27017"</code> exposes MongoDB’s default port to our host, so Node.js or other apps can connect.</p>
</li>
<li><p><code>environment:</code> sets up the root username and password for MongoDB.</p>
</li>
</ul>
<p><strong>Mongo Express service:</strong></p>
<ul>
<li><p><code>image: mongo-express</code> is the official Mongo Express image.</p>
</li>
<li><p><code>container_name: mongo-express</code> is a friendly name for easier reference.</p>
</li>
<li><p><code>ports: "8081:8081"</code> exposes the Mongo Express web interface on host port 8081.</p>
</li>
<li><p><code>environment:</code> lets Mongo Express know how to connect to MongoDB (username, password, host).</p>
</li>
<li><p><code>depends_on: - mongodb</code> ensures the MongoDB container starts before Mongo Express. (Note that this only orders startup; it doesn’t wait for MongoDB to actually be ready to accept connections.)</p>
</li>
</ul>
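<p>Because <code>depends_on</code> only orders container startup, an app may briefly try to connect before MongoDB is ready to accept connections. A common pattern is to retry the initial connection a few times. Here is a minimal sketch; the <code>connectWithRetry</code> helper is our own illustration, not part of the tutorial project:</p>

```javascript
// Retry an async connect function a few times before giving up.
// Useful on startup, since `depends_on` does not wait for MongoDB
// to be ready to accept connections — only for its container to start.
async function connectWithRetry(connect, { retries = 5, delayMs = 2000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect(); // success: return whatever connect() resolves to
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts: surface the error
      console.log(`Connection attempt ${attempt} failed, retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

<p>With the MongoDB driver you would wrap the call as <code>connectWithRetry(() =&gt; MongoClient.connect(mongoUrl))</code>.</p>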
<h3 id="heading-why-use-docker-compose">Why Use Docker Compose?</h3>
<ul>
<li><strong>Single command</strong>: Instead of running multiple long <code>docker run</code> commands, just run:</li>
</ul>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<ul>
<li><p><strong>Automatic networking</strong>: Compose creates a default network so services can communicate using their <strong>service names</strong> (<code>mongodb</code> in our case).</p>
</li>
<li><p><strong>Easier maintenance</strong>: You can stop, start, or rebuild all services with simple commands.</p>
</li>
</ul>
<p>Before we run our new <code>docker-compose.yml</code>, it’s important to make sure no conflicting containers are running. Remember, we already had MongoDB and Mongo Express running from the previous <code>docker run</code> commands.</p>
<p>To avoid conflicts (like ports already in use), we should stop and remove any running containers first.</p>
<p>Here’s how:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># List all running containers</span>
docker ps

<span class="hljs-comment"># Stop a specific container (replace &lt;container_name&gt; with mongo or mongo-express)</span>
docker stop mongo
docker stop mongo-express

<span class="hljs-comment"># Remove the stopped containers</span>
docker rm mongo
docker rm mongo-express

<span class="hljs-comment"># Optional: stop and remove all running containers at once</span>
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
</code></pre>
<ul>
<li><p><code>docker ps</code> shows currently running containers.</p>
</li>
<li><p><code>docker stop &lt;name&gt;</code> stops a container gracefully.</p>
</li>
<li><p><code>docker rm &lt;name&gt;</code> removes the container from Docker.</p>
</li>
<li><p><code>docker stop $(docker ps -q)</code> stops all running containers.</p>
</li>
<li><p><code>docker rm $(docker ps -a -q)</code> removes all containers (running or stopped).</p>
</li>
</ul>
<p>Once all previous containers are stopped and removed, we’re ready to run our Docker Compose setup safely without conflicts.</p>
<p>Now that all previous containers are stopped, we can start MongoDB and Mongo Express together using our <code>docker-compose.yml</code> file.</p>
<p>From the root of your Node.js project (where the <code>docker-compose.yml</code> file is located), run:</p>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<p>Here’s what this does:</p>
<ul>
<li><p><code>docker compose</code> tells Docker to use Compose.</p>
</li>
<li><p><code>up</code> builds (if needed) and starts all the services defined in the Compose file.</p>
</li>
<li><p><code>-d</code> runs the containers in <strong>detached mode</strong>, meaning they run in the background.</p>
</li>
</ul>
<p>After running this command, Docker will start both MongoDB and Mongo Express, connect them on the same internal network, and expose the ports we defined (<code>27017</code> for MongoDB and <code>8081</code> for Mongo Express).</p>
<p>If everything worked correctly, after running:</p>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<p>You should see output similar to this:</p>
<pre><code class="lang-bash">[+] Running 3/3
 ✔ Network docker_tut_default  Created                                                                                               0.0s 
 ✔ Container mongo             Started                                                                                               0.6s 
 ✔ Container mongo-express     Started                                                                                               0.8s 
stephenjohnson@Oghenekparobo docker_tut %
</code></pre>
<p>What this means:</p>
<ul>
<li><p><code>Network docker_tut_default Created</code>: Docker Compose automatically creates a network for your services so they can communicate with each other.</p>
</li>
<li><p><code>Container mongo Started</code>: Your MongoDB container is running.</p>
</li>
<li><p><code>Container mongo-express Started</code>: Your Mongo Express container is running.</p>
</li>
</ul>
<p>You can confirm that the containers are running by using:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>This will list all active containers. You should see both <code>mongo</code> and <code>mongo-express</code> with their respective ports (<code>27017</code> for MongoDB and <code>8081</code> for Mongo Express) exposed.</p>
<ul>
<li><p>To access Mongo Express, open your browser and go to <a target="_blank" href="http://localhost:8081">http://localhost:8081</a> to interact with MongoDB through the web interface.</p>
</li>
<li><p>To access MongoDB, your Node.js app can connect to MongoDB at <code>localhost:27017</code> using the credentials you set in the Compose file.</p>
</li>
</ul>
<p>Compared to running long <code>docker run</code> commands for each container, using Docker Compose is easier because:</p>
<ul>
<li><p>Starts multiple containers with one command.</p>
</li>
<li><p>Automatically sets up networking between containers.</p>
</li>
<li><p>Makes it easier to stop, remove, or rebuild containers later.</p>
</li>
</ul>
<p>In short, Docker Compose simplifies and organizes everything, making it much easier to manage your development environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763034021801/c4f806d2-080f-4b52-9f36-7b2044a7f8c5.png" alt="docker compose up -d successfully created the containers and docker ps shows the containers" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>At this stage, it’s important to know that any data you add to MongoDB is temporary. If you stop or remove your containers and then start them again, you will notice that all your data is gone. This happens because data inside a container isn’t persistent by default.</p>
<p>Don’t worry, this is expected, and we’ll cover how to make data persistent later in the tutorial when we introduce <strong>Docker volumes.</strong> For now, just be aware that each time you restart your containers, MongoDB starts fresh with no previous data.</p>
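<p>As a quick preview of what’s coming, persistence is added with a named volume in the Compose file. The snippet below is only a sketch for orientation (we’ll walk through volumes properly later in the tutorial):</p>

```yaml
# Sketch only — explained in detail in the volumes section.
services:
  mongodb:
    image: mongo
    volumes:
      - mongo-data:/data/db   # /data/db is where MongoDB keeps its data files

volumes:
  mongo-data:                 # named volume managed by Docker, survives restarts
```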
<p>You can get a full sample, including the Dockerfile <strong>and</strong> the docker‑compose file, <a target="_blank" href="https://github.com/Oghenekparobo/docker_tut_js/tree/docker-compose">here</a>.</p>
<h2 id="heading-how-to-build-our-own-docker-image">How to Build Our Own Docker Image</h2>
<p>Now that we have tested our Node.js application locally and seen it working perfectly with MongoDB and Mongo Express, the next step is preparing it for deployment.</p>
<p>Running the app directly on our machine works fine for development, but it’s not practical when we want to move it to another environment or server. By creating a Docker image, we can package the application together with all its dependencies, configuration, and environment setup into a single, portable unit. This image can then run anywhere Docker is installed, ensuring our app works the same way across development, testing, and production.</p>
<p>In short, building a Docker image is how we containerize our app and make it deployment-ready.</p>
<p>In order to containerize our Todo app, we need a <strong>Dockerfile</strong>. A Dockerfile is essentially a blueprint that tells Docker how to build an image for our application. It defines the base environment, copies our application code, installs dependencies, and specifies how the app should start. With this blueprint, Docker can create a consistent image that behaves the same way on any machine, making our Node.js app fully portable and ready for deployment.</p>
<p>The file is named <code>Dockerfile</code>, with a capital <code>D</code> and no extension, which is the standard naming convention. Place this file in the <strong>root directory</strong> of your Node.js project. In simple projects like ours, the main app file (like <code>server.js</code> or <code>index.js</code>) is usually in the root too, along with <code>package.json</code>. Docker will use this file as a blueprint to build a container image of your application.</p>
<p>If your main app file is inside a subfolder, that’s fine too. Just make sure the Dockerfile’s <code>COPY</code> and <code>CMD</code> commands point to the correct location. The important thing is that the Dockerfile lives in the root so Docker knows where to start building your app.</p>
<p>Here’s how the contents of our Dockerfile look:</p>
<pre><code class="lang-dockerfile"># Use full Node 18 (Debian-based)
FROM node:18

# Set environment variables
ENV MONGO_DB_USERNAME=admin \
    MONGO_DB_PASSWORD=password

# Set working directory
WORKDIR /home/app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy source code
COPY . .

# Expose port
EXPOSE 3000

# Start the app
CMD ["node", "server.js"]
</code></pre>
<p>Let’s see what’s going on here:</p>
<ul>
<li><p><code>FROM node:18</code> is the base image for our container. It comes with Node.js 18 preinstalled on a Debian base, so we don’t have to install Node ourselves. (A slimmer variant like <code>node:18-alpine</code> would produce a smaller image.)</p>
</li>
<li><p><code>ENV MONGO_DB_USERNAME=admin \ MONGO_DB_PASSWORD=password</code> sets environment variables inside the container so the Node.js app can connect to MongoDB.</p>
</li>
<li><p><code>WORKDIR /home/app</code> sets the working directory inside the container. All subsequent commands like <code>COPY</code> or <code>RUN</code> will run relative to this folder.</p>
</li>
<li><p><code>COPY package*.json ./</code> copies only the package manifests first, so Docker can cache the dependency layer and skip reinstalling when only source code changes.</p>
</li>
<li><p><code>RUN npm install</code> installs all the Node.js dependencies listed in <code>package.json</code> inside the container.</p>
</li>
<li><p><code>COPY . .</code> copies the rest of your local project into the container’s working directory. This includes your <code>server.js</code> and any other files needed to run the app.</p>
</li>
<li><p><code>EXPOSE 3000</code> tells Docker that the container will listen on port 3000, which is the port our Node.js app runs on.</p>
</li>
<li><p><code>CMD ["node", "server.js"]</code> defines the command that runs when the container starts, which launches our Node.js server.</p>
</li>
</ul>
<p>By placing this Dockerfile in the root of your project, Docker knows exactly where to find your app’s files and dependencies. When we build the image, it packages everything inside a portable container that can run anywhere Docker is installed, making deployment straightforward and consistent.</p>
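<p>One related convention worth knowing: since <code>COPY . .</code> copies everything in the build context, projects usually add a <code>.dockerignore</code> file next to the Dockerfile so bulky or machine-specific folders stay out of the image. This file isn’t part of the original project; it’s a common-practice sketch:</p>

```text
# .dockerignore — keep these out of the Docker build context
node_modules
uploads
.git
```

<p>Keeping <code>node_modules</code> out matters in particular, because the image installs its own dependencies with <code>npm install</code>.</p>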
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763037649231/a831ccc0-5b84-4e70-82bf-08ef7de85ddf.png" alt="Dockerfile VS CODE Illustration" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now that we have our Dockerfile ready, the next step is to build the Docker image for our Node.js app.</p>
<p>To build the image, open your terminal, make sure you are in the root directory of your project (where the Dockerfile is), and run:</p>
<pre><code class="lang-bash">docker build -t todo-app:1.0 .
</code></pre>
<ul>
<li><p><code>todo-app</code> is the name of your image.</p>
</li>
<li><p><code>:1.0</code> is the version tag (you can use any versioning scheme, like <code>1.0</code>, <code>v1</code>, <code>latest</code>, etc.).</p>
</li>
<li><p><code>.</code> tells Docker to use the current folder (root of your project) as the build context.</p>
</li>
</ul>
<p>After running:</p>
<pre><code class="lang-bash">docker build -t todo-app:1.0 .
</code></pre>
<p>Docker reads your Dockerfile, packages your Node.js app with all its dependencies, and creates a Docker image. You can confirm the image exists by running:</p>
<pre><code class="lang-bash">docker images
</code></pre>
<p>You should see output like this:</p>
<pre><code class="lang-bash">REPOSITORY      TAG       IMAGE ID       CREATED          SIZE
todo-app        1.0       d85dd4ed97f9   45 seconds ago   147MB
mongo           latest    1d659cebf5e9   2 weeks ago      894MB
mongo-express   latest    1133e12468c7   20 months ago    182MB
</code></pre>
<p>This shows that your <code>todo-app</code> image has been created successfully, alongside the images for MongoDB and Mongo Express.</p>
<h3 id="heading-running-your-nodejs-app-container">Running Your Node.js App Container</h3>
<p>Now that the image exists, the next step is to run a container from it. A container is basically a running instance of your image. To do this:</p>
<pre><code class="lang-bash">docker run -p 3000:3000 todo-app:1.0
</code></pre>
<p>Here’s what this command does:</p>
<ul>
<li><p><code>docker run</code> starts a new container from the image.</p>
</li>
<li><p><code>-p 3000:3000</code> publishes the container’s port 3000 on your host. (<code>EXPOSE</code> in the Dockerfile only documents the port; without <code>-p</code>, you couldn’t reach the app from your browser.)</p>
</li>
<li><p><code>todo-app:1.0</code> tells Docker which image to use (the one we just built).</p>
</li>
</ul>
<p>Once this runs, your Node.js app will be live inside a container, separate from your local environment. You can open your browser at <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a> and see your Todo app working just like it did locally.</p>
<p>To see all running containers, use:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>You’ll see something like:</p>
<pre><code class="lang-bash">CONTAINER ID   IMAGE           COMMAND         CREATED       STATUS       PORTS                  NAMES
d85dd4ed97f9   todo-app:1.0    <span class="hljs-string">"node server.js"</span>  10s ago      Up 10s       0.0.0.0:3000-&gt;3000/tcp   awesome_todo
</code></pre>
<p>This confirms your container is running. If you ever need to stop it:</p>
<pre><code class="lang-bash">docker stop &lt;container-id&gt;
</code></pre>
<h3 id="heading-troubleshooting-errors">Troubleshooting Errors</h3>
<p>We started facing some issues here: when you run <code>docker run todo-app:1.0</code>, you’ll see an error like this:</p>
<pre><code class="lang-bash">Server → http://localhost:3000
MongoDB connection error: MongoServerSelectionError: getaddrinfo ENOTFOUND mongodb
    at Topology.selectServer (/home/app/node_modules/mongodb/lib/sdam/topology.js:346:38)
    ...
    [cause
</code></pre>
<p>It shows up especially when you try to perform an operation like creating a todo item.</p>
<p>The error <code>getaddrinfo ENOTFOUND mongodb</code> tells us that your Node.js container can't find MongoDB. Even though MongoDB is running in another container, your app container is isolated and doesn't know how to reach it.</p>
<h4 id="heading-why-this-happens">Why This Happens:</h4>
<p>Remember in our <code>server.js</code>, we connect to MongoDB using:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> mongoUrl = <span class="hljs-string">"mongodb://admin:password@localhost:27017"</span>;
</code></pre>
<p>The problem is with <code>localhost</code>. When you run your app locally on your machine (not in Docker), <code>localhost</code> works perfectly because MongoDB is running on the same machine. But when your app runs inside a Docker container, <code>localhost</code> refers to the container itself, not your host machine or other containers.</p>
<p>Think of it like this:</p>
<ul>
<li><p><strong>Running locally:</strong> Your app and MongoDB are like two people in the same room, <code>localhost</code> works</p>
</li>
<li><p><strong>Running in Docker:</strong> Each container is like a separate room, <code>localhost</code> only refers to that specific room</p>
</li>
</ul>
<h3 id="heading-the-solution"><strong>The Solution</strong></h3>
<p>We need to change the MongoDB connection URL to use the Docker service name instead of <code>localhost</code>. In your <code>server.js</code> file, change this line:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> mongoUrl = <span class="hljs-string">"mongodb://admin:password@localhost:27017"</span>;
</code></pre>
<p>To this:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> mongoUrl = <span class="hljs-string">"mongodb://admin:password@mongodb:27017"</span>;
</code></pre>
<p>Here's the complete updated <code>server.js</code>:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> multer = <span class="hljs-built_in">require</span>(<span class="hljs-string">"multer"</span>);
<span class="hljs-keyword">const</span> path = <span class="hljs-built_in">require</span>(<span class="hljs-string">"path"</span>);
<span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">"fs"</span>);
<span class="hljs-keyword">const</span> { MongoClient, ObjectId } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"mongodb"</span>);

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> PORT = <span class="hljs-number">3000</span>;

<span class="hljs-comment">// Host = mongodb    →  the Compose service name, resolved by Docker's internal DNS</span>
<span class="hljs-comment">// Port = 27017      →  default MongoDB port</span>
<span class="hljs-comment">// User / Pass       →  admin / password (the credentials you gave the container)</span>
<span class="hljs-keyword">const</span> mongoUrl = <span class="hljs-string">"mongodb://admin:password@mongodb:27017"</span>;
<span class="hljs-keyword">const</span> dbName = <span class="hljs-string">"todos"</span>;
<span class="hljs-keyword">let</span> db;

MongoClient.connect(mongoUrl)
  .then(<span class="hljs-function">(<span class="hljs-params">client</span>) =&gt;</span> {
    db = client.db(dbName);
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Connected to MongoDB →"</span>, dbName);
  })
  .catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"MongoDB connection error:"</span>, err));

<span class="hljs-keyword">const</span> uploadDir = path.join(__dirname, <span class="hljs-string">"uploads"</span>);
<span class="hljs-keyword">if</span> (!fs.existsSync(uploadDir)) fs.mkdirSync(uploadDir);

<span class="hljs-keyword">const</span> storage = multer.diskStorage({
  <span class="hljs-attr">destination</span>: <span class="hljs-function">(<span class="hljs-params">req, file, cb</span>) =&gt;</span> cb(<span class="hljs-literal">null</span>, uploadDir),
  <span class="hljs-attr">filename</span>: <span class="hljs-function">(<span class="hljs-params">req, file, cb</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> unique = <span class="hljs-built_in">Date</span>.now() + <span class="hljs-string">"-"</span> + <span class="hljs-built_in">Math</span>.round(<span class="hljs-built_in">Math</span>.random() * <span class="hljs-number">1e9</span>);
    cb(<span class="hljs-literal">null</span>, <span class="hljs-string">"photo-"</span> + unique + path.extname(file.originalname));
  },
});
<span class="hljs-keyword">const</span> upload = multer({ storage });

app.use(express.static(__dirname));
app.use(<span class="hljs-string">"/uploads"</span>, express.static(uploadDir));
app.use(express.json());
app.use(express.urlencoded({ <span class="hljs-attr">extended</span>: <span class="hljs-literal">true</span> }));

app.get(<span class="hljs-string">"/todos"</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">const</span> todos = <span class="hljs-keyword">await</span> db.collection(<span class="hljs-string">"todos"</span>).find().toArray();
  res.json(todos);
});

app.post(<span class="hljs-string">"/todos"</span>, upload.single(<span class="hljs-string">"photo"</span>), <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">const</span> text = req.body.text?.trim();
  <span class="hljs-keyword">if</span> (!text) <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">"Text required"</span> });

  <span class="hljs-keyword">const</span> todo = {
    text,
    <span class="hljs-attr">image</span>: req.file ? <span class="hljs-string">`/uploads/<span class="hljs-subst">${req.file.filename}</span>`</span> : <span class="hljs-literal">null</span>,
    <span class="hljs-attr">createdAt</span>: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(),
  };

  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> db.collection(<span class="hljs-string">"todos"</span>).insertOne(todo);
  todo._id = result.insertedId;
  res.json(todo);
});

<span class="hljs-comment">// Start server</span>
app.listen(PORT, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server → http://localhost:<span class="hljs-subst">${PORT}</span>`</span>);
});
</code></pre>
<h3 id="heading-why-mongodb-works">Why <code>mongodb</code> Works</h3>
<p>The hostname <code>mongodb</code> matches the service name we defined in our <code>docker-compose.yml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">mongodb:</span>    <span class="hljs-comment"># ← This is the hostname other containers use</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-string">...</span>
</code></pre>
<p>When containers run in the same Docker Compose network, Docker provides an internal DNS that resolves service names to the correct container IP addresses. So when your app tries to connect to <code>mongodb:27017</code>, Docker automatically routes it to the MongoDB container.</p>
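<p>To make the same <code>server.js</code> work both locally (<code>localhost</code>) and inside Compose (<code>mongodb</code>), a common refinement is to read the host from an environment variable. This is a hedged sketch, not code from the tutorial project; <code>MONGO_HOST</code> is a variable name we’re inventing here:</p>

```javascript
// Build the MongoDB URL from environment variables, falling back to the
// tutorial's hardcoded values. In docker-compose.yml you would set
// MONGO_HOST=mongodb under the todo-app service's `environment:` key.
function buildMongoUrl(env) {
  const user = env.MONGO_DB_USERNAME || "admin";
  const pass = env.MONGO_DB_PASSWORD || "password";
  const host = env.MONGO_HOST || "localhost";
  return `mongodb://${user}:${pass}@${host}:27017`;
}

const mongoUrl = buildMongoUrl(process.env);
console.log(mongoUrl);
```

<p>The Dockerfile in this project already sets <code>MONGO_DB_USERNAME</code> and <code>MONGO_DB_PASSWORD</code> via <code>ENV</code>, so only the host would need to change between environments.</p>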
<h3 id="heading-rebuild-your-docker-image">Rebuild Your Docker Image</h3>
<p>Now that we have updated the code, we need to rebuild the Docker image to include this change:</p>
<pre><code class="lang-bash">docker build -t todo-app:1.0 .
</code></pre>
<p>You should see output confirming the build completed successfully:</p>
<pre><code class="lang-bash">[+] Building 8.1s (10/10) FINISHED
 =&gt; [internal] load build definition from Dockerfile
 =&gt; =&gt; transferring dockerfile: 443B
 ...
 =&gt; =&gt; naming to docker.io/library/todo-app:1.0
</code></pre>
<h3 id="heading-add-your-app-to-docker-compose">Add Your App to Docker Compose</h3>
<p>Now update your <code>docker-compose.yml</code> file to include the <code>todo-app</code> service:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3.8"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">mongodb:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"27017:27017"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_USERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_PASSWORD:</span> <span class="hljs-string">password</span>

  <span class="hljs-attr">mongo-express:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8081:8081"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINUSERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINPASSWORD:</span> <span class="hljs-string">password</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_SERVER:</span> <span class="hljs-string">mongodb</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>

  <span class="hljs-attr">todo-app:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">todo-app:1.0</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">todo-app</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"3000:3000"</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>
</code></pre>
<p>The <code>todo-app</code> service includes:</p>
<ul>
<li><p><strong>image: todo-app:1.0</strong> uses the Docker image we just rebuilt</p>
</li>
<li><p><strong>container_name: todo-app</strong> gives the container a friendly name</p>
</li>
<li><p><strong>ports: "3000:3000"</strong> exposes the app on port 3000</p>
</li>
<li><p><strong>depends_on: mongodb</strong> ensures the MongoDB container starts before the app</p>
</li>
</ul>
<h3 id="heading-start-all-services">Start All Services</h3>
<p>First, stop any running containers:</p>
<pre><code class="lang-bash">docker compose down
</code></pre>
<p><strong>If you have port 3000 running in your local system, then stop it (that is, free up port 3000).</strong></p>
<p>We were running the server locally before, but now that we’ve built a Docker image, the app runs inside a container, so it’s no longer dependent on the local machine’s environment.</p>
<pre><code class="lang-bash">node server.js
Server → http://localhost:3000
</code></pre>
<p>Now stop it with Ctrl + C in that terminal. That’s it.</p>
<p>Then start everything together:</p>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<p>You should see:</p>
<pre><code class="lang-bash">[+] Running 4/4
 ✔ Network docker_tut_default  Created
 ✔ Container mongo             Started
 ✔ Container mongo-express     Started
 ✔ Container todo-app          Started
</code></pre>
<h3 id="heading-verify-everything-works">Verify Everything Works</h3>
<p>Check that all containers are running:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>Expected output:</p>
<pre><code class="lang-bash">CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                      NAMES
a1b2c3d4e5f6   todo-app:1.0    "node server.js"         30 seconds ago   Up 28 seconds   0.0.0.0:3000-&gt;3000/tcp     todo-app
3d7c797fde1d   mongo-express   "/sbin/tini -- /dock…"   30 seconds ago   Up 29 seconds   0.0.0.0:8081-&gt;8081/tcp     mongo-express
4511ade73c38   mongo           "docker-entrypoint.s…"   30 seconds ago   Up 29 seconds   0.0.0.0:27017-&gt;27017/tcp   mongo
</code></pre>
<h3 id="heading-test-your-application">Test Your Application</h3>
<p>Now let’s verify everything works:</p>
<ol>
<li><p><strong>Access your Todo app</strong>: open your browser and go to <a target="_blank" href="http://localhost:3000">http://localhost:3000</a>.</p>
</li>
<li><p><strong>Create some todos</strong>: add a few todo items to test the functionality. Try uploading images too!</p>
</li>
<li><p><strong>Verify in Mongo Express</strong>: open <a target="_blank" href="http://localhost:8081">http://localhost:8081</a>.</p>
</li>
</ol>
<p>Navigate to the <code>todos</code> database, then the <code>todos</code> collection. You should see all the todos you just created with their complete data.</p>
<h3 id="heading-what-changed-and-why-it-works">What Changed and Why It Works</h3>
<p><strong>Before the fix:</strong></p>
<ul>
<li><p>Connection string used <code>localhost:27017</code> ❌</p>
</li>
<li><p>Container looked for MongoDB on itself</p>
</li>
<li><p>Connection failed with <code>ENOTFOUND</code> error</p>
</li>
</ul>
<p><strong>After the fix:</strong></p>
<ul>
<li><p>Connection string uses <code>mongodb:27017</code> ✅</p>
</li>
<li><p>Docker's internal DNS resolves <code>mongodb</code> to the MongoDB container</p>
</li>
<li><p>Connection succeeds and data flows properly</p>
</li>
</ul>
<p>This is a crucial lesson in Docker networking: containers communicate using service names, not <code>localhost</code>. Docker Compose automatically creates a network where all services can find each other by name.</p>
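<p>To make the difference concrete, here’s a minimal sketch of how the connection string changes. The service name and credentials below match the compose file used in this tutorial; adjust them for your own setup:</p>

```shell
# Inside a container, "localhost" refers to the container itself, so the
# compose service name must be used as the hostname instead.
MONGO_HOST=mongodb   # the service name from docker-compose.yml, resolved by Docker's internal DNS
MONGO_URL="mongodb://admin:password@${MONGO_HOST}:27017"
echo "$MONGO_URL"
```

Any container on the same compose network can reach MongoDB at this URL; only processes on the host machine itself would use <code>localhost:27017</code>.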
<h3 id="heading-how-to-manage-your-containers">How to Manage Your Containers</h3>
<p>Here’s a quick overview of how to manage your containers once you have them up and running. You’ll typically use these common commands:</p>
<p><strong>Stop all services:</strong></p>
<pre><code class="lang-bash">docker compose down
</code></pre>
<p><strong>View logs from your app:</strong></p>
<pre><code class="lang-bash">docker compose logs todo-app
</code></pre>
<p><strong>View logs in real-time:</strong></p>
<pre><code class="lang-bash">docker compose logs -f todo-app
</code></pre>
<p><strong>Rebuild after code changes:</strong></p>
<pre><code class="lang-bash">docker build -t todo-app:1.0 .
docker compose up -d --force-recreate todo-app
</code></pre>
<p>Your application is now fully containerized and production-ready. All three services work together seamlessly, and you can deploy this entire stack anywhere Docker is supported with just the <code>docker-compose.yml</code> file and your built image.</p>
<p>Get the full updated code <a target="_blank" href="https://github.com/Oghenekparobo/docker_tut_js/tree/docker-image">here</a>.</p>
<h2 id="heading-how-to-create-a-private-docker-repository">How to Create a Private Docker Repository</h2>
<p>Now we want to store our custom Docker image in a private container registry (instead of our local machine only). This gives you three major advantages:</p>
<ol>
<li><p><strong>Controlled access</strong> – Only people or servers you explicitly authorize can pull (or push) the image. Your code and dependencies stay private and secure.</p>
</li>
<li><p><strong>Reliable distribution</strong> – Anyone (or any server) with the correct AWS credentials can pull the exact same image from anywhere in the world, eliminating “it works on my machine” problems.</p>
</li>
<li><p><strong>Versioning and lifecycle management</strong> – You can keep multiple tagged versions (1.0, 2.0, latest, and so on) and easily roll back if needed.</p>
</li>
</ol>
<p>The first step is to create a private Docker repository, also known as a container registry. In this case, we will use <a target="_blank" href="https://aws.amazon.com/ecr/"><strong>AWS Elastic Container Registry (ECR)</strong></a>. Amazon ECR is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts securely from anywhere.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763044840247/bbfb5cfa-aa22-4a4f-a5e4-da91cd539063.png" alt="Amazon ECR Landing page" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Once you’re on the home page, just click on the <strong>Create</strong> button. Name the repository the same as your image, todo-app, and then click Create to finalize the setup.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763045060132/b2f8b287-026c-47f6-bd44-9fbad62da0f5.png" alt="creating our repository on AWS ECR" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Don’t worry about the extra options – this isn’t an AWS tutorial.</p>
<p><strong>Note:</strong> In AWS ECR, each image has its own repository, where we store the different tagged versions of that image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763045250855/477e4566-8f47-41f8-93e6-4c3bcb28668f.png" alt="AWS ECR our todo-app empty repository" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now, to push our image into the private repository, we need to do two things. First, we have to log in to the private repo. This is necessary because you’ll need to authenticate yourself before AWS allows you to push anything. In other words, when you push your local image to the repo, you’re basically saying, <em>“Yes, I have access to this registry. Here are my credentials.”</em></p>
<p>In our case, since we’re using AWS ECR, we will authenticate through AWS instead of typing our username and password manually.</p>
<h3 id="heading-step-1-get-your-aws-access-keys">Step 1: Get Your AWS Access Keys</h3>
<p>To locate your access keys in the AWS console, follow these steps:</p>
<ol>
<li><p>Log in to the AWS Console at <a target="_blank" href="https://console.aws.amazon.com">https://console.aws.amazon.com</a></p>
</li>
<li><p>Click your account name (top right corner) and go to Security Credentials</p>
</li>
<li><p>Scroll down to "Access keys" section</p>
</li>
<li><p>If you don't have an access key:</p>
<ul>
<li><p>Click "Create access key"</p>
</li>
<li><p>Select "Command Line Interface (CLI)"</p>
</li>
<li><p>Check the confirmation box and click Next</p>
</li>
<li><p>Add a description (optional) and click "Create access key"</p>
</li>
</ul>
</li>
<li><p><strong>IMPORTANT</strong>: Copy both the <strong>access key ID</strong> (looks like: <code>AKIAIOSFODNN7EXAMPLE</code>) and the <strong>secret access key</strong> (looks like: <code>wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY</code>). <strong>Save these immediately.</strong> The secret key is only shown once. If you lose it, you'll need to create a new key pair.</p>
</li>
</ol>
<p>Alternatively, if someone else manages your AWS account, you’ll need to ask your AWS administrator for:</p>
<ul>
<li><p>An IAM user with ECR permissions</p>
</li>
<li><p>The Access Key ID and Secret Access Key for that user</p>
</li>
</ul>
<h3 id="heading-step-2-check-if-aws-cli-is-installed">Step 2: Check if AWS CLI is installed</h3>
<p>You can do this by running this:</p>
<pre><code class="lang-bash">aws --version
</code></pre>
<h3 id="heading-step-3-configure-aws-cli-with-your-credentials">Step 3: Configure AWS CLI with your credentials</h3>
<p>Here’s how you can do this:</p>
<pre><code class="lang-bash">aws configure
</code></pre>
<p>It will prompt you for 4 things:</p>
<pre><code class="lang-bash">AWS Access Key ID [None]: &lt;paste your Access Key ID here&gt;
AWS Secret Access Key [None]: &lt;paste your Secret Access Key here&gt;
Default region name [None]: eu-north-1 or any region of your choice
Default output format [None]: json
</code></pre>
<p>Just paste your keys when prompted, type <code>eu-north-1</code> or any region of your choice for region, and <code>json</code> for format (or just press Enter for format).</p>
<h3 id="heading-step-4-test-your-aws-configuration">Step 4: Test your AWS configuration</h3>
<p>Now you’ll want to test your config to make sure everything is set up properly:</p>
<pre><code class="lang-bash">aws sts get-caller-identity
</code></pre>
<p>This should show your AWS account details if everything is configured correctly.</p>
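<p>If the credentials are valid, the output looks roughly like this (the IDs below are placeholders, not real values):</p>

```json
{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-iam-user"
}
```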
<h3 id="heading-step-5-login-to-ecr-docker-registry">Step 5: Log In to ECR (Docker Registry)</h3>
<p>Now, log in to ECR:</p>
<pre><code class="lang-bash">aws ecr get-login-password --region eu-north-1 | docker login --username AWS --password-stdin 244836489456.dkr.ecr.eu-north-1.amazonaws.com
</code></pre>
<p>You should see: <strong>"Login Succeeded"</strong>.</p>
<h3 id="heading-understanding-image-naming-in-docker-repositories">Understanding Image Naming in Docker Repositories</h3>
<p>Every Docker image has a name that tells Docker where to find or store it. For example, when you run:</p>
<pre><code class="lang-bash">docker pull mongo:4.2
</code></pre>
<p>Docker is actually pulling from:</p>
<pre><code class="lang-bash">docker.io/library/mongo:4.2
</code></pre>
<p>Here’s what’s happening:</p>
<ul>
<li><p><a target="_blank" href="http://docker.io"><code>docker.io</code></a> is the registry (in this case, Docker Hub)</p>
</li>
<li><p><code>library</code> is the default namespace for official images</p>
</li>
<li><p><code>mongo</code> is the repository name</p>
</li>
<li><p><code>4.2</code> is the image tag</p>
</li>
</ul>
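<p>Putting those pieces together, a fully qualified image name is just the four parts joined in order. A quick sketch, using the values from the <code>mongo:4.2</code> example above:</p>

```shell
# Anatomy of a fully qualified image name: registry/namespace/repository:tag
REGISTRY=docker.io   # where the image lives (Docker Hub here)
NAMESPACE=library    # default namespace for official images
REPO=mongo           # repository name
TAG=4.2              # image tag
FULL_NAME="${REGISTRY}/${NAMESPACE}/${REPO}:${TAG}"
echo "$FULL_NAME"
```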
<p>If you build a local image like <code>todo-app:1.0</code>, that image exists only on your machine. Docker won’t know where to push it unless you include the full registry path.</p>
<p>For AWS ECR, the image name must include your ECR registry URL. For example:</p>
<pre><code class="lang-bash">docker tag todo-app:1.0 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
</code></pre>
<p>Then you can push it with:</p>
<pre><code class="lang-bash">docker push 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
</code></pre>
<p>Without that full path, Docker won’t know <em>which</em> remote repository you’re referring to. That’s why just <code>todo-app:1.0</code> alone won’t work.</p>
<h3 id="heading-step-6-build-tag-and-push-your-image">Step 6: Build, Tag, and Push your image</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763050402411/f4afd0ed-bc2a-4bf2-9bc4-4d373a54bab4.png" alt="aws push commands for the ecr todo-app repo " class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Tag your local image with the full ECR path</span>
docker tag todo-app:1.0 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0

<span class="hljs-comment"># Now push it</span>
docker push 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
</code></pre>
<p>⚠️ <strong>Note:</strong> Be careful when tagging and pushing your image, as every ECR repository URL is tied to a specific AWS account and region.</p>
<p>For example, in this tutorial, we’re using:</p>
<pre><code class="lang-bash">244836489456.dkr.ecr.eu-north-1.amazonaws.com
</code></pre>
<p>But your own ECR URL will be different depending on your AWS account and the region you selected (like <code>us-east-1</code>, <code>ap-south-1</code>, and so on).</p>
<p>So before you run your <code>docker tag</code> or <code>docker push</code> commands, make sure to replace the registry URL and region with your own.</p>
<p>If you don’t, Docker will throw errors like <em>“tag does not exist”</em> or <em>“repository not found.”</em></p>
<p>In short, stay calm, double-check your region, and always confirm the exact ECR URL shown in your AWS console before pushing.</p>
<p>If you successfully ran Step 6, you should see output similar to this in your terminal:</p>
<pre><code class="lang-bash">stephenjohnson@Oghenekparobo docker_tut % docker tag todo-app:1.0 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
stephenjohnson@Oghenekparobo docker_tut % docker push 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
The push refers to repository [244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app]
4f94b5cbe8ab: Pushed
85ba7bf54231: Pushed
4ea46a43fa07: Pushed
dee30873f229: Pushed
e78159dbd370: Pushed
a358a725b813: Pushed
cd8a6003174c: Pushed
abb63e49e652: Pushed
6cc65bdde70e: Pushed
41a4e3939504: Pushed
3520c50ae60e: Pushed
75ba6634710f: Pushed
1.0: digest: sha256:51f07267936fc94d9b677db8a760801e6c5fd4764f4bb2bd7b4dd150c756a39b size: 2842
</code></pre>
<p>This confirms your image was successfully pushed to your private AWS ECR repository.</p>
<p>You can now go to the AWS Management Console and then ECR, and you should see your <code>todo-app</code> image listed there, along with the tag <code>1.0</code>.</p>
<p>At this point, your image is safely stored in AWS ECR and ready to be pulled or deployed anywhere that has access to your repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763066904897/fe44a2e2-1e20-4ff5-88ec-799475b2fe0d.png" alt="your image now deployed on AWS ECR" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-assignment-create-and-push-a-new-version-of-your-app"><strong>Assignment: Create and Push a New Version of Your App</strong></h2>
<p>Now that your first image (<code>todo-app:1.0</code>) has been successfully pushed to AWS ECR, it’s time to simulate a real-world workflow where developers make updates and release new versions of their applications.</p>
<p>Now, you’ll make a small change to your Node.js app, rebuild it, and push the updated version as <code>todo-app:2.0</code>.</p>
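<p>The whole 2.0 release boils down to three commands. Here’s a sketch with the commands shown as comments; the account ID and region are the ones used in this tutorial, so substitute your own:</p>

```shell
ACCOUNT_ID=244836489456   # this tutorial's account; replace with yours
REGION=eu-north-1         # replace with your ECR region
ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/todo-app:2.0"

# After editing the app, rebuild, retag, and push the new version:
#   docker build -t todo-app:2.0 .
#   docker tag todo-app:2.0 "$ECR_URI"
#   docker push "$ECR_URI"
echo "$ECR_URI"
```

Afterwards, both <code>1.0</code> and <code>2.0</code> will sit side by side in the same ECR repository, which is exactly what makes rollbacks easy.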
<h3 id="heading-deploying-our-image">Deploying Our Image</h3>
<p>Now it’s time to deploy our image using Docker Compose.</p>
<p>Up to this point, we have been running our app using a local image:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-string">todo-app:1.0</span>
</code></pre>
<p>But now that your image lives inside AWS ECR, we need to replace that line with the full ECR image URI, because Docker must know exactly where to pull the image from.</p>
<p>Local image:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-string">todo-app:1.0</span>
</code></pre>
<p>Private repository image (ECR):</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-number">244836489456.</span><span class="hljs-string">dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0</span>
</code></pre>
<p>Docker cannot magically guess where “todo-app:1.0” is stored. If you don’t include the full registry URL, Docker will assume it’s looking at your <strong>local machine</strong>, not AWS.</p>
<p>Here is the clean, fixed, properly formatted docker-compose file that pulls your app from ECR:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3.8"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">my-app:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-number">244836489456.</span><span class="hljs-string">dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">my-app</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"3000:3000"</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>

  <span class="hljs-attr">mongodb:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"27017:27017"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_USERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_PASSWORD:</span> <span class="hljs-string">password</span>

  <span class="hljs-attr">mongo-express:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8081:8081"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINUSERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINPASSWORD:</span> <span class="hljs-string">password</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_SERVER:</span> <span class="hljs-string">mongodb</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>
</code></pre>
<p><strong>Why “my-app” instead of “todo-app”?</strong></p>
<p>In this case, we renamed it to avoid confusion between:</p>
<ul>
<li><p>our <strong>local</strong> “todo-app:1.0”</p>
</li>
<li><p>our <strong>ECR</strong> “todo-app:1.0”</p>
</li>
</ul>
<p>This keeps things clean, but you can rename it back if you want.</p>
<h3 id="heading-why-must-we-use-the-full-image-url-for-ecr">Why Must We Use the Full Image URL for ECR?</h3>
<p>Other containers like mongo and mongo-express work like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
<span class="hljs-attr">image:</span> <span class="hljs-string">mongo-express</span>
</code></pre>
<p>Because Docker knows these are on <strong>Docker Hub</strong>.</p>
<p>But for a private repo like AWS ECR, Docker has no idea where “todo-app” is unless you give the full path:</p>
<pre><code class="lang-bash">AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/repository_name:tag
</code></pre>
<p>This tells Docker:</p>
<ul>
<li><p>which account</p>
</li>
<li><p>which region</p>
</li>
<li><p>which repo</p>
</li>
<li><p>which version</p>
</li>
</ul>
<p>Without this URL, Docker can’t pull the image.</p>
<p>Every time we want to <em>pull</em> from a private ECR repo, including using Docker Compose, we must be logged in.</p>
<p>Run this:</p>
<pre><code class="lang-bash">aws ecr get-login-password --region eu-north-1 | docker login --username AWS --password-stdin 244836489456.dkr.ecr.eu-north-1.amazonaws.com
</code></pre>
<p>If you’re not logged in, Docker Compose will throw:</p>
<p>❌ <code>pull access denied</code><br>❌ <code>repository does not exist</code><br>❌ <code>no basic auth credentials</code></p>
<h3 id="heading-deploy-your-app-using-docker-compose">Deploy Your App Using Docker Compose</h3>
<p>Before deploying, it’s best practice to stop and remove any existing containers to avoid port conflicts or orphaned containers:</p>
<pre><code class="lang-bash"># Stop all running containers in this project
docker compose down --remove-orphans

# Optional: verify nothing is running
docker ps
</code></pre>
<p>This ensures that port 3000 and other mapped ports are free, preventing errors when starting new containers.</p>
<p>Once the environment is clean, deploy your stack:</p>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<p>Docker Compose will:</p>
<ol>
<li><p><strong>Connect to AWS ECR</strong> – Authenticate and pull the <code>todo-app:1.0</code> image from your private repository.</p>
</li>
<li><p><strong>Start MongoDB</strong> – Launch the database container with your configured credentials.</p>
</li>
<li><p><strong>Start Mongo Express</strong> – Launch the web-based MongoDB admin interface.</p>
</li>
<li><p><strong>Start your Node.js app</strong> – Launch the <code>my-app</code> container, linked to MongoDB.</p>
</li>
</ol>
<p>Check the running containers:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>You should see:</p>
<ul>
<li><p><code>mongo</code></p>
</li>
<li><p><code>mongo-express</code></p>
</li>
<li><p><code>my-app</code></p>
</li>
</ul>
<p>If <code>my-app</code> fails to start, it’s usually because <strong>port 3000 is already in use</strong>. Ensure it’s free by stopping any process using it:</p>
<pre><code class="lang-bash">lsof -i :3000
kill -9 &lt;PID&gt;  # if a process is using it
</code></pre>
<p>Then rerun:</p>
<pre><code class="lang-bash">docker compose up -d
</code></pre>
<p>To access your app:</p>
<ul>
<li><p>Node.js app: <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a></p>
</li>
<li><p>Mongo Express: <a target="_blank" href="http://localhost:8081"><code>http://localhost:8081</code></a></p>
</li>
</ul>
<p>This workflow ensures a clean start and avoids common port or container conflicts.</p>
<h3 id="heading-sharing-our-private-docker-image">Sharing our Private Docker Image</h3>
<p>Once your Node.js app is pushed to AWS ECR, it’s safely stored in your private repository. But what if another developer, team member, or server needs to run that same image? Since it’s private, Docker cannot pull it automatically like public images (e.g., <code>mongo</code> or <code>nginx</code>). They need <strong>authenticated access</strong>.</p>
<p>Here’s how they can get and use your image:</p>
<h4 id="heading-1-grant-iam-access">1. Grant IAM Access</h4>
<p>Your collaborator needs an <strong>AWS IAM user or role</strong> with permissions for ECR. At minimum, the policy should allow:</p>
<ul>
<li><p><code>ecr:GetAuthorizationToken</code></p>
</li>
<li><p><code>ecr:BatchCheckLayerAvailability</code></p>
</li>
<li><p><code>ecr:GetDownloadUrlForLayer</code></p>
</li>
<li><p><code>ecr:BatchGetImage</code></p>
</li>
</ul>
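<p>Those four permissions map to an IAM policy like the sketch below. This is a minimal read-only pull policy, not an official AWS template; the repository ARN uses this tutorial’s account and region, so swap in your own (note that <code>ecr:GetAuthorizationToken</code> is not repository-scoped and must use <code>"Resource": "*"</code>):</p>

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage"
            ],
            "Resource": "arn:aws:ecr:eu-north-1:244836489456:repository/todo-app"
        }
    ]
}
```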
<p>You can create a dedicated IAM user for this and provide them an Access Key ID and a Secret Access Key.</p>
<h4 id="heading-2-install-and-configure-aws-cli">2. Install and Configure AWS CLI</h4>
<p>The collaborator must have the AWS CLI installed. Then they configure it with their credentials:</p>
<pre><code class="lang-bash">aws configure
</code></pre>
<p>They enter:</p>
<ul>
<li><p>Access Key ID</p>
</li>
<li><p>Secret Access Key</p>
</li>
<li><p>Default region (the same region where the ECR repo exists, for example, <code>eu-north-1</code>)</p>
</li>
<li><p>Default output format (usually <code>json</code>)</p>
</li>
</ul>
<h4 id="heading-3-authenticate-docker-with-ecr">3. Authenticate Docker with ECR</h4>
<p>Before pulling the image, Docker must authenticate using the AWS credentials:</p>
<pre><code class="lang-bash">aws ecr get-login-password --region eu-north-1 | docker login --username AWS --password-stdin 244836489456.dkr.ecr.eu-north-1.amazonaws.com
</code></pre>
<p>If successful, Docker will respond with:</p>
<pre><code class="lang-bash">Login Succeeded
</code></pre>
<h4 id="heading-4-pull-the-image">4. Pull the Image</h4>
<p>Now the collaborator can pull the image using the full ECR URI, which includes your AWS account, region, repository name, and tag:</p>
<pre><code class="lang-bash">docker pull 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
</code></pre>
<h4 id="heading-5-run-the-container">5. Run the Container</h4>
<p>After pulling, they can run the container locally:</p>
<pre><code class="lang-bash">docker run -p 3000:3000 244836489456.dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0
</code></pre>
<p>Or include it in a Docker Compose file, replacing the <code>image:</code> field with the full ECR URI.</p>
<ul>
<li><p>Public images like <code>mongo</code> don’t require this because Docker Hub is open. Private ECR images require explicit authentication.</p>
</li>
<li><p>Every pull from a private repository requires an active login. Docker cannot guess credentials.</p>
</li>
<li><p>Using the full image URI ensures Docker knows exactly where to fetch the image.</p>
</li>
</ul>
<p>This setup allows your team to share, deploy, or run your application anywhere (local machines, staging servers, or production) while keeping your repository private and secure.</p>
<h2 id="heading-docker-volumes">Docker Volumes</h2>
<p>When running containers like MongoDB, all data created inside a container is ephemeral. If the container stops or is removed, all data inside it disappears. This is fine for testing, but not suitable for production.</p>
<p>To solve this, Docker provides <strong>volumes</strong>, which allow containers to store data outside the container, either on the host machine or in Docker-managed storage, so it survives container restarts, rebuilds, or removals.</p>
<h3 id="heading-how-docker-volumes-work">How Docker Volumes Work</h3>
<p>Think of Docker volumes as persistent folders for containers:</p>
<ul>
<li><p>Data written inside a volume remains safe, even if the container is removed.</p>
</li>
<li><p>Containers can read/write to these volumes.</p>
</li>
<li><p>Volumes are essential for databases, logs, file uploads, or any persistent data your application needs.</p>
</li>
</ul>
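<p>You can see this in action with the <code>docker volume</code> subcommands. The commands below are shown as comments so you can run them one at a time against a live Docker daemon; the mount spec at the end is the piece you’d put in a compose file:</p>

```shell
# Managing named volumes directly:
#   docker volume create mongo-data     # create a named volume
#   docker volume ls                    # list all volumes
#   docker volume inspect mongo-data    # show where Docker keeps the data on the host
#   docker volume rm mongo-data         # remove it (the data is gone for good)

# A mount spec pairs a volume name with a path inside the container:
MOUNT_SPEC="mongo-data:/data/db"
echo "$MOUNT_SPEC"
```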
<h3 id="heading-types-of-docker-volumes">Types of Docker Volumes</h3>
<p>Docker has three main types of volumes:</p>
<h4 id="heading-1-named-volumes">1. Named Volumes</h4>
<p>Named volumes are user-defined volumes with a clear name, fully managed by Docker. You’d typically use them for production databases and for any persistent data that containers need to share.</p>
<p>Here’s an example:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">mongo-data:</span>
</code></pre>
<p>And in a service:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">mongo-data:/data/db</span>
</code></pre>
<h4 id="heading-2-bind-mounts">2. Bind Mounts</h4>
<p>Bind mounts map a folder from your <strong>host machine</strong> into the container. They’re often used for development, live-syncing files, logs, and uploaded files.</p>
<p>Here’s an example:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">./uploads:/usr/src/app/uploads</span>
</code></pre>
<h4 id="heading-3-anonymous-volumes">3. Anonymous Volumes</h4>
<p>These are volumes without a name; Docker just assigns them a random one. You’d use them for temporary data during testing (they’re not commonly used in production).</p>
<p>Here’s an example:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">/data/tmp</span>
</code></pre>
<h3 id="heading-example-docker-compose-file-using-volumes">Example Docker Compose File Using Volumes</h3>
<p>Here’s a full <code>docker-compose.yml</code> file using the most common volume types for a Node.js + MongoDB + Mongo Express stack:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3.8"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">my-app:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-number">244836489456.</span><span class="hljs-string">dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">my-app</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"3000:3000"</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./uploads:/usr/src/app/uploads</span>  <span class="hljs-comment"># bind mount for file uploads</span>

  <span class="hljs-attr">mongodb:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"27017:27017"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_USERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">MONGO_INITDB_ROOT_PASSWORD:</span> <span class="hljs-string">password</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongo-data:/data/db</span>  <span class="hljs-comment"># named volume for persistent database storage</span>

  <span class="hljs-attr">mongo-express:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">mongo-express</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8081:8081"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINUSERNAME:</span> <span class="hljs-string">admin</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_ADMINPASSWORD:</span> <span class="hljs-string">password</span>
      <span class="hljs-attr">ME_CONFIG_MONGODB_SERVER:</span> <span class="hljs-string">mongodb</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mongodb</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">mongo-data:</span>  <span class="hljs-comment"># named volume definition</span>
</code></pre>
<p>How this code works:</p>
<ol>
<li><p><strong>MongoDB Volume</strong> (<code>mongo-data</code>): This is a named volume. It stores all database files under <code>/data/db</code> inside the container. It survives container restarts, removals, or rebuilds.</p>
</li>
<li><p><strong>Node.js Uploads</strong> (<code>./uploads</code>): This is a <strong>bind mount</strong>. It maps the <code>uploads</code> folder on your host to <code>/usr/src/app/uploads</code> inside the container. Any uploaded files are immediately visible on your host.</p>
</li>
<li><p><strong>Anonymous Volume</strong>: Not shown in this file, since anonymous volumes are rarely used in production. Docker creates one automatically whenever a volume is defined without a name, giving the container temporary data storage.</p>
</li>
</ol>
<h4 id="heading-visual-concept-simplified">Visual Concept (Simplified):</h4>
<pre><code class="lang-yaml"><span class="hljs-string">Host</span> <span class="hljs-string">Machine</span>
<span class="hljs-string">├─</span> <span class="hljs-string">/project/uploads</span>  <span class="hljs-string">←</span> <span class="hljs-string">bind</span> <span class="hljs-string">mount,</span> <span class="hljs-string">synced</span> <span class="hljs-string">with</span> <span class="hljs-string">container</span>
<span class="hljs-string">├─</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Volumes</span>
<span class="hljs-string">│</span>  <span class="hljs-string">└─</span> <span class="hljs-string">mongo-data</span>    <span class="hljs-string">←</span> <span class="hljs-string">named</span> <span class="hljs-string">volume,</span> <span class="hljs-string">persistent</span> <span class="hljs-string">MongoDB</span> <span class="hljs-string">data</span>

<span class="hljs-string">Containers</span>
<span class="hljs-string">├─</span> <span class="hljs-string">my-app</span>
<span class="hljs-string">│</span>  <span class="hljs-string">└─</span> <span class="hljs-string">/usr/src/app/uploads</span>  <span class="hljs-string">←</span> <span class="hljs-string">sees</span> <span class="hljs-string">host</span> <span class="hljs-string">uploads</span> <span class="hljs-string">folder</span>
<span class="hljs-string">├─</span> <span class="hljs-string">mongodb</span>
<span class="hljs-string">│</span>  <span class="hljs-string">└─</span> <span class="hljs-string">/data/db</span>             <span class="hljs-string">←</span> <span class="hljs-string">uses</span> <span class="hljs-string">named</span> <span class="hljs-string">volume</span> <span class="hljs-string">mongo-data</span>
<span class="hljs-string">├─</span> <span class="hljs-string">mongo-express</span>
</code></pre>
<h3 id="heading-takeaways">Takeaways</h3>
<ul>
<li><p>Always use volumes for data you care about.</p>
</li>
<li><p>Named volumes are best for databases in production.</p>
</li>
<li><p>Bind mounts are best for development and live syncing.</p>
</li>
<li><p>Anonymous volumes are rarely needed outside testing.</p>
</li>
<li><p>Volumes separate container lifecycle from data lifecycle, which is a cornerstone of Docker best practices.</p>
</li>
</ul>
<h3 id="heading-start-your-application">Start Your Application</h3>
<p>Once your Docker Compose is configured with volumes, the next step is to start your application and make sure the volumes are working correctly. Here’s a simple step-by-step guide.</p>
<p><strong>1. Start the Containers</strong></p>
<p>Run:</p>
<pre><code class="lang-shell">docker-compose up -d
</code></pre>
<p>The <code>-d</code> flag runs the containers in detached mode (in the background).</p>
<p>Docker will:</p>
<ul>
<li><p>Pull your app image from AWS ECR (if you’re logged in)</p>
</li>
<li><p>Start MongoDB with the named volume</p>
</li>
<li><p>Start Mongo Express</p>
</li>
<li><p>Start your Node.js app</p>
</li>
</ul>
<p><strong>2. Check Running Containers</strong></p>
<p>To see if everything started correctly:</p>
<pre><code class="lang-shell">docker ps
</code></pre>
<p>You should see something like:</p>
<pre><code class="lang-yaml"><span class="hljs-string">CONTAINER</span> <span class="hljs-string">ID</span>   <span class="hljs-string">IMAGE</span>                                               <span class="hljs-string">STATUS</span>          <span class="hljs-string">PORTS</span>
<span class="hljs-string">2a2e120cc912</span>   <span class="hljs-number">244836489456.</span><span class="hljs-string">dkr.ecr.eu-north-1.amazonaws.com/todo-app:1.0</span>   <span class="hljs-string">Up</span> <span class="hljs-string">5s</span>    <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:3000-&gt;3000/tcp</span>
<span class="hljs-string">f4d5a1ab1234</span>   <span class="hljs-string">mongo</span>                                               <span class="hljs-string">Up</span> <span class="hljs-string">5s</span>          <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:27017-&gt;27017/tcp</span>
<span class="hljs-string">c3d5b2bc2345</span>   <span class="hljs-string">mongo-express</span>                                      <span class="hljs-string">Up</span> <span class="hljs-string">5s</span>          <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:8081-&gt;8081/tcp</span>
</code></pre>
<p><strong>3. Verify Volumes</strong></p>
<p>List Docker volumes:</p>
<pre><code class="lang-shell">docker volume ls
</code></pre>
<p>You should see your named volume. Note that Docker Compose prefixes it with the project name, so it may appear as, for example, <code>docker_tut_mongo-data</code>.</p>
<p>Inspect the volume:</p>
<pre><code class="lang-shell">docker volume inspect docker_tut_mongo-data
</code></pre>
<p>This will show where Docker stores your MongoDB data on the host, for example:</p>
<pre><code class="lang-json">[
    {
        "Name": "docker_tut_mongo-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/docker_tut_mongo-data/_data",
        "Labels": {},
        "Scope": "local"
    }
]
</code></pre>
<p><strong>Anything stored in</strong> <code>/data/db</code> <strong>inside MongoDB is actually saved here on your host.</strong></p>
<p><strong>4. Test Data Persistence</strong></p>
<ol>
<li><p>Connect to MongoDB or your app and add some data.</p>
</li>
<li><p>Stop and remove the container:</p>
</li>
</ol>
<pre><code class="lang-shell">docker-compose down
</code></pre>
<ol start="3">
<li>Restart the app:</li>
</ol>
<pre><code class="lang-shell">docker-compose up -d
</code></pre>
<ol start="4">
<li>Check your data again.</li>
</ol>
<ul>
<li><p>Because MongoDB uses the named volume, your data is still there.</p>
</li>
<li><p>This proves the volume is persistent.</p>
</li>
</ul>
<p><strong>5. Optional: Check Node.js Uploads (Bind Mount)</strong></p>
<ul>
<li><p>If you uploaded a file through your app, check your project folder <code>./uploads</code>.</p>
</li>
<li><p>You should see the file appear on your host machine because bind mounts sync host and container directories.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Well done, you have made it to the end of this comprehensive Docker tutorial. From unraveling the basics of containers and images, to networking, Docker Compose, volumes, and even deploying to a private AWS ECR repository, you've built a fully containerized Node.js application stack that's production-ready and scalable. These are hands-on skills that will transform how you develop, collaborate, and deploy applications in real-world scenarios.</p>
<p>Thank you for sticking with it. Docker can feel overwhelming at first – those long commands, networking quirks, and persistent data challenges aren't trivial. But getting to this point? It means you've conquered a steep learning curve and reached new heights in your development journey. You're now equipped to eliminate "it works on my machine" headaches, streamline CI/CD pipelines, and level up as a backend or full-stack pro.</p>
<p>Keep experimenting: tweak your todo-app, try multi-stage builds in your Dockerfile, or explore orchestration tools like Kubernetes next. The Docker ecosystem is vast, but with this foundation, you're ready to dive deeper. If you hit snags or have questions, the communities on Docker Hub, Stack Overflow, and GitHub are there to help.</p>
<p>You can find the final code here: <a target="_blank" href="https://github.com/Oghenekparobo/docker_tut_js/tree/final">https://github.com/Oghenekparobo/docker_tut_js/tree/final</a></p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Implement Multi-Threading in Node.js With Worker Threads [Full Handbook] ]]>
                </title>
                <description>
                    <![CDATA[ JavaScript is a single-threaded programming language, and Node.js is the runtime environment for JavaScript. This means that JavaScript essentially runs within Node.js, and all operations are handled through a single thread. But when we perform tasks... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-implement-multi-threading-in-nodejs-with-worker-threads-full-handbook/</link>
                <guid isPermaLink="false">68fba9c10656b6400beb0762</guid>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ multithreading ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Worker Thread ]]>
                    </category>
                
                    <category>
                        <![CDATA[ workers ]]>
                    </category>
                
                    <category>
                        <![CDATA[ handbook ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Sumit Saha ]]>
                </dc:creator>
                <pubDate>Fri, 24 Oct 2025 16:30:57 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761323431527/d74eb2ba-edaa-4d19-a041-364e99a705ba.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>JavaScript is a single-threaded programming language, and Node.js is the runtime environment for JavaScript. This means that JavaScript essentially runs within Node.js, and all operations are handled through a single thread.</p>
<p>But when we perform tasks that require heavy processing, Node.js's performance can start to decline. Many people mistakenly think that Node.js isn’t good or that JavaScript is flawed. But there’s actually a solution. JavaScript can also be used effectively with multi-threading.</p>
<p>In this article, we will focus on the backend: specifically, how to implement multi-threading on the server side using Node.js.</p>
<h2 id="heading-heres-what-well-cover"><strong>Here’s What We’ll Cover</strong></h2>
<ol>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-project-setup-with-expressjs">Project Setup with ExpressJS</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-1-create-a-new-project-folder">1. Create a New Project Folder</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-2-initialize-a-nodejs-project">2. Initialize a Node.js Project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-3-install-expressjs">3. Install Express.js</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-4-optional-install-nodemon-for-development">4. Optional: Install Nodemon for Development</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-5-create-the-main-server-file">5. Create the Main Server File</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-6-run-the-project">6. Run the Project</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-the-problem">Understanding the Problem</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-observing-the-behavior">Observing the Behavior</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-why-does-this-happen">Why Does This Happen?</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-understanding-javascript-execution">Understanding JavaScript Execution</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-how-libuv-works">How Libuv Works</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-asynchronous-nature-of-nodejs">Asynchronous Nature of Node.js</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-the-cpu-intensive-problem">The CPU-Intensive Problem</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-implement-worker-threads">How to Implement Worker Threads</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-communication-between-threads">Communication Between Threads</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-worker-communication">Setting Up Worker Communication</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-optimize-with-multiple-cores">How to Optimize with Multiple Cores</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-checking-how-many-cores-your-system-has">Checking How Many Cores Your System Has</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-utilizing-multiple-cores-for-faster-execution">Utilizing Multiple Cores for Faster Execution</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-implement-multi-core-optimization">How to Implement Multi-Core Optimization</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-understanding-the-code-line-by-line">Understanding the Code Line by Line</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-thread-planning-and-configuration">Thread Planning and Configuration</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-dividing-work-across-multiple-workers">Dividing Work Across Multiple Workers</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-handling-complex-tasks">Handling Complex Tasks</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-performance-comparison">Performance Comparison</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-testing-results">Testing Results</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-performance-metrics">Performance Metrics</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-key-takeaways">Key Takeaways</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-summary">Summary</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-the-multi-core-challenge">The Multi-Core Challenge</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-discovering-available-cores">Discovering Available Cores</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-asynchronous-worker-creation">Asynchronous Worker Creation</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-multi-threaded-implementation-strategy">Multi-Threaded Implementation Strategy</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-key-concepts-recap">Key Concepts Recap</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-we-learned">What We Learned</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-final-words">Final Words</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-additional-resources">Additional Resources</a></p>
</li>
</ol>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>To follow along and get the most out of this guide, you should have:</p>
<ol>
<li><p>Basic JavaScript (ES6-style) knowledge</p>
</li>
<li><p>Familiarity with Node.js fundamentals</p>
</li>
<li><p>Web-server basics using Express (or similar)</p>
</li>
<li><p>Understanding of blocking vs non-blocking operations in Node.js / JavaScript</p>
</li>
<li><p>Comfort with asynchronous code (Promises / async/await) and event-based handling</p>
</li>
<li><p>Setting up a simple development environment with Node.js</p>
</li>
</ol>
<p>I’ve also created a video to go along with this article. If you’re the type who likes to learn from video as well as text, you can check it out here:</p>
<div class="embed-wrapper">
        <iframe width="560" height="315" src="https://www.youtube.com/embed/JTl6tQ4bqYA" style="aspect-ratio: 16 / 9; width: 100%; height: auto;" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" loading="lazy"></iframe></div>
<h2 id="heading-project-setup-with-expressjs">Project Setup with ExpressJS</h2>
<p>In this section, we will go through a detailed, beginner-friendly setup for a Node.js project using <a target="_blank" href="https://expressjs.com/">Express</a>. This guide explains every step, so even if you are new to Node.js, you can follow along easily.</p>
<h3 id="heading-1-create-a-new-project-folder">1. Create a New Project Folder</h3>
<p>Start by creating a new folder for your project. Open your terminal or command prompt and run:</p>
<pre><code class="lang-powershell">mkdir node<span class="hljs-literal">-worker</span><span class="hljs-literal">-threads</span>
<span class="hljs-built_in">cd</span> node<span class="hljs-literal">-worker</span><span class="hljs-literal">-threads</span>
</code></pre>
<ul>
<li><p><code>mkdir node-worker-threads</code>: This command creates a new folder named <code>node-worker-threads</code>.</p>
</li>
<li><p><code>cd node-worker-threads</code>: Moves you into the newly created folder where all project files will be stored.</p>
</li>
</ul>
<p>Think of this folder as the home for your project.</p>
<h3 id="heading-2-initialize-a-nodejs-project">2. Initialize a Node.js Project</h3>
<p>Every Node.js project needs a <code>package.json</code> file to manage dependencies and scripts. Run:</p>
<pre><code class="lang-powershell">npm init <span class="hljs-literal">-y</span>
</code></pre>
<ul>
<li><p><code>npm init</code> creates a <code>package.json</code> file.</p>
</li>
<li><p>The <code>-y</code> flag automatically fills in default values, saving you time.</p>
</li>
</ul>
<p>After this, you will see a <code>package.json</code> file in your project folder. This file keeps track of all packages and configurations.</p>
<h3 id="heading-3-install-expressjs">3. Install Express.js</h3>
<p>Express is a lightweight web framework for Node.js. Install it with:</p>
<pre><code class="lang-powershell">npm install express
</code></pre>
<p>This adds Express to your project and allows you to create routes, handle requests, and send responses easily.</p>
<h3 id="heading-4-optional-install-nodemon-for-development">4. Optional: Install Nodemon for Development</h3>
<p>Nodemon automatically restarts your server whenever you make changes. This is very useful during development.</p>
<pre><code class="lang-powershell">npm install <span class="hljs-literal">-D</span> nodemon
</code></pre>
<p>The <code>-D</code> flag installs Nodemon as a development dependency.</p>
<p>Next, update the <code>package.json</code> scripts:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"dev"</span>: <span class="hljs-string">"nodemon index.js"</span>
  }
}
</code></pre>
<p>Now you can start the server with:</p>
<pre><code class="lang-powershell">npm run dev
</code></pre>
<p>This will automatically restart your server whenever you make code changes.</p>
<h3 id="heading-5-create-the-main-server-file">5. Create the Main Server File</h3>
<p>Create a file called <code>index.js</code>. This will be the main entry point of your application:</p>
<pre><code class="lang-powershell">touch index.js
</code></pre>
<p>Open <code>index.js</code> and add the following code:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// index.js </span>

<span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">3000</span>;

<span class="hljs-comment">// Non-blocking route</span>
app.get(<span class="hljs-string">"/non-blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">"This page is non-blocking."</span>);
});

<span class="hljs-comment">// Blocking route: CPU-intensive loop on the main thread</span>
app.get(<span class="hljs-string">"/blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-keyword">let</span> result = <span class="hljs-number">0</span>;
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">1000000000</span>; i++) {
    result++;
  }
  res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">`Result is <span class="hljs-subst">${result}</span>`</span>);
});

<span class="hljs-comment">// Start the server</span>
app.listen(port, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`App listening on port <span class="hljs-subst">${port}</span>`</span>);
});
</code></pre>
<p>Here’s what’s going on in this code:</p>
<ul>
<li><p><code>express</code>: To create the server.</p>
</li>
<li><p><code>/non-blocking</code> route: Sends a quick response immediately.</p>
</li>
<li><p><code>/blocking</code> route: Runs a CPU-intensive loop directly on the main thread (we’ll offload it to a Worker thread later).</p>
</li>
<li><p><code>app.listen</code>: Starts the server on port 3000 (or the environment port).</p>
</li>
</ul>
<p>Don’t worry if all of this isn’t perfectly clear at the moment. We’ll explore everything in greater detail as we move forward. Get ready, because we’re going to break down each part step by step in the simplest way possible.</p>
<h3 id="heading-6-run-the-project">6. Run the Project</h3>
<p>Start the server using Nodemon:</p>
<pre><code class="lang-powershell">npm run dev
</code></pre>
<p>Or without Nodemon:</p>
<pre><code class="lang-powershell">node index.js
</code></pre>
<p>Visit these URLs in your browser:</p>
<ul>
<li><p><a target="_blank" href="http://localhost:3000/non-blocking"><code>http://localhost:3000/non-blocking</code></a> displays a simple non-blocking message.</p>
</li>
<li><p><a target="_blank" href="http://localhost:3000/blocking"><code>http://localhost:3000/blocking</code></a> runs a CPU-intensive task that blocks the event loop (the problem we’ll solve with Worker Threads).</p>
</li>
</ul>
<p><strong>Congratulations!</strong> Your Node.js project with Express is fully set up and ready for development.</p>
<h2 id="heading-understanding-the-problem">Understanding the Problem</h2>
<p>We have already set up a basic Express.js application, which is essentially a Node.js app. In this application, we have defined <strong>two routes</strong>:</p>
<ol>
<li><p><code>/non-blocking</code></p>
</li>
<li><p><code>/blocking</code></p>
</li>
</ol>
<p>The <code>/non-blocking</code> route is straightforward: it simply returns a text response saying, "This page is non-blocking."</p>
<p>On the other hand, the <code>/blocking</code> route contains a heavy computation. It runs a loop one billion times, incrementing a counter on each pass, and then returns the result.</p>
<p>Finally, the application is set to run on port 3000 using <code>app.listen</code>.</p>
<h3 id="heading-observing-the-behavior">Observing the Behavior</h3>
<p>If you open your browser and visit the <a target="_blank" href="http://localhost:3000/non-blocking"><code>http://localhost:3000/non-blocking</code></a> URL, it works perfectly fine and responds immediately.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079165626/6d6a3c24-8095-4243-83db-8a44865e5af9.png" alt="Non-blocking Browser" class="image--center mx-auto" width="1920" height="1080" loading="lazy"></p>
<p>But if you visit the <a target="_blank" href="http://localhost:3000/blocking"><code>http://localhost:3000/blocking</code></a> URL, the page keeps loading and doesn’t respond right away.</p>
<p>What's even more interesting is that if you try to access <a target="_blank" href="http://localhost:3000/non-blocking"><code>http://localhost:3000/non-blocking</code></a> <strong>while</strong> <code>/blocking</code> is still running, it also becomes unresponsive.</p>
<p>This demonstrates a key concept: while the <code>/blocking</code> route is executing, even the <code>/non-blocking</code> route cannot respond. In other words, the heavy computation in <code>/blocking</code> <strong>blocks the Node.js event loop</strong>, affecting all other routes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079338040/ecaf458e-d91a-4752-863e-71ac34081949.gif" alt="Blocking Browser" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
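<p>You can reproduce this blocking behavior with a few lines of plain Node.js, no Express required. The sketch below (my own illustration, not the tutorial’s code) schedules a timer for 10&nbsp;ms and then hogs the main thread with a busy loop. The timer can only fire after the loop finishes, just like <code>/non-blocking</code> could only respond after <code>/blocking</code> was done:</p>

```javascript
// Minimal sketch of event-loop blocking without Express: a busy loop on
// the main thread delays a timer that was due after only 10 ms.
const start = Date.now();

setTimeout(() => {
  // This cannot run until the loop below releases the main thread.
  console.log(`Timer fired after ${Date.now() - start} ms (asked for 10 ms)`);
}, 10);

// Simulate the /blocking route: hog the main thread for ~200 ms.
while (Date.now() - start < 200) {
  // busy-wait
}
console.log("Busy loop done; only now can the event loop run the timer.");
```

Run it with <code>node</code> and you’ll see the timer report far more than 10&nbsp;ms, because the callback had to wait for the loop.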
<h3 id="heading-why-does-this-happen">Why Does This Happen?</h3>
<p>The reason lies in how Node.js works. Node.js is essentially a JavaScript runtime, and as we know, JavaScript is a <strong>single-threaded</strong> programming language. Naturally, Node.js also runs on a single thread by default.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079437967/3330b62c-54c2-41a9-bccc-8f962e71287c.gif" alt="Single Threaded Programming" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<p>So, where does the problem arise? When you execute the <code>/blocking</code> route, all the JavaScript code runs on the <strong>main thread</strong>. During this time, the main thread is completely busy or blocked. As a result, if another user tries to access the <code>/non-blocking</code> route, they won't get any response because the main thread is still occupied with the previous task.</p>
<p>This is why many people mistakenly think that JavaScript is weak because it's single-threaded. But this perception is not entirely accurate. With the right approach and techniques, JavaScript <strong>can also be used in a multi-threaded way</strong>, allowing you to handle heavy computations without blocking other operations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079547228/0799f826-3715-4664-b94b-e8f2e80afd04.gif" alt="Weak JS" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h2 id="heading-understanding-javascript-execution"><strong>Understanding JavaScript Execution</strong></h2>
<p>Let's think about the main thread where JavaScript primarily runs. You might ask, where exactly does JavaScript execute? JavaScript runs inside the <strong>JavaScript engine</strong>, which is responsible for converting JavaScript code into machine code.</p>
<p>In the case of Node.js, it runs on the <strong>V8 engine</strong>, which is the same engine used in Google Chrome. The V8 engine operates entirely on a single thread, meaning all JavaScript code executes within just one main thread.</p>
<p>Now, you might wonder: are there any threads other than the main thread? The answer is yes. Apart from the main thread, there are additional threads used to handle different types of tasks. The management and implementation of these threads are handled by a special library called <strong>Libuv</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079628895/02bc21f7-1d49-4952-9c17-3f57cbbe3488.gif" alt="Understanding JavaScript Execution" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h3 id="heading-how-libuv-works">How Libuv Works</h3>
<p>Libuv is designed to work alongside the V8 Engine. While the V8 Engine executes JavaScript code on the main thread, additional threads are used to handle different types of tasks. For example, operations like database queries, network requests, or file read/write tasks are handled by these extra threads, and the Libuv library manages and coordinates them.</p>
<p>Whenever we perform such tasks, they are actually executed on these extra threads outside the main thread. Libuv instructs the V8 Engine on how to handle these tasks efficiently. These tasks are commonly referred to as <strong>Input/Output operations</strong>, or I/O operations for short. In other words, when performing file read/write, database queries, or network requests, these I/O operations are executed on separate threads without blocking the Main Thread.</p>
<p>But if we have tasks like a large for-loop in our earlier example, or any operation that primarily requires <strong>CPU processing</strong>, they do not fall under I/O operations. In such cases, the task must be executed on the main thread, which inevitably blocks it until the task is completed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079720366/9f01e68c-ad5f-491a-9e68-d91a4ec5f3fb.gif" alt="How libuv works" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h3 id="heading-asynchronous-nature-of-nodejs"><strong>Asynchronous Nature of Node.js</strong></h3>
<p>Consider a scenario where a client sends a request to the main thread, and this request requires a database query to be executed.</p>
<p>When the user sends such a request, the database query is sent to the database, but importantly, it <strong>does not block</strong> the main thread. Instead, Libuv handles the database query on a <strong>separate thread</strong>, keeping the Main Thread free to handle other tasks.</p>
<p>In this situation, if another user sends a request that does <strong>not</strong> involve any database query or I/O operation, it can be executed immediately on the Main Thread. As a result, this second user receives a response without any delay.</p>
<p>Once the database query running on the separate thread completes, the result is returned to the Main Thread, which then sends it back as a response to the original user. This approach ensures that users receive their output efficiently, and the main thread remains available for other tasks.</p>
<p>This entire process represents the <strong>asynchronous nature</strong> of JavaScript and Node.js. Tasks are not executed synchronously – instead, they run asynchronously. One user's request can be processed on a separate thread while other users continue to interact with the server seamlessly. This is how Node.js maintains high performance and responsiveness even under multiple simultaneous requests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079806477/dd7b58e2-48ab-4994-ad09-41fad2f0b78b.gif" alt="Asynchronous Nature of Node.js" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h3 id="heading-the-cpu-intensive-problem"><strong>The CPU-Intensive Problem</strong></h3>
<p>So, this is how everything works effectively. Now, the question is: what happens if a user's request doesn't require any database access, but demands heavy CPU processing? In that case, the main thread will get blocked.</p>
<p>Let's say a task on the main thread is consuming a lot of CPU. If we execute it directly on the main thread, the event loop gets blocked, and other requests can't be processed.</p>
<p>This is where <strong>worker threads</strong> come into play in Node.js. With worker threads, we can spin up a new thread outside the main thread to handle CPU-heavy operations separately. As a result, the main thread stays free, allowing other requests to be processed immediately.</p>
<p>In other words, by using worker threads, we can run <strong>CPU-bound</strong> tasks <strong>asynchronously</strong>, ensuring that the server's throughput and responsiveness are not affected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079890972/98686661-0c5f-493e-a0b4-25c744c60938.gif" alt="CPU Intensive Problem" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h2 id="heading-how-to-implement-worker-threads"><strong>How to Implement Worker Threads</strong></h2>
<p>If we take a look at our previous <code>index.js</code> file, the task in the <code>/blocking</code> route handler is running entirely on the main thread, which is why it causes blocking. So, how can we solve this problem? The solution is to use Node.js's built-in worker threads module.</p>
<p>There is <strong>no need to install any external package</strong>, as <code>worker_threads</code> is a core module of Node.js. We can directly require the <code>Worker</code> class from the <code>worker_threads</code> module and create a new worker thread.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// index.js</span>

<span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> { Worker } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"worker_threads"</span>);

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">3000</span>;

<span class="hljs-comment">// Non-blocking route</span>
app.get(<span class="hljs-string">"/non-blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">"This page is non-blocking."</span>);
});

<span class="hljs-comment">// Blocking route using Worker Threads</span>
app.get(<span class="hljs-string">"/blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-comment">// The CPU-heavy loop now lives in worker.js and runs on a separate thread.</span>
  <span class="hljs-keyword">const</span> worker = <span class="hljs-keyword">new</span> Worker(<span class="hljs-string">"./worker.js"</span>);

  <span class="hljs-comment">// We'll send the response once the worker posts its result back;</span>
  <span class="hljs-comment">// that "message" listener is added in the next section.</span>
});

<span class="hljs-comment">// Start the server</span>
app.listen(port, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`App listening on port <span class="hljs-subst">${port}</span>`</span>);
});
</code></pre>
<p>How it works:</p>
<ul>
<li><p>Inside the <code>/blocking</code> route handler, we create a new worker using <code>new Worker()</code> and provide a file path.</p>
</li>
<li><p>This file (<code>worker.js</code>) contains the <strong>CPU-heavy</strong> task that we want the worker to execute.</p>
</li>
<li><p>For example, our heavy for-loop is moved into this separate file.</p>
</li>
</ul>
<p>We create a new file named <code>worker.js</code> and paste the loop there:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// worker.js</span>

<span class="hljs-keyword">let</span> result = <span class="hljs-number">0</span>;
<span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">1000000000</span>; i++) {
  result++;
}
</code></pre>
<p>When we pass the path to <code>worker.js</code> while creating the Worker, Node.js starts a new thread.</p>
<p>This new thread executes the CPU-intensive task independently, keeping the main thread free to handle other incoming requests.</p>
<p>By doing this, the application becomes more responsive and can handle multiple requests without blocking.</p>
<h3 id="heading-communication-between-threads">Communication Between Threads</h3>
<p>In Node.js, we have the main thread and additional worker threads. To coordinate tasks between them, we can use a <strong>messaging system</strong>. Essentially, all results eventually need to reach the main thread. Otherwise, we won't be able to provide any output to the user.</p>
<p>For example, suppose you assign a task to Thread B and another task to Thread C. When these threads complete their tasks, they must inform the main thread. They do this by sending messages through the messaging system.</p>
<p>Think of it like exchanging messages in an inbox: Thread C sends a message directly to the main thread once its task is finished. Through this communication, worker threads notify the main thread about task completion and send any necessary data.</p>
<p>This is exactly the mechanism we will use in our example to handle CPU-heavy tasks with worker threads, ensuring that the main thread remains free and responsive.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761079966779/34f22f5e-9334-4e89-b54f-71de3de90923.gif" alt="Communication between threads" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h3 id="heading-setting-up-worker-communication">Setting Up Worker Communication</h3>
<p>So, we’ve created a <code>worker.js</code> file. Now, the question is, how do we inform the main thread about the task being done in this file?</p>
<p>To achieve this, we extract <code>parentPort</code> from the built-in <code>worker_threads</code> module in Node.js. The <code>parentPort</code> is a special object that allows communication <strong>between the worker thread and the main thread</strong>. It acts as a bridge: whenever the worker completes a task, it can send the result back to the main thread through this channel.</p>
<p>Once the task is complete, we use the method <code>parentPort.postMessage(result)</code> to send the final data. In other words, we’re posting a message to the parent thread, and in our case, that message is the computed result of our loop.</p>
<p>Here’s the full code for the <code>worker.js</code> file:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// worker.js</span>

<span class="hljs-keyword">const</span> { parentPort } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"worker_threads"</span>);

<span class="hljs-keyword">let</span> result = <span class="hljs-number">0</span>;
<span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">10000000000</span>; i++) {
  result++;
}

parentPort.postMessage(result);
</code></pre>
<p>In this example:</p>
<ul>
<li><p>We import parentPort from worker_threads.</p>
</li>
<li><p>We perform a heavy task – a loop that counts up to 10 billion.</p>
</li>
<li><p>After finishing the loop, we send the result back to the main thread using <code>parentPort.postMessage(result)</code>.</p>
</li>
</ul>
<p>This is how communication between the worker thread and the main thread takes place in Node.js.</p>
<p>Now, the question is, once we send the data from the worker, how do we <strong>receive it</strong> in the <code>/blocking</code> handler of our <code>index.js</code> file?</p>
<p>To do this, we need to set up a <strong>listener</strong> inside the handler. For that, we use the <code>worker.on()</code> method.</p>
<p>So, what exactly are we listening for? We listen for the <code>"message"</code> event – just like we listen for <code>onClick</code> or other events in JavaScript.</p>
<p>The first parameter of <code>worker.on()</code> is the event name (<code>"message"</code>), and the second parameter is a <strong>callback function</strong>. Inside that callback, the first argument represents the data we receive from the worker.</p>
<p>Once we receive the data, we can send it back to the browser as a response using:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// index.js </span>

<span class="hljs-comment">// Inside the `/blocking` route handler, we listen for messages from the worker thread.</span>
<span class="hljs-comment">// Whenever the worker completes its task and sends a message, </span>
<span class="hljs-comment">// the callback receives the data as the `data` parameter.</span>
<span class="hljs-comment">// We then send this data back to the client as an HTTP response with status code 200.</span>

worker.on(<span class="hljs-string">"message"</span>, <span class="hljs-function">(<span class="hljs-params">data</span>) =&gt;</span> {
  res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">`Result is <span class="hljs-subst">${data}</span>`</span>);
});
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>worker.on("message", callback)</code> listens for messages sent from the worker thread using <code>parentPort.postMessage()</code>.</p>
</li>
<li><p>The <code>data</code> parameter contains the result sent by the worker.</p>
</li>
<li><p>Using <code>res.status(200).send(...)</code>, we send the computed result back to the browser.</p>
</li>
<li><p>This allows the heavy computation to happen in a separate thread, keeping the main thread free and responsive.</p>
</li>
</ul>
<p>At the same time, we should also handle possible errors.</p>
<p>If any error occurs inside the worker, we can listen for it using the <strong>"error"</strong> event in the same way:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// index.js </span>

<span class="hljs-comment">// In the `/blocking` route handler, we listen for any errors that occur inside the worker thread.</span>
<span class="hljs-comment">// If an error occurs, the callback receives the error object `err`,</span>
<span class="hljs-comment">// and we send it back as an HTTP response with status code 400.</span>

worker.on(<span class="hljs-string">"error"</span>, <span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> {
  res.status(<span class="hljs-number">400</span>).send(<span class="hljs-string">`An error occurred: <span class="hljs-subst">${err}</span>`</span>);
});
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>worker.on("error", callback)</code> listens specifically for errors inside the worker thread.</p>
</li>
<li><p>The <code>err</code> parameter contains details about what went wrong in the worker.</p>
</li>
<li><p>Using <code>res.status(400).send(...)</code>, we return the error to the client so the request doesn’t hang silently.</p>
</li>
</ul>
<p><strong>Here’s how the complete code looks:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-comment">// index.js</span>

<span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> { Worker } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"worker_threads"</span>);

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">3000</span>;

<span class="hljs-comment">// Non-blocking route</span>
app.get(<span class="hljs-string">"/non-blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">"This page is non-blocking."</span>);
});

<span class="hljs-comment">// Blocking route using worker threads</span>
app.get(<span class="hljs-string">"/blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> worker = <span class="hljs-keyword">new</span> Worker(<span class="hljs-string">"./worker.js"</span>);

  worker.on(<span class="hljs-string">"message"</span>, <span class="hljs-function">(<span class="hljs-params">data</span>) =&gt;</span> {
    res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">`Result is <span class="hljs-subst">${data}</span>`</span>);
  });

  worker.on(<span class="hljs-string">"error"</span>, <span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> {
    res.status(<span class="hljs-number">400</span>).send(<span class="hljs-string">`An error occurred: <span class="hljs-subst">${err}</span>`</span>);
  });
});

<span class="hljs-comment">// Start the server</span>
app.listen(port, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`App listening on port <span class="hljs-subst">${port}</span>`</span>);
});
</code></pre>
<p>Once this is set up, you'll see a dramatic change: the <code>/blocking</code> route still takes a while to respond, but even while it's loading, repeatedly refreshing the <code>/non-blocking</code> route works perfectly without any issues!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761080061445/f1f213c7-6cce-4334-81e5-8cd828682f8e.gif" alt="Setting up worker communication" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<p>Now notice, the <code>/non-blocking</code> route is accessible, which means even though the <code>/blocking</code> route is still running, it doesn't affect anything. So, we've successfully solved this problem. We moved the main task to a separate thread outside the main thread. What does this mean? The main thread created a new worker thread and assigned the CPU-heavy task to it. The new thread now works independently, while the main thread remains free.</p>
<p>Finally, when the new thread completes its task, it also becomes free. Then, through the messaging system, the new thread informs the main thread, "Your data is ready, here's your data." The main thread receives this data and sends it to the client as a response.</p>
<p>In other words, I/O operations like database queries and file reads were already being handled on separate threads automatically. Now, for CPU-heavy tasks, we have manually created a thread and used it in the same way.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761080131679/148cc279-e4f0-4f68-b2a9-34110abcbc90.gif" alt="IO Operations" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h2 id="heading-how-to-optimize-with-multiple-cores">How to Optimize with Multiple Cores</h2>
<p>Now that you have a clear understanding of how the process works, let's take it one step further and optimize it using multiple CPU cores.</p>
<p>When you visit the <code>/blocking</code> route, you might notice that it still takes a significant amount of time to respond. This indicates that the optimization isn't fully complete yet. So far, we've used a separate thread, meaning we've utilized <strong>one CPU core</strong> outside the main thread. But most modern machines have <strong>multiple cores</strong>, and we can take advantage of that to improve performance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761080243361/dddd924c-9138-4790-ac6c-811b39772c6c.gif" alt="Final index" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h3 id="heading-checking-how-many-cores-your-system-has">Checking How Many Cores Your System Has</h3>
<p>Before assigning multiple cores, you can check how many cores are available on your system:</p>
<ul>
<li><p><strong>macOS (Unix-based):</strong></p>
<pre><code class="lang-bash">  sysctl -n hw.ncpu
</code></pre>
<p>  This command returns the total number of CPU cores on your machine. For example, on my Mac, it shows <code>10</code>, meaning I have ten cores available.</p>
</li>
<li><p><strong>Linux:</strong></p>
<pre><code class="lang-bash">  nproc
</code></pre>
<p>  This will print the number of processing units available.</p>
</li>
<li><p><strong>Windows (Command Prompt):</strong></p>
<pre><code class="lang-shell">  echo %NUMBER_OF_PROCESSORS%
</code></pre>
</li>
</ul>
<p>Each of these commands will help you determine how many cores you can use for parallel processing.</p>
<h3 id="heading-utilizing-multiple-cores-for-faster-execution">Utilizing Multiple Cores for Faster Execution</h3>
<p>Once you know how many cores your machine has, you can decide how many of them to allocate for a specific job. For example, since my system has ten cores, I might choose to use four cores for the task.</p>
<p>By distributing the workload across multiple threads (each running on its own core), you can achieve significant performance improvements. Instead of relying on just one core, the system can execute multiple parts of the task simultaneously, reducing the total execution time dramatically.</p>
<p>In short, the more cores you effectively utilize, the faster your computationally heavy tasks can complete (as long as your code is designed to handle parallel execution safely).</p>
<h2 id="heading-how-to-implement-multi-core-optimization">How to Implement Multi-Core Optimization</h2>
<p>Now, we'll optimize the <code>/blocking</code> task by using multiple worker threads. First, we’ll create copies of our existing files:</p>
<ul>
<li><p><code>index.js</code> → <code>index-optimized.js</code></p>
</li>
<li><p><code>worker.js</code> → <code>worker-optimized.js</code></p>
</li>
</ul>
<p>We plan to use four threads. Even though the machine may have more cores, using all of them could overload the system, so we'll limit it to four.</p>
<p><strong>index-optimized.js:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-comment">// index-optimized.js</span>

<span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> { Worker } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"worker_threads"</span>);

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">3000</span>;
<span class="hljs-keyword">const</span> THREAD_COUNT = <span class="hljs-number">4</span>;

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">createWorker</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function">(<span class="hljs-params">resolve, reject</span>) =&gt;</span> {
        <span class="hljs-keyword">const</span> worker = <span class="hljs-keyword">new</span> Worker(<span class="hljs-string">"./worker-optimized.js"</span>, {
            <span class="hljs-attr">workerData</span>: {
                <span class="hljs-attr">thread_count</span>: THREAD_COUNT,
            },
        });

        worker.on(<span class="hljs-string">"message"</span>, <span class="hljs-function">(<span class="hljs-params">data</span>) =&gt;</span> {
            resolve(data);
        });

        worker.on(<span class="hljs-string">"error"</span>, <span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> {
            reject(<span class="hljs-string">`An error occurred: <span class="hljs-subst">${err}</span>`</span>);
        });
    });
}

app.get(<span class="hljs-string">"/non-blocking"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
    res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">"This page is non-blocking."</span>);
});

app.get(<span class="hljs-string">"/blocking"</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
    <span class="hljs-keyword">const</span> workerPromises = [];

    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; THREAD_COUNT; i++) {
        workerPromises.push(createWorker());
    }

    <span class="hljs-keyword">const</span> threadResults = <span class="hljs-keyword">await</span> <span class="hljs-built_in">Promise</span>.all(workerPromises);
    <span class="hljs-keyword">const</span> total =
        threadResults[<span class="hljs-number">0</span>] +
        threadResults[<span class="hljs-number">1</span>] +
        threadResults[<span class="hljs-number">2</span>] +
        threadResults[<span class="hljs-number">3</span>];

    res.status(<span class="hljs-number">200</span>).send(<span class="hljs-string">`Result is <span class="hljs-subst">${total}</span>`</span>);
});

app.listen(port, <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`App listening on port <span class="hljs-subst">${port}</span>`</span>);
});
</code></pre>
<p>Here, we create a <code>createWorker</code> function that returns a Promise. Inside it, the worker is created, and the message and error events are handled. In the <code>/blocking</code> route, we create multiple workers asynchronously, wait for all of them to finish using <code>Promise.all</code>, and then sum the results.</p>
<p><strong>worker-optimized.js:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-comment">// worker-optimized.js</span>

<span class="hljs-keyword">const</span> { parentPort, workerData } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"worker_threads"</span>);

<span class="hljs-keyword">let</span> result = <span class="hljs-number">0</span>;
<span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">10000000000</span> / workerData.thread_count; i++) {
    result++;
}

parentPort.postMessage(result);
</code></pre>
<p>Each worker receives <code>thread_count</code> from the main thread and calculates its part of the task. Once done, it sends the result back using <code>parentPort.postMessage</code>. This way, heavy computation is distributed, and the main thread remains free.</p>
<h3 id="heading-understanding-the-code-line-by-line">Understanding the Code Line by Line</h3>
<p>Alright, some of these concepts might seem a bit complex at first. But don't worry: we'll go through the code line by line, explaining everything in detail so that you understand exactly what is happening and why.</p>
<h3 id="heading-thread-planning-and-configuration">Thread Planning and Configuration</h3>
<p>Now, coming to the main point: we'll be using multiple threads. Let's say we've decided to use four. Our machine has ten cores, but we won't use them all, because that would consume all our system resources. So, we'll use four threads on four of the available cores.</p>
<p>For this reason, in the <code>index-optimized.js</code> file, we've created a constant to store the number of threads we'll use. Let's say we've set it to 4 here, so that later another developer can easily change it if needed.</p>
<h4 id="heading-the-createworker-function">The createWorker Function</h4>
<p>Then, we've created a new function called <code>createWorker</code>. The purpose of this function is to create a new Worker. Here, we’re returning a promise because the process of creating a Worker is performed asynchronously.</p>
<p>This is because when we create four workers, we want the creation process itself to happen asynchronously, so the main thread doesn't get blocked. After all, creating a worker is essentially a separate process.</p>
<p>The best practice is to create workers asynchronously. That's why we created the <code>createWorker</code> function, which returns a promise. As we know, events are listened to inside a promise, where resolve and reject are used. In the <code>/blocking</code> handler, we can handle the worker's result or any errors through this promise.</p>
<h4 id="heading-creating-a-worker">Creating a Worker</h4>
<p>To create a worker, we use:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> worker = <span class="hljs-keyword">new</span> Worker(<span class="hljs-string">"./worker-optimized.js"</span>);
</code></pre>
<p>Here, we need to provide the path to the worker file. Then, as the second parameter, we can pass an options object. For example, if we want to send some data to the worker, we include a <code>workerData</code> property in those options. Inside this <code>workerData</code>, we'll send the thread count, which is stored in our file as the constant <code>THREAD_COUNT</code>.</p>
<p>For instance, we can pass an object in <code>workerData</code> like:</p>
<pre><code class="lang-javascript">{
  <span class="hljs-attr">thread_count</span>: THREAD_COUNT
}
</code></pre>
<p>When this Worker is being created, we send some properties from <code>index-optimized.js</code> as <code>workerData</code>. This is because in <code>worker-optimized.js</code>, the worker reads <code>workerData</code> to know how many threads the task is being split across. So, we've included a <code>thread_count</code> property in <code>workerData</code>. When the worker starts, it reads <code>thread_count</code> from <code>workerData</code> and works accordingly. This is how we've designed the <code>createWorker</code> function, which simply returns a Promise.</p>
<h4 id="heading-event-handling-and-promise-structure">Event Handling and Promise Structure</h4>
<p>Here, we made an important change compared to our original <code>index.js</code> file.</p>
<p>Since we copied all the code from <code>index.js</code> into <code>index-optimized.js</code>, we adjusted the <code>/blocking</code> route handler. Specifically, we removed the direct creation of the Worker from the <code>/blocking</code> handler. Instead, the Worker is now created inside the <code>createWorker</code> function.</p>
<p>Also, all the event listeners (<code>message</code> and <code>error</code>) that were previously inside the <code>/blocking</code> handler have also been moved into the <code>createWorker</code> function. This means that the worker is fully managed within the function, and the <code>/blocking</code> handler now only handles the promise results, keeping the main thread clean and organized.</p>
<p>But since these events are being listened to inside a promise, we cannot send the response directly from there. We'll send the response inside the <code>/blocking</code> handler. So from the Promise, we only use <code>resolve</code> and <code>reject</code>.</p>
<p><strong>For example:</strong></p>
<pre><code class="lang-javascript">resolve(<span class="hljs-string">`Result is <span class="hljs-subst">${data}</span>`</span>);
reject(<span class="hljs-string">`An error occurred <span class="hljs-subst">${err}</span>`</span>);
</code></pre>
<p>In other words, the entire process of creating a worker has been moved into the <code>createWorker</code> function, which ultimately returns a promise.</p>
<h3 id="heading-dividing-work-across-multiple-workers">Dividing Work Across Multiple Workers</h3>
<p>Now, inside the <code>/blocking</code> handler, we simply call the <code>createWorker</code> function. The <code>workerData</code> we provide tells the worker what share of the task it should perform. In <code>worker-optimized.js</code>, the created worker uses <code>parentPort</code> to communicate with the parent thread.</p>
<p>Now, we want to divide the for-loop running up to ten billion across four cores. The number of threads to use is sent from <code>index-optimized.js</code> as part of <code>workerData</code>. Because this information is in <code>workerData</code>, each worker knows how large its share of the work is.</p>
<p>So, in the <code>worker-optimized.js</code> file, we'll get the workerData using:</p>
<pre><code class="lang-javascript">{ workerData } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"worker_threads"</span>)
</code></pre>
<p>Then, in the for-loop condition, we'll use <code>workerData.thread_count</code>. This means the thread count sent from <code>index-optimized.js</code> is used here instead of hardcoding 4, which is best practice because the data is passed to the worker at the time of its creation. In <code>worker-optimized.js</code>, we use this to divide the work into four parts. Then, four workers are created, meaning the <code>createWorker</code> function is called four times. Each worker takes one part of the work, and at the end, all the results are combined. This is how the entire process is completed.</p>
<p>So, in this <code>/blocking</code> handler, our task is to collect the results of the four promises and then sum them all. Let's say we store them in an array called <code>workerPromises</code>. Each entry in this array will hold the promise result of a worker. Then, by combining all of them, we get the final result.</p>
<p>Since we need to create four workers, we'll run a for-loop: <code>for (let i = 0; i &lt; THREAD_COUNT; i++)</code>. In each iteration we call the <code>createWorker</code> function, which returns a promise, and push that promise into the array with <code>workerPromises.push(createWorker())</code>. This way, each worker has its own promise, and since all the promises are stored in the <code>workerPromises</code> array, we can easily call <code>Promise.all(workerPromises)</code>.</p>
<p>So, we used <code>threadResults = await Promise.all(workerPromises)</code>. As we know, <code>Promise.all</code> can handle multiple Promises together. Here, we passed the <code>workerPromises</code> array, so <code>threadResults</code> will contain the results of the four promises as separate elements, like <code>threadResults[0]</code>, <code>threadResults[1]</code>, <code>threadResults[2]</code>, and <code>threadResults[3]</code>. Then, we sum these results to get the total calculation, meaning <code>threadResults[0] + threadResults[1] + threadResults[2] + threadResults[3]</code> gives the final result. Since we used await, the entire function needs to be async.</p>
<p>Once everything is done correctly, we can send this total result to the client using <code>res.status(200).send(`Result is ${total}`)</code>. This way, the total calculation works correctly, unlike before.</p>
<p>So, I hope it's clear now: we called the <code>createWorker</code> function four times here. Each call returns a promise. We then awaited all these promises together using <code>Promise.all</code>, so all the results came in at once. After that, we summed these results. The <code>/blocking</code> handler is essentially the one executing our operational work.</p>
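<p>The promise-collection logic described above can be sketched as a small helper, kept separate from Express so it's easy to follow (the name <code>runWorkers</code> is illustrative; <code>createWorker</code> is assumed to return a promise that resolves with one worker's partial result):</p>
<pre><code class="lang-javascript">// Sketch: create one promise per worker, await them all, then sum the parts.
async function runWorkers(createWorker, threadCount) {
  const workerPromises = [];
  for (let i = 0; i &lt; threadCount; i++) {
    workerPromises.push(createWorker());
  }
  const threadResults = await Promise.all(workerPromises);
  return threadResults.reduce((sum, part) =&gt; sum + part, 0);
}
</code></pre>
<p>Inside the <code>/blocking</code> handler you would then do something like <code>const total = await runWorkers(createWorker, THREAD_COUNT)</code> and send the total back to the client.</p>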
<h3 id="heading-handling-complex-tasks">Handling Complex Tasks</h3>
<p>So, in the <code>worker-optimized.js</code> file, we've essentially divided the work into four parts. But it's not necessary that the task will always be a for-loop. There could be different types of complex tasks as well, like image processing, data processing, or pagination.</p>
<p>In such cases, we can't always follow the same pattern. So, we need to send the necessary data from <code>index-optimized.js</code> as <code>workerData</code>, and the worker will use that data to perform the task in a separate thread.</p>
<p>In the previous example, every worker did the same sequential counting, so simply summing the results gave us the total. For more complex tasks, both the processing and the final aggregation depend on the data each worker receives.</p>
<p>In other complex applications, you might need to perform different tasks. But the main concept is clear: any data or property we send from here will be received by the worker, which will then divide the work. Each worker – whether you use four, five, or six – will handle its part, and all the results will need to be accumulated. This is essentially the entire process.</p>
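<p>As one illustration of that data-driven approach (the <code>chunk</code> helper here is hypothetical, not part of the original code), you could slice a dataset into one piece per worker and send each piece as <code>workerData</code>:</p>
<pre><code class="lang-javascript">// Sketch: split a dataset into roughly equal chunks, one per worker.
function chunk(items, parts) {
  const size = Math.ceil(items.length / parts);
  const chunks = [];
  for (let i = 0; i &lt; items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
</code></pre>
<p>Each chunk would then be passed to one worker via <code>workerData</code>, and the parent accumulates whatever partial results the workers post back.</p>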
<h2 id="heading-performance-comparison"><strong>Performance Comparison</strong></h2>
<p>When working with CPU-intensive tasks in Node.js, dividing the work using worker threads can significantly improve performance. Let's compare the behavior of our application before and after optimization.</p>
<h3 id="heading-testing-results">Testing Results</h3>
<p>Running the <code>index.js</code> file and hitting the <code>/blocking</code> route in the browser takes a significant amount of time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761164273474/02892dd3-3524-4e7e-83fa-ac279910d759.gif" alt="Final Index" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<p>Running the <code>index-optimized.js</code> file and hitting the same route takes considerably less time – around 3 seconds.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761164303764/4ba27180-49f7-4485-a1a5-73f2f419ab9b.gif" alt="Final Optimized" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<p>Stopping it and running <code>index.js</code> again clearly shows the original implementation is slower.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761164358090/4c3b9056-b936-4341-9302-461c290ea70e.gif" alt="Final unoptimized" class="image--center mx-auto" width="1138" height="640" loading="lazy"></p>
<h3 id="heading-performance-metrics">Performance Metrics</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<th>File</th><th>Route</th><th>Approx. Response Time</th><th>Notes</th></tr>
</thead>
<tbody>
<tr>
<td><code>index.js</code></td><td><code>/blocking</code></td><td>Much longer</td><td>This is the original implementation. The single-threaded loop blocks the event loop, causing delays.</td></tr>
<tr>
<td><code>index-optimized.js</code></td><td><code>/blocking</code></td><td>Around 3 seconds</td><td>Here, the work is divided into multiple worker threads, making the process much faster.</td></tr>
</tbody>
</table>
</div><h3 id="heading-key-takeaways">Key Takeaways</h3>
<p>This comparison demonstrates how dividing the work into multiple parts using worker threads can make CPU-intensive tasks far more efficient, keeping the main thread responsive and improving overall performance.</p>
<h2 id="heading-summary">Summary</h2>
<p>So, first we saw in <code>index.js</code> how a blocking task can be handled in a non-blocking, asynchronous way. That is, we ran a worker thread, and because of this worker thread, the main thread didn't get blocked, allowing other users to continue their tasks simultaneously.</p>
<h3 id="heading-the-multi-core-challenge">The Multi-Core Challenge</h3>
<p>But there's more to consider: a server doesn't usually have just a single core. Typically there are multiple cores, like 8, 16, or more. To use multiple cores, we first need to find out how many are available on the server.</p>
<h3 id="heading-discovering-available-cores">Discovering Available Cores</h3>
<p>If the server is Linux, we can easily find out the total number of cores using the <code>nproc</code> command. Then we can decide how many cores to use. For example, let's say we decide to use three cores. In <code>index-optimized.js</code>, we've implemented a way to divide the work among these cores.</p>
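<p>You can also ask for the core count from inside Node.js itself, which avoids hardcoding it. A short sketch (note that <code>os.availableParallelism()</code> requires Node 19.4+, so we fall back to <code>os.cpus().length</code> on older versions):</p>
<pre><code class="lang-javascript">// Sketch: derive the thread count from the machine's cores at startup.
const os = require("os");

const THREAD_COUNT =
  typeof os.availableParallelism === "function"
    ? os.availableParallelism()
    : os.cpus().length;

console.log(`Using ${THREAD_COUNT} worker threads`);
</code></pre>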
<h3 id="heading-asynchronous-worker-creation">Asynchronous Worker Creation</h3>
<p>So, what we did was wrap the worker creation process in a promise. Since creating a worker takes some time and spinning it up isn't instantaneous, this process is done asynchronously. This way, even if multiple users hit the endpoint to create workers, the main thread won't be blocked.</p>
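<p>That promise wrapper can be sketched like this (the file name and the <code>workerData</code> shape are assumptions based on the example in this article):</p>
<pre><code class="lang-javascript">// Sketch: wrap one worker's lifecycle in a promise the handler can await.
const { Worker } = require("worker_threads");

function createWorker() {
  return new Promise((resolve, reject) =&gt; {
    const worker = new Worker("./worker-optimized.js", {
      workerData: { threadCount: 4 }, // illustrative; pass your real thread count
    });
    worker.on("message", resolve); // the worker's partial result
    worker.on("error", reject);    // surface worker failures to the caller
  });
}
</code></pre>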
<h3 id="heading-how-to-implement-multi-core-optimization-1">How to Implement Multi-Core Optimization</h3>
<p>We created a <code>createWorker</code> function and called it inside a loop to spawn as many workers as the thread count specifies (four in our example). Each worker posts its message back independently, and through the listener we receive the data from each one. These results are collected via promises, stored together in an array, and finally we sum all the results from this array to get the final outcome.</p>
<p>The other concepts are all part of basic JavaScript. I hope you now understand how worker threads work and how we can use multi-threaded processing in Node.js. It's an excellent concept and well worth learning thoroughly.</p>
<h3 id="heading-what-we-learned"><strong>What We Learned</strong></h3>
<p>Worker Threads in Node.js provide a powerful way to handle CPU-intensive tasks without blocking the main event loop. By leveraging multiple cores and distributing work across threads, we can significantly improve application performance while maintaining responsiveness for other users.</p>
<ul>
<li><p><strong>Non-blocking execution</strong>: Worker threads prevent the main thread from being blocked</p>
</li>
<li><p><strong>Multi-core utilization</strong>: We can leverage multiple CPU cores for parallel processing</p>
</li>
<li><p><strong>Asynchronous worker creation</strong>: Using promises to handle worker creation without blocking</p>
</li>
<li><p><strong>Result aggregation</strong>: Collecting and combining results from multiple workers</p>
</li>
<li><p><strong>Performance optimization</strong>: Distributing heavy computations across multiple threads</p>
</li>
</ul>
<p>This approach is particularly valuable for applications that need to handle computationally intensive tasks while remaining responsive to user requests.</p>
<h2 id="heading-final-words">Final Words</h2>
<p>If you found the information here valuable, feel free to share it with others who might benefit from it. I’d really appreciate your thoughts – mention me on X <a target="_blank" href="https://x.com/sumit_analyzen">@sumit_analyzen</a> or on Facebook <a target="_blank" href="https://facebook.com/sumit.analyzen">@sumit.analyzen</a>, <a target="_blank" href="https://youtube.com/@logicBaseLabs">watch my coding tutorials</a>, <a target="_blank" href="https://sumitsaha.me">visit my website</a> or simply <a target="_blank" href="https://www.linkedin.com/in/sumitanalyzen/">connect with me</a> on LinkedIn.</p>
<h2 id="heading-additional-resources">Additional Resources</h2>
<p>You can also check the <a target="_blank" href="https://nodejs.org/api/worker_threads.html">Node.js Worker Threads documentation</a> for more in-depth learning. You can find all the source code from this tutorial in <a target="_blank" href="https://github.com/logicbaselabs/node-worker-threads/">this GitHub repository</a>. If it helped you in any way, consider giving it a star to show your support!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Containerize and Deploy Your Node.js Applications ]]>
                </title>
                <description>
                    <![CDATA[ When you build a Node.js application, running it locally is simple. You type npm start, and it works. But when you need to run it on the cloud, things get complicated. You need to think about servers, environments, dependencies, and deployment pipeli... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-containerize-and-deploy-your-nodejs-applications/</link>
                <guid isPermaLink="false">68e840ed25ca8a99242df116</guid>
                
                    <category>
                        <![CDATA[ containers ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Docker ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Manish Shivanandhan ]]>
                </dc:creator>
                <pubDate>Thu, 09 Oct 2025 23:10:37 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760051426715/fd0f14cf-95dc-4191-b0fc-e5c916520097.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>When you build a Node.js application, running it locally is simple. You type <code>npm start</code>, and it works.</p>
<p>But when you need to run it on the cloud, things get complicated. You need to think about servers, environments, dependencies, and deployment pipelines. That’s where containerization comes in.</p>
<p>Containers make your application portable and predictable. You can run the same code with the same setup anywhere, from your laptop to the cloud.</p>
<p>In this guide, we will walk through how to containerize a simple Node.js API and deploy it to the cloud. By the end, you will know how to set up Docker for your app, push it to a registry, and see your application running on the cloud.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-containerization">What is Containerization?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-setting-up-a-nodejs-app">Setting Up a Node.js App</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-writing-the-dockerfile">Writing the Dockerfile</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-building-and-testing-the-container">Building and Testing the Container</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-preparing-for-deployment">Preparing for Deployment</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deploying-to-the-cloud">Deploying to the Cloud</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-scaling-your-app">Scaling Your App</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-updating-your-app">Updating Your App</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-benefits-of-sing-containers">Benefits of Using Containers</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we dive into containerizing and deploying your Node.js application, make sure you have the following set up on your system. These basics will help you follow along without running into errors.</p>
<p><strong>Node.js and npm</strong><br>You should have <a target="_blank" href="https://nodejs.org/en">Node.js</a> (v18 or higher) and npm installed on your local machine. This ensures you can run your app locally before containerizing it.<br>To check your versions, run:</p>
<pre><code class="lang-bash">node -v
npm -v
</code></pre>
<p><strong>Docker installed and running</strong><br><a target="_blank" href="https://www.docker.com/">Docker</a> is the core tool we’ll use to containerize the app. Install Docker Desktop or Docker Engine depending on your system. Once installed, confirm that it’s running and working by typing:</p>
<pre><code class="lang-bash">docker --version
</code></pre>
<p><strong>Docker Hub account (or any container registry)</strong><br>You’ll need a Docker Hub account to push your container image to the cloud. This allows your deployment platform to pull and run the image. You can create one for free at <a target="_blank" href="https://hub.docker.com/">hub.docker.com</a>.</p>
<p>Once you have these prerequisites ready, you’ll be set to build your first containerized Node.js app and deploy it to the cloud.</p>
<h2 id="heading-what-is-containerization"><strong>What is Containerization?</strong></h2>
<p>Containerization is a way to package an application along with everything it needs to run. That includes the code, libraries, system tools, and settings. The package is called a container image.</p>
<p>When you run that image, you get a container that behaves exactly the same on any system that supports <a target="_blank" href="https://www.freecodecamp.org/news/the-docker-handbook/">Docker</a>.</p>
<p>Without containers, deployment can be messy. Your app might work on your machine but fail in production due to missing libraries or version mismatches.</p>
<p>Containers solve this by locking in the environment. Think of them as lightweight virtual machines that only contain what your app needs.</p>
<h2 id="heading-setting-up-a-nodejs-app"><strong>Setting Up a Node.js App</strong></h2>
<p>Let’s start by building a simple Node.js API. We will keep it minimal so we can focus on the containerization and deployment steps.</p>
<p>Create a new folder and add a file called <code>server.js</code>:</p>
<pre><code class="lang-javascript">const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) =&gt; {
  res.json({ message: 'Hello from Container!' });
});
app.listen(PORT, () =&gt; {
  console.log(`Server running on port ${PORT}`);
});
</code></pre>
<p>Next, create a <code>package.json</code> file with the following content:</p>
<pre><code class="lang-json">{
  "name": "container-node-app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^5.1.0"
  }
}
</code></pre>
<p>Run <code>npm install</code> to install the Express dependency. You now have a simple Node.js API that runs locally. You can test it with <code>npm start</code> and open <code>http://localhost:3000</code> in your browser.</p>
<h2 id="heading-writing-the-dockerfile"><strong>Writing the Dockerfile</strong></h2>
<p>To run this app in a container, we need to write a <code>Dockerfile</code>. This file defines how to build the container image. Create a new file called <code>Dockerfile</code> and add this:</p>
<pre><code class="lang-plaintext">FROM node:24

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
</code></pre>
<p>Let’s break this down. We start with the official Node.js 24 image. We set a working directory inside the container. We copy the package files and install dependencies.</p>
<p>Then we copy the rest of the code. We expose port 3000 so that the app can accept traffic. Finally, we run <code>npm start</code> as the default command.</p>
<h2 id="heading-building-and-testing-the-container"><strong>Building and Testing the Container</strong></h2>
<p>Now that we have the <code>Dockerfile</code>, we can build the image. Run the following command:</p>
<pre><code class="lang-plaintext">docker build -t container-node-app .
</code></pre>
<p>This builds an image named <code>container-node-app</code>. To test it locally, run:</p>
<pre><code class="lang-plaintext">docker run -p 3000:3000 container-node-app
</code></pre>
<p>Open <code>http://localhost:3000</code> in your browser, and you should see the JSON message <code>{"message":"Hello from Container!"}</code>. At this point, we know our app works in a container.</p>
<h2 id="heading-preparing-for-deployment"><strong>Preparing for Deployment</strong></h2>
<p>To deploy on any cloud platform, you need to push your image to a container registry. A registry is a place where container images are stored and shared. Your cloud provider can pull images from <a target="_blank" href="https://hub.docker.com/">Docker Hub</a> or other registries.</p>
<p>Tag your image with a registry path. For Docker Hub, it looks like this:</p>
<pre><code class="lang-plaintext">docker tag container-node-app your-dockerhub-username/container-node-app:latest
</code></pre>
<p>Then log in and push it:</p>
<pre><code class="lang-plaintext">docker login
docker push your-dockerhub-username/container-node-app:latest
</code></pre>
<p>Your image should now be available in the cloud registry and ready for deployment.</p>
<p>Here’s mine:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759747825354/e217d7f1-6131-41a2-a8b1-76e8ad84399a.webp" alt="Docker Registry" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-deploying-to-the-cloud"><strong>Deploying to the Cloud</strong></h2>
<p>In this tutorial, I’ll be using Sevalla since it offers a free tier, so there are no costs involved to deploy this container to the cloud. You can use other providers like <a target="_blank" href="https://aws.amazon.com/">AWS</a> or <a target="_blank" href="https://www.heroku.com/">Heroku</a>, but just note that you will incur costs for creating resources.</p>
<p><a target="_blank" href="https://sevalla.com/">Sevalla</a> is a modern, usage-based Platform-as-a-service provider. It offers application hosting, database, object storage, and static site hosting for your projects.</p>
<p>Once you have your account set up, you can create a new application and tell it which container image to use. Sevalla will pull the image from the registry, create a container, and handle the networking, scaling, and updates for you.</p>
<p>To get started, <a target="_blank" href="https://app.sevalla.com/login">log in</a> to Sevalla. In the dashboard, choose to create a new application. Give it a name like <code>node-api</code>. Provide the registry path of your image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759747861994/4ad344d6-d8a5-4593-a85e-eb679bc600f5.webp" alt="Create application" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Choose a location and use the “Hobby” plan. Sevalla comes with a $50 free credit, so you won’t be charged for deploying this image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759747920267/cf23401d-131e-4c51-a248-411d8624542c.webp" alt="Application Resources" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Click “Create and Deploy”. Sevalla will handle the rest. You can watch it configure the application and run the deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759747953591/79db7997-88a3-48f7-ae09-65703ec2abab.webp" alt="Sevalla Deployment" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Once the deployment is complete, click on “Visit app” to get your app’s live URL. You can see the response from the API.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759747987239/b3a1de3a-3f3a-48d6-86e1-27137f6b41fd.webp" alt="Sevalla deployment success" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<h2 id="heading-scaling-your-app"><strong>Scaling Your App</strong></h2>
<p>One of the main benefits of Sevalla is easy scaling. If you start getting more traffic, you can increase the number of containers running your app with just a few clicks. Sevalla will load balance traffic between them. This means your app can handle more requests without downtime.</p>
<p>Scaling with containers is efficient because each container runs the exact same code. There is no need to configure extra servers manually. Sevalla takes care of orchestration, so your focus stays on writing code instead of managing infrastructure.</p>
<h2 id="heading-updating-your-app"><strong>Updating Your App</strong></h2>
<p>When you make changes to your Node.js app, updating is straightforward. You rebuild the Docker image, push it to the registry, and tell Sevalla to redeploy.</p>
<p>Since containers are immutable, every new build creates a fresh environment. This ensures your updates are clean, consistent, and free of old dependencies.</p>
<p>For example, if you change the message in <code>server.js</code> and want to deploy it, you would run:</p>
<pre><code class="lang-plaintext">docker build -t your-dockerhub-username/container-node-app:latest .
docker push your-dockerhub-username/container-node-app:latest
</code></pre>
<p>Then trigger a redeploy in the Sevalla dashboard. Within minutes, your users will see the updated response.</p>
<h2 id="heading-benefits-of-sing-containers"><strong>Benefits of Using Containers</strong></h2>
<p><a target="_blank" href="https://techcrunch.com/2016/10/16/wtf-is-a-container/">Containers</a> bring many advantages when deploying Node.js applications. They make your app portable because the container holds both the code and its dependencies, ensuring it runs the same way everywhere.</p>
<p>They improve consistency, since every build creates an isolated environment without leftover files or mismatched versions. Scaling becomes simple because you can spin up more containers as traffic grows, and each one behaves identically. Updates are cleaner too, as you replace old containers with fresh ones built from the latest code.</p>
<p>For developers, this means fewer surprises and less time fixing environment issues. Containers provide a reliable foundation, so you can focus on building features rather than troubleshooting deployments.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Containerization is one of the most important shifts in modern software development. By learning how to put your Node.js app into a Docker container, you unlock the ability to run it anywhere.</p>
<p>In this guide, we built a small Node.js API, created a Dockerfile, tested the container locally, pushed it to a registry, and deployed it to the cloud. The steps you followed here apply to much larger and more complex applications as well. Once you get the basics, you can scale up your workflows to production-level projects.</p>
<p>Hope you enjoyed this article. Connect with me <a target="_blank" href="https://www.linkedin.com/in/manishmshiva/?originalSubdomain=in">on LinkedIn</a> or <a target="_blank" href="https://manishshivanandhan.com/">visit my website</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Build Production-Ready Web Apps with the Hono Framework: A Deep Dive ]]>
                </title>
                <description>
                    <![CDATA[ As a dev, you’d probably like to write your application once and not have to worry so much about where it's going to run. This is what the open source framework Hono lets you do, and it’s a game-changer. Hono is a small, incredibly fast web framework... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/build-production-ready-web-apps-with-hono/</link>
                <guid isPermaLink="false">68bf3ea02c935a9d306bb65a</guid>
                
                    <category>
                        <![CDATA[ JavaScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ hono ]]>
                    </category>
                
                    <category>
                        <![CDATA[ javascript framework ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Mayur Vekariya ]]>
                </dc:creator>
                <pubDate>Mon, 08 Sep 2025 20:37:52 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757363825321/562644c8-b2b3-4c1c-92c2-736bcade5aac.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>As a dev, you’d probably like to write your application once and not have to worry so much about where it's going to run. This is what the open source framework Hono lets you do, and it’s a game-changer. Hono is a small, incredibly fast web framework that embraces the "write once, run anywhere" philosophy.</p>
<p>The JavaScript ecosystem moves quickly. One minute, we're building monolithic Node.js servers. The next, it's all about serverless functions and running code at the edge on platforms like Cloudflare or Vercel. Staying current can feel like a full-time job.</p>
<p><a target="_blank" href="https://hono.dev/">Hono</a> is built on top of Web Standards – the same <code>Request</code> and <code>Response</code> objects in your browser – which means your code is naturally portable across almost any JavaScript runtime.</p>
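<p>To make that concrete, here's a tiny sketch (no Hono required) of the Web Standard <code>Request</code> and <code>Response</code> objects the framework builds on. Both are available globally in Node 18+, just as they are in browsers and edge runtimes:</p>
<pre><code class="lang-typescript">// The same Fetch API primitives work everywhere, which is what makes
// a framework built on them portable across runtimes.
const req = new Request("https://example.com/users/123");
const res = new Response(JSON.stringify({ ok: true }), {
  headers: { "Content-Type": "application/json" },
});

console.log(new URL(req.url).pathname); // "/users/123"
</code></pre>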
<p>This guide is a deep dive into this powerful little framework, designed to help you build real, production-ready applications. We’ll skip the quick "Hello, World!" and jump straight into the patterns and features you will actually use, with plenty of detailed code examples along the way.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-what-you-will-learn-in-this-guide">What You Will Learn in This Guide</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-prerequisites-for-following-along">Prerequisites for Following Along</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-set-up-a-professional-hono-project">How to Set Up a Professional Hono Project</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-understand-honos-core-api">How to Understand Hono's Core API</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-the-context-object-in-depth">The Context Object in Depth</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-use-advanced-features-for-production-apps">How to Use Advanced Features for Production Apps</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-deployment-guide-for-hono">Deployment Guide for Hono</a></p>
</li>
</ul>
<h3 id="heading-what-you-will-learn-in-this-guide">What You Will Learn in This Guide</h3>
<p>By the end of this tutorial, you will be able to:</p>
<ul>
<li><p>Structure a Hono project for both development and production.</p>
</li>
<li><p>Implement advanced routing patterns.</p>
</li>
<li><p>Leverage the full power of the <strong>Context</strong> object to manage requests and pass data between middleware.</p>
</li>
<li><p>Write complex custom middleware for authentication, logging, and error handling.</p>
</li>
<li><p>Validate incoming data using the official <strong>Zod</strong> validator for robust APIs.</p>
</li>
<li><p>Build a small, server-rendered application with <strong>JSX</strong> components.</p>
</li>
<li><p>Deploy a Hono application to various modern hosting platforms.</p>
</li>
</ul>
<h3 id="heading-prerequisites-for-following-along">Prerequisites for Following Along</h3>
<p>This is an in-depth guide, but it assumes you have some foundational knowledge. Before you start, you should have:</p>
<ul>
<li><p><strong>Node.js installed:</strong> Version 18 or higher is recommended.</p>
</li>
<li><p><strong>A code editor:</strong> Visual Studio Code is a great choice.</p>
</li>
<li><p><strong>Familiarity with TypeScript:</strong> You should understand basic types, functions, and <code>async</code>/<code>await</code>.</p>
</li>
<li><p><strong>Basic command-line knowledge:</strong> You should be comfortable running commands in your terminal.</p>
</li>
</ul>
<h2 id="heading-how-to-set-up-a-professional-hono-project">How to Set Up a Professional Hono Project</h2>
<p>You can get started with Hono using a single command. This will create a new project directory with a recommended structure and configuration files. When prompted, select the <code>nodejs</code> template and choose to install dependencies with your preferred package manager (for example, npm).</p>
<pre><code class="lang-bash">npm create hono@latest hono-production-app
</code></pre>
<p>The command will guide you through the setup:</p>
<pre><code class="lang-bash">&gt; npx create-hono hono-production-app

create-hono version 0.19.2
✔ Using target directory … hono-production-app
✔ Which template <span class="hljs-keyword">do</span> you want to use? nodejs
✔ Do you want to install project dependencies? Yes
✔ Which package manager <span class="hljs-keyword">do</span> you want to use? npm
✔ Cloning the template
✔ Installing project dependencies
🎉 Copied project files
Get started with: <span class="hljs-built_in">cd</span> hono-production-app
</code></pre>
<p>Now, navigate into your new directory: <code>cd hono-production-app</code>. Let's look at the files that were created:</p>
<ul>
<li><p><code>package.json</code>: Defines your project's dependencies and scripts.</p>
</li>
<li><p><code>tsconfig.json</code>: The TypeScript configuration file.</p>
</li>
<li><p><code>src/index.ts</code>: The entry point of your application.</p>
</li>
</ul>
<p>Now, you can run <code>npm run dev</code> to start your development server. Navigate to <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a>, and you will see "Hello Hono!".</p>
<h2 id="heading-how-to-understand-honos-core-api">How to Understand Hono's Core API</h2>
<p>Hono's API is designed to be minimal, which makes it easy to learn – yet incredibly powerful.</p>
<h3 id="heading-how-to-use-advanced-routing-techniques">How to Use Advanced Routing Techniques</h3>
<p>You may already know <code>app.get()</code> and <code>app.post()</code> from Express, but Hono's router can do much more.</p>
<h4 id="heading-1-how-to-route-with-regular-expressions">1. How to Route with Regular Expressions</h4>
<p>You can constrain a URL parameter to match a specific regular expression. For example, to make sure an <code>:id</code> parameter only accepts numbers, you can do this:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Only match routes like /users/123, not /users/abc</span>
app.get(<span class="hljs-string">'/users/:id{[0-9]+}'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> id = c.req.param(<span class="hljs-string">'id'</span>)
  <span class="hljs-keyword">return</span> c.text(<span class="hljs-string">`Fetching data for user ID: <span class="hljs-subst">${id}</span>`</span>)
})
</code></pre>
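<p>If you want to see the matching rule in isolation, here's a plain-TypeScript sketch of the constraint that the pattern above expresses. This is illustrative only, not Hono's internal router:</p>

```typescript
// Illustrative sketch of the constraint '/users/:id{[0-9]+}' expresses.
// Not Hono's internal router, just the equivalent regular expression.
function matchUserId(path: string): string | null {
  const re = /^\/users\/([0-9]+)$/
  const m = path.match(re)
  return m ? m[1] : null // the captured ':id' parameter, or null if no match
}

console.log(matchUserId('/users/123')) // '123'
console.log(matchUserId('/users/abc')) // null
```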
<h4 id="heading-2-how-to-use-optional-and-wildcard-routes">2. How to Use Optional and Wildcard Routes</h4>
<p>You can define routes that match multiple paths using wildcards (<code>*</code>) or handle optional parameters.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// This will match /files/image.png, /files/docs/report.pdf, and so on.</span>
app.get(<span class="hljs-string">'/files/*'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-comment">// c.req.path will contain the full matched path</span>
  <span class="hljs-keyword">return</span> c.text(<span class="hljs-string">`You are accessing the file at: <span class="hljs-subst">${c.req.path}</span>`</span>)
})

<span class="hljs-comment">// The '?' makes the '/:format?' part of the URL optional</span>
<span class="hljs-comment">// This will match both /api/posts and /api/posts/json</span>
app.get(<span class="hljs-string">'/api/posts/:format?'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> format = c.req.param(<span class="hljs-string">'format'</span>)
  <span class="hljs-keyword">if</span> (format === <span class="hljs-string">'json'</span>) {
    <span class="hljs-keyword">return</span> c.json({ message: <span class="hljs-string">'Here are the posts in JSON format.'</span> })
  }
  <span class="hljs-keyword">return</span> c.text(<span class="hljs-string">'Here are the posts in plain text.'</span>)
})
</code></pre>
<h4 id="heading-3-how-to-group-routes-with-approute">3. How to Group Routes with <code>app.route()</code></h4>
<p>For larger applications, you should organize your routes into logical groups. The <code>app.route()</code> method is perfect for this. It allows you to create modular routers and mount them on a specific prefix.</p>
<p>Let's create a more complex API structure for a blog.</p>
<p><code>src/routes/posts.ts</code></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>

<span class="hljs-comment">// Create a new router instance specifically for posts</span>
<span class="hljs-keyword">const</span> posts = <span class="hljs-keyword">new</span> Hono()

posts.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c.json({ posts: [] }))
posts.post(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c.json({ message: <span class="hljs-string">'Post created'</span> }, <span class="hljs-number">201</span>))
posts.get(<span class="hljs-string">'/:id'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c.json({ post: { id: c.req.param(<span class="hljs-string">'id'</span>) } }))

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> posts
</code></pre>
<p><code>src/routes/authors.ts</code></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>

<span class="hljs-keyword">const</span> authors = <span class="hljs-keyword">new</span> Hono()

authors.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c.json({ authors: [] }))
authors.get(<span class="hljs-string">'/:id'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c.json({ author: { id: c.req.param(<span class="hljs-string">'id'</span>) } }))

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> authors
</code></pre>
<p><code>src/index.ts</code></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/node-server'</span>
<span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>
<span class="hljs-keyword">import</span> { appendTrailingSlash } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono/trailing-slash'</span>;
<span class="hljs-keyword">import</span> posts <span class="hljs-keyword">from</span> <span class="hljs-string">'./routes/posts.js'</span>
<span class="hljs-keyword">import</span> authors <span class="hljs-keyword">from</span> <span class="hljs-string">'./routes/authors.js'</span>

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> Hono()

app.use(appendTrailingSlash());

app.route(<span class="hljs-string">'/posts/'</span>, posts)
app.route(<span class="hljs-string">'/authors/'</span>, authors)

app.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-keyword">return</span> c.text(<span class="hljs-string">'Hello Hono!'</span>)
})

serve({
  fetch: app.fetch,
  port: <span class="hljs-number">3000</span>
}, <span class="hljs-function">(<span class="hljs-params">info</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server is running on http://localhost:<span class="hljs-subst">${info.port}</span>`</span>)
})
</code></pre>
<p>This pattern keeps your main <code>index.ts</code> file clean and makes your application much easier to navigate and maintain.</p>
<h2 id="heading-the-context-object-in-depth">The Context Object in Depth</h2>
<p>The <strong>Context</strong> (<code>c</code>) is the heart of Hono. It's an object passed to every middleware and route handler, containing everything related to the current request. It bundles the request itself (<code>c.req</code>), methods for creating a response (<code>c.json</code>, <code>c.html</code>, <code>c.text</code>), and methods for passing data between middleware and handlers (<code>c.set</code> and <code>c.get</code>).</p>
<p>While this covers its most common and useful properties, the full Context object contains more. For a comprehensive list of all available properties and methods, you can refer to the official <a target="_blank" href="https://hono.dev/docs/api/context">Hono documentation</a>.</p>
<p>Let's explore how you can use the context object to pass data between middleware and handlers, a crucial technique for things like authentication.</p>
<p>The <code>c.set()</code> and <code>c.get()</code> methods allow you to store and retrieve typed data within the context of a single request.</p>
<p>Replace <code>src/index.ts</code> with this example for authentication:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/node-server'</span>
<span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>
<span class="hljs-keyword">import</span> <span class="hljs-keyword">type</span> { Context, Next } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>

<span class="hljs-comment">// Define a type for the variables we will store in the context</span>
<span class="hljs-keyword">type</span> AppVariables = {
  user: {
    id: <span class="hljs-built_in">string</span>
    name: <span class="hljs-built_in">string</span>
    roles: <span class="hljs-built_in">string</span>[]
  }
}

<span class="hljs-comment">// Use a generic to tell our Hono app about the variables type</span>
<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> Hono&lt;{ Variables: AppVariables }&gt;()

<span class="hljs-comment">// Middleware to "authenticate" a user from a header</span>
<span class="hljs-keyword">const</span> authMiddleware = <span class="hljs-keyword">async</span> (c: Context, next: Next) =&gt; {
  <span class="hljs-keyword">const</span> userId = c.req.header(<span class="hljs-string">'X-User-ID'</span>)
  <span class="hljs-keyword">if</span> (!userId) {
    <span class="hljs-keyword">return</span> c.json({ error: <span class="hljs-string">'Missing X-User-ID header'</span> }, <span class="hljs-number">401</span>)
  }

  <span class="hljs-comment">// In a real app, you would fetch this from a database</span>
  <span class="hljs-keyword">const</span> user = {
    id: userId,
    name: <span class="hljs-string">'Jane Doe'</span>,
    roles: [<span class="hljs-string">'admin'</span>, <span class="hljs-string">'editor'</span>],
  }

  <span class="hljs-comment">// Use c.set() to attach the user data to the context</span>
  c.set(<span class="hljs-string">'user'</span>, user)

  <span class="hljs-keyword">await</span> next()
}

app.get(<span class="hljs-string">'/admin/dashboard'</span>, authMiddleware, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-comment">// Use c.get() to retrieve the typed user data</span>
  <span class="hljs-keyword">const</span> user = c.get(<span class="hljs-string">'user'</span>)

  <span class="hljs-keyword">if</span> (!user.roles.includes(<span class="hljs-string">'admin'</span>)) {
    <span class="hljs-keyword">return</span> c.json({ error: <span class="hljs-string">'Forbidden'</span> }, <span class="hljs-number">403</span>)
  }

  <span class="hljs-keyword">return</span> c.json({
    message: <span class="hljs-string">`Welcome to the admin dashboard, <span class="hljs-subst">${user.name}</span>!`</span>,
    userId: user.id,
  })
})

<span class="hljs-comment">// Start the Node.js server, as in the earlier examples</span>
serve({ fetch: app.fetch, port: <span class="hljs-number">3000</span> }, <span class="hljs-function">(<span class="hljs-params">info</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server is running on http://localhost:<span class="hljs-subst">${info.port}</span>`</span>)
})
</code></pre>
<p>Let's break down the important parts of the code above.</p>
<ul>
<li><p><strong>Typed context variables</strong>: We define a TypeScript type <code>AppVariables</code> and pass it as a generic to our Hono app <code>new Hono&lt;{ Variables: AppVariables }&gt;()</code>. This is a powerful feature that gives us full type-safety for our context variables, preventing typos and ensuring that the data we store and retrieve is exactly what we expect it to be.</p>
</li>
<li><p><strong>Custom middleware</strong>: The <code>authMiddleware</code> is a custom function that runs before our route handler. It inspects the incoming request's headers (<code>c.req.header('X-User-ID')</code>).</p>
</li>
<li><p><strong>Storing data</strong>: If a valid header is found, the middleware uses <code>c.set('user', user)</code> to store the user object on the context. This data is now available to any subsequent middleware or route handler for the same request.</p>
</li>
<li><p><strong>Retrieving data</strong>: The route handler <code>app.get('/admin/dashboard', ...)</code> then uses <code>c.get('user')</code> to retrieve the user object. Hono's type system ensures that <code>c.get('user')</code> returns a variable with the type <code>{ id: string; name: string; roles: string[]; }</code>.</p>
</li>
<li><p><strong>Flow control</strong>: If the user is missing or doesn't have the "admin" role, the middleware or handler can immediately send an error response using <code>c.json()</code> and a status code, preventing the request from proceeding further.</p>
</li>
</ul>
<p>Now, run <code>npm run dev</code>.</p>
<p>You can test it with <code>curl</code> by sending the header:</p>
<pre><code class="lang-bash">curl -H <span class="hljs-string">"X-User-ID: 123"</span> http://localhost:3000/admin/dashboard
</code></pre>
<p>This will return a welcome message.</p>
<p>Without the header:</p>
<pre><code class="lang-bash">curl http://localhost:3000/admin/dashboard
</code></pre>
<p>This will return a <code>401</code> error.</p>
<p>This demonstrates how to pass typed data securely and efficiently between middleware and route handlers.</p>
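<p>The typed-variables pattern is essentially a type-safe, per-request key/value store. Here's a standalone sketch of the idea in plain TypeScript. This illustrates the typing technique only; it is not Hono's actual implementation:</p>

```typescript
// Standalone illustration of Hono's typed-variables idea: a get/set store
// whose keys and value types are pinned by a Variables type parameter.
type Variables = {
  user: { id: string; name: string; roles: string[] }
}

class MiniContext<V extends Record<string, unknown>> {
  private store = new Map<keyof V, V[keyof V]>()

  set<K extends keyof V>(key: K, value: V[K]): void {
    this.store.set(key, value)
  }

  get<K extends keyof V>(key: K): V[K] {
    // A real implementation would handle missing keys; omitted for brevity.
    return this.store.get(key) as V[K]
  }
}

const c = new MiniContext<Variables>()
c.set('user', { id: '123', name: 'Jane Doe', roles: ['admin'] })
const user = c.get('user') // typed as { id: string; name: string; roles: string[] }
```

Because the key <code>'user'</code> and its value type are declared up front, a typo like <code>c.get('usr')</code> fails at compile time rather than at runtime, which is exactly what the <code>Variables</code> generic buys you in Hono.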
<h2 id="heading-how-to-use-advanced-features-for-production-apps">How to Use Advanced Features for Production Apps</h2>
<p>Now we're ready to tackle the features you'll use every day in production: advanced middleware, data validation, and building full-stack applications.</p>
<h3 id="heading-how-to-use-advanced-middleware-patterns">How to Use Advanced Middleware Patterns</h3>
<p>Hono ships with a powerful set of built-in middleware, including JWT authentication. These are not separate libraries you have to install, but functions that come with the Hono package itself. For caching, we'll write a small custom middleware instead, because Hono's built-in cache middleware relies on the Web Standards Cache API, which isn't available in Node.js.</p>
<p><strong>Step 1:</strong> Replace <code>src/index.ts</code> with this example for JWT and caching:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>
<span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/node-server'</span>
<span class="hljs-keyword">import</span> { jwt, sign } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono/jwt'</span>

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> Hono()
<span class="hljs-keyword">const</span> SECRET = <span class="hljs-string">'my-secret-key'</span> <span class="hljs-comment">// Use an environment variable in production!</span>

<span class="hljs-comment">// Create a simple in-memory cache store</span>
<span class="hljs-keyword">const</span> cacheStore = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span>();

<span class="hljs-comment">// Custom caching middleware for Node.js</span>
app.use(<span class="hljs-string">'/api/public-data'</span>, <span class="hljs-keyword">async</span> (c, next) =&gt; {
  <span class="hljs-keyword">const</span> cacheKey = c.req.url;

  <span class="hljs-comment">// Check if the response is in our cache</span>
  <span class="hljs-keyword">if</span> (cacheStore.has(cacheKey)) {
    <span class="hljs-keyword">const</span> cachedItem = cacheStore.get(cacheKey);
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Serving from custom in-memory cache.'</span>);
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(cachedItem.body, { headers: cachedItem.headers });
  }

  <span class="hljs-comment">// If not in cache, proceed to the route handler</span>
  <span class="hljs-keyword">await</span> next();

  <span class="hljs-comment">// After the handler returns, clone and store the response</span>
  <span class="hljs-keyword">if</span> (c.res) {
    <span class="hljs-keyword">const</span> newResponse = c.res.clone();
    <span class="hljs-keyword">const</span> body = <span class="hljs-keyword">await</span> newResponse.text();
    <span class="hljs-keyword">const</span> headers = <span class="hljs-built_in">Object</span>.fromEntries(newResponse.headers.entries());
    cacheStore.set(cacheKey, { body, headers });
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Storing response in custom in-memory cache.'</span>);
  }
});

<span class="hljs-comment">// Login to get a JWT</span>
app.post(<span class="hljs-string">'/login'</span>, <span class="hljs-keyword">async</span> (c) =&gt; {
  <span class="hljs-keyword">const</span> { username } = <span class="hljs-keyword">await</span> c.req.json()
  <span class="hljs-keyword">if</span> (username === <span class="hljs-string">'admin'</span>) {
    <span class="hljs-keyword">const</span> payload = {
      sub: username,
      role: <span class="hljs-string">'admin'</span>,
      exp: <span class="hljs-built_in">Math</span>.floor(<span class="hljs-built_in">Date</span>.now() / <span class="hljs-number">1000</span>) + <span class="hljs-number">60</span> * <span class="hljs-number">5</span>, <span class="hljs-comment">// 5 minutes expiration</span>
    }
    <span class="hljs-keyword">const</span> token = <span class="hljs-keyword">await</span> sign(payload, SECRET)
    <span class="hljs-keyword">return</span> c.json({ token })
  }
  <span class="hljs-keyword">return</span> c.json({ error: <span class="hljs-string">'Invalid credentials'</span> }, <span class="hljs-number">401</span>)
})

<span class="hljs-comment">// Protected route</span>
app.get(
  <span class="hljs-string">'/api/protected'</span>,
  jwt({ secret: SECRET }),
  <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> payload = c.get(<span class="hljs-string">'jwtPayload'</span>)
    <span class="hljs-keyword">return</span> c.json({ message: <span class="hljs-string">'You have access!'</span>, payload })
  }
)

<span class="hljs-comment">// Cached route</span>
app.get(
  <span class="hljs-string">'/api/public-data'</span>,
  <span class="hljs-keyword">async</span> (c) =&gt; {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Executing handler with delay...'</span>);
    <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">resolve</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(resolve, <span class="hljs-number">1000</span>)) <span class="hljs-comment">// Simulate a delay</span>
    <span class="hljs-keyword">return</span> c.json({ data: <span class="hljs-string">'This is some public data that rarely changes.'</span> })
  }
)

serve({ fetch: app.fetch, port: <span class="hljs-number">3000</span> }, <span class="hljs-function">(<span class="hljs-params">info</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server is running on http://localhost:<span class="hljs-subst">${info.port}</span>`</span>)
})
</code></pre>
<p>The code above shows two different types of middleware in action.</p>
<p>First, <strong>JWT middleware</strong> (<code>jwt</code>) is a powerful way to secure your routes. When we call <code>jwt({ secret: SECRET })</code>, we're telling Hono to check for a valid JWT in the <code>Authorization</code> header of the incoming request. If a valid token is found, it decodes the payload and attaches it to the context, where we can retrieve it with <code>c.get('jwtPayload')</code>. If no token is found or if the token is invalid, the middleware automatically stops the request and returns a <code>401 Unauthorized</code> error.</p>
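<p>To make the middleware's job concrete, here's a minimal sketch of what decoding a JWT payload and checking its <code>exp</code> claim looks like in plain Node.js. It deliberately skips signature verification, which the real <code>hono/jwt</code> middleware performs before trusting the payload; never trust a decoded payload on its own:</p>

```typescript
// Illustration only: a JWT is three base64url segments, 'header.payload.signature'.
// This decodes the payload and checks expiry; it does NOT verify the signature,
// which the real `hono/jwt` middleware does before trusting the payload.
type JwtPayload = { sub?: string; role?: string; exp?: number }

function decodePayload(token: string): JwtPayload {
  const payloadB64 = token.split('.')[1]
  return JSON.parse(Buffer.from(payloadB64, 'base64url').toString('utf8'))
}

function isExpired(payload: JwtPayload, nowSeconds = Math.floor(Date.now() / 1000)): boolean {
  return payload.exp !== undefined && payload.exp < nowSeconds
}
```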
<p>We also have a <strong>custom cache middleware</strong>, which shows how Hono's middleware system can implement in-memory caching. The middleware first checks an in-memory <code>Map</code> to see whether a response for the current URL already exists. If it does, it immediately returns the cached response, so the route handler never runs. If not, it lets the request continue to the handler, then intercepts the response, stores a copy in the cache, and sends it back to the client. This is a robust and reliable pattern for Node.js environments.</p>
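<p>One caveat with a plain <code>Map</code> cache is that entries live forever and the store grows without bound. A common refinement is to stamp each entry with a timestamp and evict stale ones on read. Here's a minimal sketch of that idea; the 60-second TTL is an arbitrary choice for illustration:</p>

```typescript
// Sketch: an in-memory cache like the one above, extended with a time-to-live
// (TTL) so stale entries are evicted on read. The 60s TTL is an arbitrary example.
type CacheEntry = { body: string; headers: Record<string, string>; storedAt: number }

const TTL_MS = 60_000
const cacheStore = new Map<string, CacheEntry>()

function putInCache(key: string, body: string, headers: Record<string, string>): void {
  cacheStore.set(key, { body, headers, storedAt: Date.now() })
}

function getFromCache(key: string): CacheEntry | undefined {
  const entry = cacheStore.get(key)
  if (!entry) return undefined
  if (Date.now() - entry.storedAt > TTL_MS) {
    cacheStore.delete(key) // too old: evict and treat as a cache miss
    return undefined
  }
  return entry
}
```

Note that an in-process <code>Map</code> is not shared between server instances, so for multi-process or multi-server deployments a shared store such as Redis is the usual next step.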
<p><strong>Step 2:</strong> Run <code>npm run dev</code>.</p>
<p><strong>Step 3:</strong> Test the login endpoint with <code>curl</code>:</p>
<p>First, let's test the login endpoint to get a JWT. Open a new terminal and run the following command. The command sends a <code>POST</code> request to the <code>/login</code> endpoint with <code>username: "admin"</code> in the request body.</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3000/login -H <span class="hljs-string">"Content-Type: application/json"</span> -d <span class="hljs-string">'{"username": "admin"}'</span>
</code></pre>
<p>This will return a JSON object with a JWT. Copy this token for the next step.</p>
<p>Now, let's test the protected route. We'll use the token we just received in the <code>Authorization</code> header. Replace <code>&lt;your_jwt_token&gt;</code> with the token you copied.</p>
<pre><code class="lang-bash">curl http://localhost:3000/api/protected -H <span class="hljs-string">"Authorization: Bearer &lt;your_jwt_token&gt;"</span>
</code></pre>
<p>You should get a success message with the decoded payload.</p>
<p>Finally, let's test the cached route. Note that <code>npm run dev</code> restarts the server (and clears the in-memory cache) whenever a file changes, so for a stable test, build the project and run the output with <code>node</code>.</p>
<p>First, run the following command. The <code>1000</code> millisecond delay in the code will make this request take about a second.</p>
<pre><code class="lang-bash">curl -o /dev/null -s -w <span class="hljs-string">'Total: %{time_total}s\n'</span> http://localhost:3000/api/public-data
</code></pre>
<p>Immediately run the <strong>exact same command again</strong>. This time, the response will be almost instantaneous because our custom cache middleware served the response directly from its in-memory store, completely bypassing the <code>setTimeout</code> in the route handler. Run it a third time, and you'll see a similar near-instantaneous response.</p>
<p>Here's an example of what your terminal output should look like when testing the cache. The first request took around 1 second, but subsequent requests were a matter of milliseconds.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757125753352/180cc4a7-f361-4966-a26f-a5d8251f77a4.png" alt="180cc4a7-f361-4966-a26f-a5d8251f77a4" class="image--center mx-auto" width="890" height="202" loading="lazy"></p>
<h3 id="heading-how-to-create-a-global-error-handler">How to Create a Global Error Handler</h3>
<p>You can define a single global error handler with <code>app.onError()</code>. It catches any error thrown from your middleware or route handlers, so you can format error responses in one central place instead of wrapping every handler in <code>try</code>/<code>catch</code>.</p>
<p>Add the following code to your <code>src/index.ts</code>:</p>
<pre><code class="lang-typescript">app.get(<span class="hljs-string">'/users/:id'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> id = c.req.param(<span class="hljs-string">'id'</span>)
  <span class="hljs-keyword">if</span> (<span class="hljs-built_in">isNaN</span>(<span class="hljs-built_in">Number</span>(id))) {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'User ID must be a number.'</span>)
  }
  <span class="hljs-keyword">return</span> c.text(<span class="hljs-string">`User ID is <span class="hljs-subst">${id}</span>`</span>)
})

app.onError(<span class="hljs-function">(<span class="hljs-params">err, c</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.error(<span class="hljs-string">`<span class="hljs-subst">${err}</span>`</span>)
  <span class="hljs-keyword">return</span> c.json({
    success: <span class="hljs-literal">false</span>,
    message: err.message,
  }, <span class="hljs-number">500</span>)
})
</code></pre>
<p>Now, if you visit <a target="_blank" href="http://localhost:3000/users/abc"><code>http://localhost:3000/users/abc</code></a>, you will get a JSON error response instead of an uncaught exception.</p>
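<p>A common refinement is to distinguish expected errors, which carry their own status code, from truly unexpected ones inside <code>onError</code>. The <code>ApiError</code> class below is hypothetical, not part of Hono (Hono ships its own <code>HTTPException</code> class in <code>hono/http-exception</code> for the same purpose):</p>

```typescript
// Hypothetical ApiError that carries its own HTTP status. In app.onError you can
// check for it and fall back to 500 for anything unexpected.
class ApiError extends Error {
  constructor(message: string, public readonly status: number) {
    super(message)
  }
}

function toErrorResponse(err: unknown): { status: number; body: { success: false; message: string } } {
  if (err instanceof ApiError) {
    return { status: err.status, body: { success: false, message: err.message } }
  }
  // Unexpected errors: log internally, return a generic 500 to the client.
  const message = err instanceof Error ? err.message : 'Internal Server Error'
  return { status: 500, body: { success: false, message } }
}
```

Inside <code>app.onError((err, c) =&gt; ...)</code> you would then call <code>toErrorResponse(err)</code> and pass its status to <code>c.json()</code>, so a bad user ID can produce a <code>400</code> instead of a blanket <code>500</code>.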
<h3 id="heading-how-to-handle-validation-with-zod">How to Handle Validation with Zod</h3>
<p>For robust APIs, data validation is essential. Hono integrates seamlessly with <a target="_blank" href="https://zod.dev/">Zod</a>, a popular TypeScript-first schema validation library.</p>
<p><strong>Step 1:</strong> Install the necessary dependencies:</p>
<pre><code class="lang-bash">npm install zod @hono/zod-validator
</code></pre>
<p><strong>Step 2:</strong> Replace <code>src/index.ts</code> with the validation example:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>
<span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/node-server'</span>
<span class="hljs-keyword">import</span> { z } <span class="hljs-keyword">from</span> <span class="hljs-string">'zod'</span>
<span class="hljs-keyword">import</span> { zValidator } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/zod-validator'</span>

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> Hono()

<span class="hljs-comment">// Define a Zod schema for the user creation data</span>
<span class="hljs-keyword">const</span> createUserSchema = z.object({
  username: z.string().min(<span class="hljs-number">3</span>).max(<span class="hljs-number">20</span>),
  email: z.string().email(),
  age: z.number().int().positive(),
  tags: z.array(z.string()).optional(),
})

app.post(
  <span class="hljs-string">'/users'</span>,
  zValidator(<span class="hljs-string">'json'</span>, createUserSchema), <span class="hljs-comment">// Use zValidator middleware</span>
  (c) =&gt; {
    <span class="hljs-comment">// The validated data is available on c.req.valid()</span>
    <span class="hljs-keyword">const</span> user = c.req.valid(<span class="hljs-string">'json'</span>)
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Creating user: <span class="hljs-subst">${user.username}</span> with email <span class="hljs-subst">${user.email}</span>`</span>)
    <span class="hljs-keyword">return</span> c.json({
      success: <span class="hljs-literal">true</span>,
      message: <span class="hljs-string">'User created successfully!'</span>,
      user: user,
    }, <span class="hljs-number">201</span>)
  }
)

serve({ fetch: app.fetch, port: <span class="hljs-number">3000</span> }, <span class="hljs-function">(<span class="hljs-params">info</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server is running on http://localhost:<span class="hljs-subst">${info.port}</span>`</span>)
})
</code></pre>
<p>Here's how the Zod validation works:</p>
<ol>
<li><p>We first define a schema called <code>createUserSchema</code> using <code>z.object()</code>. This schema is a blueprint for the expected data structure. We use Zod's built-in methods like <code>z.string().min(3)</code>, <code>z.string().email()</code>, and <code>z.number().int().positive()</code> to specify validation rules for each property. For example, <code>username</code> must be a string between 3 and 20 characters, <code>email</code> must be a valid email format, and <code>age</code> must be a positive integer.</p>
</li>
<li><p>We then apply the <a target="_blank" href="https://github.com/honojs/middleware/tree/main/packages/zod-validator"><code>zValidator</code></a> middleware to our route handler. The first argument, <code>'json'</code>, tells the middleware to validate the incoming request's JSON body. The second argument, <code>createUserSchema</code>, tells it which schema to use for the validation.</p>
</li>
<li><p>The <code>zValidator</code> middleware automatically does the heavy lifting. When a request hits the <code>/users</code> endpoint, it will parse the JSON body and attempt to validate it against <code>createUserSchema</code>. If the data is invalid (for example, the <code>email</code> is not in a valid format), the middleware will immediately stop the request and return a <code>400 Bad Request</code> status with a detailed error message, all without us having to write any manual checks.</p>
</li>
<li><p>If the data is valid, the middleware makes it available on the <code>Context</code> object, which we can access with <code>c.req.valid('json')</code>. Hono's type system ensures that this data is correctly typed according to the Zod schema, so we can use it safely in our handler.</p>
</li>
</ol>
<p><strong>Step 3:</strong> Run <code>npm run dev</code>.</p>
<p><strong>Step 4:</strong> Test with <code>curl</code> (valid data):</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3000/users -H <span class="hljs-string">"Content-Type: application/json"</span> -d <span class="hljs-string">'{"username": "testuser", "email": "test@example.com", "age": 25}'</span>
</code></pre>
<p>This will return a success message.</p>
<p>Test with invalid data (for example, bad email):</p>
<pre><code class="lang-bash">curl -X POST http://localhost:3000/users -H <span class="hljs-string">"Content-Type: application/json"</span> -d <span class="hljs-string">'{"username": "testuser", "email": "invalid-email", "age": 25}'</span>
</code></pre>
<p>This will automatically return a <code>400</code> status with a detailed error message from Zod.</p>
<h3 id="heading-how-to-build-a-full-stack-app-with-jsx">How to Build a Full-Stack App with JSX</h3>
<p>Hono supports server-side rendering with JSX, allowing you to build full-stack applications without a separate frontend framework. In this example, we'll render the markup with the <code>html</code> tagged template helper from <code>hono/html</code>.</p>
<p><strong>Step 1:</strong> Create <code>src/components/Layout.tsx</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { html } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono/html'</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> Layout = <span class="hljs-function">(<span class="hljs-params">props: { title: <span class="hljs-built_in">string</span>; children?: <span class="hljs-built_in">any</span> }</span>) =&gt;</span> html`<span class="xml">
  <span class="hljs-meta">&lt;!DOCTYPE <span class="hljs-meta-keyword">html</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">html</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">head</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span></span><span class="hljs-subst">${props.title}</span><span class="xml"><span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">style</span>&gt;</span><span class="css">
        <span class="hljs-selector-tag">body</span> { <span class="hljs-attribute">font-family</span>: sans-serif; <span class="hljs-attribute">background</span>: <span class="hljs-number">#f4f4f4</span>; <span class="hljs-attribute">color</span>: <span class="hljs-number">#333</span>; }
        <span class="hljs-selector-class">.container</span> { <span class="hljs-attribute">max-width</span>: <span class="hljs-number">800px</span>; <span class="hljs-attribute">margin</span>: <span class="hljs-number">2rem</span> auto; <span class="hljs-attribute">padding</span>: <span class="hljs-number">1rem</span>; <span class="hljs-attribute">background</span>: white; <span class="hljs-attribute">border-radius</span>: <span class="hljs-number">8px</span>; }
        <span class="hljs-selector-tag">header</span> { <span class="hljs-attribute">border-bottom</span>: <span class="hljs-number">1px</span> solid <span class="hljs-number">#ccc</span>; <span class="hljs-attribute">padding-bottom</span>: <span class="hljs-number">1rem</span>; }
        <span class="hljs-selector-tag">footer</span> { <span class="hljs-attribute">margin-top</span>: <span class="hljs-number">2rem</span>; <span class="hljs-attribute">text-align</span>: center; <span class="hljs-attribute">font-size</span>: <span class="hljs-number">0.8rem</span>; <span class="hljs-attribute">color</span>: <span class="hljs-number">#777</span>; }
      </span><span class="hljs-tag">&lt;/<span class="hljs-name">style</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">head</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"container"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">header</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span></span><span class="hljs-subst">${props.title}</span><span class="xml"><span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">header</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">main</span>&gt;</span>
          </span><span class="hljs-subst">${props.children}</span><span class="xml">
        <span class="hljs-tag">&lt;/<span class="hljs-name">main</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">footer</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>Powered by Hono<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">footer</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">html</span>&gt;</span>
`</span>
</code></pre>
<p><strong>Step 2:</strong> Create <code>src/components/PostItem.tsx</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> PostItem = <span class="hljs-function">(<span class="hljs-params">props: { post: { id: <span class="hljs-built_in">number</span>; title: <span class="hljs-built_in">string</span>; author: <span class="hljs-built_in">string</span> } }</span>) =&gt;</span> (
  &lt;article style=<span class="hljs-string">"border-bottom: 1px solid #eee; padding: 1rem 0;"</span>&gt;
    &lt;h3&gt;&lt;a href={<span class="hljs-string">`/posts/<span class="hljs-subst">${props.post.id}</span>`</span>}&gt;{props.post.title}&lt;<span class="hljs-regexp">/a&gt;&lt;/</span>h3&gt;
    &lt;p&gt;&lt;em&gt;By {props.post.author}&lt;<span class="hljs-regexp">/em&gt;&lt;/</span>p&gt;
  &lt;/article&gt;
)
</code></pre>
<p><strong>Step 3:</strong> Update <code>src/index.tsx</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Hono } <span class="hljs-keyword">from</span> <span class="hljs-string">'hono'</span>
<span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/node-server'</span>
<span class="hljs-keyword">import</span> { Layout } <span class="hljs-keyword">from</span> <span class="hljs-string">'./components/Layout'</span>
<span class="hljs-keyword">import</span> { PostItem } <span class="hljs-keyword">from</span> <span class="hljs-string">'./components/PostItem'</span>

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> Hono()

<span class="hljs-comment">// Mock data</span>
<span class="hljs-keyword">const</span> posts = [
  { id: <span class="hljs-number">1</span>, title: <span class="hljs-string">'Getting Started with Hono'</span>, author: <span class="hljs-string">'Alice'</span> },
  { id: <span class="hljs-number">2</span>, title: <span class="hljs-string">'Advanced Middleware Patterns'</span>, author: <span class="hljs-string">'Bob'</span> },
  { id: <span class="hljs-number">3</span>, title: <span class="hljs-string">'Deploying Hono to the Edge'</span>, author: <span class="hljs-string">'Charlie'</span> },
]

app.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> {
  <span class="hljs-keyword">return</span> c.html(
    &lt;Layout title=<span class="hljs-string">"My Hono Blog"</span>&gt;
      &lt;h2&gt;Recent Posts&lt;/h2&gt;
      {posts.length &gt; <span class="hljs-number">0</span>
        ? posts.map(<span class="hljs-function"><span class="hljs-params">post</span> =&gt;</span> &lt;PostItem post={post} /&gt;)
        : &lt;p&gt;No posts yet!&lt;/p&gt;
      }
    &lt;/Layout&gt;
  )
})

serve({ fetch: app.fetch, port: <span class="hljs-number">3000</span> }, <span class="hljs-function">(<span class="hljs-params">info</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server is running on http://localhost:<span class="hljs-subst">${info.port}</span>`</span>)
})
</code></pre>
<p>Make sure to update the <code>dev</code> script in your <code>package.json</code> file to have <code>src/index.tsx</code> as the starting point.</p>
<pre><code class="lang-json"><span class="hljs-string">"dev"</span>: <span class="hljs-string">"tsx watch src/index.tsx"</span>
</code></pre>
<p><strong>Step 4:</strong> Run <code>npm run dev</code> and visit <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a>. You will see a fully rendered blog page with the list of posts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756952439130/27ee63cc-6d60-4372-9634-4d1eadf33f32.png" alt="Blog page with list of posts" class="image--center mx-auto" width="929" height="742" loading="lazy"></p>
<h2 id="heading-deployment-guide-for-hono">Deployment Guide for Hono</h2>
<p>You have built your application, and now it's time to share it with the world. Here’s how you can deploy your Hono app to some of the most popular platforms.</p>
<h3 id="heading-how-to-deploy-to-nodejs">How to Deploy to Node.js</h3>
<p>For a traditional server environment, you can use the <code>@hono/node-server</code> adapter and a process manager like <code>pm2</code> for production.</p>
<p><code>src/index.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">'@hono/node-server'</span>
<span class="hljs-keyword">import</span> app <span class="hljs-keyword">from</span> <span class="hljs-string">'./app'</span> <span class="hljs-comment">// Assuming your Hono app is in app.ts</span>

serve({ fetch: app.fetch, port: <span class="hljs-number">3000</span> })
</code></pre>
<p>You will then compile your TypeScript to JavaScript and run <code>pm2 start dist/index.js</code> to keep it running in the background.</p>
<h3 id="heading-how-to-deploy-to-cloudflare-workers">How to Deploy to Cloudflare Workers</h3>
<p>Hono's true power lies in its portability. The <code>create hono</code> command can set up a project specifically for Cloudflare Workers.</p>
<p>Run the following command and select the <code>cloudflare-workers</code> template:</p>
<pre><code class="lang-bash">npm create hono@latest my-app-hono-cloudflare-worker

create-hono version 0.19.2
✔ Using target directory … my-app-hono-cloudflare-worker
? Which template <span class="hljs-keyword">do</span> you want to use?
  aws-lambda
  bun
❯ cloudflare-workers
  cloudflare-workers+vite
  deno
  fastly
  lambda-edge
</code></pre>
<p>The setup process is identical to the Node.js example, but the project structure is optimized for Cloudflare.</p>
<p>Once the project is set up, you only need to type one command to deploy your application to Cloudflare:</p>
<pre><code class="lang-bash">wrangler deploy
</code></pre>
<p>This command will prompt you to log in to your Cloudflare account and will handle the entire deployment process automatically.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>You've made it! We’ve covered a lot in this guide. You started with a professional project setup and moved all the way through advanced routing, context management, complex middleware patterns, robust data validation, and full-stack JSX components.</p>
<p>You now have the knowledge and the tools to build serious, production-ready applications with Hono. Its simple API doesn't limit its power. Rather, it enhances it by getting out of your way and letting you focus on building great features. And thanks to its portability, you can be confident that the application you build today can be deployed to the platforms of tomorrow.</p>
<p>The web development ecosystem will continue to evolve, but by building on Web Standards, Hono is a framework that's built to last.</p>
<p>To continue your journey, I highly recommend exploring the official <a target="_blank" href="https://hono.dev/docs/">Hono documentation</a>, which is full of even more examples and guides.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How the Node.js Event Loop Works ]]>
                </title>
                <description>
                    <![CDATA[ The Node.js event loop is a concept that may seem difficult to understand at first. But as with any seemingly complex subject, the best way to understand it is often through an analogy. In this article, you’ll learn how overworked managers, busy wait... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-the-nodejs-event-loop-works/</link>
                <guid isPermaLink="false">68b86cca911797ceb3fd37e0</guid>
                
                    <category>
                        <![CDATA[ Node.js ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Event Loop ]]>
                    </category>
                
                    <category>
                        <![CDATA[ asynchronous ]]>
                    </category>
                
                    <category>
                        <![CDATA[ synchronous ]]>
                    </category>
                
                    <category>
                        <![CDATA[ concurrency ]]>
                    </category>
                
                    <category>
                        <![CDATA[ parallelism ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Amanda Ene Adoyi ]]>
                </dc:creator>
                <pubDate>Wed, 03 Sep 2025 16:28:58 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756916907320/01074df6-0f8e-4a63-9a3e-07c8297fc22b.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>The Node.js event loop is a concept that may seem difficult to understand at first. But as with any seemingly complex subject, the best way to understand it is often through an analogy.</p>
<p>In this article, you’ll learn how overworked managers, busy waiters, and train stations can help bring home the fundamental concept of the event loop. If you’re working with Node, you’ll need to understand how the event loop works, as it lies at the root of some of the most powerful applications today.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-are-synchronous-and-asynchronous-code">What are Synchronous and Asynchronous Code?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-concurrency-and-parallelism-mean">What Concurrency and Parallelism Mean</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-concurrency-in-nodejs">Concurrency in Node.js</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-parallelism-in-nodejs">Parallelism in Node.js</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-the-event-loop">What is the Event Loop?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-the-phases-of-the-event-loop">The Phases of the Event Loop</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>In order to seamlessly follow along with this article, it would help if you are familiar with the following concepts:</p>
<ol>
<li><p><strong>A basic understanding of JavaScript:</strong> Node.js runs on JavaScript, so you’ll need to understand variables, functions, and control flow.</p>
</li>
<li><p><strong>Familiarity with Node.js basics:</strong> Running simple scripts with Node and requiring modules.</p>
</li>
<li><p><strong>Some exposure to asynchronous patterns:</strong> Knowing what patterns such as <code>setTimeout()</code> do.</p>
</li>
<li><p><strong>Some familiarity with basic CPU concepts (cores and threads):</strong> This will help you better understand concurrency and parallelism.</p>
</li>
<li><p><strong>Awareness of promises and async/await:</strong> This is optional and not a strict requirement, but will be helpful.</p>
</li>
</ol>
<h2 id="heading-what-are-synchronous-and-asynchronous-code">What are Synchronous and Asynchronous Code?</h2>
<p>When writing code for Node.js applications, there are two different ways it can run: synchronous (sync) and asynchronous (async). Synchronous code is referred to as <em>blocking</em> because when it runs, no other code runs until execution is complete.</p>
<p>An analogy for this is a busy restaurant. Picture a waiter who refuses to wait on other tables until the table they’re presently serving has received their orders and has started eating. While the food is being prepared, the waiter waits around doing nothing and only approaches your table to take your order when they are completely finished with the previous table. Needless to say, the waiter may not receive a great tip for that service.</p>
<p>This is what synchronous code is. It halts the execution of other processes until it’s complete. You can see how it works in the example below:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> syncWaiter = <span class="hljs-function">(<span class="hljs-params">name</span>) =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`<span class="hljs-subst">${name}</span> attends to tables pretty slowly.`</span>);
};

syncWaiter(<span class="hljs-string">"Devin"</span>);
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"At least all the orders are correct!"</span>);
</code></pre>
<p>The code above will be run in sequence, in the order it appears.</p>
<p>Asynchronous code, unlike synchronous code, doesn’t halt all other processes until one task is executed – rather, it proceeds to carry out other tasks while a longer process runs in the background.</p>
<p>Using our waiter analogy, in this case the waiter in the restaurant would go take an order from one table, pass the order to the kitchen, and while it’s being prepared, proceeds to your table to take your order as well. This way, the waiter is able to ensure different processes are started even if one process takes a bit longer than the rest. Check out the example below:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> asyncWaiter = <span class="hljs-function">(<span class="hljs-params">name</span>) =&gt;</span> {
    <span class="hljs-built_in">setTimeout</span>(<span class="hljs-function">()=&gt;</span> {<span class="hljs-built_in">console</span>.log(<span class="hljs-string">`<span class="hljs-subst">${name}</span> attends to tables pretty quickly.`</span>)}, <span class="hljs-number">3000</span>)
};

asyncWaiter(<span class="hljs-string">"James"</span>);
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Wow! All the tables are attended to in a short time."</span>);
</code></pre>
<p>Unlike synchronous code, this code does run the function <code>asyncWaiter()</code> – but the callback inside the function executes later. When the duration elapses, the result is then shown on the screen. This is why asynchronous programs are referred to as <em>non-blocking.</em> They don’t halt the program, but move from one available task to another.</p>
<p>The code above returns the following:</p>
<pre><code class="lang-bash">Wow! All the tables are attended to <span class="hljs-keyword">in</span> a short time.
James attends to tables pretty quickly.
</code></pre>
<p>This print order happens because of how the <em>event loop</em> manages tasks: the synchronous <code>console.log()</code> that comes after <code>asyncWaiter()</code> runs immediately, while the asynchronous callback inside <code>asyncWaiter()</code> (from <code>setTimeout</code>) is scheduled to run later. If you don’t understand this just yet, don’t worry as I’ll break it down in detail shortly.</p>
<h2 id="heading-what-concurrency-and-parallelism-mean">What Concurrency and Parallelism Mean</h2>
<p>Node.js is single-threaded but often gives the appearance of a multi-threaded environment due to how it handles concurrency and parallelism. A thread is a single sequence of instructions executed by the CPU independently. Think of it like a single waiter named James in a restaurant.</p>
<p>If James handles multiple tasks around the same time and quickly, an onlooker outside the restaurant who sees the number of customers moving in and out of the restaurant may assume that there are a ton of waiters serving tables. In reality, James just handles his tasks asynchronously.</p>
<p>Before grasping the concept of the event loop, it’s good to understand what concurrency and parallelism are, as they help explain this.</p>
<h3 id="heading-concurrency-in-nodejs">Concurrency in Node.js</h3>
<p>Concurrency means having multiple processes run around the same time. In the waiter analogy, it is like James carrying out different tasks, though not simultaneously. He could, for instance, take an order from a table and, while waiting for the food to arrive, request that extra salt be provided to another table. While the salt is on its way, he uses the waiting time to read the bill to a third table.</p>
<p>The key idea is that James never sits idle — he works on other tasks while waiting for one to finish. If this sounds an awful lot like asynchronous programming, it is because asynchronous code is just one way to achieve concurrency.</p>
<p>Other ways to execute concurrency are <a target="_blank" href="https://www.freecodecamp.org/news/multithreading-for-beginners/">multithreading</a> on a single CPU core and <a target="_blank" href="https://www.freecodecamp.org/news/how-to-handle-concurrency-in-go/">coroutines</a> which are just functions that pause their execution to resume at a later time.</p>
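<p>As a rough sketch of that interleaving (the table names and prep times below are purely illustrative), a single-threaded "waiter" can take every order first and then serve each table as its food is ready:</p>

```javascript
// A single thread interleaving tasks: all orders are taken immediately,
// and each table is served when its (simulated) prep time elapses.
const events = [];

function serveTable(table, prepTimeMs) {
  events.push(`took order from ${table}`);
  setTimeout(() => events.push(`served ${table}`), prepTimeMs);
}

serveTable('table 1', 30); // longest prep, served last
serveTable('table 2', 10);
serveTable('table 3', 20);

// After every prep time has elapsed, print what happened, in order.
setTimeout(() => console.log(events.join('\n')), 60);
```

<p>Notice that all three "took order" events happen up front, while the "served" events arrive in prep-time order – the single thread never sits idle waiting for one table's food.</p>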
<h3 id="heading-parallelism-in-nodejs">Parallelism in Node.js</h3>
<p>Parallelism, on the other hand, also means having several tasks run at the same time – but instead of the tasks just being processed around the same time, they are executed at exactly the same time, simultaneously. In this case, the restaurant manager decides to hire multiple waiters and each table has a waiter who is taking orders at exactly the same time.</p>
<p>Parallelism can be achieved with multithreading across multiple CPU cores, where the threads share the same memory and run simultaneously, or with clusters, which run as independent processes, each with its own memory space. Here’s a clear example of parallelism using the <code>worker_threads</code> module:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> { Worker } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'worker_threads'</span>);

<span class="hljs-keyword">new</span> Worker(<span class="hljs-string">'./worker.js'</span>);
<span class="hljs-keyword">new</span> Worker(<span class="hljs-string">'./worker.js'</span>);
<span class="hljs-keyword">new</span> Worker(<span class="hljs-string">'./worker.js'</span>);

<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Main thread keeps running in the process..."</span>);
</code></pre>
<p>The code above creates three worker threads that can run in parallel on a multi-core machine. This doesn’t stop the main thread, which continues to run, allowing each worker thread to do its task independently. <code>worker.js</code> could be a simple file carrying out any task. In this case, it simply logs a message to the screen:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-string">"This worker thread is running here!"</span>);
</code></pre>
<p>Note that the argument for the <code>Worker</code> constructor can be any file path, and the order in which they are executed isn’t dependent on the order they appear in code. Each worker runs independently of the others and they run in parallel.</p>
<p>Concurrency and parallelism allow Node.js (which is single-threaded) to appear to manage multiple tasks simultaneously. Understanding these concepts sets the stage for the event loop, showing how Node.js manages to give the appearance of concurrency while still executing code in a single-threaded environment.</p>
<h2 id="heading-what-is-the-event-loop">What is the Event Loop?</h2>
<p>The event loop listens for events in the Node.js environment. It essentially listens for actions and then processes tasks or outputs values.</p>
<p>To better understand how this works, you can picture the Node.js environment as a fast-paced organization and the event loop as an overworked manager who refuses to hire a personal assistant. The manager oversees the operations of the entire office, and has a dedicated desk that contains whatever they are working on at that particular time. Let’s call this desk <em>the call stack</em>.</p>
<p>The call stack consists of whatever processes or tasks Node.js is currently working on. When input is entered or code is written to do something, it gets moved to the call stack and from there gets executed.</p>
<p>The order in which this execution takes place is important, as synchronous code makes it to the call stack before asynchronous code. What happens to the asynchronous code, you may ask? It goes into something known as the callback queue before ending up on the call stack.</p>
<p>The callback queue is a lineup of asynchronous tasks that make it to the call stack only when the stack is empty. You can think of it like a file cabinet in the office, where asynchronous code that has been processed by a specialized team of workers under the manager goes to stay until the manager’s desk is cleared. The manager only heads to the cabinet when they’re done handling all the synchronous tasks on the call stack. The specialized team that handles asynchronous code like callbacks and async/await is the Node APIs or the Web APIs.</p>
<p>Node or Web APIs process asynchronous code. When the code comes in, it’s processed here and then placed in the callback queue for the event loop to pick up and take to the call stack. But there are some asynchronous tasks that are prioritized. These are known as microtasks, such as <a target="_blank" href="https://www.freecodecamp.org/news/guide-to-javascript-promises/">promises</a>.</p>
<p>Microtasks are given particular priority and are queued in a special microtask queue, which is usually checked after an operation completes, before the callback queue. If a microtask such as a <code>process.nextTick()</code> callback exists, it gets handled immediately; otherwise, the event loop checks the callback queue. Macrotasks, such as <code>setTimeout()</code> and <code>setInterval()</code>, are regularly scheduled tasks that the event loop handles only after the microtasks are processed.</p>
<p>So as you can see, the event loop is basically what it sounds like – a loop. It looks through events and handles tasks based on a prioritized schedule.</p>
<p>One thing to note, though, is that even within callback queues and microtask queues, there are phases. The event loop, for instance, must handle certain tasks before others even within the same category. This is where the phases of the event loop come in.</p>
<h2 id="heading-the-phases-of-the-event-loop">The Phases of the Event Loop</h2>
<p>By analogy, the event loop is akin to a manager who checks the status of projects and tasks at regular intervals. In this case, they have a specific schedule for checking the status of projects. Some projects or tasks take priority over others, and the manager has to look through them in a set order.</p>
<p>You can also visualize event loop phases as a train moving from station to station. It starts from one location and moves to others in a particular order until it’s complete, then starts the journey again. This arrangement determines what tasks get executed before others.</p>
<p>Here are the phases of the event loop in order:</p>
<ol>
<li><p>The timers phase: This phase executes <code>setTimeout()</code> and <code>setInterval()</code> callbacks once their delays have elapsed. The event loop starts here, like the first station on a train’s journey.</p>
</li>
<li><p>The pending callbacks phase: These are system-level callbacks, checked after the timers phase operations.</p>
</li>
<li><p>The poll phase: This phase handles input/output (I/O) events and executes the callbacks. In the absence of callbacks, the event loop waits for new ones here.</p>
</li>
<li><p>The check phase: This phase executes <code>setImmediate()</code> callbacks.</p>
</li>
<li><p>Close callbacks: This phase is concerned with executing close events like socket closes.</p>
</li>
</ol>
<p>These callback events are checked in order and run accordingly. So if <code>setTimeout()</code> and <code>setImmediate()</code> appear in the same code, <code>setTimeout()</code> generally runs first – unless the “train” is already past the timers phase (say, in the poll phase of the loop), in which case <code>setImmediate()</code> runs before <code>setTimeout()</code>.</p>
<p>You can see this illustrated with the example below:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">'fs'</span>);

fs.readFile(<span class="hljs-string">'trainMap.txt'</span>, <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">setTimeout</span>(<span class="hljs-function">() =&gt;</span> {
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Train takes off"</span>);
    }, <span class="hljs-number">0</span>);
    setImmediate(<span class="hljs-function">() =&gt;</span> {
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Oops! Immediate halt! There's a cat on the tracks!"</span>);
    })
});
</code></pre>
<p>You can see in the code above that the callbacks are handled asynchronously. Recall that the event loop waits for new callbacks in the poll phase. Since the callback passed to <code>fs.readFile()</code> is an I/O callback, it gets processed in the poll phase.</p>
<p><code>setTimeout()</code> is set to run in the timers phase but the event loop proceeds to the check phase (which comes next) where <code>setImmediate()</code> is executed. This is why <code>setImmediate()</code> runs before <code>setTimeout()</code> in this case. The event loop then continues from the check phase to the close phase, back to the timers phase, repeating this cycle continuously.</p>
<p>This explains why you see the output below printed to the screen:</p>
<pre><code class="lang-bash">Oops! Immediate halt! There<span class="hljs-string">'s a cat on the tracks!
Train takes off</span>
</code></pre>
<p>This illustrates how the event loop enforces order of execution across the different phases, ensuring that asynchronous operations run in the correct sequence.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The Node.js event loop can sometimes appear mysterious, but it really isn’t as complex as it first seems. At its core, it really is just the engine that ensures JavaScript can handle multiple tasks without freezing.</p>
<p>In this article, you’ve learnt about synchronous and asynchronous code, concurrency, parallelism, and how these concepts help explain the event loop and its phases. Understanding how they work gives you the confidence to write asynchronous code without fear, debug more efficiently, and appreciate the power behind Node.js’s ability to handle concurrent tasks.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
