<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Ajay Yadav - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Ajay Yadav - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Fri, 15 May 2026 22:29:21 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/author/ATechAjay/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Test a Complex Full-Stack App: Manual Approach vs AI-Assisted Testing ]]>
                </title>
                <description>
                    <![CDATA[ A few days ago, I ran an experiment with an AI-powered testing agent that lets you write test cases in plain English instead of code. I opened its natural language interface and typed four simple sent ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-test-a-complex-full-stack-app-manual-vs-ai-assisted-testing/</link>
                <guid isPermaLink="false">69b843852ad6ae5184d6fa75</guid>
                
                    <category>
                        <![CDATA[ AI ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Web Development ]]>
                    </category>
                
                    <category>
                        <![CDATA[ full stack ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Ajay Yadav ]]>
                </dc:creator>
                <pubDate>Mon, 16 Mar 2026 17:53:09 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/3970744b-194e-4573-b49a-c057a4632d8c.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>A few days ago, I ran an experiment with an AI-powered testing agent that lets you write test cases in plain English instead of code. I opened its natural language interface and typed four simple sentences to test google.com:</p>
<pre><code class="language-plaintext">1. Go to google.com
2. There should be a long input field on the page
3. Type something and verify suggestions appear in a dropdown
4. The input field should not have any placeholder text
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6198d3da5bb9cc256fc69512/24f353d9-8c98-49a9-ba81-3e236546dab2.png" alt="KaneAI's natural language test authoring interface showing a text input field with the prompt &quot;What do you want to test today?&quot;" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>A real browser opened Google, found the search bar, typed a query, checked for the autocomplete dropdown, and verified there was no placeholder, all from those four lines.</p>
<p>No Playwright selectors. No <code>page.getByRole()</code>. No CSS class names. Just plain English describing what a user would do.</p>
<p>That made me curious: what happens if I try this on something actually complex? So I tested my own full-stack app's auth endpoint the same way:</p>
<blockquote>
<p><em><strong>Send a GET request to /api/auth/status without any session cookie. Verify it returns 401.</strong></em></p>
</blockquote>
<p>Within 15 seconds, done.</p>
<p>The same test took me an hour to set up manually, building a session helper, separating my Express app from the server startup, seeding a test database, just so I could write five lines of Supertest code.</p>
<p>I ended up testing my entire application both ways: the traditional manual approach and the AI-assisted approach. Same endpoints, same assertions, completely different experience. This article is about what I learned.</p>
<p>But before I get into how I tested it, let's talk about what actually matters: the testing concepts themselves. Because no approach, manual or automated, will save you time or energy if you don't understand what you're testing and why.</p>
<h3 id="heading-what-well-cover">What we'll cover:</h3>
<ol>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-how-testing-actually-works-in-full-stack-apps">How Testing Actually Works in Full-Stack Apps</a></p>
</li>
<li><p><a href="#heading-what-made-this-hard">What Made This Hard</a></p>
</li>
<li><p><a href="#heading-the-manual-approach">The Manual Approach</a></p>
</li>
<li><p><a href="#heading-the-ai-assisted-approach">The AI-Assisted Approach</a></p>
</li>
<li><p><a href="#heading-when-to-use-which-approach">When to Use Which Approach</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To get the most out of this article, you should have a basic understanding of JavaScript and Node.js, along with some familiarity with React and Express.</p>
<p>Experience writing simple tests with any JavaScript testing framework like Jest or Vitest will be helpful, though I'll explain the core testing concepts as we go.</p>
<p>You should also have Node.js installed on your machine. If you want to follow along with the manual testing examples, you'll need Vitest (or Jest) for unit and API tests, Supertest for HTTP endpoint testing, and Playwright for end-to-end browser tests. For the AI-assisted approach, I used KaneAI by LambdaTest, which you can explore through their platform.</p>
<h2 id="heading-how-testing-actually-works-in-full-stack-apps">How Testing Actually Works in Full-Stack Apps</h2>
<p>If you've only tested isolated React components or written a few unit tests for utility functions, full-stack testing feels like a different sport. The concepts are the same, but the complexity jumps dramatically. Here's what you actually need to know.</p>
<h3 id="heading-three-layers-three-different-jobs">Three Layers, Three Different Jobs</h3>
<p>Every full-stack application has three natural testing layers, and trying to cover everything with just one of them leads to either fragile tests or blind spots.</p>
<h4 id="heading-unit-tests">Unit Tests</h4>
<p>Unit tests check that individual functions return the right output for a given input. They don't touch the database, the network, or the browser.</p>
<p>They run in milliseconds. If your function takes a string and returns a formatted slug, a unit test calls that function and checks the result. That's it.</p>
<pre><code class="language-ts">it("converts a title to a slug", () =&gt; {
  expect(slugify("My First Post")).toBe("my-first-post");
});
</code></pre>
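<p>The <code>slugify</code> under test isn't shown here, but a minimal version that satisfies the assertion could look like this (a sketch, not the article's actual implementation):</p>
<pre><code class="language-ts">// A minimal slugify: lowercase, collapse non-alphanumerics into hyphens,
// and strip any leading or trailing hyphens left over.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}
</code></pre>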
<h4 id="heading-api-tests">API Tests</h4>
<p>API tests check that your backend endpoints return the right responses. They send real HTTP requests to your Express (or Next.js) app and verify the status codes, response shapes, and error handling.</p>
<p>If your <code>/api/auth/status</code> endpoint should return 401 without a session cookie, an API test confirms that contract.</p>
<pre><code class="language-ts">it("returns 401 without session cookie", async () =&gt; {
  const res = await request(app).get("/api/auth/status");
  expect(res.status).toBe(401);
});
</code></pre>
<h4 id="heading-end-to-end-e2e-tests">End-to-end (E2E) Tests</h4>
<p>End-to-end (E2E) tests open a real browser and interact with your app the way a user would. They click buttons, fill forms, navigate pages, and check that the right things appear on screen.</p>
<p>If your login flow should redirect to a dashboard after authentication, an E2E test walks through that entire journey.</p>
<pre><code class="language-ts">test("login redirects to dashboard", async ({ page }) =&gt; {
  await page.goto("/");
  await page.getByTestId("username-input").fill("ajay");
  await page.getByTestId("password-input").fill("password123");
  await page.getByTestId("login-button").click();
  await expect(page.getByTestId("dashboard")).toBeVisible();
});
</code></pre>
<h3 id="heading-the-pain-points-nobody-warns-you-about">The Pain Points Nobody Warns You About</h3>
<p>Tutorials make all three layers look straightforward. In practice, each one has a trap.</p>
<p>First, we have the session cookie problem. Most real apps have authentication. To test any authenticated endpoint, you need a valid session.</p>
<p>That means you need a helper function that logs in a test user, extracts the session cookie from the <code>Set-Cookie</code> header, and returns it for future requests.</p>
<p>This sounds simple. It took me an hour to build one that actually works with express-session. Every project reinvents this wheel.</p>
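<p>The fiddly part is parsing the <code>Set-Cookie</code> response header. A sketch of that step, assuming express-session's default <code>connect.sid</code> cookie name:</p>
<pre><code class="language-ts">// Pull the bare name=value session pair out of a Set-Cookie header array,
// dropping attributes like Path and HttpOnly that a request must not echo back.
function extractSessionCookie(setCookieHeader: string[]): string {
  const session = setCookieHeader.find((c) => c.startsWith("connect.sid="));
  if (!session) throw new Error("No session cookie in login response");
  return session.split(";")[0];
}
</code></pre>
<p>A <code>createTestSession()</code> helper then logs in a test user, runs the response's <code>set-cookie</code> array through this, and returns the result for the <code>Cookie</code> request header.</p>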
<p>Then we have the app vs. server separation issue. <a href="https://github.com/forwardemail/supertest#readme">Supertest</a> (the most popular API testing library) needs to import your Express app without starting a real server.</p>
<p>If your <code>app.ts</code> file has <code>app.listen(3000)</code> at the bottom, Supertest will try to bind to port 3000, and your tests will crash when running in parallel.</p>
<p>You have to separate your app definition from the server startup. <code>app.ts</code> exports the Express instance, <code>server.ts</code> calls <code>.listen()</code>. It's a three-minute refactor, but nobody tells you about it until your tests fail.</p>
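<p>The shape of that refactor, sketched here with Node's built-in <code>http</code> module to stay dependency-free (with Express, <code>app.ts</code> would export the <code>express()</code> instance instead):</p>
<pre><code class="language-ts">import { createServer, IncomingMessage, ServerResponse } from "node:http";

// app.ts: define request handling, but never call .listen() here,
// so test runners can import it without binding a port.
export function handler(req: IncomingMessage, res: ServerResponse) {
  res.statusCode = req.headers.cookie ? 200 : 401;
  res.end();
}

// server.ts: the only place that binds a port.
export function startServer(port: number) {
  return createServer(handler).listen(port);
}
</code></pre>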
<p>You also have the SSE and real-time nightmare. If your app uses Server-Sent Events (SSE) or WebSockets, you're testing time-dependent behavior.</p>
<p>You open a connection, trigger an action, and wait for an event to arrive. If the event takes too long, your test times out. If you don't set a timeout, the test hangs forever. You end up writing 30 lines of Promise wrappers, timeout handlers, and cleanup logic for a single assertion.</p>
<p>Finally, there's the selector fragility trap. E2E tests that use CSS selectors (<code>.btn-primary</code>, <code>.card-title</code>) break every time you rename a class.</p>
<p>The fix is using <code>data-testid</code> attributes, stable identifiers that exist solely for testing and don't change during refactors. But retrofitting them into an existing app means touching dozens of components.</p>
<h3 id="heading-schema-validation-the-hidden-time-sink">Schema Validation: The Hidden Time Sink</h3>
<p>Here's something nobody tells you about API testing. Writing the assertion for "does this endpoint return 200" takes one line.</p>
<p>Writing assertions that verify the shape of the response (every field exists, every field has the right type, every enum value is valid) takes 15 to 20 lines per endpoint. Multiply that across a dozen endpoints and you're spending hours writing boilerplate like:</p>
<pre><code class="language-ts">expect(res.body[0]).toHaveProperty("title");
expect(typeof res.body[0].title).toBe("string");
expect(res.body[0]).toHaveProperty("status");
expect(["open", "closed", "merged"]).toContain(res.body[0].status);
</code></pre>
<p>It's important work, though: schema validation catches real bugs when your backend changes a response shape. But the repetitiveness is what makes it a good candidate for automation, which I'll get to later.</p>
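<p>One way to tame it by hand is a tiny shape checker: declare the expected fields once, then assert in a single line per endpoint. A sketch (the field names follow this article's PR example; the helper itself is hypothetical):</p>
<pre><code class="language-ts">type Check = (value: unknown) => boolean;

// Expected shape of one pull-request object, declared once.
const prShape: { [field: string]: Check } = {
  title: (v) => typeof v === "string",
  url: (v) => typeof v === "string",
  status: (v) => ["open", "closed", "merged"].includes(String(v)),
};

// Returns the fields that are missing or fail their check.
function shapeErrors(
  obj: { [field: string]: unknown },
  shape: { [field: string]: Check }
): string[] {
  return Object.keys(shape).filter((key) => !(key in obj) || !shape[key](obj[key]));
}
</code></pre>
<p>Each endpoint test then collapses to <code>expect(shapeErrors(res.body[0], prShape)).toEqual([])</code>. A validation library like Zod does this more thoroughly, but even this much removes most of the boilerplate.</p>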
<p>These aren't edge cases. These are the everyday realities of testing a full-stack app. Knowing them upfront saves you from the "why is this so much harder than the tutorial?" frustration.</p>
<h2 id="heading-what-made-this-hard">What Made This Hard</h2>
<p>A few months ago, I wrote a <a href="https://www.freecodecamp.org/news/how-to-test-javascript-apps-from-unit-tests-to-ai-augmented-qa/">freeCodeCamp article</a> about testing JavaScript apps from unit tests to AI-augmented QA. That article covered testing fundamentals with clean, simple examples.</p>
<p>After publishing it, I kept thinking: what happens when you apply all of this to something messy?</p>
<p>I had the perfect candidate. <strong>Creoper</strong> (code name) is an AI-powered project management tool I built that connects GitHub with Discord.</p>
<p>Teams can monitor repositories, track pull requests, and query project status using natural language, all without leaving their chat platform.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6198d3da5bb9cc256fc69512/57f5c35a-20fc-483e-b871-e1f55632b683.png" alt="Ajay Yadav receiving &quot;The Visionary&quot; trophy at the Hatch&amp;Hype hackathon hosted at Montrose Golf Resort and Spa, alongside a close-up of the award celebrating bold innovation with the CreoWis logo" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>I built it across two internal hackathons at <a href="https://www.creowis.com/">CreoWis</a>, and it won both times. What started as a simple GitHub-Discord automation bot evolved into a full product with six interconnected components:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6198d3da5bb9cc256fc69512/b4881ec0-b5bf-4b80-b85d-ffd400240b41.png" alt="Architecture diagram of Creoper showing six interconnected components: React dashboard, Express backend, Discord bot, PostgreSQL database, GitHub webhook handlers, and LLM layer." style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>It has a React dashboard with GitHub OAuth. An Express backend with REST APIs and SSE. A Discord bot that processes natural language through an LLM intent detection layer. PostgreSQL with Prisma. GitHub webhook handlers.</p>
<p>But here's the thing: despite winning two hackathons, Creoper had <strong>zero test cases</strong>. The app wasn't even deployed yet. I'd been stuck on Railway monorepo deployment issues for weeks.</p>
<p>So I was staring at a system that had every real-world testing challenge I'd just written about: auth flows, real-time events, multiple integration points, complex business logic. And no safety net at all.</p>
<p>I decided to test it two different ways and document what actually happened. If you want to explore the full project, I've written two separate <a href="https://www.creowis.com/blog/building-an-ai-powered-project-management-tool">blogs</a> about how I built it.</p>
<h2 id="heading-the-manual-approach">The Manual Approach</h2>
<p>I mapped pure logic components like the intent parser and embed builder to unit tests, since they deal with straightforward input-output behavior. I assigned Express endpoints to API tests using Supertest, which let me send real HTTP requests and verify response codes and shapes.</p>
<p>I planned to cover the React dashboard with end-to-end tests using Playwright, simulating actual user interactions in a real browser. As for Discord bot interactions and webhook delivery, those couldn't be automated reliably yet, so I documented them and tested them manually.</p>
<p>Here's what each layer looked like in practice.</p>
<h3 id="heading-unit-tests-the-easy-win">Unit Tests: The Easy Win</h3>
<p>Creoper has a function that classifies Discord messages into structured intents. If someone types "list prs," it should return <code>LIST_PRS</code> with a high confidence score.</p>
<p>If the message is gibberish, it should return <code>UNKNOWN</code> with zero confidence. The confidence score matters because anything below a threshold triggers a safe fallback instead of executing an action.</p>
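<p>Creoper's real parser runs through an LLM layer, but a keyword-based sketch shows the contract those tests pin down (the patterns and scores here are illustrative, not the actual implementation):</p>
<pre><code class="language-ts">interface Intent {
  action: string;
  confidence: number;
}

// Illustrative keyword matcher; the real parser uses LLM intent detection.
function parseIntent(message: string): Intent {
  const text = message.toLowerCase().trim();
  if (/\blist\s+prs?\b/.test(text)) {
    return { action: "LIST_PRS", confidence: 0.95 };
  }
  if (/\bset\s+active\s+repo\b/.test(text)) {
    // Without an owner/name argument, confidence drops below the action threshold.
    const hasRepo = /repo\s+\S+\/\S+/.test(text);
    return { action: "SET_ACTIVE_REPO", confidence: hasRepo ? 0.9 : 0.5 };
  }
  return { action: "UNKNOWN", confidence: 0 };
}
</code></pre>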
<pre><code class="language-ts">it("detects LIST_PRS intent", () =&gt; {
  const result = parseIntent("list prs");
  expect(result.action).toBe("LIST_PRS");
  expect(result.confidence).toBeGreaterThan(0.8);
});

it("returns low confidence when repo name is missing", () =&gt; {
  const result = parseIntent("set active repo");
  expect(result.confidence).toBeLessThan(0.8);
});
</code></pre>
<p>Notice these aren't just <strong>"does it work"</strong> checks. They're testing a safety mechanism, the threshold between executing an action and falling back.</p>
<p>These are exactly the kinds of tests that need to be written by hand because you have to understand the business logic behind the numbers.</p>
<p>I also tested the Discord embed builder the same way. Give it push event data, check that the formatted message contains the right repo name, author, branch, and commit messages.</p>
<p>Pure input, pure output, no external dependencies. Unit tests ran in milliseconds and caught edge cases like empty commit arrays immediately.</p>
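<p>In the same spirit, the push-message formatter reduces to a pure function. A simplified sketch (the real builder produces Discord embed objects rather than plain strings):</p>
<pre><code class="language-ts">interface PushEventData {
  repo: string;
  author: string;
  branch: string;
  commits: { message: string }[];
}

// Format a push event into a chat message, handling the empty-commits
// edge case (e.g. a branch-creation push) instead of rendering a dangling list.
function buildPushMessage(event: PushEventData): string {
  const header = event.author + " pushed to " + event.repo + " (" + event.branch + ")";
  if (event.commits.length === 0) {
    return header + ": no new commits";
  }
  const lines = event.commits.map((c) => "- " + c.message).join("\n");
  return header + ":\n" + lines;
}
</code></pre>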
<h3 id="heading-api-tests-where-the-friction-starts">API Tests: Where the Friction Starts</h3>
<p>Testing the Express endpoints required the infrastructure work I described earlier. I separated <code>app.ts</code> from <code>server.ts</code>, built the <code>createTestSession()</code> helper, and set up an in-memory test database so tests wouldn't touch real data.</p>
<pre><code class="language-ts">it("returns 401 without session cookie", async () =&gt; {
  const res = await request(app).get("/api/auth/status");
  expect(res.status).toBe(401);
  expect(res.body).toHaveProperty("error");
});

it("returns user data with valid session", async () =&gt; {
  const cookie = await createTestSession();
  const res = await request(app)
    .get("/api/auth/status")
    .set("Cookie", cookie);
  expect(res.status).toBe(200);
  expect(res.body).toHaveProperty("username");
  expect(res.body).not.toHaveProperty("accessToken");
});
</code></pre>
<p>Five lines of test code, one hour of infrastructure to make those five lines work.</p>
<p>Then I had to repeat this pattern across every endpoint: repos, pull requests, issues, active repo configuration, each with happy path, error cases, and the tedious schema validation I mentioned earlier.</p>
<p>The SSE test was the worst. I needed a Promise wrapper, an EventSource connection, a timeout handler, an <code>onopen</code> callback to trigger the change, an event listener to catch the response, and cleanup for both the connection and the server. About 30 lines for a single assertion, and it took three attempts to get the timing right.</p>
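<p>Most of those 30 lines collapse into one reusable helper: race the expected event against a timeout. A sketch (in the real test, <code>subscribe</code> would wrap an <code>EventSource</code> listener and return its cleanup function):</p>
<pre><code class="language-ts">// Resolve with the first event delivered to the callback, or reject after
// timeoutMs so a silent SSE stream fails the test instead of hanging it forever.
function waitForEvent(
  subscribe: (onEvent: (data: unknown) => void) => () => void,
  timeoutMs: number
) {
  return new Promise((resolve, reject) => {
    let unsubscribe = () => {};
    const timer = setTimeout(() => {
      unsubscribe();
      reject(new Error("No event within " + timeoutMs + "ms"));
    }, timeoutMs);
    unsubscribe = subscribe((data) => {
      clearTimeout(timer);
      unsubscribe();
      resolve(data);
    });
  });
}
</code></pre>
<p>The test body then becomes: open the connection, trigger the change, <code>await</code> the helper, assert on the payload, and clean up.</p>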
<h3 id="heading-e2e-tests-the-full-journey">E2E Tests: The Full Journey</h3>
<p>Playwright's E2E tests were actually pleasant to write once I added <code>data-testid</code> attributes to the React components. The login flow, note creation, editing, and deletion all followed a predictable pattern.</p>
<pre><code class="language-ts">test("login and create a note", async ({ page }) =&gt; {
  await page.goto("/");
  await page.getByTestId("username-input").fill("ajay");
  await page.getByTestId("password-input").fill("password123");
  await page.getByTestId("login-button").click();
  await expect(page.getByTestId("username-display")).toContainText("ajay");
});
</code></pre>
<p>The real cost wasn't writing the tests — it was maintaining them. Midway through development, I renamed a CSS class from <code>.repo-list-item</code> to <code>.repository-card</code>. Two Playwright tests broke immediately. I found the references, updated them, re-ran. Ten minutes for a CSS rename. I can see this becoming death-by-a-thousand-cuts as the UI evolves.</p>
<h2 id="heading-the-ai-assisted-approach">The AI-Assisted Approach</h2>
<p>Now here's the same project, tested with a fundamentally different workflow.</p>
<p>Instead of writing test code, you describe what you want to test in natural language. An AI agent interprets your intent, interacts with the actual application, generates assertions, and produces exportable test code.</p>
<p>The tool I used is <a href="https://www.testmuai.com/">KaneAI</a>, a GenAI-native testing agent that covers web UIs, APIs, and mobile apps through natural language test authoring with real browser execution. That's the only background you need. Let me show you the workflow.</p>
<h3 id="heading-api-testing-describing-instead-of-coding">API Testing: Describing Instead of Coding</h3>
<p>Instead of writing Supertest code, I opened the slash command menu, selected API, and pasted a curl command:</p>
<pre><code class="language-bash">curl -X GET http://localhost:3000/api/auth/status
</code></pre>
<p>It fired the request through the tunnel, showed the 401 response, and I added it to my test steps. For the authenticated version, I pasted the same command with a session cookie from DevTools. No <code>createTestSession()</code> helper. No test database. No app separation.</p>
<p>For the repository endpoints, I described the flow in plain English:</p>
<pre><code class="language-plaintext">1. Set active repository to "atechajay/no-javascript" via POST to /api/repos/active
2. Verify the response confirms the repository is active
3. Fetch open pull requests via GET to /api/repos/pulls
4. Verify each item has title, author, url, and status fields
5. Try an invalid repository name, verify 400 error
</code></pre>
<p>It generated assertions for the happy path and added schema validation I didn't ask for: checking that <code>title</code> is a string, <code>labels</code> is an array, and <code>status</code> is one of the expected values. That's the tedious work that ate up hours in the manual approach, generated in seconds.</p>
<h3 id="heading-e2e-testing-plain-english-real-browser">E2E Testing: Plain English, Real Browser</h3>
<p>For the React dashboard, instead of Playwright selectors, I described:</p>
<pre><code class="language-plaintext">1. Navigate to localhost:3001
2. Click "Go to Dashboard"
3. Verify redirect to GitHub OAuth
4. After auth, verify the dashboard loads
5. Verify the username appears in the sidebar
</code></pre>
<p>It executed each step in a real cloud browser connected to my localhost. No <code>page.getByRole()</code>, no <code>page.waitForURL()</code>, no selector debugging.</p>
<p>After each test, I exported the generated code. It came with wait conditions and assertion logic baked in.</p>
<p>It wasn't perfect copy-paste: I updated environment variables, adjusted base URLs, and fixed a few field name mismatches where it expected <code>pullRequestUrl</code> instead of my actual <code>url</code> field. But it gave me roughly 70–80% of the foundation.</p>
<h3 id="heading-the-feature-that-surprised-me">The Feature That Surprised Me</h3>
<p>Midway through testing, I renamed that CSS class from <code>.repo-list-item</code> to <code>.repository-card</code>. My manual Playwright tests broke immediately.</p>
<p>But the AI tool's auto-healing detected the selector change, found the closest matching element based on the test's original intent, and continued the test with a review flag. No code changes needed.</p>
<p>For a rapidly changing MVP where class names are still in flux, that alone saved significant maintenance time.</p>
<h2 id="heading-when-to-use-which-approach">When to Use Which Approach</h2>
<p>After testing the same project both ways, here's my honest take.</p>
<p>Write tests by hand when you're testing business logic that requires domain understanding. For Creoper's intent parser, I needed to think about what "low confidence" means in the context of the application's safety mechanism.</p>
<p>An AI tool can generate assertions, but it can't understand why a confidence score of 0.5 should trigger a fallback instead of an action. Pure logic with meaningful edge cases is where hand-written tests earn their keep.</p>
<p>You should also write tests by hand when they need to run in CI without external dependencies. Vitest tests with mocked dependencies are self-contained. They run in milliseconds and don't need a tunnel, a cloud browser, or a third-party account.</p>
<p>Hand-written tests are also best when the team needs to maintain them, because they're transparent. Generated code, even when exported, can feel opaque to someone who wasn't there when it was authored.</p>
<p>Reach for AI-assisted testing, on the other hand, when your UI changes frequently. For an MVP where CSS classes and component structure are still in flux, auto-healing prevents the "my tests broke because I renamed a div" problem. You spend less time fixing selectors and more time shipping features.</p>
<p>AI-assisted testing is also helpful when you need coverage fast and plan to refine later. The 70–80% foundation is a real boost when you're the only developer and you need coverage now. You can always hand-tune the exported code later.</p>
<p>Never rely solely on either approach to understand your system. No tool knows that an SSE connection drops after 30 seconds if the heartbeat isn't configured. No tool understands that a Discord bot should never execute a write action when confidence is below 0.8. No tool realizes the OAuth callback silently fails if the <code>redirect_uri</code> doesn't match precisely.</p>
<p>The strategy relies on you knowing which endpoints are crucial, identifying dangerous edge cases, and understanding what should occur during failures. The tool simply accelerates how quickly you can articulate and implement that strategy.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>My full-stack app won two hackathons. But without tests, it was a house of cards. One renamed CSS class, one changed API response, and the whole system could silently break.</p>
<p>Testing it both ways taught me that the manual vs AI question is the wrong question. The real skill is matching the approach to the problem.</p>
<p>Write unit tests by hand for business logic. Use AI-assisted testing when you're drowning in repetitive schema validation across a dozen endpoints.</p>
<p>Use auto-healing for E2E tests on a fast-changing UI. And for the things you can't automate yet, like Discord bot interactions or webhook delivery, document them and test them manually until you can.</p>
<p>If you're building something complex and thinking <strong>"I'll add tests after I deploy"</strong>, flip that. Test what you can now. Document what you can't. When deployment day comes, you'll ship with confidence instead of anxiety.</p>
<h2 id="heading-before-we-end"><strong>Before We End</strong></h2>
<p>I hope you found this article insightful. I’m Ajay Yadav, a software developer and content creator.</p>
<p>You can connect with me on:</p>
<ul>
<li><p><a href="https://x.com/atechajay">Twitter/X</a> and <a href="https://www.linkedin.com/in/atechajay/">LinkedIn</a>, where I share insights to help you improve 0.01% each day.</p>
</li>
<li><p>Check out my <a href="https://github.com/ATechAjay">GitHub</a> for more projects.</p>
</li>
<li><p>Check out my <a href="https://thedivsoup.com">Medium</a> page for more blogs.</p>
</li>
<li><p>I also run a <a href="http://youtube.com/@atechajay">YouTube Channel</a> where I share content about careers, software engineering, and technical writing.</p>
</li>
</ul>
<p>See you in the next article — until then, keep learning!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Test JavaScript Apps: From Unit Tests to AI-Augmented QA ]]>
                </title>
                <description>
                    <![CDATA[ As a software engineer, you should always be open to the challenges this field brings. Two months ago, my project manager assigned me a task: write test cases for an API. I was super excited because it meant I got to learn something new beyond just c... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-test-javascript-apps-from-unit-tests-to-ai-augmented-qa/</link>
                <guid isPermaLink="false">68e68c3655c4d79b6db4f4c4</guid>
                
                    <category>
                        <![CDATA[ JavaScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ React ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Testing ]]>
                    </category>
                
                    <category>
                        <![CDATA[ automation ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Ajay Yadav ]]>
                </dc:creator>
                <pubDate>Wed, 08 Oct 2025 16:07:18 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759939599135/507c5e9a-954b-497b-b3b8-c8d89b2d1a03.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>As a software engineer, you should always be open to the challenges this field brings. Two months ago, my project manager assigned me a task: write test cases for an API. I was super excited because it meant I got to learn something new beyond just coding features.</p>
<p>Now, if you’re thinking “writing test cases isn’t my job as a frontend or backend developer”, then you’re missing the point. That mindset holds you back.</p>
<p>At the very least, every engineer should understand Unit Testing and Integration Testing. Writing test cases isn’t rocket science, it’s as simple as English and feels very similar to writing JavaScript code.</p>
<p>That said, if you’ve ever tried setting up testing in a JavaScript application, you probably know how complicated and frustrating it can get.</p>
<p>The JavaScript ecosystem is massive, with endless libraries and frameworks. Things shift constantly, new tools replace old ones, and community standards evolve almost overnight. That’s exactly why I decided to write this article.</p>
<p>In it, we’ll explore a modern approach to JavaScript testing, covering practical patterns, workflows, and even how AI-assisted tools are changing the game.</p>
<p>Let’s dive in.</p>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-the-evolution-of-testing">The Evolution of Testing</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-the-core-layers-of-testing">The Core Layers of Testing</a></p>
<ul>
<li><p><a class="post-section-overview" href="#heading-unit-testing">Unit testing</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-integration-testing">Integration testing</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-end-to-end-testing">End-to-End testing</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-ai-augmented-testing">AI-Augmented testing</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#heading-future-of-javascript-testing">Future of JavaScript Testing</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-before-we-end">Before We End</a></p>
</li>
</ul>
<h2 id="heading-the-evolution-of-testing">The Evolution of Testing</h2>
<p>Software testing has been around for as long as software itself. According to IBM (2016), testing started right alongside the very first programs. After World War II, three computer scientists wrote what’s considered to be the <a target="_blank" href="https://en.wikipedia.org/wiki/Manchester_Baby">first piece of software</a>.</p>
<p>It ran on June 21, 1948, at the University of Manchester in England, performing mathematical calculations with basic machine code instructions.</p>
<p>Since then, testing methods and principles have continuously evolved. As software became more complex and development cycles got faster, the need for reliable and systematic testing grew stronger.</p>
<p>Early on, the <strong>Testing Pyramid</strong> became the dominant model: many unit tests at the base, integration tests in the middle, and a thin layer of end-to-end (E2E) tests at the top. This approach worked well for simpler applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759395722389/0067bc6e-f038-40a6-905c-61406f41e430.png" alt="Image of the testing pyramid showing the different layers" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>But as apps grew more dynamic and interconnected, the pyramid approach began to show its limits. That’s where the <strong>Testing Trophy model</strong> came in. Instead of overloading with unit tests, it puts greater emphasis on integration testing while still keeping E2E tests and unit tests in balance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759395841713/b92ea402-5002-4c48-be7c-aee6f1dfacfd.png" alt="Diagram of a &quot;Testing Trophy&quot; pyramid. Top to bottom: &quot;End-to-End Tests&quot; (Slow, Few, Expensive), &quot;Integration Tests&quot; (Moderate Speed, Fewer, Moderate Cost), &quot;Unit Tests&quot; (Fast, Numerous, Cheap), &quot;Static Analysis&quot; (Instant, Numerous, Cheapest). Left axis: Confidence increases up, Speed decreases down. Right axis: Cost increases up, Frequency decreases down." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>Now, with the rise of AI in QA, testing has entered a new phase. AI-driven tools don’t just run tests: they help generate, maintain, and even self-heal them. This shift is creating a future-ready testing approach designed to handle the complexity of modern software in 2025 and beyond.</p>
<h2 id="heading-the-core-layers-of-testing">The Core Layers of Testing</h2>
<p>Testing is not just about finding bugs; it’s also about ensuring reliability, scalability, and user satisfaction. Every testing strategy should cover four main layers:</p>
<h3 id="heading-unit-testing">Unit Testing</h3>
<p>Unit testing is a method where you test individual components or units of software in isolation to make sure they work as expected. A unit can be a simple function, a React component, or even a utility module.</p>
<p>When building JavaScript apps, we usually create separate modules or components that later get combined. If any one of those small pieces is broken, the entire application can fail. That’s why unit tests are essential: they catch problems early and ensure reliability before integration.</p>
<p>In the JavaScript ecosystem, there are several tools you can use for writing unit tests:</p>
<ul>
<li><p><a target="_blank" href="https://vitest.dev/"><strong>Vitest</strong></a> – a modern, fast, and developer-friendly testing framework built to work seamlessly with Vite projects.</p>
</li>
<li><p><a target="_blank" href="https://jestjs.io/"><strong>Jest</strong></a> – one of the most widely used testing frameworks, great for React apps among others.</p>
</li>
</ul>
<p>For this section, we’ll focus on <strong>Vitest</strong>, because it’s lightweight, super-fast, and feels very natural for modern frontend development. Let’s write a test case for a small module.</p>
<p>Imagine we have a simple utility function that adds two numbers:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// sum.ts</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> sum = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">a: <span class="hljs-built_in">number</span>, b: <span class="hljs-built_in">number</span></span>) </span>{
  <span class="hljs-keyword">return</span> a + b;
};
</code></pre>
<p>Every test typically has three parts:</p>
<ol>
<li><p>A description (string).</p>
</li>
<li><p>The code execution.</p>
</li>
<li><p>The assertion.</p>
</li>
</ol>
<p>Now, let’s write a unit test for the above function using Vitest.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// sum.test.ts</span>
<span class="hljs-keyword">import</span> { describe, expect, it } <span class="hljs-keyword">from</span> <span class="hljs-string">"vitest"</span>;
<span class="hljs-keyword">import</span> { sum } <span class="hljs-keyword">from</span> <span class="hljs-string">"./sum"</span>;

describe(<span class="hljs-string">"sum function"</span>, <span class="hljs-function">() =&gt;</span> {
  it(<span class="hljs-string">"should return the sum of two numbers"</span>, <span class="hljs-function">() =&gt;</span> { <span class="hljs-comment">// 1. description</span>
    <span class="hljs-keyword">const</span> result = sum(<span class="hljs-number">2</span>, <span class="hljs-number">3</span>); <span class="hljs-comment">// 2. code execution</span>
    expect(result).toBe(<span class="hljs-number">5</span>);   <span class="hljs-comment">// 3. assertion</span>
  });

  <span class="hljs-comment">// ... other test cases</span>
});

<span class="hljs-comment">// ... other describe blocks</span>
</code></pre>
<p>Breaking it down:</p>
<ul>
<li><p><code>describe</code> groups related test cases together. Here, we group everything about the <code>sum</code> function.</p>
</li>
<li><p><code>it</code> (or <code>test</code>) defines a single test case. In this example: “should return the sum of two numbers.”</p>
</li>
<li><p><code>expect</code> makes the actual assertion. It checks if the result from <code>sum(2,3)</code> equals <code>5</code>.</p>
</li>
</ul>
<p>When you run this test, Vitest will quickly execute it and show you whether the function passed or failed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759399251713/3c051bbb-4813-40ed-8656-d1bd2730dc38.png" alt="Command line interface showing test results using &quot;vitest&quot; in a development environment. Two test files, &quot;sum.test.ts&quot; and &quot;App.test.tsx&quot;, have passed successfully. Total test duration was 828ms." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>If the function works, you’ll see <code>1 passed</code> in green. If it fails, the output will be red with details about what went wrong.</p>
<h3 id="heading-integration-testing">Integration Testing</h3>
<p>Now that we’ve covered unit testing, let’s move one step up to integration testing. While unit tests focus on testing individual pieces in isolation, integration tests ensure those pieces work together as expected.</p>
<p>Think of it like assembling Lego blocks: each piece might work fine on its own, but when you connect them, something might not fit right. Integration testing helps you catch those issues early.</p>
<p>In simple terms, integration testing checks how components and modules interact with each other.</p>
<p>Let’s say we have a React component that fetches user data from an API and displays it on the screen. We’re no longer just testing one function – we’re testing how the component behaves when it calls an API, manages loading states, and renders data dynamically.</p>
<p>Here’s a simple example:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { useEffect, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;

<span class="hljs-keyword">const</span> User = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">const</span> [users, setUsers] = useState&lt;{ name: <span class="hljs-built_in">string</span>; email: <span class="hljs-built_in">string</span> }[]&gt;([]);
  <span class="hljs-keyword">const</span> [loading, setLoading] = useState(<span class="hljs-literal">false</span>);

  <span class="hljs-keyword">const</span> fetchUsers = <span class="hljs-keyword">async</span> () =&gt; {
    setLoading(<span class="hljs-literal">true</span>);
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"https://api.escuelajs.co/api/v1/users"</span>);
      <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> res.json();
      setUsers(data);
    } <span class="hljs-keyword">catch</span> (e) {
      <span class="hljs-built_in">console</span>.log(e);
    } <span class="hljs-keyword">finally</span> {
      setLoading(<span class="hljs-literal">false</span>);
    }
  };

  useEffect(<span class="hljs-function">() =&gt;</span> {
    fetchUsers();
  }, []);

  <span class="hljs-keyword">return</span> (
    &lt;&gt;
      {loading ? (
        &lt;h2&gt;Loading...&lt;/h2&gt;
      ) : (
        &lt;div&gt;
          {users.map(<span class="hljs-function">(<span class="hljs-params">user, index</span>) =&gt;</span> (
            &lt;p key={index}&gt;
              {user.name}: {user.email}
            &lt;/p&gt;
          ))}
        &lt;/div&gt;
      )}
    &lt;/&gt;
  );
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> User;
</code></pre>
<p>This component does a few things:</p>
<ul>
<li><p>Calls an external API when the component mounts.</p>
</li>
<li><p>Sets a loading state while fetching data.</p>
</li>
<li><p>Renders the fetched users on the screen once the data is ready.</p>
</li>
</ul>
<p>Now, our job is to test the complete flow, from the API call to the rendered UI, using Vitest and <a target="_blank" href="https://testing-library.com/">React Testing Library</a>.</p>
<p>Here’s what the test file looks like:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { render, screen, waitFor } <span class="hljs-keyword">from</span> <span class="hljs-string">"@testing-library/react"</span>;
<span class="hljs-keyword">import</span> User <span class="hljs-keyword">from</span> <span class="hljs-string">"../components/User"</span>;
<span class="hljs-keyword">import</span> { describe, test, expect } <span class="hljs-keyword">from</span> <span class="hljs-string">"vitest"</span>;

describe(<span class="hljs-string">"User Component"</span>, <span class="hljs-function">() =&gt;</span> {
  test(<span class="hljs-string">"fetches and displays users successfully"</span>, <span class="hljs-keyword">async</span> () =&gt; {
    render(&lt;User /&gt;);

    <span class="hljs-comment">// 1. Initially shows loading</span>
    expect(screen.getByText(<span class="hljs-string">"Loading..."</span>)).toBeInTheDocument();

    <span class="hljs-comment">// 2. Wait for API response and UI update</span>
    <span class="hljs-keyword">await</span> waitFor(<span class="hljs-function">() =&gt;</span> {
      expect(
        screen.getByText(<span class="hljs-string">"Ajay Yadav: ajay.yadav@example.com"</span>)
      ).toBeInTheDocument();
      expect(
        screen.getByText(<span class="hljs-string">"Jane Smith: jane.smith@example.com"</span>)
      ).toBeInTheDocument();
    });

    <span class="hljs-comment">// 3. Loading should disappear</span>
    expect(screen.queryByText(<span class="hljs-string">"Loading..."</span>)).not.toBeInTheDocument();
  });
});
</code></pre>
<p>This test looks simple, but it covers the entire flow of our component. Let’s walk through it step by step:</p>
<ul>
<li><p><strong>Render the component:</strong> Render the <code>&lt;User /&gt;</code> component inside the test environment.</p>
</li>
<li><p><strong>Check the loading state:</strong> As soon as the component mounts, the <strong>“Loading…”</strong> text should appear, indicating that data is being fetched.</p>
</li>
<li><p><strong>Wait for the data to load:</strong> Since the API call is asynchronous, use <code>waitFor()</code> to wait until the users are fetched and displayed.</p>
</li>
<li><p><strong>Verify the data:</strong> Once the API resolves, check if the user names and emails are correctly rendered on the screen.</p>
</li>
<li><p><strong>Confirm loading disappears:</strong> Finally, ensure that the “Loading…” text is removed once the data is displayed, confirming a proper state update.</p>
</li>
</ul>
<p>You can also test how your component behaves when the API fails. For example, you can mock the <code>fetch()</code> call to reject, then verify that an error message appears on the screen.</p>
<p>Vitest and React Testing Library make it easy to mock responses and simulate both success and failure cases, helping you ensure that your app handles real-world scenarios gracefully.</p>
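<p>As a rough sketch of that idea, here’s a tiny stub you could swap in for the global <code>fetch</code> during a test. The helper name and the response shape are illustrative, not part of any library API:</p>

```javascript
// Hypothetical test helper: builds a fetch-like stub that either
// resolves with fixed data or rejects like a network failure.
function createFetchStub(data, { fail = false } = {}) {
  return async function fetchStub() {
    if (fail) {
      // Simulates the network-error path, so the component's catch block runs.
      throw new Error("Network error");
    }
    // Minimal Response-like object: the User component only calls json().
    return { ok: true, json: async () => data };
  };
}

// In a test you might assign it before rendering, e.g.:
//   globalThis.fetch = createFetchStub([
//     { name: "Jane Smith", email: "jane.smith@example.com" },
//   ]);
// and restore the original fetch afterwards.
```

<p>Stubbing <code>fetch</code> this way also makes the success-case assertions deterministic, because the test controls exactly which users come back instead of depending on live API data.</p>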
<h3 id="heading-end-to-end-testing">End-to-End Testing</h3>
<p>Now that we’ve seen how integration testing ensures that different components work together, let’s move to the third layer: End-to-End (E2E) testing.</p>
<p>While unit and integration tests run in isolated or simulated environments, E2E tests mimic how real users interact with your app.</p>
<p>They open a browser and perform actions like clicking buttons, typing in fields, and verifying what appears on the screen, exactly like a real person would.</p>
<p>Think of E2E testing as putting your entire app on stage and watching if it performs flawlessly in front of the audience. In simple words, E2E testing verifies the full user journey from start to finish.</p>
<p>Let’s take a common example: a login flow. As a developer, you’ve probably built dozens of login forms, but how do you know they truly work under real conditions? That’s where E2E testing comes in.</p>
<p>Tools like <a target="_blank" href="https://playwright.dev/">Playwright</a> and <a target="_blank" href="https://www.cypress.io/">Cypress</a> make E2E testing practical; both are powerful and popular among developers.</p>
<p>We can simulate a real browser, fill out the login form, submit it, and confirm that the user is redirected to the dashboard. Here’s what a simple E2E test looks like using Playwright:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// tests/login.e2e.ts</span>
<span class="hljs-keyword">import</span> { test, expect } <span class="hljs-keyword">from</span> <span class="hljs-string">"@playwright/test"</span>;

test(<span class="hljs-string">"should login successfully"</span>, <span class="hljs-keyword">async</span> ({ page }) =&gt; {
  <span class="hljs-comment">// 1. Visit the login page</span>
  <span class="hljs-keyword">await</span> page.goto(<span class="hljs-string">"http://localhost:3000/login"</span>);

  <span class="hljs-comment">// 2. Fill in the form</span>
  <span class="hljs-keyword">await</span> page.fill(<span class="hljs-string">'input[name="email"]'</span>, <span class="hljs-string">"user@example.com"</span>);
  <span class="hljs-keyword">await</span> page.fill(<span class="hljs-string">'input[name="password"]'</span>, <span class="hljs-string">"password123"</span>);

  <span class="hljs-comment">// 3. Click login button</span>
  <span class="hljs-keyword">await</span> page.click(<span class="hljs-string">'button[type="submit"]'</span>);

  <span class="hljs-comment">// 4. Wait for navigation and verify success message or dashboard</span>
  <span class="hljs-keyword">await</span> expect(page).toHaveURL(<span class="hljs-string">"http://localhost:3000/dashboard"</span>);
  <span class="hljs-keyword">await</span> expect(page.getByText(<span class="hljs-string">"Welcome back!"</span>)).toBeVisible();
});
</code></pre>
<p>Let’s understand what’s happening here step by step:</p>
<ul>
<li><p><strong>Visit the page:</strong> The test opens your web app in a real browser. It navigates to <code>http://localhost:3000/login</code>.</p>
</li>
<li><p><strong>Simulate user input:</strong> Playwright fills in the email and password fields, just like a real user typing into the form.</p>
</li>
<li><p><strong>Perform actions:</strong> It clicks the login button, triggering all the same logic your frontend and backend would normally handle.</p>
</li>
<li><p><strong>Verify the outcome:</strong> Once the user logs in, check if the URL changes to <code>/dashboard</code> and whether a welcome message appears on the screen.</p>
</li>
</ul>
<p>That’s it: you just automated your first user journey from login to dashboard. Both frameworks achieve the same goal – ensuring your app behaves correctly in a real browser, not just in isolated tests.</p>
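<p>To run a test like the one above without hard-coding URLs, Playwright can read an optional config file. Here’s a minimal sketch; the paths, port, and dev-server command are assumptions about your project, not fixed values:</p>

```javascript
// playwright.config.js (sketch)
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Folder that holds files like tests/login.e2e.ts
  testDir: "./tests",
  use: {
    // With a baseURL set, page.goto("/login") resolves against it.
    baseURL: "http://localhost:3000",
  },
  // Optionally start your app before the tests run.
  webServer: {
    command: "npm run dev", // assumed dev-server script
    url: "http://localhost:3000",
    reuseExistingServer: true,
  },
});
```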
<h3 id="heading-ai-augmented-testing">AI-Augmented Testing</h3>
<p>As testing evolves, a new layer has emerged: <strong>AI-Augmented QA</strong>. This isn’t just another tool in the developer’s toolkit; it’s a complete transformation in how software quality is managed.</p>
<p>Traditionally, testing has been a manual process. Engineers wrote, maintained, and updated test cases whenever the product changed. But with AI entering the scene, that manual burden is decreasing.</p>
<p>AI models can now analyze your codebase, understand logic, and generate relevant test cases almost instantly, covering edge cases you might never think of. Tools like <a target="_blank" href="https://github.com/features/copilot">GitHub Copilot</a> and <a target="_blank" href="https://www.codium.ai/qodo/">CodiumAI</a> already assist in generating smart test suites, while continuously learning from your coding style and past patterns.</p>
<p>Beyond code suggestions, complete AI QA platforms are changing automation itself. For example, an AI QA agent like <a target="_blank" href="https://bug0.com/">Bug0</a> can adjust to UI changes automatically. If a button label or DOM structure changes, its self-healing tests find elements visually instead of depending on fixed selectors.</p>
<p>It also produces real-time test reports with detailed logs and video recordings, helping developers pinpoint UI or data changes causing failures.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759925194041/7b3a5b82-6313-4ce8-8ae8-6d80dafbc5be.png" alt="A screenshot of a code editor displaying a test script, including code snippets for page navigation and URL checks. Below the code, there is a section labeled &quot;Videos&quot; with a video player showing" class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>With CI/CD integrations like GitHub or GitLab, it can automatically start and validate test runs for every pull request, updating PR checks just like a human QA engineer would.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759924826000/ff55cf75-0b8d-4d01-9f12-2f1920be6862.png" alt="A screenshot of a GitHub interface showing a failed Vercel deployment, a skipped public API test, and six successful checks. An arrow points to the successful &quot;Bug0 QA Agent&quot; test. Notifications also indicate that a review is required, and the branch is out-of-date with the base branch." class="image--center mx-auto" width="600" height="400" loading="lazy"></p>
<p>While AI-assisted testing is powerful, it’s not a full replacement for human judgment. Developers still play a vital role in the following ways:</p>
<ul>
<li><p>Deciding what truly matters: AI can generate test cases, but humans must judge which flows reflect real business logic and user experience.</p>
</li>
<li><p>Reviewing AI-generated tests to ensure they are relevant and to avoid false positives.</p>
</li>
<li><p>Interpreting failures contextually: understanding whether a test failure indicates a real bug or an expected change.</p>
</li>
<li><p>Maintaining ethical and data-safe workflows: avoiding the exposure of sensitive data when using cloud-based AI tools.</p>
</li>
</ul>
<p>When used responsibly, AI becomes a testing partner, automating the tedious tasks while leaving creative problem-solving, decision-making, and domain understanding to developers.</p>
<p>This shift marks the beginning of intelligent, autonomous QA. AI isn’t just automating repetitive testing: it’s turning the process into a continuous, adaptive feedback loop, capable of predicting and resolving failures on its own.</p>
<p>In the coming years, expect testing to evolve into a collaborative process between human engineers and AI copilots, ensuring every release is not just faster, but smarter and more reliable than ever before.</p>
<h2 id="heading-future-of-javascript-testing">Future of JavaScript Testing</h2>
<p>JavaScript testing is changing faster than ever. A few years ago, developers had to deal with tons of testing libraries and confusing setups. Now, things are becoming much more unified, smarter, and easier to work with.</p>
<p>In the future, testing will move from being reactive to proactive. That means instead of catching bugs after they happen, tools will be smart enough to predict and prevent them before they appear.</p>
<p>With AI-powered test generation and real-time monitoring, every commit you make could be automatically checked for reliability and performance without you even running a command.</p>
<p>Frameworks like <code>Vitest</code>, <code>Playwright</code>, and <code>React Testing Library</code> will still be the core tools, but the real progress will come from how they integrate and learn.</p>
<p>We’ll also see tighter CI/CD integrations, where pipelines can automatically adjust based on your test coverage and code risk. Testing won’t feel like an extra step anymore; it’ll become a natural part of development, powered by both human logic and machine intelligence.</p>
<p>In short, the future of JavaScript testing is about speed, intelligence, and automation: a world where developers spend more time building and less time debugging.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Testing isn’t just about preventing bugs; it’s about building confidence: confidence that your code works, your features scale, and your users have a seamless experience.</p>
<p>Whether it’s unit tests ensuring logic, integration tests validating flow, E2E tests simulating real behavior, or AI-enhanced automation managing it all, testing is the silent force that makes great software possible.</p>
<p>As a developer, understanding how testing fits into your workflow is no longer optional. Rather, it’s a skill that sets you apart. The more you test, the better you code and the faster you ship with peace of mind.</p>
<p>So, the next time someone says <strong>writing tests isn’t your job</strong>, you’ll know the truth: Testing isn’t extra work. Instead, it’s part of writing better, more reliable software.</p>
<h2 id="heading-before-we-end"><strong>Before We End</strong></h2>
<p>I hope you found this article insightful. I’m Ajay Yadav, a software developer and content creator.</p>
<p>You can connect with me on:</p>
<ul>
<li><p><a target="_blank" href="https://x.com/atechajay">Twitter/X</a> and <a target="_blank" href="https://www.linkedin.com/in/atechajay/">LinkedIn</a>, where I share insights to help you improve 0.01% each day.</p>
</li>
<li><p>Check out my <a target="_blank" href="https://github.com/ATechAjay">GitHub</a> for more projects.</p>
</li>
<li><p>I also run a <a target="_blank" href="http://youtube.com/@atechajay">YouTube Channel</a> where I share content about careers, software engineering, and technical writing.</p>
</li>
</ul>
<p>See you in the next article — until then, keep learning!</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Optimize Search in JavaScript with Debouncing ]]>
                </title>
                <description>
                    <![CDATA[ A few months ago, my manager assigned me a task: implement a search functionality across an entire page. The tricky part was that the displayed text was shown in the form of prompts, and each prompt could be truncated after two lines. If the text exc... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/optimize-search-in-javascript-with-debouncing/</link>
                <guid isPermaLink="false">68d2d63ab7bb607e13c45cf4</guid>
                
                    <category>
                        <![CDATA[ JavaScript ]]>
                    </category>
                
                    <category>
                        <![CDATA[ optimization ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Ajay Yadav ]]>
                </dc:creator>
                <pubDate>Tue, 23 Sep 2025 17:17:46 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758647563748/4f1c792d-5912-4bbb-9144-fcdda83d78ec.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>A few months ago, my manager assigned me a task: implement a search functionality across an entire page. The tricky part was that the displayed text was shown in the form of prompts, and each prompt could be truncated after two lines.</p>
<p>If the text exceeded the limit, a split button appeared, allowing users to open the full prompt in a separate split-view section (see the illustration below for better understanding).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757771259264/66158a97-07d1-4b4f-ad24-c57c62a498ae.png" alt="A black interface with a search bar at the top left, a rating of 5/10, and navigation arrows. Below are four text boxes with dummy text snippets. On the right is a detailed explanation of &quot;Lorem Ipsum,&quot; discussing its history in the printing and typesetting industry. The page number is 1 of 7 out of 12." class="image--center mx-auto" width="1076" height="705" loading="lazy"></p>
<p>Now, if the requirement had been just plain text, I could have solved it with a simple regex-based search. In fact, inside the split view itself, I initially used a regex approach for searching along with navigation to the matches. That worked fine.</p>
<p>Since I already had a working search helper function, I thought, “Why not reuse it for the global search as well?”</p>
<p>Well, I tried that. But this time, the UI started lagging whenever I clicked the next/previous buttons in the search bar. Even the pagination controls on the top-right slowed down during navigation. To figure out a better approach, I turned to AI tools for brainstorming and came across multiple ideas and concepts.</p>
<p>As developers, we use Google daily, and naturally, I became curious about how Google’s search works under the hood. I opened up Chrome DevTools, started typing in Google’s search bar, and noticed something interesting.</p>
<p>While Google Search updates results in real-time with each keystroke, we don’t have Google’s server power. In our apps, debouncing is a practical way to avoid unnecessary API calls and improve performance. That idea matched exactly what ChatGPT had suggested to me earlier.</p>
<p>So, I applied a similar approach to my project and finally delivered the feature using debouncing, along with React hooks like <code>useTransition</code> and <code>useDeferredValue</code>. That’s how the idea for this article came out.</p>
<p>In this article, I’ll show you how to optimize your application’s performance by implementing the debounce technique.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-problem-without-debouncing">Problem Without Debouncing</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-what-is-debouncing">What is Debouncing?</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-how-to-implement-debouncing-in-javascript">How to Implement Debouncing in JavaScript</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-benefits-of-using-debouncing-in-search">Benefits of Using Debouncing in Search</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-common-mistakes-to-avoid">Common Mistakes to Avoid</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-before-we-end">Before We End</a></p>
</li>
</ul>
<p>Let’s dig in.</p>
<h2 id="heading-problem-without-debouncing">Problem Without Debouncing</h2>
<p>Imagine you’re building a search bar that fetches results from an API. Every time the user types a letter, the search bar immediately makes a new request.</p>
<p>If someone types the word “JavaScript”, that means 10 separate API calls will be fired — one for each character.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758013146153/204187ef-1757-490b-8b29-7d11951c88b3.png" alt="the Google search page with &quot;javascript&quot; in the input box" class="image--center mx-auto" width="2047" height="1506" loading="lazy"></p>
<p>Now, in Google Search, results update in real-time with every keystroke. But unlike Google, we don’t have massive infrastructure to handle that load. In most applications, firing a request for every single character quickly becomes inefficient.</p>
<p>At first, this might not seem like a big deal, but in practice it leads to serious problems. The browser has to manage a flood of unnecessary requests, the server gets overloaded with repeated calls, and the user ends up with a laggy or inconsistent experience. The whole interface feels heavy and unresponsive.</p>
<p>This is exactly the situation I ran into when I reused my simple regex-based search function for the global search. It worked fine for a small prompt inside the split view, but when applied at a larger scale with navigation buttons and pagination, the UI started freezing and slowing down.</p>
<h2 id="heading-what-is-debouncing">What is Debouncing?</h2>
<p>Debouncing is a technique, not a programming language feature. It’s simply a way to control how often a function gets called. Instead of running the function every single time an event happens, you delay its execution.</p>
<p>If the event keeps firing during that delay, the timer resets. The function only runs when the user finally pauses.</p>
<p>Think about typing into a search bar. Without debouncing, the app would make a request for every keystroke. With debouncing, the app waits until the user stops typing for a short time (say, 300 milliseconds) and then makes just one request with the final input.</p>
<p>Behind the scenes, this is usually implemented with <code>setTimeout</code> and <code>clearTimeout</code>. A timer starts when the event occurs, and if another event happens before the timer ends, the timer is cleared and restarted. Only when the user stops typing for the specified delay does the function execute.</p>
<h2 id="heading-how-to-implement-debouncing-in-javascript">How to Implement Debouncing in JavaScript</h2>
<p>As I mentioned earlier, debouncing is not tied to any specific programming language. It’s simply a concept that can be implemented using timers. In JavaScript, we typically use <code>setTimeout</code> and <code>clearTimeout</code> to achieve this.</p>
<p>Here’s a simple example of a debounce function in JavaScript:</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">debounce</span>(<span class="hljs-params">fn, delay</span>) </span>{
  <span class="hljs-keyword">let</span> timer;
  <span class="hljs-keyword">return</span> <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">...args</span>) </span>{
    <span class="hljs-built_in">clearTimeout</span>(timer);
    timer = <span class="hljs-built_in">setTimeout</span>(<span class="hljs-function">() =&gt;</span> {
      fn.apply(<span class="hljs-built_in">this</span>, args);
    }, delay);
  };
}
</code></pre>
<p>We start with a function <code>debounce</code> that takes two arguments:</p>
<ul>
<li><p><code>fn</code> is the function we want to control, such as the API call.</p>
</li>
<li><p><code>delay</code> is how long we want to wait before actually running <code>fn</code>.</p>
</li>
</ul>
<p>Inside, we declare a variable <code>timer</code>. This will hold the reference to the <code>setTimeout</code>.</p>
<p>The <code>debounce</code> function then returns another function. This returned function is the one that actually runs whenever the event (like typing in the search input) happens.</p>
<p>Every time the user types, the first thing you do is <code>clearTimeout(timer)</code>. This cancels any previously scheduled function call. Then you create a new timeout with <code>setTimeout</code>.</p>
<p>If the user keeps typing before the delay finishes, the old timer is cleared and restarted. Only when they pause long enough does the timeout finish and <code>fn</code> gets executed.</p>
<p>Did you notice how I used <code>fn.apply(this, args)</code>? That’s just a safe way of calling the original function with the correct <code>this</code> context and passing along all arguments.</p>
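<p>To see why this matters, here is a small, hypothetical example (the <code>searchBox</code> object and its <code>handleInput</code> method are made up for illustration). Because the debounced wrapper is a regular function and the <code>setTimeout</code> callback is an arrow function, <code>this</code> inside <code>fn.apply(this, args)</code> is still the object the wrapper was called on:</p>
<pre><code class="lang-javascript">function debounce(fn, delay) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => {
      fn.apply(this, args);
    }, delay);
  };
}

const searchBox = {
  label: "Products",
  handleInput(query) {
    console.log(this.label, "search:", query);
  },
};

// Attach the debounced version back onto the object, so calling it
// as searchBox.debouncedInput(...) keeps `this` bound to searchBox.
searchBox.debouncedInput = debounce(searchBox.handleInput, 100);
searchBox.debouncedInput("shoes"); // after 100ms: "Products search: shoes"
</code></pre>
<p>If <code>debounce</code> used a regular function inside <code>setTimeout</code> instead of an arrow function, the <code>this</code> binding would be lost by the time the timer fired.</p>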
<p>Now here’s how you use it in practice:</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fetchResults</span>(<span class="hljs-params">query</span>) </span>{
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Fetching results for:"</span>, query);
  <span class="hljs-comment">// Here you could call your API</span>
}

<span class="hljs-comment">// Wrap it with debounce</span>
<span class="hljs-keyword">const</span> debouncedSearch = debounce(fetchResults, <span class="hljs-number">300</span>);

<span class="hljs-comment">// Attach to input event</span>
<span class="hljs-keyword">const</span> input = <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">"search"</span>);
input.addEventListener(<span class="hljs-string">"input"</span>, <span class="hljs-function">(<span class="hljs-params">e</span>) =&gt;</span> {
  debouncedSearch(e.target.value);
});
</code></pre>
<ol>
<li><p><code>fetchResults</code> is our actual search function. Normally, it would run for every keystroke.</p>
</li>
<li><p>We wrap it with <code>debounce</code> and set a delay of 300ms. That means it won’t run until the user stops typing for 300ms.</p>
</li>
<li><p>On every <code>input</code> event, instead of calling <code>fetchResults</code> directly, we call <code>debouncedSearch</code>. This ensures only the debounced version of the function executes.</p>
</li>
</ol>
<p>So if a user types “hello” without pausing, instead of five API calls, only one fires once they stop typing.</p>
<h2 id="heading-benefits-of-using-debouncing-in-search">Benefits of Using Debouncing in Search</h2>
<p>Using debouncing in a search feature may feel like a small optimization, but it has a big impact. The most obvious benefit is performance.</p>
<p>Instead of making a request for every single keystroke, your app waits until the user pauses, which saves both browser and server resources. The UI feels much smoother because it isn’t constantly being interrupted by unnecessary calls.</p>
<p>Debouncing also improves scalability. If hundreds or thousands of users are typing at once, you’re cutting down a huge number of wasted API calls. That means your backend can handle more users without getting overloaded.</p>
<p>There’s also an indirect benefit for SEO and analytics. When your app performs efficiently and feels snappy, users stay longer, interact more, and bounce less. This kind of responsiveness can make a big difference in how people perceive the quality of your product.</p>
<h2 id="heading-common-mistakes-to-avoid">Common Mistakes to Avoid</h2>
<p>While debouncing is powerful, there are a few mistakes developers often make. One common issue is setting the delay too high. If you make users wait one or two seconds before seeing results, the search will feel unresponsive.</p>
<p>On the other hand, a delay that’s too short may not reduce calls enough to be useful. A sweet spot is usually 300–500 milliseconds, but it depends on your use case.</p>
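<p>Another way to balance responsiveness against wasted calls is a leading-edge (sometimes called “immediate”) debounce: the function runs on the first event, and further events are ignored until the delay passes with no activity. This is a sketch of that variation, not the standard <code>debounce</code> shown above:</p>
<pre><code class="lang-javascript">// Sketch of a leading-edge debounce: fn runs on the first event,
// then later events are ignored until `delay` ms of silence pass.
function debounceLeading(fn, delay) {
  let timer;
  return function (...args) {
    const callNow = timer === undefined;
    clearTimeout(timer);
    timer = setTimeout(() => {
      timer = undefined; // quiet period passed, allow the next immediate call
    }, delay);
    if (callNow) {
      fn.apply(this, args);
    }
  };
}
</code></pre>
<p>A leading-edge debounce suits things like submit buttons, where you want to react to the first click right away and swallow accidental double clicks, while the trailing version is usually the better fit for search input.</p>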
<p>Another mistake is forgetting to clear old timers. Without clearing, your app may still execute older, outdated calls, which can lead to stale results and subtle bugs. That’s why <code>clearTimeout</code> is just as important as <code>setTimeout</code> in any debounce function.</p>
<p>It’s also important to think about edge cases. What happens if the input is cleared quickly? Or if someone pastes a long string instead of typing? Testing these cases ensures your debounce function works smoothly in real-world scenarios.</p>
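<p>For the cleared-input case, a simple guard inside the debounced callback is usually enough. In this sketch, <code>fetchResults</code> is a stub standing in for the real API call from earlier, and the trim-and-check logic is one possible policy, not the only one:</p>
<pre><code class="lang-javascript">function debounce(fn, delay) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Stub standing in for the real API call.
function fetchResults(query) {
  console.log("Fetching results for:", query);
}

const debouncedSearch = debounce((rawQuery) => {
  const query = rawQuery.trim();
  if (query === "") {
    return; // input was cleared (or is only whitespace): skip the request
  }
  // A pasted string arrives as a single input event, so it still
  // triggers exactly one call after the delay.
  fetchResults(query);
}, 300);
</code></pre>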
<h2 id="heading-conclusion">Conclusion</h2>
<p>When I first encountered the challenge of building a global search, I thought I could simply reuse my basic regex-based solution. However, the UI soon began to lag, and the user experience declined. It's surprising how such a small concept can significantly impact performance.</p>
<p>Debouncing ensures that your functions run at the right time, not every time. Whether you’re building a simple JavaScript app or working with React and Next.js, this technique helps reduce unnecessary calls, improves performance, and keeps your app scalable.</p>
<p>So the next time you build a search bar, remember: don’t just make it work, make it efficient.</p>
<h2 id="heading-before-we-end">Before We End</h2>
<p>I hope you found this article insightful. I’m Ajay Yadav, a software developer and content creator.</p>
<p>You can connect with me on:</p>
<ul>
<li><p><a target="_blank" href="https://x.com/atechajay">Twitter/X</a> and <a target="_blank" href="https://www.linkedin.com/in/atechajay/">LinkedIn</a>, where I share insights to help you improve 0.01% each day.</p>
</li>
<li><p>Check out my <a target="_blank" href="https://github.com/ATechAjay">GitHub</a> for more projects.</p>
</li>
<li><p>I also run a Hindi <a target="_blank" href="http://youtube.com/@atechajay">YouTube Channel</a> where I share content about careers, software engineering, and technical writing.</p>
</li>
</ul>
<p>See you in the next article — until then, keep learning!</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
