<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ iyiola - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ iyiola - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Tue, 12 May 2026 20:30:46 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/author/iyiola/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How Database Indexes Work – A Practical Guide with PostgreSQL Examples ]]>
                </title>
                <description>
                    <![CDATA[ Every developer eventually runs into a slow query. The table has grown from a few hundred rows to a few million, and what used to take milliseconds now takes seconds — or worse. The fix, more often th ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-database-indexes-work-a-practical-guide-with-postgresql-examples/</link>
                <guid isPermaLink="false">69e11c10ffbb787634dea035</guid>
                
                    <category>
                        <![CDATA[ Databases ]]>
                    </category>
                
                    <category>
                        <![CDATA[ PostgreSQL ]]>
                    </category>
                
                    <category>
                        <![CDATA[ indexing ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ iyiola ]]>
                </dc:creator>
                <pubDate>Thu, 16 Apr 2026 17:27:44 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/cf6919a4-f803-4783-83ff-5c7674141c55.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Every developer eventually runs into a slow query. The table has grown from a few hundred rows to a few million, and what used to take milliseconds now takes seconds — or worse.</p>
<p>The fix, more often than not, is an index.</p>
<p>A database index is a data structure that helps the database find rows faster without scanning the entire table. It works a lot like the index at the back of a textbook: instead of reading every page to find a topic, you look it up in the index, get the page number, and go straight there.</p>
<p>In this tutorial, you'll learn how indexes work under the hood, how to create and use them effectively in PostgreSQL, and how to avoid the common mistakes that make indexes useless or even harmful.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-why-do-you-need-indexes">Why Do You Need Indexes?</a></p>
</li>
<li><p><a href="#heading-how-indexes-work-under-the-hood">How Indexes Work Under the Hood</a></p>
</li>
<li><p><a href="#heading-how-to-create-your-first-index">How to Create Your First Index</a></p>
</li>
<li><p><a href="#heading-how-to-use-explain-analyze-to-measure-performance">How to Use EXPLAIN ANALYZE to Measure Performance</a></p>
</li>
<li><p><a href="#heading-types-of-indexes-in-postgresql">Types of Indexes in PostgreSQL</a></p>
</li>
<li><p><a href="#heading-how-to-create-a-composite-index">How to Create a Composite Index</a></p>
</li>
<li><p><a href="#heading-how-to-create-a-partial-index">How to Create a Partial Index</a></p>
</li>
<li><p><a href="#heading-how-to-create-an-expression-index">How to Create an Expression Index</a></p>
</li>
<li><p><a href="#heading-how-to-create-a-unique-index">How to Create a Unique Index</a></p>
</li>
<li><p><a href="#heading-how-to-manage-indexes">How to Manage Indexes</a></p>
</li>
<li><p><a href="#heading-when-indexes-hurt-instead-of-help">When Indexes Hurt Instead of Help</a></p>
</li>
<li><p><a href="#heading-common-mistakes-that-prevent-index-usage">Common Mistakes That Prevent Index Usage</a></p>
</li>
<li><p><a href="#heading-best-practices-for-indexing">Best Practices for Indexing</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow along with the examples, you'll need:</p>
<ul>
<li><p>Basic knowledge of SQL (SELECT, INSERT, UPDATE, DELETE, WHERE, JOIN)</p>
</li>
<li><p>A running PostgreSQL instance (version 12 or later)</p>
</li>
<li><p>A SQL client like <code>psql</code>, pgAdmin, or DBeaver</p>
</li>
</ul>
<p>If you don't have PostgreSQL installed locally, you can use a free cloud-hosted instance from services like <a href="https://neon.tech">Neon</a> or <a href="https://supabase.com">Supabase</a>.</p>
<h2 id="heading-why-do-you-need-indexes">Why Do You Need Indexes?</h2>
<p>When you run a query like <code>SELECT * FROM users WHERE email = 'jane@example.com'</code>, the database needs to find the matching row. Without an index, PostgreSQL performs a <strong>sequential scan</strong> — it reads every single row in the table and checks whether the <code>email</code> column matches.</p>
<p>For a table with 100 rows, this is fine. For a table with 10 million rows, it's painfully slow.</p>
<p>An index solves this by creating a separate, sorted data structure that maps column values to their row locations. Instead of scanning 10 million rows, PostgreSQL can look up the value in the index and jump directly to the matching row. This can reduce query time from seconds to milliseconds.</p>
<p>But indexes aren't free. They come with trade-offs you need to understand before adding them everywhere. You'll learn about those trade-offs throughout this tutorial.</p>
<h2 id="heading-how-indexes-work-under-the-hood">How Indexes Work Under the Hood</h2>
<p>PostgreSQL's default index type is the <strong>B-tree</strong> (balanced tree). Understanding how a B-tree works will help you make smarter decisions about when and how to index.</p>
<p>A B-tree organizes data into a sorted, hierarchical structure with three kinds of nodes:</p>
<ol>
<li><p><strong>Root node</strong> — the top of the tree. It holds a few values that divide the data into broad ranges.</p>
</li>
<li><p><strong>Internal nodes</strong> — each one further narrows down the range.</p>
</li>
<li><p><strong>Leaf nodes</strong> — the bottom level. These hold the actual indexed values along with pointers to the corresponding rows in the table.</p>
</li>
</ol>
<p>When PostgreSQL uses a B-tree index to find a value, it starts at the root and follows the path that matches the target value, moving through internal nodes until it reaches the correct leaf node. This path is called a <strong>tree traversal</strong>, and it typically requires only 3–4 steps even for tables with millions of rows.</p>
<p>Think of it like a phone book. You don't start at page one and read every name. You open to roughly the right section (root), narrow it down to the right page (internal nodes), and scan the entries on that page (leaf node).</p>
<p>This sorted structure is also why B-tree indexes work well for range queries like <code>WHERE price &gt; 50 AND price &lt; 100</code>. The database finds the starting point in the tree and then scans forward through the leaf nodes, which are already in order.</p>
<h2 id="heading-how-to-create-your-first-index">How to Create Your First Index</h2>
<p>Let's build a practical example. You'll create a table, load it with data, and see the difference an index makes.</p>
<h3 id="heading-step-1-create-the-table-and-insert-sample-data">Step 1 – Create the Table and Insert Sample Data</h3>
<pre><code class="language-sql">CREATE TABLE customers (
    id SERIAL PRIMARY KEY,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    email VARCHAR(100) NOT NULL,
    city VARCHAR(50),
    created_at TIMESTAMP DEFAULT NOW()
);
</code></pre>
<p>Now insert a large number of rows so the performance difference is visible. This generates 500,000 rows of sample data:</p>
<pre><code class="language-sql">INSERT INTO customers (first_name, last_name, email, city)
SELECT
    'User' || gs,
    'Last' || gs,
    'user' || gs || '@example.com',
    (ARRAY['Lagos', 'London', 'New York', 'Berlin', 'Tokyo'])[1 + (gs % 5)]
FROM generate_series(1, 500000) AS gs;
</code></pre>
<h3 id="heading-step-2-query-without-an-index">Step 2 – Query Without an Index</h3>
<pre><code class="language-sql">EXPLAIN ANALYZE
SELECT * FROM customers WHERE email = 'user250000@example.com';
</code></pre>
<p>You'll see output similar to this:</p>
<pre><code class="language-plaintext">Seq Scan on customers  (cost=0.00..11374.00 rows=1 width=52) (actual time=45.123..91.456 rows=1 loops=1)
  Filter: ((email)::text = 'user250000@example.com'::text)
  Rows Removed by Filter: 499999
Planning Time: 0.085 ms
Execution Time: 91.502 ms
</code></pre>
<p>The key detail here is <code>Seq Scan</code> — PostgreSQL scanned all 500,000 rows to find a single match. It filtered out 499,999 rows. That's a lot of wasted work.</p>
<h3 id="heading-step-3-create-an-index">Step 3 – Create an Index</h3>
<pre><code class="language-sql">CREATE INDEX idx_customers_email ON customers (email);
</code></pre>
<p>This creates a B-tree index on the <code>email</code> column. The name <code>idx_customers_email</code> follows a common naming convention: <code>idx_</code> prefix, then the table name, then the column name.</p>
<h3 id="heading-step-4-query-with-the-index">Step 4 – Query With the Index</h3>
<p>Run the same query again:</p>
<pre><code class="language-sql">EXPLAIN ANALYZE
SELECT * FROM customers WHERE email = 'user250000@example.com';
</code></pre>
<p>Now you'll see something like this:</p>
<pre><code class="language-plaintext">Index Scan using idx_customers_email on customers  (cost=0.42..8.44 rows=1 width=52) (actual time=0.034..0.036 rows=1 loops=1)
  Index Cond: ((email)::text = 'user250000@example.com'::text)
Planning Time: 0.112 ms
Execution Time: 0.058 ms
</code></pre>
<p>The scan type changed from <code>Seq Scan</code> to <code>Index Scan</code>. The execution time dropped from ~91ms to ~0.06ms. That's roughly a 1,500x improvement — from one line of SQL.</p>
<h2 id="heading-how-to-use-explain-analyze-to-measure-performance">How to Use <code>EXPLAIN ANALYZE</code> to Measure Performance</h2>
<p><code>EXPLAIN ANALYZE</code> is your most important tool for understanding how PostgreSQL executes a query. You already saw it in the previous section, but let's break down what the output means.</p>
<pre><code class="language-sql">EXPLAIN ANALYZE SELECT * FROM customers WHERE city = 'Lagos';
</code></pre>
<p>The output will tell you several things:</p>
<ul>
<li><p><strong>Scan type</strong> — whether PostgreSQL used a sequential scan, index scan, bitmap index scan, or another access method</p>
</li>
<li><p><strong>Cost</strong> — the estimated cost in arbitrary units. The first number is the startup cost, the second is the total cost</p>
</li>
<li><p><strong>Rows</strong> — how many rows PostgreSQL estimated it would find versus how many it actually found</p>
</li>
<li><p><strong>Actual time</strong> — the real time in milliseconds to execute the query</p>
</li>
<li><p><strong>Rows Removed by Filter</strong> — how many rows were scanned but didn't match the condition</p>
</li>
</ul>
<p>If you see <code>Seq Scan</code> on a large table with a selective WHERE clause, that's usually a sign you need an index. If you see <code>Index Scan</code> or <code>Index Only Scan</code>, your index is working.</p>
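<p>To illustrate the last case: when a query asks only for columns the index itself stores, PostgreSQL can skip the table entirely. Assuming the <code>idx_customers_email</code> index from earlier exists, you can see this in action:</p>
<pre><code class="language-sql">-- Only the indexed column is selected, so PostgreSQL can answer
-- from the index alone (provided the visibility map is up to date)
EXPLAIN ANALYZE
SELECT email FROM customers WHERE email = 'user250000@example.com';
</code></pre>
<p>The plan should show <code>Index Only Scan using idx_customers_email</code>. If you see a plain <code>Index Scan</code> instead, running <code>VACUUM customers;</code> first usually helps, since index-only scans rely on the visibility map.</p>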
<p>One thing to keep in mind: <code>EXPLAIN</code> without <code>ANALYZE</code> shows the plan without actually running the query. <code>EXPLAIN ANALYZE</code> runs the query and shows real timing data. Always use <code>EXPLAIN ANALYZE</code> when you're investigating performance, but be careful with it on destructive queries — <code>EXPLAIN ANALYZE DELETE FROM ...</code> will actually delete the rows. Wrap those in a transaction and roll back:</p>
<pre><code class="language-sql">BEGIN;
EXPLAIN ANALYZE DELETE FROM customers WHERE city = 'Berlin';
ROLLBACK;
</code></pre>
<h2 id="heading-types-of-indexes-in-postgresql">Types of Indexes in PostgreSQL</h2>
<p>PostgreSQL supports several index types, each optimized for different query patterns.</p>
<h3 id="heading-b-tree-default">B-tree (Default)</h3>
<p>B-tree is the default index type and covers the vast majority of use cases. It supports equality checks (<code>=</code>), range queries (<code>&lt;</code>, <code>&gt;</code>, <code>&lt;=</code>, <code>&gt;=</code>, <code>BETWEEN</code>), sorting (<code>ORDER BY</code>), and <code>IS NULL</code> / <code>IS NOT NULL</code> checks.</p>
<pre><code class="language-sql">-- These two statements are equivalent – B-tree is the default,
-- so you only need to run one of them
CREATE INDEX idx_name ON customers (last_name);
CREATE INDEX idx_name ON customers USING btree (last_name);
</code></pre>
<p>Use B-tree when you don't have a specific reason to use something else.</p>
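<p>Because a B-tree keeps values sorted, it can also satisfy <code>ORDER BY</code> without a separate sort step. Assuming an index on <code>last_name</code> exists, a query like this can simply walk the index in order:</p>
<pre><code class="language-sql">-- The plan should show an Index Scan with no Sort node
EXPLAIN
SELECT * FROM customers ORDER BY last_name LIMIT 10;
</code></pre>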
<h3 id="heading-hash">Hash</h3>
<p>Hash indexes are optimized purely for equality comparisons (<code>=</code>). They don't support range queries or sorting. In practice, B-tree handles equality checks almost as fast, so hash indexes are rarely necessary.</p>
<pre><code class="language-sql">CREATE INDEX idx_email_hash ON customers USING hash (email);
</code></pre>
<p>Consider a hash index only if you have a very large table with frequent equality-only lookups and want to save a small amount of index space.</p>
<h3 id="heading-gin-generalized-inverted-index">GIN (Generalized Inverted Index)</h3>
<p>GIN indexes are designed for values that contain multiple elements — like arrays, JSONB documents, or full-text search vectors. Instead of indexing a single value per row, GIN indexes every element within the value.</p>
<pre><code class="language-sql">-- Add a JSONB column
ALTER TABLE customers ADD COLUMN preferences JSONB DEFAULT '{}';

-- Index the JSONB column
CREATE INDEX idx_preferences ON customers USING gin (preferences);

-- Now this query uses the GIN index
SELECT * FROM customers WHERE preferences @&gt; '{"newsletter": true}';
</code></pre>
<p>Use GIN when you're querying inside JSONB data, searching arrays with <code>@&gt;</code> or <code>&amp;&amp;</code>, or doing full-text search with <code>tsvector</code>.</p>
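<p>The same pattern works for full-text search. As a sketch — assuming a hypothetical <code>posts</code> table with a <code>body</code> text column — you'd index the <code>tsvector</code> expression and query with the matching expression:</p>
<pre><code class="language-sql">CREATE INDEX idx_posts_body_fts ON posts
USING gin (to_tsvector('english', body));

-- The query must use the same expression for the index to apply
SELECT * FROM posts
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'index &amp; performance');
</code></pre>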
<h3 id="heading-gist-generalized-search-tree">GiST (Generalized Search Tree)</h3>
<p>GiST indexes support geometric data, ranges, and full-text search. They're commonly used with PostGIS for geospatial queries.</p>
<pre><code class="language-sql">-- Range type example
CREATE TABLE events (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    duration TSRANGE
);

CREATE INDEX idx_event_duration ON events USING gist (duration);

-- Find overlapping events
SELECT * FROM events WHERE duration &amp;&amp; '[2025-01-01, 2025-01-31]'::tsrange;
</code></pre>
<p>Use GiST when you're working with spatial data, range types, or need overlap/containment operators.</p>
<h3 id="heading-brin-block-range-index">BRIN (Block Range Index)</h3>
<p>BRIN indexes are extremely small and work well on large tables where the physical row order correlates with the indexed column's value. A common example is a timestamp column on an append-only table where new rows always have later timestamps.</p>
<pre><code class="language-sql">CREATE INDEX idx_created_at_brin ON customers USING brin (created_at);
</code></pre>
<p>BRIN stores summary information (min/max values) for each block of rows rather than indexing every row individually. This makes the index much smaller than a B-tree, but it only works well when the data is naturally ordered.</p>
<p>Use BRIN for very large, append-only tables with naturally ordered data — like logs, events, or time-series data.</p>
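<p>You can check the size difference yourself by building both index types on <code>created_at</code> and comparing them (the B-tree here, <code>idx_created_at_btree</code>, is created just for this comparison):</p>
<pre><code class="language-sql">CREATE INDEX idx_created_at_btree ON customers USING btree (created_at);

SELECT
    pg_size_pretty(pg_relation_size('idx_created_at_brin'))  AS brin_size,
    pg_size_pretty(pg_relation_size('idx_created_at_btree')) AS btree_size;
-- On the 500,000-row sample table, expect the BRIN index to be a few
-- kilobytes and the B-tree several megabytes
</code></pre>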
<h2 id="heading-how-to-create-a-composite-index">How to Create a Composite Index</h2>
<p>A composite index (also called a multi-column index) covers more than one column. It's useful when your queries frequently filter or sort by multiple columns together.</p>
<pre><code class="language-sql">CREATE INDEX idx_city_lastname ON customers (city, last_name);
</code></pre>
<p>The order of columns in a composite index matters. PostgreSQL can use this index for queries that filter on <code>city</code> alone, or on both <code>city</code> and <code>last_name</code>. But it <strong>can't</strong> efficiently use this index for queries that filter only on <code>last_name</code>.</p>
<p>Think of it like a phone book sorted by city first, then by last name within each city. You can easily look up everyone in Lagos. You can also look up everyone named "Adeyemi" in Lagos. But finding all people named "Adeyemi" across all cities requires scanning the whole book.</p>
<p>This principle is called the <strong>leftmost prefix rule</strong>: PostgreSQL can use a composite index for queries that include the leftmost column(s) of the index, but not for queries that skip them.</p>
<pre><code class="language-sql">-- ✅ Uses the index (matches leftmost column)
SELECT * FROM customers WHERE city = 'Lagos';

-- ✅ Uses the index (matches both columns, left to right)
SELECT * FROM customers WHERE city = 'Lagos' AND last_name = 'Adeyemi';

-- ❌ Cannot use this index efficiently (skips the leftmost column)
SELECT * FROM customers WHERE last_name = 'Adeyemi';
</code></pre>
<p>When deciding column order, place the most selective column first — the one that narrows down the results the most.</p>
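<p>If you're not sure which column is more selective, PostgreSQL's statistics can tell you. After running <code>ANALYZE</code>, the <code>pg_stats</code> view reports an estimated number of distinct values per column (a negative <code>n_distinct</code> means a fraction of the row count):</p>
<pre><code class="language-sql">ANALYZE customers;

SELECT attname, n_distinct
FROM pg_stats
WHERE tablename = 'customers'
  AND attname IN ('city', 'last_name');
</code></pre>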
<h2 id="heading-how-to-create-a-partial-index">How to Create a Partial Index</h2>
<p>A partial index covers only a subset of rows in a table. You define the subset with a WHERE clause in the index definition.</p>
<p>This is useful when you only query a specific portion of the data. For example, if you have an <code>orders</code> table and you frequently query for pending orders but rarely look at completed ones:</p>
<pre><code class="language-sql">CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INT NOT NULL,
    status VARCHAR(20) NOT NULL DEFAULT 'pending',
    total NUMERIC(10, 2),
    created_at TIMESTAMP DEFAULT NOW()
);

-- Only index rows where status is 'pending'
CREATE INDEX idx_orders_pending ON orders (customer_id)
WHERE status = 'pending';
</code></pre>
<p>This index is smaller than a full index because it skips all rows that don't match the WHERE condition. Smaller indexes use less disk space, consume less memory, and are faster to maintain during writes.</p>
<p>For the index to be used, your query's WHERE clause must match the index's condition:</p>
<pre><code class="language-sql">-- ✅ Uses the partial index
SELECT * FROM orders WHERE status = 'pending' AND customer_id = 42;

-- ❌ Cannot use the partial index (different status)
SELECT * FROM orders WHERE status = 'shipped' AND customer_id = 42;
</code></pre>
<h2 id="heading-how-to-create-an-expression-index">How to Create an Expression Index</h2>
<p>Sometimes you need to index the result of a function or expression rather than a raw column value. Expression indexes (also called functional indexes) handle this.</p>
<p>A common scenario is case-insensitive email lookups. If your queries use <code>LOWER(email)</code>, a regular index on <code>email</code> won't help — PostgreSQL sees the function call as a different expression.</p>
<pre><code class="language-sql">-- Regular index on email – won't help with LOWER() queries
CREATE INDEX idx_email ON customers (email);

-- This query does NOT use the index above
SELECT * FROM customers WHERE LOWER(email) = 'user100@example.com';
</code></pre>
<p>To fix this, create an index on the expression itself:</p>
<pre><code class="language-sql">CREATE INDEX idx_email_lower ON customers (LOWER(email));
</code></pre>
<p>Now queries that use <code>LOWER(email)</code> in their WHERE clause will use this index:</p>
<pre><code class="language-sql">-- ✅ Uses the expression index
SELECT * FROM customers WHERE LOWER(email) = 'user100@example.com';
</code></pre>
<p>The rule is straightforward: the expression in your query must match the expression in the index exactly. If the index is on <code>LOWER(email)</code>, your query must also use <code>LOWER(email)</code>.</p>
<h2 id="heading-how-to-create-a-unique-index">How to Create a Unique Index</h2>
<p>A unique index guarantees that no two rows have the same value (or combination of values) in the indexed columns. It serves a dual purpose: it enforces data integrity and provides fast lookups.</p>
<pre><code class="language-sql">CREATE UNIQUE INDEX idx_customers_email_unique ON customers (email);
</code></pre>
<p>If you try to insert a duplicate value, PostgreSQL will reject the operation:</p>
<pre><code class="language-sql">INSERT INTO customers (first_name, last_name, email, city)
VALUES ('Test', 'User', 'user1@example.com', 'Lagos');
-- ERROR: duplicate key value violates unique constraint "idx_customers_email_unique"
</code></pre>
<p>You might wonder how this differs from a UNIQUE constraint. Under the hood, PostgreSQL implements UNIQUE constraints by creating a unique index. The two are functionally identical.</p>
<p>For everyday inserts and lookups they behave the same. The difference is intent and flexibility: a UNIQUE constraint expresses a data integrity rule, shows up in the table definition, and can be declared <code>DEFERRABLE</code>, while a unique index can do things a constraint can't — it can be partial or built on an expression.</p>
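<p>For comparison, here's the constraint form. Behind the scenes it creates a unique index with the constraint's name:</p>
<pre><code class="language-sql">ALTER TABLE customers
ADD CONSTRAINT customers_email_key UNIQUE (email);
-- Implicitly creates a unique index named customers_email_key
</code></pre>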
<h2 id="heading-how-to-manage-indexes">How to Manage Indexes</h2>
<p>As your database grows, you'll need to inspect, monitor, and maintain your indexes.</p>
<h3 id="heading-how-to-list-all-indexes-on-a-table">How to List All Indexes on a Table</h3>
<pre><code class="language-sql">SELECT
    indexname,
    indexdef
FROM pg_indexes
WHERE tablename = 'customers';
</code></pre>
<p>This shows the name and full definition of every index on the table.</p>
<h3 id="heading-how-to-check-index-size">How to Check Index Size</h3>
<pre><code class="language-sql">SELECT
    pg_size_pretty(pg_relation_size('idx_customers_email')) AS index_size;
</code></pre>
<p>For a broader view of all indexes and their sizes:</p>
<pre><code class="language-sql">SELECT
    indexrelname AS index_name,
    pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE relname = 'customers'
ORDER BY pg_relation_size(indexrelid) DESC;
</code></pre>
<h3 id="heading-how-to-find-unused-indexes">How to Find Unused Indexes</h3>
<p>Indexes that are never used waste disk space and slow down writes. You can find them by checking <code>pg_stat_user_indexes</code>:</p>
<pre><code class="language-sql">SELECT
    indexrelname AS index_name,
    idx_scan AS times_used,
    pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE relname = 'customers'
AND idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
</code></pre>
<p>If an index has <code>idx_scan = 0</code> after a reasonable period of normal usage, it's a candidate for removal. Just make sure to check across a full business cycle — some indexes are only used during monthly reports or seasonal operations.</p>
<h3 id="heading-how-to-drop-an-index">How to Drop an Index</h3>
<pre><code class="language-sql">DROP INDEX IF EXISTS idx_customers_email;
</code></pre>
<p>If you're dropping an index on a production table and want to avoid blocking writes, use <code>CONCURRENTLY</code> (note that it cannot run inside a transaction block):</p>
<pre><code class="language-sql">DROP INDEX CONCURRENTLY IF EXISTS idx_customers_email;
</code></pre>
<h3 id="heading-how-to-rebuild-an-index">How to Rebuild an Index</h3>
<p>Over time, indexes can become bloated as rows are inserted, updated, and deleted. You can rebuild an index to reclaim space:</p>
<pre><code class="language-sql">REINDEX INDEX idx_customers_email;
</code></pre>
<p>Or rebuild all indexes on a table:</p>
<pre><code class="language-sql">REINDEX TABLE customers;
</code></pre>
<p>On production systems, use <code>REINDEX CONCURRENTLY</code> (PostgreSQL 12+) to avoid locking the table:</p>
<pre><code class="language-sql">REINDEX INDEX CONCURRENTLY idx_customers_email;
</code></pre>
<h2 id="heading-when-indexes-hurt-instead-of-help">When Indexes Hurt Instead of Help</h2>
<p>Indexes aren't free. Every index you add comes with costs:</p>
<ol>
<li><p><strong>Write overhead</strong> — every INSERT, UPDATE, or DELETE must also update every index on the table. If a table has 10 indexes and you insert a row, PostgreSQL performs 11 write operations (one for the table and one for each index). On write-heavy tables, excessive indexes can significantly slow down data modification.</p>
</li>
<li><p><strong>Storage cost</strong> — indexes consume disk space. On large tables, indexes can take up as much space as the table itself, sometimes more. You can check this with <code>pg_relation_size</code>.</p>
</li>
<li><p><strong>Memory consumption</strong> — PostgreSQL caches frequently used indexes in memory. More indexes means more memory pressure, which can push useful data out of the cache and slow down other queries.</p>
</li>
<li><p><strong>Maintenance burden</strong> — indexes need periodic maintenance (vacuuming, reindexing) and add complexity to schema migrations.</p>
</li>
</ol>
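<p>A quick way to gauge the storage cost on your own tables is to compare the table's size with the combined size of its indexes:</p>
<pre><code class="language-sql">SELECT
    pg_size_pretty(pg_table_size('customers'))          AS table_size,
    pg_size_pretty(pg_indexes_size('customers'))        AS indexes_size,
    pg_size_pretty(pg_total_relation_size('customers')) AS total_size;
</code></pre>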
<p>The question to ask is not "should I add an index?" but rather "does the read performance gain justify the write performance cost for this table's workload?"</p>
<h2 id="heading-common-mistakes-that-prevent-index-usage">Common Mistakes That Prevent Index Usage</h2>
<p>You can have the perfect index and PostgreSQL might still ignore it. Here are the most common reasons.</p>
<h3 id="heading-wrapping-the-indexed-column-in-a-function">Wrapping the Indexed Column in a Function</h3>
<pre><code class="language-sql">-- Index on email
CREATE INDEX idx_email ON customers (email);

-- ❌ PostgreSQL cannot use the index because of LOWER()
SELECT * FROM customers WHERE LOWER(email) = 'user1@example.com';

-- ✅ Fix: create an expression index on LOWER(email)
CREATE INDEX idx_email_lower ON customers (LOWER(email));
</code></pre>
<p>Any function applied to the indexed column in a WHERE clause prevents the standard index from being used. You need an expression index that matches the function.</p>
<h3 id="heading-implicit-type-casting">Implicit Type Casting</h3>
<pre><code class="language-sql">-- id is an INTEGER column with an index
-- ❌ A numeric literal makes PostgreSQL cast the column to NUMERIC,
--    which prevents the B-tree index from being used
SELECT * FROM customers WHERE id = 42.0;

-- ✅ Use a literal of the matching type
SELECT * FROM customers WHERE id = 42;
</code></pre>
<p>When the two sides of a comparison have different types, PostgreSQL resolves it by casting one of them. If it has to cast the column rather than the literal — as with the <code>NUMERIC</code> literal above — the index on that column can no longer be used. (A plain string literal like <code>id = '42'</code> is usually harmless, because PostgreSQL coerces the untyped literal to the column's type.)</p>
<h3 id="heading-using-or-conditions-across-different-columns">Using OR Conditions Across Different Columns</h3>
<pre><code class="language-sql">-- ❌ OR across different columns can prevent a plain index scan
SELECT * FROM customers WHERE email = 'user1@example.com' OR city = 'Lagos';

-- ✅ Rewrite as UNION for better index utilization
SELECT * FROM customers WHERE email = 'user1@example.com'
UNION
SELECT * FROM customers WHERE city = 'Lagos';
</code></pre>
<p>If both columns are indexed, PostgreSQL can often combine the two indexes itself with a <code>BitmapOr</code>. The <code>UNION</code> rewrite is worth trying when <code>EXPLAIN ANALYZE</code> shows the planner falling back to a sequential scan instead.</p>
<h3 id="heading-leading-wildcards-in-like-queries">Leading Wildcards in LIKE Queries</h3>
<pre><code class="language-sql">-- ❌ Leading wildcard cannot use a B-tree index
SELECT * FROM customers WHERE email LIKE '%@example.com';

-- ✅ Trailing wildcard CAN use a B-tree index
SELECT * FROM customers WHERE email LIKE 'user1%';
</code></pre>
<p>A B-tree index is sorted from left to right. A leading wildcard (<code>%something</code>) means the database can't use the sorted structure and falls back to a sequential scan. If you need to search by suffix or substring, consider a GIN index with the <code>pg_trgm</code> extension.</p>
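<p>Here's a minimal sketch of that approach. The <code>pg_trgm</code> extension ships with PostgreSQL and provides trigram operator classes that let a GIN index serve <code>LIKE</code> patterns with leading wildcards:</p>
<pre><code class="language-sql">CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GIN index on email
CREATE INDEX idx_email_trgm ON customers USING gin (email gin_trgm_ops);

-- ✅ Even a leading wildcard can now use the index
SELECT * FROM customers WHERE email LIKE '%@example.com';
</code></pre>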
<h3 id="heading-low-selectivity">Low Selectivity</h3>
<p>If a column has very few distinct values relative to the number of rows (low selectivity), PostgreSQL may decide a sequential scan is faster than using the index.</p>
<p>For example, if a <code>status</code> column has only three possible values (<code>'pending'</code>, <code>'shipped'</code>, <code>'delivered'</code>) and each value covers roughly a third of the table, an index on <code>status</code> alone provides little benefit. PostgreSQL would still need to read a large portion of the table, and the extra index lookup adds overhead.</p>
<p>A partial index is often the better solution in these cases.</p>
<h2 id="heading-best-practices-for-indexing">Best Practices for Indexing</h2>
<p>Here's a summary of the key principles to follow:</p>
<ol>
<li><p><strong>Index columns that appear in WHERE, JOIN, and ORDER BY clauses.</strong> These are the columns the database needs to search, match, or sort by. Start with the queries that run most frequently or take the longest.</p>
</li>
<li><p><strong>Measure before and after with EXPLAIN ANALYZE.</strong> Never add an index based on guesswork. Run your query with <code>EXPLAIN ANALYZE</code>, add the index, and run it again. If the execution time doesn't improve meaningfully, the index isn't helping.</p>
</li>
<li><p><strong>Don't index every column.</strong> Each index slows down writes and consumes storage. Be deliberate about which columns you index based on actual query patterns.</p>
</li>
<li><p><strong>Use composite indexes for multi-column filters.</strong> If your queries commonly filter on <code>city</code> and <code>last_name</code> together, a composite index on <code>(city, last_name)</code> is more efficient than two separate single-column indexes.</p>
</li>
<li><p><strong>Put the most selective column first in composite indexes.</strong> The column that narrows the results the most should come first.</p>
</li>
<li><p><strong>Use partial indexes when you only query a subset of data.</strong> If 90% of your queries target rows where <code>status = 'active'</code>, a partial index on that subset is smaller and faster than a full index.</p>
</li>
<li><p><strong>Monitor index usage regularly.</strong> Query <code>pg_stat_user_indexes</code> to find unused indexes and remove them.</p>
</li>
<li><p><strong>Rebuild bloated indexes periodically.</strong> On tables with heavy update/delete activity, indexes can become bloated. Use <code>REINDEX CONCURRENTLY</code> on production systems.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this tutorial, you learned what database indexes are and why they matter for query performance. You explored how B-tree indexes work under the hood, created several types of indexes (single-column, composite, partial, expression, and unique), and used <code>EXPLAIN ANALYZE</code> to measure the impact.</p>
<p>You also learned about the trade-offs indexes introduce — write overhead, storage cost, and memory pressure — and the common mistakes that silently prevent PostgreSQL from using your indexes.</p>
<p>The core principle is simple: index deliberately based on your actual query patterns, measure the results, and remove anything that isn't pulling its weight.</p>
<p>If you found this tutorial helpful, you can find more of my writing on <a href="https://freecodecamp.org/news/author/iyiola">freeCodeCamp</a> and connect with me on <a href="https://linkedin.com/in/iyioladev">LinkedIn</a> and <a href="https://x.com/iyiola_dev_">X</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ What Are Database Triggers? A Practical Introduction with PostgreSQL Examples ]]>
                </title>
                <description>
                    <![CDATA[ If you've ever needed your database to automatically respond to changes – like logging every update to a sensitive table, enforcing a business rule before an insert, or syncing derived data after a de ]]>
                </description>
                <link>https://www.freecodecamp.org/news/what-are-database-triggers-practical-intro-with-postgresql-examples/</link>
                <guid isPermaLink="false">69c6d1357cf270651037755c</guid>
                
                    <category>
                        <![CDATA[ Databases ]]>
                    </category>
                
                    <category>
                        <![CDATA[ PostgreSQL ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ iyiola ]]>
                </dc:creator>
                <pubDate>Fri, 27 Mar 2026 18:49:25 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/b5940820-d1aa-4d10-8b40-06005bec7e60.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>If you've ever needed your database to automatically respond to changes – like logging every update to a sensitive table, enforcing a business rule before an insert, or syncing derived data after a delete – then triggers are the tool you're looking for.</p>
<p>A database trigger is a function that the database executes automatically when a specific event occurs on a table. You don't call it manually. Instead, you define the conditions, and the database handles the rest.</p>
<p>In this tutorial, you'll learn what triggers are, how they work, when to use them, and when to avoid them. You'll work through practical examples using PostgreSQL, but the core concepts apply to most relational databases.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ul>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-how-triggers-work">How Triggers Work</a></p>
</li>
<li><p><a href="#heading-how-to-create-your-first-trigger">How to Create Your First Trigger</a></p>
</li>
<li><p><a href="#heading-before-vs-after-triggers">BEFORE vs AFTER Triggers</a></p>
</li>
<li><p><a href="#heading-how-to-build-an-audit-log-with-an-after-trigger">How to Build an Audit Log with an AFTER Trigger</a></p>
</li>
<li><p><a href="#heading-how-to-use-a-before-trigger-for-validation">How to Use a BEFORE Trigger for Validation</a></p>
</li>
<li><p><a href="#heading-row-level-vs-statement-level-triggers">Row-Level vs Statement-Level Triggers</a></p>
</li>
<li><p><a href="#heading-the-new-and-old-variables-reference">The NEW and OLD Variables Reference</a></p>
</li>
<li><p><a href="#heading-how-to-manage-triggers">How to Manage Triggers</a></p>
</li>
<li><p><a href="#heading-when-to-use-triggers">When to Use Triggers</a></p>
</li>
<li><p><a href="#heading-when-to-avoid-triggers">When to Avoid Triggers</a></p>
</li>
<li><p><a href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To follow along with the examples, you'll need:</p>
<ul>
<li><p>Basic knowledge of SQL (SELECT, INSERT, UPDATE, DELETE)</p>
</li>
<li><p>A running PostgreSQL instance (version 12 or later)</p>
</li>
<li><p>A SQL client like <code>psql</code>, pgAdmin, or DBeaver</p>
</li>
</ul>
<p>If you don't have PostgreSQL installed, you can use a free cloud-hosted instance from services like <a href="https://neon.tech">Neon</a> or <a href="https://supabase.com">Supabase</a> to follow along.</p>
<h2 id="heading-how-triggers-work">How Triggers Work</h2>
<p>At a high level, a trigger has three parts:</p>
<ol>
<li><p><strong>The event</strong>: what action activates the trigger (INSERT, UPDATE, DELETE, or TRUNCATE)</p>
</li>
<li><p><strong>The timing</strong>: when the trigger fires relative to the event (BEFORE or AFTER)</p>
</li>
<li><p><strong>The function</strong>: what logic runs when the trigger fires</p>
</li>
</ol>
<p>Here's the general flow: a user or application performs an operation on a table, the database checks if any triggers are associated with that operation, and if a match is found, the database executes the trigger function automatically.</p>
<p>You can think of triggers as event listeners for your database. Just like a JavaScript <code>addEventListener</code> watches for a click or keypress, a database trigger watches for data changes on a table.</p>
<h2 id="heading-how-to-create-your-first-trigger">How to Create Your First Trigger</h2>
<p>In PostgreSQL, creating a trigger is a two-step process. You first create a trigger function, then you attach that function to a table with a <code>CREATE TRIGGER</code> statement.</p>
<p>Let's build a concrete example. Say you have a <code>products</code> table and you want to automatically set the <code>updated_at</code> timestamp every time a row is modified.</p>
<h3 id="heading-step-1-create-the-table">Step 1 – Create the Table</h3>
<pre><code class="language-sql">CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    price NUMERIC(10, 2) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);
</code></pre>
<h3 id="heading-step-2-create-the-trigger-function">Step 2 – Create the Trigger Function</h3>
<p>A trigger function in PostgreSQL is a special function that returns the <code>TRIGGER</code> type. Inside the function body, you have access to two important variables: <code>NEW</code> (the row after the operation) and <code>OLD</code> (the row before the operation).</p>
<pre><code class="language-sql">CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
</code></pre>
<p>This function sets the <code>updated_at</code> column to the current timestamp every time it runs. It then returns <code>NEW</code>, which tells PostgreSQL to proceed with the modified row.</p>
<h3 id="heading-step-3-attach-the-trigger-to-the-table">Step 3 – Attach the Trigger to the Table</h3>
<pre><code class="language-sql">CREATE TRIGGER trigger_set_updated_at
BEFORE UPDATE ON products
FOR EACH ROW
EXECUTE FUNCTION set_updated_at();
</code></pre>
<p>Let's break down each part of this statement:</p>
<ul>
<li><p><code>BEFORE UPDATE</code> – the trigger fires before the update is applied to the table</p>
</li>
<li><p><code>ON products</code> – the trigger is associated with the <code>products</code> table</p>
</li>
<li><p><code>FOR EACH ROW</code> – the function runs once for every row affected by the update</p>
</li>
<li><p><code>EXECUTE FUNCTION set_updated_at()</code> – the function to call</p>
</li>
</ul>
<h3 id="heading-step-4-test-it">Step 4 – Test It</h3>
<pre><code class="language-sql">INSERT INTO products (name, price) VALUES ('Wireless Keyboard', 49.99);

-- Wait a moment, then update the row
UPDATE products SET price = 44.99 WHERE name = 'Wireless Keyboard';

SELECT name, price, created_at, updated_at FROM products;
</code></pre>
<p>You'll see that <code>updated_at</code> has been automatically updated to the time of the UPDATE operation, even though you didn't explicitly set it in your query. That's the trigger doing its job.</p>
<h2 id="heading-before-vs-after-triggers">BEFORE vs AFTER Triggers</h2>
<p>The timing of a trigger determines when the function executes relative to the actual data change.</p>
<p><strong>BEFORE triggers</strong> run before the row is inserted, updated, or deleted. They are useful when you want to modify or validate the incoming data. Since the change hasn't been applied yet, you can alter the <code>NEW</code> row or even cancel the operation entirely by returning <code>NULL</code>.</p>
<p><strong>AFTER triggers</strong> run after the row change has been applied to the table. They are useful for side effects like logging, sending notifications, or updating related tables. At this point, the change is already done, so you can't modify the row – but you can read both <code>OLD</code> and <code>NEW</code> to see what changed.</p>
<p>Here's a rule of thumb: use BEFORE triggers when you need to change or reject data, and use AFTER triggers when you need to react to a completed change.</p>
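<p>To illustrate the cancellation behavior, here's a sketch of a BEFORE trigger that silently skips an operation by returning <code>NULL</code>. The rule is hypothetical – it assumes the <code>products</code> table had a boolean <code>archived</code> column:</p>
<pre><code class="language-sql">-- Hypothetical: silently skip deletes of archived products
CREATE OR REPLACE FUNCTION skip_archived_deletes()
RETURNS TRIGGER AS $$
BEGIN
    IF OLD.archived THEN
        RETURN NULL;  -- returning NULL from a BEFORE trigger cancels the row operation
    END IF;
    RETURN OLD;      -- for DELETE triggers, return OLD (NEW is not available)
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_skip_archived
BEFORE DELETE ON products
FOR EACH ROW
EXECUTE FUNCTION skip_archived_deletes();
</code></pre>
<p>Unlike <code>RAISE EXCEPTION</code>, which aborts the whole statement with an error, returning <code>NULL</code> quietly drops the row from the operation with no message to the caller – so use it deliberately.</p>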
<h2 id="heading-how-to-build-an-audit-log-with-an-after-trigger">How to Build an Audit Log with an AFTER Trigger</h2>
<p>One of the most common uses for triggers is audit logging – keeping a record of every change made to an important table. Let's build one.</p>
<h3 id="heading-step-1-create-an-audit-table">Step 1 – Create an Audit Table</h3>
<pre><code class="language-sql">CREATE TABLE product_audit (
    audit_id SERIAL PRIMARY KEY,
    product_id INT NOT NULL,
    action VARCHAR(10) NOT NULL,
    old_price NUMERIC(10, 2),
    new_price NUMERIC(10, 2),
    changed_by TEXT DEFAULT current_user,
    changed_at TIMESTAMP DEFAULT NOW()
);
</code></pre>
<h3 id="heading-step-2-create-the-audit-trigger-function">Step 2 – Create the Audit Trigger Function</h3>
<pre><code class="language-sql">CREATE OR REPLACE FUNCTION log_product_changes()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'UPDATE' THEN
        INSERT INTO product_audit (product_id, action, old_price, new_price)
        VALUES (OLD.id, 'UPDATE', OLD.price, NEW.price);
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO product_audit (product_id, action, old_price)
        VALUES (OLD.id, 'DELETE', OLD.price);
    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO product_audit (product_id, action, new_price)
        VALUES (NEW.id, 'INSERT', NEW.price);
    END IF;

    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
</code></pre>
<p>There are a few important things happening here. The <code>TG_OP</code> variable is a special string that PostgreSQL provides inside trigger functions. It tells you which operation activated the trigger: <code>'INSERT'</code>, <code>'UPDATE'</code>, or <code>'DELETE'</code>. This lets you handle different operations with a single function.</p>
<p>The <code>RETURN COALESCE(NEW, OLD)</code> at the end ensures the function returns the correct row. For INSERT and UPDATE operations, <code>NEW</code> exists and is returned. For DELETE operations, <code>NEW</code> is null, so <code>OLD</code> is returned instead.</p>
<h3 id="heading-step-3-attach-the-trigger">Step 3 – Attach the Trigger</h3>
<pre><code class="language-sql">CREATE TRIGGER trigger_product_audit
AFTER INSERT OR UPDATE OR DELETE ON products
FOR EACH ROW
EXECUTE FUNCTION log_product_changes();
</code></pre>
<p>Notice the <code>AFTER INSERT OR UPDATE OR DELETE</code> syntax. You can bind a single trigger to multiple events, which keeps your setup clean.</p>
<h3 id="heading-step-4-test-it">Step 4 – Test It</h3>
<pre><code class="language-sql">-- Insert a new product
INSERT INTO products (name, price) VALUES ('USB-C Hub', 29.99);

-- Update the price
UPDATE products SET price = 24.99 WHERE name = 'USB-C Hub';

-- Delete the product
DELETE FROM products WHERE name = 'USB-C Hub';

-- Check the audit log
SELECT * FROM product_audit ORDER BY changed_at;
</code></pre>
<p>You'll see three rows in <code>product_audit</code> (one for each operation) with the old and new prices recorded automatically. No application code needed.</p>
<h2 id="heading-how-to-use-a-before-trigger-for-validation">How to Use a BEFORE Trigger for Validation</h2>
<p>Triggers can also enforce business rules at the database level. Let's say you want to prevent any product from having a negative price.</p>
<pre><code class="language-sql">CREATE OR REPLACE FUNCTION prevent_negative_price()
RETURNS TRIGGER AS $$
BEGIN
    IF NEW.price &lt; 0 THEN
        RAISE EXCEPTION 'Product price cannot be negative. Got: %', NEW.price;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_check_price
BEFORE INSERT OR UPDATE ON products
FOR EACH ROW
EXECUTE FUNCTION prevent_negative_price();
</code></pre>
<p>Now test it:</p>
<pre><code class="language-sql">INSERT INTO products (name, price) VALUES ('Faulty Item', -10.00);
-- ERROR: Product price cannot be negative. Got: -10.00
</code></pre>
<p>The insert is rejected entirely. The row never makes it into the table. This is powerful because the rule is enforced at the database level regardless of which application or script sends the query.</p>
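<p>Worth noting: a simple single-column rule like this one could also be expressed as a CHECK constraint, which is lighter-weight:</p>
<pre><code class="language-sql">ALTER TABLE products
ADD CONSTRAINT price_non_negative CHECK (price &gt;= 0);
</code></pre>
<p>Reach for a trigger when the rule goes beyond what a constraint can express – for example, validations that need to consult other tables or apply conditional logic.</p>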
<h2 id="heading-row-level-vs-statement-level-triggers">Row-Level vs Statement-Level Triggers</h2>
<p>All the triggers you've seen so far use <code>FOR EACH ROW</code>, which means the function runs once per affected row. If you update 100 rows in a single query, the trigger function runs 100 times.</p>
<p>PostgreSQL also supports <code>FOR EACH STATEMENT</code> triggers, which run once per SQL statement regardless of how many rows are affected.</p>
<pre><code class="language-sql">CREATE OR REPLACE FUNCTION log_bulk_update()
RETURNS TRIGGER AS $$
BEGIN
    RAISE NOTICE 'A bulk operation was performed on the products table';
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_bulk_update_notice
AFTER UPDATE ON products
FOR EACH STATEMENT
EXECUTE FUNCTION log_bulk_update();
</code></pre>
<p>Statement-level triggers are less common, but they're useful for operations like refreshing a materialized view or sending a single notification after a batch update instead of one notification per row.</p>
<p><strong>Important</strong>: in statement-level triggers, the <code>NEW</code> and <code>OLD</code> variables are not available because the trigger isn't tied to any specific row.</p>
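<p>That said, if a statement-level trigger does need to see the affected rows, PostgreSQL 10 and later supports <em>transition tables</em> via the <code>REFERENCING</code> clause. Here's a sketch (the function name is illustrative):</p>
<pre><code class="language-sql">CREATE OR REPLACE FUNCTION summarize_bulk_update()
RETURNS TRIGGER AS $$
BEGIN
    -- new_rows is a transition table holding every updated row's new version
    RAISE NOTICE '% rows were updated', (SELECT count(*) FROM new_rows);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_bulk_update_summary
AFTER UPDATE ON products
REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
FOR EACH STATEMENT
EXECUTE FUNCTION summarize_bulk_update();
</code></pre>
<p>Inside the function, <code>old_rows</code> and <code>new_rows</code> behave like read-only tables you can query, which gives you the batch view without firing once per row.</p>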
<h2 id="heading-the-new-and-old-variables-reference">The NEW and OLD Variables Reference</h2>
<p>Here's a quick reference for when <code>NEW</code> and <code>OLD</code> are available in row-level triggers:</p>
<table>
<thead>
<tr>
<th>Operation</th>
<th>OLD</th>
<th>NEW</th>
</tr>
</thead>
<tbody><tr>
<td>INSERT</td>
<td>Not available</td>
<td>Contains the new row</td>
</tr>
<tr>
<td>UPDATE</td>
<td>Contains the row before the change</td>
<td>Contains the row after the change</td>
</tr>
<tr>
<td>DELETE</td>
<td>Contains the deleted row</td>
<td>Not available</td>
</tr>
</tbody></table>
<p>Understanding when each variable is available will save you from runtime errors in your trigger functions.</p>
<h2 id="heading-how-to-manage-triggers">How to Manage Triggers</h2>
<p>As you add more triggers to your database, you'll need to know how to inspect, disable, and remove them.</p>
<h3 id="heading-how-to-list-all-triggers-on-a-table">How to List All Triggers on a Table</h3>
<pre><code class="language-sql">SELECT trigger_name, event_manipulation, action_timing
FROM information_schema.triggers
WHERE event_object_table = 'products';
</code></pre>
<h3 id="heading-how-to-disable-a-trigger-temporarily">How to Disable a Trigger Temporarily</h3>
<pre><code class="language-sql">-- Disable a specific trigger
ALTER TABLE products DISABLE TRIGGER trigger_product_audit;

-- Disable all triggers on a table
ALTER TABLE products DISABLE TRIGGER ALL;
</code></pre>
<p>This is useful during bulk data migrations where you want to skip trigger execution for performance reasons.</p>
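<p>For migrations, there's also a session-level switch that disables ordinary triggers for the current session only, leaving other sessions unaffected. It typically requires superuser privileges, and it also skips foreign-key enforcement (which is implemented with triggers), so use it with care:</p>
<pre><code class="language-sql">SET session_replication_role = replica;
-- ... run the bulk load ...
SET session_replication_role = DEFAULT;
</code></pre>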
<h3 id="heading-how-to-re-enable-a-trigger">How to Re-Enable a Trigger</h3>
<pre><code class="language-sql">ALTER TABLE products ENABLE TRIGGER trigger_product_audit;
</code></pre>
<h3 id="heading-how-to-drop-a-trigger">How to Drop a Trigger</h3>
<pre><code class="language-sql">DROP TRIGGER IF EXISTS trigger_product_audit ON products;
</code></pre>
<p>Note that dropping a trigger does not drop the associated function. You'll need to drop the function separately if you no longer need it:</p>
<pre><code class="language-sql">DROP FUNCTION IF EXISTS log_product_changes();
</code></pre>
<h2 id="heading-when-to-use-triggers">When to Use Triggers</h2>
<p>Triggers work well for specific use cases. Here are the scenarios where they're a strong choice:</p>
<ul>
<li><p><strong>Audit logging</strong>: automatically recording who changed what and when, as you saw earlier in this tutorial.</p>
</li>
<li><p><strong>Derived data maintenance</strong>: keeping computed columns, counters, or summary tables in sync with the source data.</p>
</li>
<li><p><strong>Data validation</strong>: enforcing business rules that go beyond what CHECK constraints can express, like cross-table validations.</p>
</li>
<li><p><strong>Automatic timestamping</strong>: setting <code>created_at</code> and <code>updated_at</code> fields without relying on the application layer.</p>
</li>
</ul>
<h2 id="heading-when-to-avoid-triggers">When to Avoid Triggers</h2>
<p>Triggers are powerful, but they come with trade-offs. Here are cases where you should think twice before using them:</p>
<ul>
<li><p><strong>Complex business logic</strong>: if the logic involves calling external APIs, sending emails, or orchestrating multi-step workflows, it belongs in your application layer. Triggers should stay lightweight.</p>
</li>
<li><p><strong>Performance-sensitive bulk operations</strong>: row-level triggers on tables that frequently receive bulk inserts or updates can create significant overhead. If you're inserting millions of rows, those triggers fire millions of times.</p>
</li>
<li><p><strong>Cascading triggers</strong>: when one trigger's action fires another trigger, which fires another, debugging becomes extremely difficult. If you find yourself building a chain of triggers, reconsider the design.</p>
</li>
<li><p><strong>Logic that developers need to discover easily</strong>: triggers are sometimes called "hidden logic" because they execute automatically without appearing in application code. If your team frequently asks "why did this column change?" and the answer is always "there's a trigger," that's a sign the logic might be more discoverable if placed in your application layer or a stored procedure that's called explicitly.</p>
</li>
</ul>
<p>A good rule of thumb: if the logic is tightly coupled to the data and should always execute regardless of which client or service touches the table, a trigger is appropriate. If the logic depends on application context (like the current user's session, feature flags, or external state), it belongs in the application.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this tutorial, you learned what database triggers are and how they work in PostgreSQL. You built three practical triggers: an automatic timestamp updater, a full audit logging system, and a data validation guard. You also learned the difference between BEFORE and AFTER triggers, row-level and statement-level triggers, and when <code>NEW</code> and <code>OLD</code> variables are available.</p>
<p>Triggers are a powerful tool for keeping your data consistent and your business rules enforced at the database level. Use them for focused, data-centric operations, and keep the logic simple.</p>
<p>If you found this tutorial helpful, you can connect with me on <a href="https://linkedin.com/in/iyioladev">LinkedIn</a> and <a href="https://x.com/iyiola_dev_">X</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
