<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ Olalekan Odukoya - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ Olalekan Odukoya - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Thu, 14 May 2026 22:43:17 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/author/olamilekan000/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Run Multiple Kubernetes Clusters Without the Overhead Using kcp ]]>
                </title>
                <description>
                    <![CDATA[ In Kubernetes, when you need to isolate workloads, you might start by using namespaces. Namespaces provide a simple way to separate workloads within a single cluster. But as your requirements grow, es ]]>
                </description>
                <link>https://www.freecodecamp.org/news/how-to-run-multiple-kubernetes-clusters-without-the-overhead-using-kcp/</link>
                <guid isPermaLink="false">69c6ea5a7cf27065104ab997</guid>
                
                    <category>
                        <![CDATA[ Kubernetes ]]>
                    </category>
                
                    <category>
                        <![CDATA[ multi-cloud ]]>
                    </category>
                
                    <category>
<![CDATA[ multitenancy ]]>
                    </category>
                
                    <category>
                        <![CDATA[ consumer ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Provider ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Olalekan Odukoya ]]>
                </dc:creator>
                <pubDate>Fri, 27 Mar 2026 20:36:42 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/a42c1a28-7a9e-4676-891d-eae7d64f2900.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>In Kubernetes, when you need to isolate workloads, you might start by using namespaces. Namespaces provide a simple way to separate workloads within a single cluster.</p>
<p>But as your requirements grow, especially around compliance, security, multi-tenancy, or conflicting dependencies, your team will likely move beyond namespaces and start creating separate clusters.</p>
<p>What starts as a clean separation quickly becomes cluster sprawl, bringing higher costs, complex networking, and constant operational overhead.</p>
<p>In this article, we'll explore how <strong>kcp</strong> can help fix this problem by allowing you to run multiple “logical clusters” inside a single control plane.</p>
<h3 id="heading-table-of-contents">Table of Contents</h3>
<ul>
<li><p><a href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a href="#heading-the-challenge-of-namespaces-and-multiple-kubernetes-clusters">The Challenge of Namespaces and Multiple Kubernetes Clusters</a></p>
</li>
<li><p><a href="#heading-introducing-kcp">Introducing kcp</a></p>
</li>
<li><p><a href="#heading-getting-started-with-kcp">Getting Started with kcp</a></p>
</li>
<li><p><a href="#heading-deploying-and-managing-applications">Deploying and Managing Applications</a></p>
</li>
<li><p><a href="#heading-beyond-the-primitives-what-we-didnt-cover">Beyond the Primitives: What We Didn't Cover</a></p>
</li>
</ul>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p><strong>kubectl</strong> installed.</p>
</li>
<li><p>A terminal to run commands.</p>
</li>
<li><p><strong>curl</strong> installed.</p>
</li>
</ul>
<h2 id="heading-the-challenge-of-namespaces-and-multiple-kubernetes-clusters">The Challenge of Namespaces and Multiple Kubernetes Clusters</h2>
<p>While namespaces provide some level of isolation, many teams often default to creating entirely new Kubernetes clusters to achieve stronger multi-tenancy, environment separation, or geographic distribution.</p>
<p>At first, this approach works well. But as systems grow, managing a fleet of clusters introduces challenges that often outweigh the benefits.</p>
<p>Every new cluster comes with its own control plane, which you'll need to continuously patch, upgrade, and monitor. Over time, this operational overhead will add up, consuming cycles that platform teams could otherwise spend on higher-value work.</p>
<p>Also, clusters don't naturally share service discovery or identity. This forces you to introduce extra layers like service meshes or VPN-based networking, which increases your system's complexity and expands the overall attack surface.</p>
<p>There’s also the cost factor. Clusters incur baseline infrastructure costs regardless of how much workload they run. Creating dedicated clusters for small teams can lead to underutilized resources or, worse, delay the creation of necessary environments because the cost feels too high.</p>
<p>As a result, platform teams often find themselves acting as “cluster plumbers”, spending more time maintaining infrastructure than enabling developer productivity.</p>
<h3 id="heading-illustrating-the-namespace-problem">Illustrating the Namespace Problem</h3>
<p>As I mentioned earlier, when managing multiple clusters gets too complex, a natural alternative is to use namespaces for isolation within a single cluster.</p>
<p>At first glance, this seems like the perfect solution.</p>
<p>But to understand where this approach falls short, let’s walk through a real-world example using a common requirement in shared Kubernetes environments: running databases.</p>
<p>We'll start by creating different namespaces for each team:</p>
<pre><code class="language-shell">➜ ~ kubectl create namespace team-a 
➜ ~ kubectl create namespace team-b
</code></pre>
<p>Let's say <strong>Team A</strong> needs a MongoDB database for one of its services. The team must first install the required <a href="https://github.com/mongodb/mongodb-kubernetes">MongoDB Custom Resource Definitions (CRDs)</a> into the cluster, so Kubernetes knows how to understand the different <code>MongoDB</code> resources:</p>
<pre><code class="language-shell">➜ ~ kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-kubernetes/1.7.0/public/crds.yaml

customresourcedefinition.apiextensions.k8s.io/clustermongodbroles.mongodb.com created
customresourcedefinition.apiextensions.k8s.io/mongodb.mongodb.com created
customresourcedefinition.apiextensions.k8s.io/mongodbmulticluster.mongodb.com created
customresourcedefinition.apiextensions.k8s.io/mongodbsearch.mongodb.com created
customresourcedefinition.apiextensions.k8s.io/mongodbusers.mongodb.com created
customresourcedefinition.apiextensions.k8s.io/opsmanagers.mongodb.com created
customresourcedefinition.apiextensions.k8s.io/mongodbcommunity.mongodbcommunity.mongodb.com created
</code></pre>
<p>Secondly, <strong>Team A</strong> installs the actual Operator application (the controller that continuously runs the database logic) into their designated namespace:</p>
<pre><code class="language-shell">➜ ~ kubectl apply -n team-a -f https://raw.githubusercontent.com/mongodb/mongodb-kubernetes/1.7.0/public/mongodb-kubernetes.yaml
</code></pre>
<p>But the installation fails with the error below:</p>
<pre><code class="language-shell">the namespace from the provided object "mongodb" does not match the namespace "team-a". You must pass '--namespace=mongodb' to perform this operation.
</code></pre>
<p>Why did this fail? Most Kubernetes Operators are designed on the assumption that they own the entire cluster, not just a single namespace.</p>
<p>To force the operator to run in <code>team-a</code>, we can modify the manifest on the fly:</p>
<pre><code class="language-shell">curl -s https://raw.githubusercontent.com/mongodb/mongodb-kubernetes/1.7.0/public/mongodb-kubernetes.yaml \
  | sed 's/namespace: mongodb/namespace: team-a/g' \
  | kubectl apply -f -
</code></pre>
<p>We can then confirm that the operator is installed and running:</p>
<pre><code class="language-plaintext">➜ ~ kubectl get po -n team-a 
NAME                                          READY STATUS  RESTARTS AGE 
mongodb-kubernetes-operator-6f5f8bb7fd-8h5hj  1/1   Running 0        59s
</code></pre>
<p>But even after tricking the Operator into running inside <code>team-a</code>'s namespace, we still haven't solved the real problem.</p>
<p>At first glance, <code>team-a</code>'s operator is neatly confined to their namespace. But remember Step 1? <strong>The CRDs aren't namespaced – they're strictly cluster-scoped.</strong> So, even though <code>team-a</code> orchestrated this deployment purely for their own use, those CRDs are now globally registered across the entire cluster.</p>
<p>If Team B checks the API, they'll see all the MongoDB-related CRDs installed by Team A.</p>
<pre><code class="language-shell">➜ ~ kubectl get crds | grep mongodb

clustermongodbroles.mongodb.com               2026-03-24T10:49:35Z
mongodb.mongodb.com                           2026-03-24T10:49:36Z
mongodbcommunity.mongodbcommunity.mongodb.com 2026-03-24T10:49:38Z
mongodbmulticluster.mongodb.com               2026-03-24T10:49:36Z
mongodbsearch.mongodb.com                     2026-03-24T10:49:37Z 
mongodbusers.mongodb.com                      2026-03-24T10:49:37Z 
opsmanagers.mongodb.com                       2026-03-24T10:49:37Z
</code></pre>
<p>Now consider what happens if Team B needs to install a different version of MongoDB for its own services. Because the CRDs are shared across the cluster, both teams are now coupled to the same definitions. This means one team’s changes can easily impact the other, turning what should be isolated environments into a source of conflict.</p>
<h2 id="heading-introducing-kcp">Introducing kcp</h2>
<p><strong>kcp</strong> is an open-source project that lets you run multiple logical Kubernetes clusters on a single control plane.</p>
<p>These logical clusters are called <strong>workspaces</strong>, and each one behaves like an independent Kubernetes cluster. Every workspace has its own API endpoint, authentication, authorization, and policies, giving teams the experience of working in fully isolated environments.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5e6abef0af89662115c0f5ca/ede32f6e-c260-426e-8d50-4f78f11fa1b1.svg" alt="brief kcp architecture and component" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>This decoupling of the control plane from the worker nodes is what makes kcp different.</p>
<p>In traditional Kubernetes, spinning up a new cluster means provisioning a new API server, a new etcd instance, and all the associated controllers. With kcp, you spin up a workspace, and you have a strong, confined environment for your workload.</p>
<p>It's worth noting that <strong>kcp itself doesn't run workloads.</strong> It's strictly a control plane. Your actual applications still run on physical Kubernetes clusters. kcp only manages the workspaces and the synchronization of resources to those underlying clusters.</p>
<h2 id="heading-getting-started-with-kcp">Getting Started with kcp</h2>
<p>Now that we've covered what kcp is and why it matters, let's get our hands dirty. We'll set up a local kcp environment and explore the core concepts in action.</p>
<p>To make this realistic, we'll follow a common kcp workflow: a platform team that provides custom APIs, and tenant teams that consume them.</p>
<p>In our case, the platform team will export a MongoDB API, and our two tenant teams will subscribe to those APIs using <strong>APIBindings</strong>. Once bound, they can deploy MongoDB instances into their workspaces and sync them to physical clusters.</p>
<p>This pattern is at the heart of how kcp enables scalable multi-tenancy. The platform team controls the API definitions and versioning. Tenant teams get self-service access without needing to understand the underlying infrastructure. Let's see how it works!</p>
<h3 id="heading-installing-kcp">Installing kcp</h3>
<p>Running kcp locally is incredibly lightweight since there are no heavy worker nodes to spin up. You will need three things: the <code>kcp</code> server itself, the <code>kubectl-kcp</code> plugin, and the <code>kubectl-ws</code> plugin to manage workspaces.</p>
<p>To install the binaries, let's head over to the <a href="https://github.com/kcp-dev/kcp/releases/tag/v0.30.1">kcp-dev releases page</a>.</p>
<p>The commands below are for macOS Apple Silicon. If you're using an Intel Mac or Linux, simply replace <code>darwin_arm64</code> with your respective architecture.</p>
<ol>
<li>Download the kcp server and workspace plugins:</li>
</ol>
<pre><code class="language-shell">➜ ~ curl -LO https://github.com/kcp-dev/kcp/releases/download/v0.30.1/kcp_0.30.1_darwin_arm64.tar.gz 

➜ ~ curl -LO https://github.com/kcp-dev/kcp/releases/download/v0.30.1/kubectl-kcp-plugin_0.30.1_darwin_arm64.tar.gz

➜ ~ curl -LO https://github.com/kcp-dev/kcp/releases/download/v0.30.1/kubectl-ws-plugin_0.30.1_darwin_arm64.tar.gz
</code></pre>
<ol>
<li>Extract the archives:</li>
</ol>
<pre><code class="language-shell">➜ ~ tar -xzf kcp_0.30.1_darwin_arm64.tar.gz 
➜ ~ tar -xzf kubectl-kcp-plugin_0.30.1_darwin_arm64.tar.gz
➜ ~ tar -xzf kubectl-ws-plugin_0.30.1_darwin_arm64.tar.gz
</code></pre>
<ol>
<li>Move the required binaries into your <strong>PATH</strong>:</li>
</ol>
<pre><code class="language-shell">➜ ~ sudo mv bin/kcp /usr/local/bin/
➜ ~ sudo mv bin/kubectl-kcp /usr/local/bin/
➜ ~ sudo mv bin/kubectl-ws /usr/local/bin/
</code></pre>
<p>You can confirm the installation by checking the version.</p>
<pre><code class="language-shell">➜ ~ kcp --version
kcp version v1.33.3+kcp-v0.0.0-627385a6
</code></pre>
<h3 id="heading-starting-the-server">Starting the Server</h3>
<p>With the binaries installed, let's boot up your local control plane and bind it to localhost. But first, let's create a working folder:</p>
<pre><code class="language-plaintext">➜ ~ mkdir kcp-test
➜ ~ cd kcp-test
</code></pre>
<p>We can then start the kcp server in this directory.</p>
<pre><code class="language-shell">➜ ~ kcp start --bind-address=127.0.0.1
</code></pre>
<p>You'll see a flurry of logs as kcp boots up its internal database and exposes the API server. Leave this terminal running in the background.</p>
<h3 id="heading-connecting-to-the-root-workspace">Connecting to the Root Workspace</h3>
<p>Open a new terminal window and navigate back into the <code>kcp-test</code> folder we just created.</p>
<p>At first, if you run a standard <code>ls</code> command, the folder will look empty. But during startup, kcp silently generated a hidden <code>.kcp</code> directory that contains our local certificates and our administrative <code>kubeconfig</code> file. Let's verify that:</p>
<pre><code class="language-shell">➜ ~ cd kcp-test 
➜ kcp-test ls
➜ kcp-test ls -a
.    ..    .kcp
➜ kcp-test ls .kcp
admin.kubeconfig  apiserver.crt  apiserver.key  etcd-server  sa.key
</code></pre>
<p>Now that we know exactly where the configuration file lives, let's export it so our <code>kubectl</code> commands are routed to kcp instead of your default cluster:</p>
<pre><code class="language-plaintext">export KUBECONFIG=$PWD/.kcp/admin.kubeconfig
</code></pre>
<p>Finally, let's use the workspace plugin we installed earlier to verify that we're connected correctly:</p>
<pre><code class="language-shell"> ➜ kubectl ws .
</code></pre>
<p>You should see the message below printed to the console:</p>
<pre><code class="language-shell">Current workspace is 'root'.
</code></pre>
<p>This shows that you're now officially inside the kcp <strong>root workspace</strong>. This is the highest-level administrative boundary where we'll begin creating our tenant logical clusters.</p>
<h3 id="heading-creating-and-managing-workspaces">Creating and Managing Workspaces</h3>
<p>As we discussed above, in a standard Kubernetes cluster, separating teams means using <code>kubectl create namespace</code>. In kcp, we solve the problem by creating entirely isolated logical clusters – workspaces.</p>
<p>If you recall our architecture diagram from earlier, we want to create three distinct environments for our company: one for the platform engineers to manage shared APIs, and two for our isolated tenant development teams.</p>
<p>Since we're currently inside the administrative <code>root</code> workspace, we can create our new tenant workspaces as children of the <code>root</code>:</p>
<pre><code class="language-plaintext">➜ kubectl ws create platform-team
Workspace "platform-team" (type root:organization) created.
Waiting for it to be ready... 
Workspace "platform-team" (type root:organization) is ready to use.

➜ kubectl ws create team-a 
Workspace "team-a" (type root:organization) created.
Waiting for it to be ready... 
Workspace "team-a" (type root:organization) is ready to use.

➜ kubectl ws create team-b
Workspace "team-b" (type root:organization) created.
Waiting for it to be ready... 
Workspace "team-b" (type root:organization) is ready to use.
</code></pre>
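<p>Under the hood, the <code>ws</code> plugin is creating <code>Workspace</code> objects for us. As a sketch of the equivalent declarative approach (assuming the <code>tenancy.kcp.io/v1alpha1</code> API group that recent kcp releases use – field names may vary between versions, and <code>team-c</code> is a hypothetical extra tenant):</p>
<pre><code class="language-yaml"># Sketch: a Workspace manifest roughly equivalent to `kubectl ws create team-c`.
# The workspace type defaults apply when spec is omitted.
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: team-c
</code></pre>
<p>Applying a manifest like this with <code>kubectl apply -f</code> while inside the <code>root</code> workspace would create the child workspace the same way the plugin does.</p>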
<p>Now, here is where kcp truly shines. Unlike a standard cluster, where objects are just a massive flat list, kcp manages its API as a hierarchy. We can visually prove the structure of our new logical clusters using the <code>tree</code> command:</p>
<pre><code class="language-shell">➜ kubectl ws tree
.
└── root
      ├── platform-team
      ├── team-a
      └── team-b
</code></pre>
<p>Jumping between these logical clusters is as fast as changing directories in a terminal. Let's switch our context over into Team A's workspace:</p>
<pre><code class="language-plaintext">➜ kubectl ws team-a 
Current workspace is 'root:team-a' (type root:organization).
</code></pre>
<h4 id="heading-proving-the-isolation">Proving the Isolation</h4>
<p>To truly understand the power of what we just did, let's try running a standard Kubernetes command while inside <code>team-a</code>:</p>
<pre><code class="language-plaintext">➜ kubectl get namespaces

NAME      STATUS   AGE 
default   Active   15m
</code></pre>
<p>Let's also ask the cluster what APIs are actually available to us out of the box:</p>
<pre><code class="language-plaintext">➜ kubectl api-resources
</code></pre>
<p>Your output should be similar to what is in the image below:</p>
<img src="https://cdn.hashnode.com/uploads/covers/5e6abef0af89662115c0f5ca/775eff52-8ae7-4363-bd37-fce6ab0cc587.png" alt="775eff52-8ae7-4363-bd37-fce6ab0cc587" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>Take a closer look at that list. You'll notice that there are no Pods, Deployments, or even ReplicaSets – none of the APIs you'd expect to find in a standard Kubernetes cluster.</p>
<p>This output proves exactly what we discussed in the architecture section. kcp is incredibly lightweight because every new workspace is born <strong>completely stripped of compute</strong>. Out of the box, it only contains the absolute bare-minimum control plane APIs needed for routing, RBAC, namespaces, and authentication.</p>
<p>From Team A's perspective, they own this pristine, empty universe. If they install a massive, noisy operator right now, like the MongoDB CRD, it will only exist right here in this specific API bucket.</p>
<p>But this raises the ultimate question: If there are no <code>Deployments</code> or <code>Pods</code> APIs in this workspace... how do we actually deploy our applications?</p>
<h2 id="heading-deploying-and-managing-applications">Deploying and Managing Applications</h2>
<p>Now that we have set up our isolated environments, we must address the glaring issue from our last terminal output: <strong>How do developers actually deploy applications</strong> if there are no <code>Deployment</code> or <code>Pod</code> APIs?</p>
<p>In standard Kubernetes, the API is monolithic. You get everything whether you need it or not, and adding a new schema (like an Operator) forces it globally onto everyone.</p>
<p>kcp takes the exact opposite approach. Every workspace starts completely empty. You then selectively "subscribe" your workspace to only the APIs you actually need using two incredibly powerful new concepts: <strong>APIExports</strong> and <strong>APIBindings</strong>.</p>
<p>Let's see exactly how this solves our MongoDB multi-tenancy problem, step by step.</p>
<h3 id="heading-1-the-platform-team-exports-the-api">1. The Platform Team "Exports" the API</h3>
<p>Instead of treating Custom Resource Definitions as global hazards, the platform engineers manage them centrally. First, let's switch into the <code>platform-team</code> workspace:</p>
<pre><code class="language-plaintext">➜ kubectl ws :root:platform-team

Current workspace is 'root:platform-team' (type root:organization).
</code></pre>
<p>Here, we'll install the MongoDB Operator CRDs in the platform-team's workspace:</p>
<pre><code class="language-plaintext">➜ kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-kubernetes/1.7.0/public/crds.yaml
</code></pre>
<p>To confirm that this is indeed isolated, let's first check which CRDs were installed:</p>
<pre><code class="language-shell">➜ kubectl get crd

NAME                                          CREATED AT
clustermongodbroles.mongodb.com               2026-03-24T20:45:50Z
mongodb.mongodb.com                           2026-03-24T20:45:50Z
mongodbcommunity.mongodbcommunity.mongodb.com 2026-03-24T20:45:51Z
mongodbmulticluster.mongodb.com               2026-03-24T20:45:50Z
mongodbsearch.mongodb.com                     2026-03-24T20:45:51Z
mongodbusers.mongodb.com                      2026-03-24T20:45:51Z
opsmanagers.mongodb.com                       2026-03-24T20:45:51Z
</code></pre>
<p>We can switch to <code>team-a</code>'s workspace (any of the team workspaces would do – we're just trying to establish that the installed <em><strong>CRDs</strong></em> are only visible in the <code>platform-team</code> workspace).</p>
<pre><code class="language-shell">➜ kubectl ws :root:team-a

Current workspace is 'root:team-a' (type root:organization).
</code></pre>
<pre><code class="language-plaintext">➜ kubectl get crd 
No resources found
</code></pre>
<p>The output shows that no custom resources are found or registered in this workspace. This is the power of kcp.</p>
<p>If you don't want to continually type out paths to switch between your logical clusters, the <code>kcp</code> plugin includes a powerful interactive UI right in your terminal.</p>
<p>By running <code>kubectl ws -i</code>, you can use your arrow keys to navigate through your hierarchy and press <code>Enter</code> to instantly switch your context. Even better, this interactive mode provides a holistic view of your environment at any given time. With a single glance, you can see exactly how many APIExports are hosted inside a specific workspace, or which APIs are currently <strong>bound</strong> by other workspaces.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/4d86d960-a23c-4cb1-8155-6fe236240893.png" alt="4d86d960-a23c-4cb1-8155-6fe236240893" style="display:block;margin:0 auto" width="600" height="400" loading="lazy">

<p>Let's switch back to the <code>platform-team</code> workspace to continue with our setup.</p>
<p>Now, we need to do something kcp-specific. If you check your resources right now, those CRDs are strictly local to this workspace. To safely share them with our tenant teams, we need to convert them into an internal kcp tracking object called an <strong>APIResourceSchema</strong>. This is how kcp structurally version-controls APIs so they can be securely exported.</p>
<p>To do this, we use our <code>kcp</code> plugin to take a "snapshot" of the local MongoDB CRD:</p>
<pre><code class="language-plaintext">kubectl get crd mongodbcommunity.mongodbcommunity.mongodb.com -o yaml | kubectl kcp crd snapshot -f - --prefix v1 | kubectl apply -f -
</code></pre>
<p>You should see an output that says:</p>
<blockquote>
<p>apiresourceschema.apis.kcp.io/v1.mongodbcommunity.mongodbcommunity.mongodb.com created</p>
</blockquote>
<p>This tells kcp: "Get the CRD we just installed, take a snapshot with the prefix 'v1', and apply the resulting <strong>APIResourceSchema</strong> back to the cluster."</p>
<p>Now, let's look for the schema kcp just generated for us:</p>
<pre><code class="language-plaintext">➜ kubectl get apiresourceschemas

NAME                                             AGE
v1.mongodbcommunity.mongodbcommunity.mongodb.com 11s
</code></pre>
<p>To safely share this API with our teams, we wrap that generated schema into an <code>APIExport</code>. This acts like "APIs as a Service," publishing the schema so that other workspaces can optionally choose to consume it.</p>
<p>Let's create the Export using the exact schema name we just found:</p>
<pre><code class="language-shell">➜ cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: mongodb-v1
spec:
  latestResourceSchemas:
    - v1.mongodbcommunity.mongodbcommunity.mongodb.com
EOF
</code></pre>
<p>We can confirm this was successfully created by listing the APIExport resources we have:</p>
<pre><code class="language-plaintext">➜ kubectl get apiexports

NAME       AGE
mongodb-v1 2m46s
</code></pre>
<h3 id="heading-2-tenant-teams-bind-to-the-api">2. Tenant Teams "Bind" to the API</h3>
<p>Now let's switch our terminal context back over to Team A. Remember our previous output? Their workspace currently has no idea what a MongoDB cluster is. Let's prove it:</p>
<pre><code class="language-plaintext">➜ kubectl ws :root:team-a
Current workspace is "root:team-a" (type root:organization).

➜ kubectl api-resources | grep mongodb
# (No output. The API does not exist here!)
</code></pre>
<p>To securely subscribe to the platform team's newly created API service, Team A needs to create an <code>APIBinding</code>.</p>
<p>While we can write standard Kubernetes YAML to do this, the <code>kcp</code> plugin provides a <code>bind</code> command. Team A simply points the <code>bind</code> command directly at the workspace and the specific API export they want to consume:</p>
<pre><code class="language-plaintext">➜ kubectl kcp bind apiexport root:platform-team:mongodb-v1
apibinding mongodb-v1 created. Waiting to successfully bind ...
mongodb-v1 created and bound.

➜ kcp-test kubectl get apibindings
NAME                    AGE     READY
mongodb-v1              73s     True
tenancy.kcp.io-bqt7a    7h10m   True
topology.kcp.io-9dlvq   7h10m   True
</code></pre>
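<p>If you prefer plain manifests over the plugin, the bind step can also be expressed as an <code>APIBinding</code> object. This is a sketch based on the same <code>apis.kcp.io/v1alpha1</code> API group we used for the <code>APIExport</code> – field names may differ slightly across kcp versions:</p>
<pre><code class="language-yaml"># Sketch: an APIBinding roughly equivalent to
# `kubectl kcp bind apiexport root:platform-team:mongodb-v1`
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: mongodb-v1
spec:
  reference:
    export:
      path: root:platform-team   # workspace hosting the APIExport
      name: mongodb-v1           # the APIExport to consume
</code></pre>
<p>Team A would apply this from inside their own workspace; the platform team's workspace is only referenced, never modified.</p>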
<p>The moment Team A executes that <code>bind</code> command, their workspace is magically updated with the new capabilities. Let's check our <code>api-resources</code> one more time:</p>
<pre><code class="language-plaintext">➜ kubectl api-resources | grep mongodb
mongodbcommunity   mdbc   mongodbcommunity.mongodb.com/v1   true   MongoDBCommunity
</code></pre>
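<p>With the binding in place, Team A can create MongoDB resources inside their own workspace. As a hedged example (the exact fields come from the MongoDB Community operator's <code>mongodbcommunity.mongodb.com/v1</code> API – consult its documentation for a production-ready spec, and treat the name and version below as placeholders), a minimal replica set might look like:</p>
<pre><code class="language-yaml"># Sketch: a minimal MongoDBCommunity resource for Team A's workspace.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: team-a-db
  namespace: default
spec:
  members: 3          # replica set size
  type: ReplicaSet
  version: "6.0.5"    # MongoDB server version
</code></pre>
<p>Remember that kcp itself doesn't run workloads: this object lives in Team A's workspace, and a sync agent on a physical cluster would be what actually reconciles it into running pods.</p>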
<h2 id="heading-beyond-the-primitives-what-we-didnt-cover">Beyond the Primitives: What We Didn't Cover</h2>
<p>At this point, you should have a firm, hands-on grasp of the core user primitives of kcp, that is <strong>Workspaces</strong>, <strong>APIExports</strong>, and <strong>APIBindings</strong>. But we've only just scratched the surface of what this architecture makes possible.</p>
<p>To keep this guide digestible, there are a few massive topics that I deliberately didn't cover in this article:</p>
<ol>
<li><p><strong>Shards and High Availability:</strong> Since kcp is designed to host thousands of logical clusters, a single database isn't enough. kcp introduces the <code>Shard</code> primitive, allowing platform administrators to horizontally partition workspace state across multiple underlying <code>etcd</code> instances. This lets kcp scale horizontally and achieve high availability (HA) without complicating the developer experience.</p>
</li>
<li><p><strong>Front-Proxy:</strong> When kcp scales to host thousands of logical clusters, it needs a way to seamlessly direct traffic. The kcp <strong>Front-Proxy</strong> sits at the edge of the architecture, dynamically routing incoming <code>kubectl</code> API requests straight to the correct underlying workspace and shard. It ensures the developer experience feels unified, no matter how large the background infrastructure becomes.</p>
</li>
<li><p><strong>Virtual Workspaces:</strong> While the workspaces we built today act as simple isolated buckets of state, kcp also supports <strong>Virtual Workspaces</strong>. These act as dynamic, read-only projections of data. For example, <em><strong>kcp</strong></em> uses virtual workspaces to project a unified view of a specific API across multiple tenant workspaces so that controllers can easily watch them all at once.</p>
</li>
<li><p><strong>APIExportEndpointSlices:</strong> Just like standard Kubernetes uses endpoints to route traffic to pods, kcp uses <code>EndpointSlices</code> to efficiently route and scale the delivery of massive <code>APIExports</code> across thousands of consuming workspaces.</p>
</li>
<li><p><strong>Wiring up the Sync Agent (</strong><code>api-syncagent</code><strong>):</strong> We discussed this conceptually in our architecture diagram, but we didn't actually attach a physical cluster. In a production scenario, you deploy the Sync Agent onto a fleet of downstream execution clusters (like EKS, GKE, or On-Premises environments) to automatically pull workloads safely out of kcp and execute them seamlessly on physical hardware.</p>
</li>
<li><p><strong>External Integrations Like Crossplane:</strong> Because kcp acts purely as a multi-tenant API control plane, it pairs incredibly well with <strong>Crossplane</strong>. By publishing Crossplane as an <code>APIExport</code>, you can empower developer teams to provision actual cloud infrastructure (like AWS databases or Cloud Spanners) using standard YAML directly from their completely isolated kcp workspaces.</p>
</li>
</ol>
<p>We will cover those advanced integrations in a future deep-dive. But armed with just the base primitives we built today, we can already solve the incredibly complex infrastructure problems we outlined at the beginning of the article.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
