If someone asked you how secrets flow from AWS Secrets Manager into a running pod, could you explain it confidently?
Storing them is straightforward. But handling rotation, stale env vars, and the gap between what your pod reads and what AWS actually holds is where many engineers go quiet.
In this guide, you'll build a complete secrets pipeline from AWS Secrets Manager into Kubernetes pods. You'll provision the infrastructure with Terraform, sync secrets using the External Secrets Operator, and run a sample application that reads the same credentials in two different ways: via environment variables and via a volume mount.
By the end, you'll be able to:
Explain the full architecture from vault to pod
Run the lab locally in about 15 minutes
Prove why environment variables go stale after rotation, while mounted secret files stay fresh
Deploy the same pattern on Amazon Elastic Kubernetes Service with OpenID Connect-based CI/CD
Troubleshoot the most common failures
Below is an architecture diagram showing secrets flowing from AWS Secrets Manager through the External Secrets Operator into a Kubernetes Secret, then splitting into environment variables set at pod start and a volume mount that updates within 60 seconds.
Table of Contents
How to Choose Between External Secrets Operator and the CSI Driver
How to Deploy the Pattern on Amazon Elastic Kubernetes Service
How to Configure GitHub Actions Without Stored AWS Credentials
Prerequisites
Before you begin, make sure you have the following tools installed and configured.
For the local lab:
An AWS account with access to AWS Secrets Manager
The AWS CLI installed and configured. Run aws configure and provide your access key, secret key, default region, and output format. The credentials need permission to read and write secrets in AWS Secrets Manager.
kubectl installed. For Microk8s, run microk8s kubectl config view --raw > ~/.kube/config after installation to connect kubectl to your local cluster.
Terraform installed
Helm installed
Docker installed
A local Kubernetes cluster: the lab supports Microk8s and kind. If you do not have either installed, follow the Microk8s install guide before continuing.
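If you are starting from scratch on Microk8s, the bootstrap is short. This is a hedged sketch: the addon names match recent Microk8s releases (older releases call hostpath-storage simply "storage"), and the install guide linked above remains the authoritative source.

```shell
# Install Microk8s and wait until the node is ready.
sudo snap install microk8s --classic
microk8s status --wait-ready

# Enable DNS and a default StorageClass (the lab needs storage enabled).
microk8s enable dns hostpath-storage

# Point kubectl at the Microk8s cluster.
microk8s kubectl config view --raw > ~/.kube/config
```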
For the Amazon Elastic Kubernetes Service sections:
An Amazon Elastic Kubernetes Service cluster you can create or manage
A GitHub repository you can configure for workflows and secrets
The lab repository includes two deployment paths: a local path for fast learning and an Amazon Elastic Kubernetes Service path for a production-like setup. All the exact commands for each path live in the repo's docs/DEPLOY-LOCAL.md and docs/DEPLOY-EKS.md.
How to Understand the Secret Flow
Before you run any command, you need to understand how the pieces connect.
The flow has four stages:
A developer or automated system updates a secret in AWS Secrets Manager.
The External Secrets Operator polls AWS Secrets Manager on a schedule and creates or updates a Kubernetes Secret.
Your pod reads that Kubernetes Secret.
During rotation, the Kubernetes Secret updates, but your two consumption modes behave differently.
How the External Secrets Operator Sync Works
The External Secrets Operator reads a custom Kubernetes resource called ExternalSecret. That resource tells the operator three things:
Which secret store to connect to
Which Kubernetes Secret name to create or update
How often to refresh
In this lab, the ExternalSecret creates a Kubernetes Secret named myapp-database-creds. The operator also adds a template annotation that can trigger a pod restart when the secret rotates.
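Once the operator has synced, you can confirm the Secret exists and decode a key. The Secret name and the DB_USERNAME key come from this lab; adjust the namespace if you deployed elsewhere.

```shell
# Confirm ESO created the Secret.
kubectl get secret myapp-database-creds -n default

# Decode one key to verify the synced value (data is base64-encoded).
kubectl get secret myapp-database-creds -n default \
  -o jsonpath='{.data.DB_USERNAME}' | base64 -d; echo
```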
How the App Consumes Secrets
The sample application exposes three endpoints so you can validate behavior at any time.
/secrets/env shows what environment variables the pod sees
/secrets/volume shows what the files in the mounted secret directory look like
/secrets/compare compares both and reports whether rotation has been detected
The app checks four keys: DB_USERNAME, DB_PASSWORD, DB_HOST, and DB_PORT.
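With a port-forward to the service in place (3000:80, as set up later in the lab), you can hit all three endpoints from a terminal:

```shell
# Each endpoint returns JSON; pipe through python3 for readability.
curl -s http://localhost:3000/secrets/env     | python3 -m json.tool
curl -s http://localhost:3000/secrets/volume  | python3 -m json.tool
curl -s http://localhost:3000/secrets/compare | python3 -m json.tool
```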
How to Run the Local Lab
The local lab gives you a fast learning loop. You can see the full pipeline working and test rotation without waiting for a cloud deployment.
Step 1: Clone the Repo
git clone https://github.com/Osomudeya/k8s-secret-lab
cd k8s-secret-lab
Step 2: Run the Spin-Up Script
bash spinup.sh
The script will ask you to choose a local cluster type. Pick Microk8s or kind, depending on what you have installed. The script installs the External Secrets Operator via Helm, applies the Terraform configuration, and deploys the sample application.
If the script fails at any point, check docs/TROUBLESHOOTING.md before retrying. The most common causes are missing AWS credentials, a misconfigured kubeconfig, or a Microk8s storage add-on that is not enabled.
Important: Run the Lab UI
The lab ships with a separate guided tutorial interface that runs on your laptop. This is not the in-cluster application; it's a React-based checklist at lab-ui/ that walks you through each concept and checkpoint as you work through the lab.
To start it, open a second terminal and run:
cd lab-ui && npm install && npm run dev
Then open http://localhost:5173. You'll see a module-by-module guide covering the full flow from external secrets to rotation to CI/CD.
Keep this terminal running alongside your lab. The Lab UI and the in-cluster app (localhost:3000) are two separate things: the UI guides you through the steps, while the app shows you the live secrets.
Step 3: Access the Application
Once the lab finishes, port-forward the service.
kubectl port-forward svc/myapp 3000:80 -n default
Open http://localhost:3000. You should see a table showing each secret key and whether the environment variable value matches the volume mount value.
Step 4: Validate That Secrets Match
Run the compare endpoint directly from the terminal.
curl -s http://localhost:3000/secrets/compare | python3 -m json.tool
When everything is working, the response will include "all_match": true.
How to Inspect the ExternalSecret and the Application
At this point the lab is running. Now you'll want to inspect the manifests so you understand what each part does.
Step 1: Read the ExternalSecret Manifest
Open k8s/aws/external-secret.yaml. Focus on these four fields:
refreshInterval: how often the operator polls AWS Secrets Manager
secretStoreRef: which store the operator authenticates against
target: the name of the Kubernetes Secret to create
data: the mapping from AWS Secrets Manager JSON keys to Kubernetes Secret keys
Here is what that mapping looks like in this lab:
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: myapp-database-creds
    creationPolicy: Owner
  data:
    - secretKey: DB_USERNAME
      remoteRef:
        key: prod/myapp/database
        property: username
The property field tells the operator which JSON key inside the AWS secret to extract. If your secret in AWS Secrets Manager is a JSON object, each field gets its own entry here.
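As a sketch of the AWS side, a JSON secret matching that mapping could be created like this. The secret name prod/myapp/database comes from the manifest above; the field values are placeholders, not the lab's real credentials.

```shell
# Create a JSON secret whose keys match the property fields in the
# ExternalSecret (username, password, host, port).
aws secretsmanager create-secret \
  --name prod/myapp/database \
  --secret-string '{"username":"appuser","password":"s3cret","host":"db.internal","port":"5432"}'
```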
Two fields here are worth understanding before you move on. creationPolicy: Owner means the operator owns the Kubernetes Secret it creates. If you delete the ExternalSecret, the Secret is deleted too. ClusterSecretStore is a cluster-scoped store, meaning any namespace in the cluster can use it. A plain SecretStore is namespace-scoped. For this lab, cluster-scoped is the right choice because it keeps the setup simple.
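For reference, a minimal ClusterSecretStore for AWS Secrets Manager looks roughly like this. The region and the credentials Secret names are assumptions for illustration, and the apiVersion must match your installed ESO version (see the troubleshooting table later in this guide).

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1              # assumption: use your region
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-credentials    # assumption: Secret holding AWS keys
            key: access-key-id
            namespace: external-secrets
          secretAccessKeySecretRef:
            name: aws-credentials
            key: secret-access-key
            namespace: external-secrets
```

On EKS, the auth block would typically be replaced with an IAM-role-based setup instead of static keys; the repo's EKS guide covers that path.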
Step 2: Read the Deployment Manifest
Open k8s/aws/deployment.yaml. You are looking for two sections: envFrom and volumeMounts.
envFrom:
  - secretRef:
      name: myapp-database-creds
volumeMounts:
  - name: db-secret-vol
    mountPath: /etc/secrets
    readOnly: true
Both paths read from the same Kubernetes Secret, myapp-database-creds. The envFrom block injects all keys as environment variables at pod start.
The volumeMounts block mounts the same secret as files under /etc/secrets.
This is the core of the rotation lesson. Both paths read the same source. But they behave differently after that source changes.
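You can watch the two paths directly with kubectl exec, using the deployment name from this lab:

```shell
# Value captured into the environment at container start:
kubectl exec deploy/myapp -n default -- printenv DB_PASSWORD

# Value the kubelet keeps refreshed on the volume:
kubectl exec deploy/myapp -n default -- cat /etc/secrets/DB_PASSWORD
```

Before any rotation, both commands print the same value.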
Step 3: Read the App Comparison Logic
Open app/server.js. The comparison logic reads environment variables from process.env and reads mounted secret files from /etc/secrets/<key>. Then it computes a per-key match and a global all_match value.
The /secrets/compare endpoint sets rotation_detected: true when any key differs between env and volume.
How to Test Secret Rotation
Secret rotation is where real teams feel pain. This lab makes that pain visible so you can explain it clearly and fix it confidently.
How the Rotation Gap Works
When a pod starts, Kubernetes gives it two ways to read a secret.
The first way is environment variables. Think of these like sticky notes written on the wall of the container the moment it boots up. The value gets written once, at startup, and never changes. Even if the secret in AWS gets updated ten minutes later, the sticky note still says the old value. The container cannot see the update because nobody rewrote the note.
The second way is a volume mount. Think of this like a shared folder that someone else can update remotely. Kubernetes creates a small folder inside the container and puts the secret value in a file there. When the secret changes in AWS and ESO syncs it into Kubernetes, the kubelet quietly updates that file within about 60 seconds. The container reads the file fresh every time it needs the value, so it sees the new password automatically.
Same secret, two paths. One goes stale while one stays fresh.
The problem happens when your app reads the database password from the environment variable, the sticky note, and someone rotates the password in AWS. ESO updates Kubernetes. The file gets the new password. But your app is still reading the sticky note, which has the old one. Connection fails.
That difference isn't a bug. It's how the Linux process model and the kubelet work. Understanding it is the difference between knowing Kubernetes secrets and actually operating them.
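The sticky-note behavior is ordinary process semantics, and you can reproduce it without a cluster. In this sketch the shell variable plays the role of the env var snapshot and a temp file plays the volume mount; the paths are purely illustrative.

```shell
# Stand-in for the secret volume.
demo=$(mktemp -d)
echo -n "old-password" > "$demo/DB_PASSWORD"

# "Pod start": the env var is a one-time snapshot of the file.
DB_PASSWORD=$(cat "$demo/DB_PASSWORD")

# "Rotation": the file changes; the snapshot does not.
echo -n "new-password" > "$demo/DB_PASSWORD"

echo "env:    $DB_PASSWORD"                # prints env:    old-password
echo "volume: $(cat "$demo/DB_PASSWORD")"  # prints volume: new-password
```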
Here is what you're about to observe in the lab:
The rotation script updates the secret in AWS
ESO syncs the new value into Kubernetes within seconds
The volume file updates automatically
The environment variable stays stale until the pod restarts
The /secrets/compare endpoint shows both values side by side so you can see the gap live
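If you prefer to see each step rather than run the script, the same sequence can be done by hand. These commands mirror what the script does; the secret name, ExternalSecret name, and placeholder values follow this lab's conventions, and the force-sync annotation is the ESO mechanism for triggering an immediate reconcile.

```shell
# 1. Rotate the secret in AWS.
aws secretsmanager put-secret-value \
  --secret-id prod/myapp/database \
  --secret-string '{"username":"appuser","password":"rotated-pw","host":"db.internal","port":"5432"}'

# 2. Nudge ESO to sync now instead of waiting for refreshInterval.
kubectl annotate externalsecret app-db-secret -n default \
  force-sync="$(date +%s)" --overwrite

# 3. Compare: the volume file updates, the env var does not.
kubectl exec deploy/myapp -n default -- cat /etc/secrets/DB_PASSWORD
kubectl exec deploy/myapp -n default -- printenv DB_PASSWORD
```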
Step 1: Confirm the Lab Is Ready
Make sure your pod and the External Secrets Operator are both running before you start.
kubectl get pods -n external-secrets
kubectl get pods -n default
Both should show Running.
Step 2: Run the Rotation Test Script
bash rotation/test-rotation.sh
The script performs these actions in order:
Reads the current DB_PASSWORD from the volume mount at /etc/secrets/DB_PASSWORD
Reads the current DB_PASSWORD from the environment variable
Updates AWS Secrets Manager with a new password using put-secret-value
Forces an immediate ESO sync by annotating the ExternalSecret with force-sync
Reads the volume value again
Reads the environment variable again
After the script runs, the volume and the env var will show different values.
Step 3: Validate With the Compare Endpoint
Hit the compare endpoint and look at the output.
curl -s http://localhost:3000/secrets/compare | python3 -m json.tool
You'll see something like this:
{
  "comparison": {
    "DB_PASSWORD": {
      "env": "old-password-value",
      "volume": "new-password-value",
      "match": false
    }
  },
  "all_match": false,
  "rotation_detected": true,
  "message": "Volume has new value; env still has old value."
}
Step 4: Restart the Deployment to Sync Env Vars
Env vars don't update in place. You need a pod restart so new containers start with the updated Kubernetes Secret.
kubectl rollout restart deployment/myapp -n default
kubectl rollout status deployment/myapp -n default
Then hit /secrets/compare again. All rows should now show "all_match": true.
How to Automate Restarts With Reloader
If you don't want to restart deployments manually after every rotation, you can install Stakater Reloader. It watches an annotation on the Deployment and triggers a rolling restart automatically when the referenced Kubernetes Secret changes. New pods start with fresh env vars, while old pods drain cleanly. The repo's local deployment guide includes the install steps.
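With Reloader installed, the wiring is a single annotation on the Deployment. The annotation key below is the documented Stakater Reloader convention; the Secret name is this lab's.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    # Trigger a rolling restart whenever this Secret's contents change.
    secret.reloader.stakater.com/reload: "myapp-database-creds"
```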
How to Choose Between External Secrets Operator and the CSI Driver
Two patterns dominate when it comes to pulling external secrets into Kubernetes: the External Secrets Operator and the Secrets Store CSI Driver.
Both get cloud secrets into pods, but they do it differently. Here's a plain comparison:
| Feature | External Secrets Operator | Secrets Store CSI Driver |
|---|---|---|
| Creates a Kubernetes Secret | Yes | No by default |
| Supports envFrom | Yes | No (workaround only) |
| Secret stored in etcd | Yes (base64) | No, if you skip sync |
| Rotation | ESO updates the Secret, Reloader restarts pods | Volume file can update in place |
| Best for | Most teams: multi-cloud, env var support | Security policies that prohibit secrets in etcd |
This lab uses the External Secrets Operator for two reasons. First, it produces a native Kubernetes Secret, which means your application and deployment patterns match standard Kubernetes workflows. Second, having both envFrom and a volume mount point to the same Secret makes the rotation behavior easy to observe side by side.
Use the CSI Driver when your security team prohibits storing secrets in etcd through a Kubernetes Secret. The driver mounts secret data directly into the pod file system without creating a Kubernetes Secret. The tradeoff is that you lose the native envFrom model.
How to Deploy the Pattern on Amazon Elastic Kubernetes Service
The local lab is ideal for learning. The Amazon Elastic Kubernetes Service path adds the production-like pieces: IAM role-based permissions for the operator, a load balancer for the app, and a full CI/CD workflow.
Step 1: Prepare Terraform and OpenID Connect Access
The repository includes a one-time setup guide for OpenID Connect-based access from GitHub Actions to AWS. Run these commands in the terraform/github-oidc folder.
cd terraform/github-oidc
terraform init
terraform plan -var="github_repo=YOUR_ORG/YOUR_REPO"
terraform apply -var="github_repo=YOUR_ORG/YOUR_REPO"
terraform output role_arn
Copy the role ARN from the output. You'll need it in the next step.
Step 2: Set the Required Environment Variable
The Amazon Elastic Kubernetes Service spin-up path needs your GitHub Actions role ARN so Terraform can grant the CI/CD runner access to the cluster.
To find your AWS account ID, run:
aws sts get-caller-identity --query Account --output text
Then set the variable, replacing ACCOUNT with the number that command returns.
export GITHUB_ACTIONS_ROLE_ARN=arn:aws:iam::ACCOUNT:role/your-github-oidc-role
Step 3: Run the Spin-Up Script for Amazon Elastic Kubernetes Service
bash spinup.sh --cluster eks
When the script finishes, it prints the application URL. Open that URL in a browser and confirm that you see the same secrets table you saw locally, with all keys showing Match ✓.
Step 4: Test Rotation on the Deployed App
After you confirm normal operation, run the rotation test the same way you did locally.
bash rotation/test-rotation.sh
Then use /secrets/compare on the Amazon Elastic Kubernetes Service load balancer URL to validate behavior in the cloud environment.
⚠️ Cost warning: Amazon Elastic Kubernetes Service runs at approximately $0.16 per hour. When you're done with the lab, run bash teardown.sh from the repo root to destroy all AWS resources and stop charges.
How to Configure GitHub Actions Without Stored AWS Credentials
The typical CI/CD setup stores AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in GitHub repository secrets. Those keys never rotate. Anyone with repo access can read them. When someone leaves the team, you have to revoke keys and update every workflow.
OpenID Connect eliminates that problem entirely.
How OpenID Connect Works for GitHub Actions
GitHub can issue a short-lived token for each workflow run. That token identifies the run: the repository, branch, and workflow name. You create an IAM role in AWS whose trust policy says: only accept requests that come from this specific GitHub repository and branch. The GitHub Actions runner exchanges that token for temporary AWS credentials via AssumeRoleWithWebIdentity. No long-lived keys are ever stored anywhere.
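In a workflow, the exchange is only a few lines. This is a hedged sketch using the official aws-actions/configure-aws-credentials action; the id-token: write permission is what allows the runner to request the OIDC token, and the region is a placeholder.

```yaml
permissions:
  id-token: write   # allow the runner to request a GitHub OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1   # assumption: use your region
      - run: aws sts get-caller-identity   # confirm the assumed role
```

No access keys appear anywhere in the workflow or the repository secrets; AWS_ROLE_ARN is the identifier stored in the next section.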
Step 1: Create the IAM Role With Terraform
The terraform/github-oidc folder creates the OpenID Connect provider and the IAM role for you. You already ran this in the Amazon Elastic Kubernetes Service setup above. The role ARN is the only value you need to store.
Step 2: Add the Role ARN to GitHub Repository Secrets
In your GitHub repository:
Go to Settings → Secrets and variables → Actions
Click New repository secret
Name it AWS_ROLE_ARN
Paste the role ARN from the Terraform output
That is the only secret you store. The role ARN isn't sensitive. It's an identifier, not a credential.
Step 3: Configure Terraform State
For CI/CD to work consistently across runs, Terraform needs a shared state backend. The lab stores Terraform state in an Amazon S3 bucket and uses an Amazon DynamoDB table for state locking. The Amazon Elastic Kubernetes Service deployment guide in the repo covers the backend setup in full.
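A typical backend block for that setup looks like this. The bucket, key, and table names are placeholders; the repo's EKS deployment guide has the real ones.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # placeholder
    key            = "k8s-secret-lab/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"        # placeholder: lock table
    encrypt        = true
  }
}
```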
Step 4: Push to Main and Let Workflows Run
After your first spin-up, every push to the main branch drives the CI/CD pipeline. The repo includes separate workflow files for Terraform infrastructure changes and application deployment changes. Once your application is reachable, use /secrets/compare to validate rotation behavior on the live environment.
How to Troubleshoot the Most Common Failures
Here's a shortlist of the most common symptoms and their fixes.
| Symptom | Most Likely Cause | Fix |
|---|---|---|
| ExternalSecret is not syncing | Missing credentials or wrong store reference | Confirm the operator can access AWS Secrets Manager and that secretStoreRef points to the correct store |
| Pod is stuck in Pending | Missing storage setup for local cluster | For Microk8s, enable the storage add-on |
| Env and volume still differ after rotation | Rotation synced but the pod never restarted | Run kubectl rollout restart or install Reloader |
| CRD or API version mismatch | ESO version and manifest apiVersion don't match | Verify the apiVersion for ClusterSecretStore and ExternalSecret match your installed ESO version |
| Amazon Elastic Kubernetes Service node group never joins | Networking or IAM permissions for nodes are wrong | Fix internet routing and review the node IAM policy |
How to Inspect the Operator and the ExternalSecret
When something isn't syncing, start with these two commands.
# Check the ExternalSecret status
kubectl describe externalsecret app-db-secret -n default
# Check the ESO operator logs
kubectl logs -n external-secrets -l app.kubernetes.io/name=external-secrets
The status conditions on the ExternalSecret resource will usually tell you exactly what failed.
How to Validate Rotation From the App Side
When you are debugging rotation, don't rely only on Kubernetes resource state. Use the /secrets/compare endpoint to see what the running application actually observes. The endpoint tells you whether env and volume match and whether rotation has been detected. That is the ground truth for your application's behavior.
Conclusion
You now have a complete secrets pipeline from AWS Secrets Manager into Kubernetes pods using Terraform and the External Secrets Operator. You ran the local lab, inspected the ExternalSecret and Deployment manifests, and validated that the application sees the right credentials.
You also tested secret rotation and observed the key behavior firsthand: mounted secret files update within the kubelet sync period, while environment variables stay stale until the pod restarts. That single observation explains a large class of production incidents.
Finally, you saw how the same design extends to Amazon Elastic Kubernetes Service with OpenID Connect-based CI/CD, and you have a troubleshooting checklist for the failures most teams hit.
The lab repository is at github.com/Osomudeya/k8s-secret-lab. If you ran the local lab, the natural next step is phases 4 and 5 from the repo's staged learning path: try the CSI driver path on Microk8s, then follow the EKS setup to see the same pipeline with a real CI/CD workflow and no credentials stored in GitHub. Both are documented in the repo and take less than 30 minutes each.
If this helped you, star the repo and share it with someone who is learning Kubernetes.
I send weekly breakdowns of real production incidents and how engineers actually fix them: not tutorials, but real failures.
→ Join the newsletter