Explain to me like I'm five Kubernetes and infrastructure as code

As an extension of my original post where I asked “Explain to me like I’m 5, why I should use Docker locally”.

Why should I take things “next level” and go learn and use a technology like Kubernetes, and apply it to infrastructure as code?

From my quick review of this topic, not only do I leverage Docker containers (or any container technology, it seems), but I also leverage Kubernetes, and something like Helm to get my Kubernetes setup up to par with what I have set up in my code. So far it seems like throwing more abstractions on top of abstractions, and it seems like a lot to learn and kinda overkill.

So explain to me like I’m five the sort of benefits one would see using these sorts of technologies and problems they solve. (And potentially what kind of problems these introduce haha)

To answer everything in a nutshell for you:
You don’t seem like an idiot, and you seem to have done your research well enough to make a capable decision on your own.

As you are a Linux user, I am sure you have mastered the art of finding alternatives for software programs. I get that you want to extend your research and want a second opinion here, but it seems like you are overthinking everything. Have some confidence in yourself and just go for it. You can always change your software later on.

It is. But. Say you have a large, complex application split into many parts over a network. All of those parts are individual containers. What you ideally want is

  • all those components to communicate: they will do this over network sockets, each with a port address.
  • if any of those parts fail, you want to restart them easily.
  • if any of those parts fail catastrophically and their internal state becomes corrupt, you want to completely blow them away and rebuild.
  • if something’s going slow or whatever, probs just blow it away and rebuild it.
  • for persistent storage, you probably want replication plus supervisor applications to monitor the storage for issues, so you can do the same thing: just blow it away if there’s a problem.
  • if you update the code for one of your containers, you want to completely blow the current version away and rebuild.
  • tl;dr: is there a problem? Kill the container and rebuild it (this is exactly the same principle as switching your computer (or your TV or microwave or whatever) off and on if there’s an issue). Just wipe everything back to the last good state; crash it and rebuild, that’s how you get reliability.

This raises several issues.

  • how do you manage inter-app communication?
  • how do you ensure that you can take down parts of this monstrosity without bringing down the whole system?
  • how do you, in a sane manner, build all of the apps correctly?
  • how do you safely and sanely take everything down in a controlled manner and boot everything back up?
  • if something does go down, how do you manage strategies for what else needs to be taken down or what else can stay up and running?
  • how do you share things that need to be shared (env vars maybe?) between your apps?
  • how do you view the collection of individual containers as one discrete thing?
  • if you rebuild, you get a new port address. If you have a new port address, you lose communication with the rest of the overall system. How do you manage this problem dynamically? (A minimal manifest sketch follows this list.)
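For illustration, here is a rough sketch of the two Kubernetes objects that cover the restart-and-rebuild and stable-addressing problems; the names and image are made up:

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 copies of this container running
# and automatically replaces any copy that dies or gets deleted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:1.2.3  # made-up image
          ports:
            - containerPort: 8080
---
# Service: a stable in-cluster name and port for whatever pods currently match
# the selector, no matter how often they are killed and rebuilt.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
```

Other containers just talk to `my-api:80` inside the cluster; Kubernetes keeps that name pointed at whichever healthy copies exist right now, which is roughly how the “new port address after every rebuild” problem gets managed for you.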

(Note: a caveat before I get asked for any technical details: almost all DevOps stuff makes me want to gouge my eyes out, so this just comes from a. working for a while with Kubernetes and b. using Erlang, which basically works like a self-contained Kubernetes cluster. I myself have never built or configured any K8s stuff (despite very optimistically buying the Manning books on Docker & Kubernetes). That was all built and configured by the DevOps person I sat next to. I know how it hangs together, but I never actually want to use it directly, thx.)


Kubernetes is to whole clusters what Docker is to individual systems. I could go on, but @DanCouper already said it all better, and I’m in one of my more pithy/glib moods besides.

My current decision is to learn more about the technology, as I know I don’t know enough to make an informed decision about committing myself to this sort of stuff. Heck, at this point I’m on the fence about whether my current setup of using PaaS offerings beats “building my own platform” with Kubernetes. I half want people to convince me it’s a good idea to learn and use, and half want them to say “meh”.

As a dev who needs to play with ops, ops has to be pretty damn easy for me to go out and learn and use it, otherwise it’s just getting in the way. This doesn’t go for just Kubernetes; even if I’m a Linux user, I want things to be easy so I can focus on building the stuff rather than supporting it.

But this post isn’t just about me; it’s about getting a conversation about Kubernetes going for the average dev/person, so anyone can jump in and ask or comment even if they have only a limited understanding of the subject. Odds are they still know more than a “five year old” haha

I’d say you’re better off playing with docker-compose before jumping straight into Kubernetes. K8S is almost a superset of it.
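To give a feel for it, here is a minimal docker-compose sketch with two made-up services that find each other by name; the images and credentials are placeholders:

```yaml
# docker-compose.yml - the "web" container reaches the database simply as "db";
# Compose wires up the network and starts both with one `docker-compose up`.
version: "3.8"
services:
  web:
    image: registry.example.com/my-web-app:latest  # placeholder image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: "postgres://postgres:secret@db:5432/app"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Kubernetes asks the same questions (what to run, how the parts talk, where data lives) but answers them for a whole cluster of machines instead of one.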

In simple words, Kubernetes helps you automatically deploy your containers to hundreds and thousands of hosts in a K8s cluster. You can think of it as a scalability tool, which can deploy new instances when traffic is high and reclaim them when traffic is back to normal. The real use of this is on sites like Amazon or Flipkart, where traffic spikes during promotions like Amazon Prime Day.
On the technical side, if you want to learn Kubernetes, there are many [free courses to learn Kubernetes] which you can google and find yourself, as posting links is not allowed here. Thx

TLDR;
One of the main features of Kubernetes is resource scheduling. It helps you manage multiple nodes and use their resources to the max. You can ensure you are not under-utilising your nodes’ resources, as it’s costly to pay for the nodes, especially when using lots and lots of them. There are other systems too that can do resource scheduling, for example Nomad by HashiCorp.

Longer version:
I’m completely on board with you that it’s a lot of things, a lot of abstractions. In fact, to me they seem like complex ones, when abstractions are meant to make life easier. There are a lot of people who will agree with you, and they will tell you that Kubernetes is not the level at which developers should operate. Only people who maintain infrastructure should be dealing with the nastiness of Kubernetes and Helm, and more abstractions are used on top of them to make it easy for developers to deploy their code. For a developer, ideally what they would usually want for their backend services is to be able to:

  • Push code to their git repo and deploy exactly one of the commits (a release)
  • Tell the system how to run the service - running migrations, starting server
  • Use an already running DB or some sort of store
  • Tell the system how many replicas to run in case of multiple instances for horizontal scaling. Or better, let the system auto scale the service based on some metrics - system metrics like CPU, memory, or business metrics like number of users or application metrics like number of requests
  • Tell minimum resources required for the service - CPU, RAM
  • Tell the configuration and secrets for the service
  • Expose the service to other services, or even to the outside world over HTTPS with a domain name (a rough sketch of the last two items follows this list)
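As a rough sketch of those last two items in raw Kubernetes (all names, values, and hosts here are made up), configuration/secrets and outside exposure might look something like:

```yaml
# Secret: key/value configuration the service shouldn't hard-code.
apiVersion: v1
kind: Secret
metadata:
  name: my-backend-secrets
stringData:
  DATABASE_PASSWORD: change-me   # placeholder value
---
# Ingress: route a public domain name to an existing Service for the app.
# (Real HTTPS additionally needs a tls: section with a certificate, omitted here.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-backend
spec:
  rules:
    - host: api.example.com       # made-up domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-backend  # hypothetical Service fronting the app
                port:
                  number: 80
```

In the service’s Deployment you would then pull the Secret in as environment variables with an envFrom / secretRef entry on the container, and point the Ingress at the Service that fronts it.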

Now if you notice, this is what PaaS provides, for example Heroku and others. Many companies who can shell out the money use these services. When they scale up and realise they are spending a lot of money on infrastructure, they form infrastructure teams and use cloud VMs to maintain their own platform, which is probably cheaper. So they spend money on the salaries of these infrastructure teams and on the VMs instead of on PaaS services.

Now these infrastructure teams need to manage infrastructure in a cost-efficient manner. That translates to being able to run multiple services or programs on the VMs, while making sure no one program affects another, as that would be bad. So they need isolation. They also want to make sure they use all the resources of all the VMs, and if this has to be done manually, they would have to keep checking the resources on each VM by hand before running a program on it. Instead, you can use a system to help you: if each program tells the system how many resources (for example CPU, RAM) it needs, then the system can schedule the program to run on a VM which has enough resources for it. Sometimes more complex scheduling can happen too, based on conditions imposed by users, for example that a service should run in multiple zones in the cloud.

Complex systems like Kubernetes, which orchestrate (schedule) containers across multiple nodes, are very handy and are what people use for automatic scheduling. They have a lot of other features too, like health checks and readiness checks, restarting dead services, and helping with disk / volume management. Kubernetes is a ton of things, actually. It also helps with config management, networking, secrets, etc. If you take HashiCorp Nomad as an example, Nomad has fewer features and uses other tools for the rest, for example Consul for networking and config management, and Vault for secrets.
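To make the “each program tells the system how many resources it needs” part concrete, this is roughly what it looks like in a Kubernetes container spec (just a fragment, with a made-up image):

```yaml
# Fragment of a pod/Deployment spec: the scheduler uses "requests" to pick a
# node with enough free CPU/RAM; "limits" caps what the container may consume.
containers:
  - name: worker
    image: registry.example.com/worker:latest  # made-up image
    resources:
      requests:
        cpu: "250m"       # a quarter of one CPU core
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```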


Bit late to the party but I assumed a 5-year old would love to read a comic about Kubernetes :slight_smile:

Having experienced diving directly into Kubernetes (k8s) for a school project, I really agree with this statement. My team decided to use k8s because it seemed like a cool technology, and it really is. Features such as scaling, high availability, and monitoring are all built into k8s (or just a kubectl apply away), but it came at a cost, which is operational complexity.

There is a lot to learn, and if you want to get it running reliably and safely for a production environment, there will be even more to learn. In my scenario, I became the one in the team responsible for managing the cluster. I am no longer mainly a dev, but now the ops guy.

That said, if you just want to play around, write an application, and deploy it on k8s, it is actually quite easy. Don’t let my story scare you :smiley:

Hey! There are a lot of IaC tools nowadays.
I worked with Terraform at my job to automate the flow of creating AWS environments.
You should watch this video.
He explains how Terraform, which is an IaC solution, works.
He also explains IaC really well.

Reasons why you should use Kubernetes:

  1. With the growth in the number of internet users, apps are expected to have no downtime for maintenance and improvements.
  2. Every company wants its deployments to scale according to the needs of users, i.e. if more requests from users are coming, more CPU and memory should be added to the deployment automatically; otherwise the server would crash.
  3. In addition, when traffic isn’t high all the time, no one wants to keep paying for extra CPU and memory on cloud services. There should therefore be a smart system that efficiently allocates and handles CPU and memory as needed.
    Some added benefits too (a minimal autoscaler sketch follows below):
    a) Load balancing
    b) Auto scaling
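For instance, auto scaling in Kubernetes is usually expressed as a HorizontalPodAutoscaler; a minimal sketch, assuming a hypothetical Deployment called my-api:

```yaml
# Hypothetical autoscaler: run between 2 and 10 copies of the my-api Deployment,
# adding copies when average CPU goes above ~70% and removing them afterwards.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```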