Kubernetes is an open-source container orchestration platform that automates the deployment, management, scaling, and networking of containers. It makes it simpler to deploy apps to production.
We just published a Kubernetes course on the freeCodeCamp.org YouTube channel.
Bogdan Stashchuk developed this course. He is a popular DevOps instructor who has taught hundreds of thousands of people on Udemy and YouTube.
Here are all the sections covered in this Kubernetes crash course:
- Kubernetes for Beginners Introduction
- What is Kubernetes
- What is Pod
- Kubernetes Cluster and Nodes
- Kubernetes Services
- What is kubectl
- Software required for this course
- Installing kubectl
- Installing Minikube
- Creating Kubernetes cluster using Minikube
- Exploring the Kubernetes node
- Creating just single Pod
- Exploring Kubernetes Pod
- Creating alias for the kubectl command
- Creating and exploring Deployment
- Connecting to one of the Pods using its IP address
- What is Service
- Creating and exploring ClusterIP Service
- Connecting to the Deployment using ClusterIP Service
- Deleting Deployment and Service
- Creating Node web application
- Dockerizing Node application
- Pushing custom image to the Docker Hub
- Creating deployment based on the custom Docker image
- Scaling custom image deployment
- Creating NodePort Service
Watch the full course below or on the freeCodeCamp.org YouTube channel (3-hour watch).
Kubernetes makes it possible to containerize applications and simplifies deployment to production.
Bogdan Stashchuk will teach you everything you need to know to get started with Kubernetes.
Welcome to Kubernetes for Beginners.
Kubernetes is the de facto standard for deploying containerized applications into production.
Kubernetes is open source, and therefore it is free to use.
Let me first introduce myself before we start this course.
My name is Bogdan Stashchuk, and I have been using Docker and Kubernetes in practice for many years.
I have deployed real-world applications into production using Kubernetes.
I also teach online, and on my personal website, stashchuk.com, you can find all of the courses I teach.
Now let's get started with Kubernetes for beginners, and I would like to start with the course plan.
So what is included in this course? We will start by talking about terminology and the key features of Kubernetes.
You'll learn what a Kubernetes cluster is, what a node is, what a Pod is, and what Kubernetes essentially does.
Afterwards, we will immediately dive into practice, and we will build a small Kubernetes cluster locally on our computers.
Then, using that cluster, we will create and scale different Deployments.
Also, we will build a custom Docker image, push it to Docker Hub, and afterwards create a Kubernetes Deployment based on this custom-built Docker image.
Other than that, we will also create Services and Deployments in Kubernetes using YAML configuration files.
Also, we will connect different Deployments together, because it's a very common situation when you have to connect different applications over the network.
And Kubernetes, of course, allows you to do that.
And finally, we will change the container runtime from Docker to CRI-O, because Kubernetes is not bound to Docker.
It also supports other container runtimes like CRI-O and containerd.
You could use Kubernetes entirely without Docker.
The single prerequisite for this course is familiarity with Docker: I assume that you know what a Docker container is and how to create containers.
Alright, so let's get started with this course.
I would like to start with a definition: Kubernetes is a container orchestration system.
Using Docker, you could of course create containers on any single computer.
But if you want to create multiple containers on different computers, on different servers, you could get into trouble.
Kubernetes allows you to create containers on different servers, either physical or virtual.
And all of that is done automatically, without your intervention.
You just tell Kubernetes how many containers you would like to create based on a specific image.
Kubernetes is a relatively long word, and it consists of ten letters.
But IT professionals and software developers are relatively lazy people, and they don't like to type much.
That's why such a long word is usually shortened to just three characters.
But how is that done? Let's have a look at the word.
Between the k and the s, there are actually eight letters.
Hence the number eight: that's why the long word Kubernetes is shortened to just three characters, k8s.
The eight simply represents the number of letters between the first and last characters. Simple as that.
Knowing this very simple trick, we can go on.
Now let me explain what Kubernetes takes care of.
Kubernetes takes care of the automatic deployment of containerized applications across different servers.
Those servers could be either bare-metal (physical) servers or virtual servers.
The virtual server option is of course more common nowadays, and almost no one uses bare-metal servers anymore.
So Kubernetes allows you to perform automated deployments across different servers that could be located even in different parts of the world.
Other than that, Kubernetes also takes care of distributing the load across those multiple servers.
This allows you to use your resources efficiently and avoid under-utilization or over-utilization of resources.
Kubernetes also takes care of auto-scaling of the deployed applications, in case you need to increase, for example, the number of containers that have to be created on different servers.
And all of that is done automatically.
You just tell it when you want to scale up or down.
Also, Kubernetes takes care of monitoring and health checks of the containers.
In case some containers fail for some reason, Kubernetes can automatically replace the failed containers, all without your intervention.
As I just told you, Kubernetes deploys containerized applications, and therefore it has to use a specific container runtime.
Docker is just one of the possible options: Kubernetes nowadays supports container runtimes such as Docker, CRI-O, and containerd, and one of these runtimes, for example Docker or CRI-O, must be running on each of the servers included in the Kubernetes cluster.
The main takeaway here is that Kubernetes can be used even without Docker at all.
It supports other container runtimes like CRI-O and containerd.
At the end of this course, I'll demonstrate how to change the container runtime and move from Docker to, for example, CRI-O.
So now let's get started with the terminology and architecture of Kubernetes.
Let's start with the Pod. The Pod is the smallest unit in the Kubernetes world: in the Docker world, the container is the smallest unit, but in Kubernetes, the Pod is the smallest possible unit, and containers are created inside of Pods.
So the anatomy of a Pod is the following: inside of the Pod there can be containers, either one or even several.
Also, there are shared volumes and shared network resources, for example a shared IP address.
This means that all containers inside of the same Pod share volumes and share an IP address.
You must keep that in mind if you want to create multiple containers inside of the same Pod.
The most common scenario, of course, is to have just a single container per Pod.
But sometimes, when the containers have to be tightly coupled, when they heavily depend on each other and can exist in the same namespace, it is possible to create several containers in the same Pod.
But again, a single container per Pod is the most common use case.
Also, please keep in mind that each Pod must be located on a single server: it is not possible to spread the containers of one Pod across different servers in the Kubernetes cluster. One Pod, one server.
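The Pod anatomy described above can be sketched as a minimal manifest. This is only an illustration, assuming kubectl is installed and pointed at a running cluster; the `nginx-example` name and the nginx image are example choices, not something required by the course.

```shell
# Minimal single-container Pod manifest, applied from stdin.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
    - name: nginx
      image: nginx
EOF

# All containers in this Pod (here just one) share the Pod's IP and volumes.
kubectl get pod nginx-example -o wide
```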
Now let's have a look at the Kubernetes cluster. What is that? A Kubernetes cluster consists of nodes, and a node is actually a server, either a bare-metal server or a virtual server.
You could include multiple such servers in a Kubernetes cluster, and they could be located in different data centers, in different parts of the world.
But usually, nodes that belong to the same Kubernetes cluster are located close to each other, in order to perform all jobs more efficiently.
Inside of a node, there are Pods. The Pod, again, is the smallest possible unit in Kubernetes, and inside of each Pod there are containers, usually a single container per Pod. Such Pods are created on different nodes, and all of that is done automatically for you: Kubernetes does this job.
But of course, your job is to create such nodes and create clusters based on those nodes.
Nodes will not automatically form a cluster without your intervention.
After such initial configuration, though, everything is automated, and Kubernetes will automatically deploy Pods on different nodes.
But how do those nodes actually communicate with each other, and how are they managed?
In a Kubernetes cluster, there is a master node, and the other nodes in the cluster are called worker nodes.
The master node actually manages the worker nodes.
It is the master node's job to distribute, for example, load across the other worker nodes. All Pods related to your applications are deployed on worker nodes; the master node runs only system Pods, which are responsible for the operation of the Kubernetes cluster in general. We could also say that the master node in the Kubernetes cluster is actually the control plane, and it does not run your client applications.
So, which services actually run on the different nodes? Let's have a look at this diagram.
There are services such as kubelet, kube-proxy, and the container runtime, and those services are present on each node in a Kubernetes cluster.
You already know what the container runtime is: the container runtime runs the actual containers inside of each node, and there are container runtimes such as Docker, CRI-O, or containerd.
There is also a service called kubelet.
This service, on each worker node, communicates with the API Server service on the master node.
The API Server service is the main point of communication between different nodes in the Kubernetes world.
kube-proxy, which is present on each node as well, is responsible for network communication inside of each node and between nodes.
Also, there are other services that are present only on the master node.
One of them is the Scheduler, and this service is responsible for planning and distribution of the load between the different nodes in the cluster.
There is also the kube-controller-manager, and this is the single point that controls everything in the Kubernetes cluster.
It controls what actually happens on each of the nodes in the cluster.
There is also the cloud-controller-manager.
Its job is interaction with the cloud service provider where you actually run your Kubernetes cluster, because usually you don't create such clusters yourself using just your own servers.
Instead, you could very easily run a Kubernetes cluster at one of the cloud providers, which perform an almost fully automated creation of all nodes and the connections between those nodes.
For that, you have to run the cloud-controller-manager service on the master node.
Also, for example, if you want to create a Deployment of your application inside of the Kubernetes cluster that will be open to the outside world and allow connections from outside, you could create load balancer IP addresses as well.
Those load balancer IP addresses are usually provided by the specific cloud provider.
Also on the master node, there is a service called etcd.
This is the service that actually stores the state of the entire Kubernetes cluster.
That state is stored as key-value pairs.
Also, there are other services running on the master node, for example a DNS service, which is responsible for name resolution in the entire Kubernetes cluster.
For instance, using the DNS service, you could connect to a specific Deployment by the name of the corresponding Deployment Service.
In this way, you could connect different Deployments with each other. So those are the different services that run on the different nodes in a Kubernetes cluster.
The main service on the master node is the API Server.
By the way, using this API Server service, you can actually manage the entire Kubernetes cluster.
How is that done? It is done by using kubectl, or kube control.
kubectl is a separate command-line tool that allows you to connect to a specific Kubernetes cluster and manage it remotely.
kubectl could be running even on your local computer.
Using this kubectl tool, you can manage a remote Kubernetes cluster.
With this tool, you actually connect via a REST API to the API Server service on the master node.
Such communication happens over HTTPS.
By the way, the other nodes in the cluster, I mean the worker nodes, communicate with the master node in the same fashion.
This means that using the kubectl tool, you can manage any remote Kubernetes cluster.
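As a sketch of what this looks like in practice: kubectl keeps the cluster connection details (API server address and credentials) in a kubeconfig file, and you switch between clusters by context. The context name `my-aws-cluster` below is hypothetical.

```shell
# kubectl stores API server endpoints and credentials in ~/.kube/config.
kubectl config view --minify      # show only the currently active cluster's settings

# List all contexts (cluster + user pairs) kubectl knows about.
kubectl config get-contexts

# Point kubectl at a different cluster ("my-aws-cluster" is a hypothetical name).
kubectl config use-context my-aws-cluster
```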
That's it for the Kubernetes architecture overview. Now you know that a Kubernetes cluster consists of nodes, and that one of the nodes is the master node, which manages all of the other nodes, called worker nodes.
On each node there are Pods, and the Pods are created automatically by Kubernetes.
Inside of each Pod there are containers, usually just a single container per Pod.
Also, please keep in mind that all containers inside of a Pod share the namespaces of that Pod, like volumes or the network IP address.
Also, Pods are the smallest units in Kubernetes, and Pods can be created, removed, and moved from one node to another.
This happens automatically, without your intervention.
But you have to design your application with this in mind: a Pod could be deleted at any moment in time.
Also, there are different services running on the different nodes.
For example, the API Server service is the central point of communication between the master node and the worker nodes.
And using this API Server service, you can actually manage the Kubernetes cluster with the kubectl tool, which has to be installed, for example, on your computer if you perform management from your computer.
Alright, I don't want to dive deeper into the details of Kubernetes, because it is a very complex tool.
I want to focus more on practical tasks instead.
That's why, after this overview, we will dive into practice together and perform different practical tasks.
For example, we will create Deployments and Services, scale Deployments, create a custom Docker image and create a Deployment based on that image, and so on.
In order to perform all of the practical tasks together with me, you have to install some programs on your computer.
So now let's talk about the required software. First, we have to create an actual Kubernetes cluster, where we will create different Deployments.
Of course, you could create a cluster using services from one of the cloud providers like Amazon Web Services or Google Cloud, but you have to pay for such a service.
If you want a free solution, you could create a cluster locally on your computer.
For that, there is a tool called Minikube.
This tool will essentially create just a single-node cluster.
And this node will be both worker node and master node.
But for test deployments, and as a playground, it works pretty nicely, and all of that is free and will be running on your local computer.
In order to run Minikube successfully, you have to use a virtual machine or container manager.
The following virtual machine and container managers are supported: VirtualBox, VMware, Docker, Hyper-V, or Parallels.
There are also other options available.
But you have to use one of these options in order to actually create the virtual node that will run all of the Pods in your Kubernetes cluster.
I suggest you go ahead with Hyper-V if you're a Windows user.
If you're a macOS user, you could go ahead with VirtualBox, which is free and open source, or you could use VMware or Parallels.
By the way, there is also an option to run Minikube as a container inside of Docker.
Of course, if you already have Docker installed on your computer, you could use it to create a Kubernetes cluster with Minikube; essentially, it will create a separate Docker container, and inside of that container all of the Pods will be created.
But I personally don't recommend going ahead with the Docker option, because there are some limitations.
For example, I was not able to change the container runtime inside of the Docker container to CRI-O or containerd.
Therefore, I recommend that you go ahead with the other options mentioned here.
By the way, Hyper-V is available on Windows computers out of the box, and you could use it as the virtual machine manager for running the Minikube node.
To summarize: using Minikube, you will create a single-node Kubernetes cluster.
But as I mentioned before, you have to use a specific tool in order to manage this cluster.
This tool is called kubectl.
By the way, kubectl is included in Minikube.
But if you want to use that included version, you have to enter `minikube kubectl` commands, which is not convenient.
Therefore, I recommend that you install kubectl separately.
With a separate installation you will, of course, also be able to manage other Kubernetes clusters, located, for example, at Amazon Web Services.
So kubectl is also one of the programs that you have to install on your computer.
Of course, I'll explain to you how to install both Minikube and kubectl.
Other than that, we will also do some coding in this practical course.
For that you have to use one of the code editors, and I recommend that you install Visual Studio Code; it is open source and free to use.
If you have not yet installed it, please go ahead and install it.
It also has many different extensions, and one of them is the Kubernetes extension.
Using this extension, you could very quickly create, for instance, YAML configuration files for your Deployments and Services in Kubernetes.
That's all you need for this course: Minikube, kubectl, and Visual Studio Code.
Now let's get started with the practical part.
I hope you'll enjoy this course.
We will start with the installation of Minikube and kubectl.
Alright guys, now let's get started with the practical part of the course.
We will start by installing Minikube along with kubectl.
But first, I would like to ask you to navigate to kubernetes.io.
This is the main site dedicated to Kubernetes in general, and it has all the documentation related to the configuration of Minikube clusters, creation of Deployments, and so on.
Please click here on Documentation.
Here in the left section, go to Getting started.
Here you can read how to create a learning environment.
Along with that, you can also find information on how to create a production environment.
But we are interested now in the creation of a local Kubernetes cluster.
For that, we will use Minikube.
We will also install kubectl.
That's why, please click here on this hyperlink, Install tools.
You will find instructions on how to install kubectl on different operating systems; please choose yours.
I'll choose macOS here.
If you're a macOS user, you could very easily install kubectl using Homebrew. Let me demonstrate how to do that.
Let me click on this option.
Run the installation command `brew install kubectl`. Let me copy this command, open up a terminal (I'm using iTerm2 on Mac), and paste this command here.
Let's go ahead and install kubectl.
After a short wait, the package was downloaded and kubectl was installed.
Let me check the version of kubectl.
For that, I can use this command: `kubectl version --client`.
Let me enter it here.
And I see that kubectl was installed successfully.
Here is the client version: major 1 and minor 22.
And here is the exact version that was installed.
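The macOS steps above, collected in one place (a sketch; it assumes Homebrew is already installed):

```shell
# Install kubectl on macOS via Homebrew.
brew install kubectl

# Verify the client installed correctly (no cluster needed yet).
kubectl version --client
```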
Alright, if you're a Windows user, I recommend installing kubectl using a package manager. You could go to "Install and Set Up kubectl on Windows" and choose the option to install on Windows using Chocolatey or Scoop.
Click on this option, and here you'll find instructions on how to install the package using the Chocolatey package manager or the Scoop command-line installer. I recommend installing the Chocolatey package manager.
In order to do that, please open up this link here and find the instructions on how to install this package manager for Windows.
Using the same package manager, you could very easily install Minikube as well.
So please go ahead and install this package manager, and afterwards follow those instructions on how to install kubectl on Windows.
Use just a single command, this one, and afterwards verify that kubectl is available using this command.
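On Windows, the Chocolatey route looks roughly like the sketch below, run from an elevated shell. `kubernetes-cli` is the Chocolatey package that provides kubectl; if in doubt, follow the exact command shown in the Kubernetes docs page.

```shell
# Install kubectl via the Chocolatey package manager (elevated shell required).
choco install kubernetes-cli

# Verify the client.
kubectl version --client
```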
After this, I assume that kubectl is already available on your computer.
Now let's go ahead and install Minikube.
Go back to the Getting started page, scroll down to the learning environment section, and click on the hyperlink Install tools.
This one. You will find, again, that you have to install kubectl.
That's what we just did together.
If you scroll down, you'll find the options you could use in order to create a local Kubernetes cluster for learning and testing purposes.
Nowadays there are tools such as kind, which allows you to create a Kubernetes cluster locally on your computer, but it requires Docker.
There are also Minikube and kubeadm.
We will use Minikube throughout this course.
So let's get started with the installation of Minikube.
It does not require Docker.
But it does require one of the virtual machine managers, like Hyper-V or Parallels.
Or, if you want, you could also use Docker. Please navigate to the minikube site; you could simply enter minikube in the Google search bar and click on the first result in the output.
Here you'll find the documentation on how to get started with Minikube. Please click on this link, Get Started.
Here you can read about Minikube: it will basically create a Kubernetes cluster locally on your computer.
It will contain just a single node, which will act both as master node and worker node.
And all you need in order to create such a cluster is a single command, `minikube start`. Simple as that.
But in order to successfully create a Minikube cluster, you have to install one of the container or virtual machine managers, such as Docker, HyperKit, Hyper-V, Parallels, and so on.
I told you before that if you're a Windows user, I recommend using Hyper-V, because it is available on Windows out of the box.
If you're a Mac user, I recommend going ahead either with the VirtualBox option, or Parallels, or VMware Fusion. VirtualBox is free to use.
Therefore, if you don't want to pay anything for a virtual machine manager, I recommend going ahead with this option.
Also, it is possible to create the Minikube cluster inside of a Docker container, and for that you could just install Docker.
But I personally don't recommend going ahead with this option, because there are some limitations to running such a cluster inside of a Docker container.
Alright, here below you can find out how to install Minikube on the different operating systems: Linux, macOS, and Windows.
If you're a Windows user, you could, similarly to kubectl, use Chocolatey: please select this option and enter just a single command, `choco install minikube`. Simple as that, and you'll get Minikube installed on a Windows computer. But I am using macOS, so I'll select this option, select Homebrew, and in order to install Minikube using brew, I can simply enter one command: `brew install minikube`.
Let me go ahead and install it using brew.
Let's go back to the terminal and paste the command `brew install minikube` here.
Let's wait a bit until it is installed.
Alright, Minikube was installed, and now I can verify its version with `minikube version`.
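The installation commands from this step, as one sketch (pick the one for your OS; Homebrew or Chocolatey must already be installed):

```shell
# macOS (Homebrew):
brew install minikube

# Windows (Chocolatey, elevated shell):
#   choco install minikube

# Verify the installation.
minikube version
```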
Here is the version in my case. You could also enter the `minikube help` command in order to see the list of all available commands in Minikube.
If you scroll up, you'll find out how to start a local Kubernetes cluster, get the status of the local Kubernetes cluster, stop the cluster, delete it, and open the dashboard.
You could also pause and unpause the Kubernetes cluster. Right now, let's create the actual cluster. I assume that by now you have also installed Minikube and kubectl.
These are two separate tools.
That's why they are available as two separate commands: minikube and kubectl.
Now let's go ahead and create a Kubernetes cluster.
For that, please use the command `minikube start`. But first, let's enter `minikube status` in order to check the current status of the cluster.
Here I see "Profile 'minikube' not found" and a suggestion to run `minikube profile list` to view profiles. So in order to start a cluster, I have to run the `minikube start` command.
That's what we are about to do right now. Let's create the cluster: `minikube start`.
Here I would like to ask you to start the Minikube cluster with an option.
This option is driver, and you should write it with two dashes: `--driver`.
Then there will be an equals sign.
After it, you have to specify the virtual machine or container manager that you want to use in order to create the Minikube cluster.
I mentioned before that if you're a Windows user, I recommend using Hyper-V.
Therefore, if you're on Windows, simply enter hyperv here, all lowercase, without a dash: `--driver=hyperv`.
If you're a macOS user, you could go ahead with any of these options: VirtualBox, Parallels, or VMware Fusion. I will use VirtualBox.
If you don't have VirtualBox, please go ahead and install it: search for virtualbox and click on the first link here.
VirtualBox is free.
And here is the link where you can download VirtualBox.
So if you don't have VirtualBox, and if you're a macOS user, please go ahead and download it.
Right, I already have VirtualBox installed.
That's why I will simply specify it as the value for the driver option here.
So I will type virtualbox.
Let's go ahead and create the Minikube cluster inside of a VirtualBox virtual machine, in my case. It is creating the VirtualBox virtual machine, with some number of CPUs, memory, and disk.
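The commands for this step look like the following sketch; pick the driver that matches your own platform and installed virtual machine manager:

```shell
# Check whether a cluster already exists.
minikube status

# Create a single-node cluster; choose the driver for your setup:
minikube start --driver=virtualbox   # macOS with VirtualBox (used in this course)
#   minikube start --driver=hyperv   # Windows with Hyper-V
#   minikube start --driver=docker   # any OS with Docker (has limitations)
```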
It will take a while; let me wait a bit. The virtual machine was created.
Now I see the message "Preparing Kubernetes on Docker".
This means that, by default, the Docker container runtime will be used for running the actual containers inside of the Kubernetes cluster.
Here I see the steps "Generating certificates and keys" and "Booting up control plane".
And finally, at the end, I see "Done! kubectl is now configured to use the minikube cluster."
This means that you don't need to do anything in order to connect from kubectl to the actual Minikube cluster.
This connection was created automatically for you during the creation of the Minikube cluster.
Also, in my case, I see a warning: "You have selected the virtualbox driver, but there are better options. For better performance and support consider using a different driver," such as hyperkit, parallels, or vmware.
Now let's actually verify the status of this Kubernetes cluster.
For that, you could enter the command `minikube status`.
You should find that the host is Running, the kubelet is Running, the apiserver is Running, and kubeconfig is Configured.
That's the normal status of a Minikube cluster.
And now we are actually able to create Deployments, Services, et cetera inside of this Minikube cluster.
Also, I would like to mention that I don't have Docker running right now.
I actually have it installed on this computer.
But over here in the icons bar, I don't see the Docker icon. It means that Kubernetes on my local computer does not use the actual Docker installation on my computer.
Instead, there is a Docker daemon running inside of the Minikube node.
And that's what we will check right now.
As you already know, each node in a Kubernetes cluster is just a server, either virtual or physical.
In our case, we have created a virtual server, and you can connect to any server using the SSH protocol.
In order to connect via SSH, we first need to find out which IP address was assigned to the Kubernetes node.
Minikube provides a command for that.
Simply type `minikube ip`, and you'll see which IP address was assigned to the virtual machine that is running our Kubernetes node, created by Minikube. Here is the address in my case. Simply grab this IP address, and afterwards type `ssh docker@` followed by the IP address of the Minikube node: docker is the default username for this virtual server. Then please press Enter. You will be presented with a fingerprint; please type yes here to accept it.
Afterwards, you will be prompted for a password. The default password for the Minikube virtual machine is tcuser; please go ahead and type it here.
So the username is docker and the password is tcuser. The password was entered.
And now I see the welcome prompt from the Minikube server.
Now I am inside of the Kubernetes node.
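These connection steps can be sketched as follows. The docker/tcuser credentials are Minikube's defaults for its VM, and `minikube ssh` is a built-in shortcut that achieves the same thing:

```shell
# Find the IP address of the Minikube VM.
minikube ip

# SSH into the node; default credentials are user "docker", password "tcuser".
ssh docker@"$(minikube ip)"

# Equivalent shortcut provided by Minikube itself:
#   minikube ssh
```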
The first command that I would like to ask you to enter here is `docker ps`.
This command will list all running Docker containers.
And here are a bunch of different containers that were created inside of the Kubernetes node.
Again, please recall that Docker is the default container runtime for Kubernetes.
There are other container runtimes, such as CRI-O and containerd.
But here we see that there are a bunch of different containers that were created by Docker inside of the Minikube node.
For instance, here I see containers such as kube-apiserver, kube-scheduler, and so on.
Recall that we discussed the different services that run on master nodes and worker nodes.
Those services are actually running inside of containers, as you see right now here.
There are also containers such as kube-proxy, storage-provisioner, and, for instance, coredns.
Again, every container has its own purpose.
That's how you can verify which containers were created inside of the Kubernetes node.
But if I enter a kubectl command here, I will see the error "kubectl: command not found", because the kubectl command is not available inside of the Kubernetes node. kubectl is an external tool used to manage the Kubernetes cluster. So, let's now exit this SSH session. The connection was closed.
Now let's use the kubectl command here. Of course, kubectl is available locally on our computers, because we installed it before. Let's first check the cluster info with `kubectl cluster-info`. I get the following output: "Kubernetes control plane is running".
Here I see the IP address that we just saw after entering the `minikube ip` command.
In your case, of course, the IP address will be different.
I also see that the CoreDNS service is running as well.
This means that we are now able to actually create Deployments, Services, et cetera on our Kubernetes cluster.
But first, let's list the nodes that are available in our Kubernetes cluster.
For that, please enter the command `kubectl get nodes`.
Here I see just a single node, because Minikube creates a single-node cluster. Here is the name of that node; its status is Ready.
And here are its roles: control-plane and master.
Recall that inside of our Minikube cluster, this single node acts both as master node and worker node, and on worker nodes Kubernetes creates the different Pods related to the Deployments that you deploy in the Kubernetes cluster.
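The verification commands from the last couple of steps, as one sketch:

```shell
# Confirm kubectl can reach the cluster's control plane.
kubectl cluster-info

# List the nodes; with Minikube there is exactly one,
# acting as both control plane (master) and worker.
kubectl get nodes
```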
Now let me check which Pods are available right now in this cluster.
For that, let's enter the command `kubectl get pods`. I see the output "No resources found in default namespace."
Let's list all the namespaces that are available now in this Kubernetes cluster.
For that, please enter the command `kubectl get namespaces`, like that.
I see several namespaces, such as default, kube-node-lease, kube-public, and kube-system. Namespaces are used in Kubernetes in order to group different resources and configuration objects.
If you simply enter `kubectl get pods`, you will see only the Pods available inside of the default namespace.
By the way, all Pods that we will create throughout this course will be created in the default namespace, but we have not yet created any Pods so far.
That's why let's try to find out which pods are running inside the other namespaces, for instance kube-system.
In order to list pods in a specific namespace, you have to use the --namespace option.
So: kubectl get pods --namespace=kube-system.
Let's go ahead and run this command.
Now, in the kube-system namespace, I see such pods as coredns, etcd (which stores the state of the entire Kubernetes cluster), kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, and storage-provisioner.
All those pods are system pods which run on this master node.
Alright, this is how we can find out which pods are currently running in our Kubernetes cluster.
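As a quick recap, the namespace-related commands from this step look like this:

```shell
# List all namespaces in the cluster.
kubectl get namespaces

# Without a namespace option, only the default namespace is queried.
kubectl get pods

# System pods (coredns, etcd, kube-apiserver, ...) live in kube-system.
kubectl get pods --namespace=kube-system
```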
Now let's go ahead and create a pod manually.
For that, we can use a command which is similar to the docker run command.
With the docker run command you create just a single Docker container; similarly, you can use the kubectl run command in order to create just a single pod.
Let's do that: kubectl run, then a name.
Let's use, for instance, the nginx Docker image, which is available on Docker Hub.
So let's enter nginx here as the name of the pod.
Afterwards, let's add the --image option, and its value will be nginx.
nginx is the name of the Docker image which will be pulled automatically; a new container will be created based on this image, and this container will run inside the Kubernetes pod.
Let's go ahead and see what happens.
kubectl run nginx --image=nginx, and I see the output "pod/nginx created".
Let's enter the command kubectl get pods.
Here I now see a single pod, nginx; it is not yet ready, and its status is ContainerCreating.
Let's enter the same command again, kubectl get pods, and now I see that this pod is ready and its status is Running.
Of course, it took some time to create the pod, because inside of it there is an nginx container, and Docker inside the Kubernetes node had to pull the nginx image from Docker Hub and create the corresponding container based on that image.
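The pod creation from this step, sketched as commands:

```shell
# Create a single pod named "nginx" from the nginx image on Docker Hub.
kubectl run nginx --image=nginx

# Run this repeatedly: the status moves from ContainerCreating to Running.
kubectl get pods
```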
Let's now find out the details of this specific pod named nginx.
This is the name we specified manually here.
For that, please enter the command kubectl describe pod, followed by the name of the pod you would like to get details about; here it will be nginx.
It is the name of the running pod.
And here are many different details related to this specific pod.
Let me scroll a bit up.
Here I find such information as the namespace this pod belongs to; it was automatically assigned to the default namespace.
That's why we were able to see it in the list when we entered the command kubectl get pods, because that command lists all pods inside the default namespace.
Here is information about the node where this specific pod was created. Recall that Kubernetes automatically distributes load across all worker nodes in the cluster and selects a specific node for a specific pod; this field contains information about the node where this particular pod was created.
In our single-node cluster, of course, you will see the same node each time you create any pod. Here is the IP address of that node, here is the start time, here are the labels, and the status is Running.
And here is the IP address which was assigned to this particular pod: 172.17.0.3.
But please note that we will not be able to connect to this particular pod from outside using this internal IP address of the pod.
In order to be able to connect to pods, you have to create Services in Kubernetes.
And that's what we'll look at a bit later.
Below, you can also find out which containers were created inside this pod.
There is just a single container; here is its ID (a long one), and here is the image which was used for this particular container.
That's the image which we specified using the --image option when we created the pod.
Here is the image ID; it is the ID of the image from Docker Hub, because by default Kubernetes pulls all images from Docker Hub.
Of course, it's possible to configure Kubernetes to pull images from other registries, but the default is Docker Hub.
Alright, these are the details about this particular pod.
We found out that an IP address was assigned to this pod, and that a container was created inside of it.
And here are the events related to the creation of this particular pod.
You see that the pod was successfully assigned to the minikube node; here is the message about pulling the nginx image from Docker Hub, then "Successfully pulled image", "Created container nginx", and "Started container nginx".
It means that there is now an nginx container running inside the created pod.
Again, you can find the list of all pods by entering the command kubectl get pods; there is a single pod available right now in the default namespace.
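Inspecting the pod boils down to one command:

```shell
# Full details of the pod: namespace, node, pod IP, containers,
# image ID, and the creation events at the bottom of the output.
kubectl describe pod nginx
```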
Now let's quickly go back inside the Kubernetes Minikube node and list the Docker containers again; we will find out that there is now one more container, which was created for the nginx pod.
Using the up arrow, I can go back to the SSH command which I entered before.
Here it is; let's connect to this node again.
Enter the password tcuser.
And let's enter docker ps here.
But now let's filter the output by the name nginx.
Now I find out that there are two different containers related to nginx: here is the first one, and here is the second one.
Here is the name of the first container: k8s_nginx_nginx_default and so on.
And there is one more container: k8s_POD_nginx_default and so on.
This second container is called the pause container.
Here you see that inside this container the pause executable was launched. One such container was created, and if Docker is the container runtime in Kubernetes, such pause containers are always created for each specific pod.
Such pause containers are created in order to, let's say, lock the namespaces of a specific pod.
We discussed before that all containers inside the same pod actually share namespaces.
The container which actually runs our nginx web server could be stopped and recreated by Kubernetes,
but the pod remains untouched.
This second container, the pause container, is required to keep the namespaces of the pod.
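A sketch of this step, assuming the VM-based Minikube setup used in the course (SSH user docker, password tcuser); with recent Minikube versions, minikube ssh achieves the same:

```shell
# Connect to the Minikube node (or simply: minikube ssh).
ssh docker@$(minikube ip)

# Inside the node: show only the nginx-related containers.
# Two appear: the application container and the pod's pause container.
docker ps --filter name=nginx
```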
Alright, that's how the containers look right now.
Let's actually connect to the container where the nginx service is running.
In order to connect to a container, you can use either its ID or its name.
Let's connect to the container by ID; please select the nginx container, not the pause container.
Let's enter the command docker exec,
then the ID of the container,
and connect to this container using the sh executable.
Now I'm inside the container.
Let's check its hostname.
It is actually the name of the pod,
and the same name was assigned to this container.
Let's also check the IP address of this container: here is the address 172.17.0.3.
That's the IP address which we saw in the details of the pod.
Now let's try to connect to the web server which is running inside this container, where we currently are, by using this IP address.
For that, we can use the curl command, passing the IP address of this particular container.
Recall that we are inside the nginx container.
Let's try to make the connection.
And here I see the "Welcome to nginx!" page, which was returned by this web server.
It means that the nginx web server is up and running.
Let's exit this container; now we are still inside the Kubernetes node,
to which we made an SSH connection.
And now let's exit this connection as well.
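Inside the node, the container exploration from this step can be sketched as follows; <container-id> stands for the ID of the nginx application container (not the pause container) shown by docker ps:

```shell
# Open a shell inside the nginx container.
docker exec -it <container-id> sh

# Inside the container:
hostname          # matches the pod name
curl 172.17.0.3   # the pod's IP; returns the "Welcome to nginx!" page
exit              # leave the container

# Leave the SSH session to the node as well.
exit
```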
Now let's enter the following command: kubectl get pods -o wide. With this option you will also find the IP address of a particular pod in the output; here is the same IP address which we just saw inside the container.
Let's now try to connect to this IP address from our computer. Our computer is external relative to our Kubernetes cluster, because the Kubernetes node is running inside a virtual machine.
And this IP address was assigned by Docker to a particular container inside the node.
To try to connect to this IP address, I can use the same curl command.
It is available on macOS.
Or you can simply open a web browser and connect to this IP address.
And you'll find out that you are not able to connect to the nginx container this way,
because, as I told you before, this IP address is an internal IP address of the pod, and you are not able to connect to the pod from outside of the cluster.
That's how we created the very first pod in our Kubernetes cluster.
And we also explored the details of this pod.
I hope that it is now clear to you what happens when you create a pod: inside the pod, Kubernetes creates a container,
and in order to create the container, it has to pull the image from Docker Hub.
In our example, the nginx image was pulled, and a new container was created based on that image.
Also, we went inside the container and verified that the nginx web server is actually running.
But we are not able to connect to this pod from the outside, because for now it has only an internal IP address, which was assigned by Docker.
Of course, creating a pod using the kubectl run command is not really convenient, because you are not able to scale it
and, for example, increase the quantity of pods; there was just a single pod which we created using the kubectl run command.
That's why let's now simply delete the created pod. For that, there is the command kubectl delete.
Next comes the type of resource we would like to delete, in this case pod, followed by the name of the pod, in our example nginx, because we specified this name for the pod manually when we entered the kubectl run command.
So let's go ahead and delete this pod: "pod "nginx" deleted". kubectl get pods shows "No resources found in default namespace", and the pod is simply gone.
All volumes and namespaces related to this particular pod were removed as well.
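The wide listing and the cleanup from this step as commands:

```shell
# -o wide adds each pod's internal IP address to the listing.
kubectl get pods -o wide

# Delete the pod created with "kubectl run", then verify it is gone.
kubectl delete pod nginx
kubectl get pods    # No resources found in default namespace
```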
You might have noticed that we entered the kubectl command multiple times, and in my opinion this command is relatively long.
We are able to create an alias for this command in order to save some time in the future.
We could alias the kubectl command to just the single letter k,
and afterwards enter such commands as k get pods.
But for now, if I enter such a command, I get a "command not found" error, because I have not yet created any aliases.
On Linux-like operating systems, you can very easily create aliases by entering the command alias,
followed by the alias name, then an equal sign, and after the equal sign, in double quotes, the command which you would like to alias, in our example kubectl. If you are a Windows user and you are using PowerShell, this command will not work there.
If you want a command-line experience similar to mine, I recommend that you install Git Bash.
In order to install it, please navigate to git-scm.com and simply download Git.
If Git is already installed on your Windows computer, you already have access to Git Bash; it is another terminal.
If you don't have Git installed, please go to the Windows section there and download Git for Windows; there are different links to it.
And after installation, please open the Git Bash terminal instead of PowerShell.
I already have the alias command available in this terminal on macOS.
Therefore I am able to create such an alias. Let's create it: the alias was created, and now I am able to use just the short version of the command, k get pods, and I see the output "No resources found in default namespace".
From this point on I will use this short version of the kubectl command, but please note that this alias will live only during this terminal session.
After a reboot of the computer, or when you open another terminal session,
this alias will be gone.
If you would like to create a permanent alias, it is of course possible; in order to perform such a task, you have to edit your shell configuration profile.
But I will not do that right now; it's enough to have just a temporary alias.
Now, let's continue.
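The alias itself is plain shell, so a minimal sketch works the same in Bash on Linux, macOS, or Git Bash; the profile files named in the comment are just the usual locations, pick the one your shell reads:

```shell
# Shorten kubectl to k for the current terminal session only.
alias k="kubectl"

# Confirm the alias is registered (prints the alias definition).
alias k

# For a permanent alias, append the same alias line to your shell
# profile, e.g. ~/.bashrc or ~/.zshrc (not done in this course).
```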
As I mentioned before, it is not convenient to create separate pods in the Kubernetes cluster, because you are not able to scale them and increase the quantity of pods.
Therefore, the most common way to create multiple pods, where you are able to increase or decrease the quantity of pods, modify the configuration, and so on, is by using a Deployment.
And the Deployment will be responsible for the creation of the actual pods.
But please note that all pods inside the Deployment will be the same, exactly the same.
You can create multiple copies of the same pod and distribute load across different nodes in the Kubernetes cluster; this is the purpose of the Deployment.
Now let's go ahead and create a Deployment.
We will use the same image which we used before, the nginx image.
Afterwards, we will increase the quantity of pods in this Deployment,
and we will also create a Service for this Deployment in order to be able to connect to our Deployment from the outside world.
Let's go ahead and create this Deployment.
For that, we will use the command kubectl create deployment; we already aliased the kubectl command,
that's why I will enter k create deployment.
Next comes the name of the Deployment.
Let's name it nginx.
Afterwards, let's also specify the image, similarly to what we did with the kubectl run command.
After the equal sign comes the name of the image which we would like to use for this Deployment.
And let's enter nginx here.
In order to avoid any confusion, let's modify the name of this Deployment
and name it, for instance, nginx-deployment.
Like that. Let's go ahead and create this Deployment.
The Deployment was created. Let's enter the command k get deployments.
Here is the single Deployment; it is ready and up to date.
And let's enter k get pods.
Now I see that there is a single pod which is managed by this particular Deployment.
This pod was created automatically after we created the Deployment,
and now this pod is managed by this Deployment.
There is just a single pod at the moment.
And of course it is possible to scale the quantity of pods and increase or decrease it, for instance to 2, 3, 5, and so on.
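The Deployment creation from this step, sketched:

```shell
# Create a Deployment named nginx-deployment from the nginx image.
kubectl create deployment nginx-deployment --image=nginx

# One Deployment, and the single pod it manages
# (the pod name carries the ReplicaSet prefix).
kubectl get deployments
kubectl get pods
```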
But first, before doing that, let's read the details about this Deployment.
For that, let's enter the command k describe deployment,
followed by the name of the Deployment: nginx-deployment.
Here are the details of the created Deployment.
Let's scroll a bit up.
At the beginning of the output I see the name of the Deployment, nginx-deployment.
That's the name which we gave to this Deployment. Here is the namespace where the Deployment was created, the default namespace. Below, you'll see that Kubernetes automatically assigned labels to this particular Deployment.
There is just a single label: app=nginx-deployment.
There are also annotations, again created automatically; here is the annotation.
And here is the selector. Selectors are used in order to connect pods with Deployments,
because in Kubernetes, pods and Deployments are actually separate objects,
and Kubernetes has to know how to assign specific pods to particular Deployments.
Here we see the selector app=nginx-deployment, and on the particular pod which was automatically created and assigned to this Deployment, we will find a label with the same value, app=nginx-deployment; we will have a look at that in just a minute.
There is also a Replicas field, where you can find information about the quantity of pods which are desired by this Deployment and the actual quantity of pods which are running: there is a single replica which is desired, one updated, one total, and one available.
It means that there is now one pod assigned to this Deployment. There is also the strategy type RollingUpdate, which tells Kubernetes how to perform updates of the Deployment.
We will get back to it a bit later in this course.
And below, you will find other details about this particular Deployment.
Here you'll find all the details about the Pod Template.
And as I just told you, inside the pod template you'll find the corresponding label app=nginx-deployment.
The same label is mentioned here in the selector field of the Deployment.
And that's how the Deployment is connected to particular pods.
Also, if you scroll a bit down, you'll find the actual events related to this particular Deployment.
Here we see a single event: scaling of the ReplicaSet.
But what is a ReplicaSet? A bit above, you see NewReplicaSet: nginx-deployment plus an ID of the ReplicaSet.
The ReplicaSet actually manages all the pods related to the Deployment.
A ReplicaSet is a set of replicas of your application, because you could create 5, 10, or 100 different pods in the same Deployment,
and all of them are included in the ReplicaSet.
That's why here you see that the ReplicaSet was scaled up to one, and one pod was created in this ReplicaSet.
Great, these are the details about this particular Deployment.
And now let's have a look again at the list of the pods, k get pods; there is just a single pod, and notice its name.
It starts with the name of the ReplicaSet, which we just discussed
and saw in the output of the Deployment details: nginx-deployment plus this hash; afterwards, there is a specific hash for this particular pod.
If there are multiple pods in the same Deployment, which belong to the same ReplicaSet, you'll see the same prefix here, but different hashes for the different pods.
For now, there is a single pod, which is ready and running.
We can also get the details about this particular pod; let's grab its name and enter k describe pod here,
and paste the copied name.
And here are the details about this particular pod.
Let's scroll up.
Here is the name of the pod; it matches this name.
It was assigned to the default namespace, same as the Deployment.
Here is the node where this particular pod is running right now, and here are the labels of this particular pod.
I see such labels as app=nginx-deployment and pod-template-hash.
Notice that this hash is equal to the hash in the pod name,
and it matches the ID of the ReplicaSet.
Also, the status is Running, and here is the IP address of the pod.
And here I see that this pod is controlled by a specific ReplicaSet: ReplicaSet/nginx-deployment plus a hash.
Here is the hash of the ReplicaSet which controls this particular pod.
Also, here are the details about the containers which are running inside the pod:
here is the container ID and the image which was used for creation of the container in this particular pod.
Also, at the end, you can find the events related to the pod, as we discussed before: successfully assigned to a specific node, pulling image nginx, and so on.
So now there is a Deployment,
and inside the Deployment there is a ReplicaSet,
and the pods are managed by this Deployment.
For now, there is just a single pod.
Now let's try to scale this Deployment and increase the quantity of pods inside the Deployment.
Right now there is a single pod. For that, we can use the following command: kubectl,
or shortly k in our case, then scale; we would like to scale a Deployment,
so let's write deployment here.
The name of the Deployment in our case is nginx-deployment.
Afterwards, let's specify the quantity of replicas we would like to have in this Deployment.
Right now there is a single replica.
In order to specify the desired quantity of replicas, we can add here the --replicas option, and after the equal sign, specify the quantity of replicas.
Let's say we want to scale to five replicas.
Let's enter five here,
and let's scale our Deployment.
The Deployment was scaled, and now let's enter the command k get pods.
And I see that there are four new containers which are currently being created.
Here is one pod, a second, a third, and a fourth pod,
and this pod is still running.
In a moment, you should see that all five pods are running.
Here they are.
We just scaled our Deployment to five replicas.
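Scaling the Deployment is a single command:

```shell
# Scale from one replica to five.
kubectl scale deployment nginx-deployment --replicas=5

# All five pods share the ReplicaSet name as a prefix;
# -o wide additionally shows each pod's internal IP address.
kubectl get pods -o wide
```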
And now there are five different pods running inside our Kubernetes cluster.
Notice how easy it was: we did not create those pods manually; Kubernetes scaled this Deployment automatically for us.
Notice that all those pods have the same prefix; it is the name of the ReplicaSet all those pods belong to.
And the final parts of the names are different for the different pods.
Again, all pods are running.
Let's also read the details about those pods, which will include the IP addresses; let's add the -o wide option here.
I see that each pod has a different IP address: here, here, here, here, and here.
And all those pods were created on the single node, minikube, because again, we have just a single-node cluster.
If you were running multiple nodes, of course, the load would be distributed across the different nodes,
and you would see that the pods were assigned to different nodes.
Alright, now let's try to scale down the quantity of replicas in this Deployment
from five to, let's say, three.
Let's go back to this command, k scale deployment nginx-deployment,
and here let's specify the quantity of replicas as three, like that.
The Deployment was scaled; let's get the pods.
And I see that now there are only three pods up and running; two pods were removed.
Now let's try to connect to one of the pods.
For instance, this one; let's grab its IP address.
Before, we already tried to connect to one of the pods from our local computer, and again, our local computer is external relative to the Kubernetes cluster; this is very, very important.
Let's try to connect to this IP address: curl, paste the IP address, and the connection is not successful.
Let's now go inside the Minikube node and try to connect to this pod, or any of those pods, from inside the node.
And that will mean that we are trying to connect to the pod from inside the cluster.
Let's SSH again to our Minikube node: ssh docker@ and this IP address.
In your case this IP is of course different; the password is tcuser.
And now, here, let's curl and enter the IP address of one of the pods.
Any pod works here.
So, curl, and I was able to connect to a specific pod by its IP address.
Let's try to connect to another pod with another IP,
for instance the one ending in .3.
I'm not sure whether I have such a pod running.
Yes, I have,
and I also got a response from that pod at that IP address. Recall that right now I'm located inside the node,
and I am able to connect to the pods which are running on this specific node using the IP addresses of those pods.
Let's exit from this node,
and again list the pods along with their IP addresses.
So I was able to connect to this pod and this pod from the node.
But of course such IP addresses are assigned to pods dynamically when the pods are created,
and you should not rely on the IP addresses of the pods if you would like to connect to a specific pod.
Therefore, it's not convenient to use such IP addresses,
and you should use some other kind of IP addresses, which are managed by Kubernetes and which allow you to connect to any of the pods inside the Deployment.
Let me show you how to do that.
You have to create so-called Services if you would like to connect to specific Deployments using stable IP addresses.
There are different options available. You could create a so-called ClusterIP: such an IP address will be created and assigned to a specific Deployment, and you will be able to connect to this specific Deployment, only from inside the cluster, using this virtual IP address.
And Kubernetes will distribute load across the different pods which are assigned to the specific Deployment. But please note that there will be just a single such IP address for the entire Deployment.
And of course, it is much more convenient to use such an IP address than the specific IP addresses of the pods, as we just tried.
Also, there is an option to create external IP addresses, to open the Deployment to the outside world.
It is possible to expose a specific Deployment on the IP address of the node, or to use a load balancer. Recall that inside a Kubernetes cluster it is possible to have multiple nodes, and pods can be distributed across different nodes.
Therefore, the most common solution is to have a load balancer IP address, which will be just a single address for the entire Kubernetes cluster for a specific Deployment.
And you will be able to connect to your Deployment, no matter where the pods were created, using just a single external IP address.
But such load balancer IP addresses are usually assigned by a specific cloud provider, like Amazon Web Services or Google Cloud,
and such assignments are managed by the cloud controller manager, a service which runs on the master node.
Let's now try to create a ClusterIP for our Deployment.
And for that, we will create a Service.
So, we already have the Deployment created; let's read the details about this Deployment: k get deployments.
By the way, you could shorten this long word to just deploy, like that: k get deploy.
And here is the name of our Deployment.
For now, there are three different pods in this Deployment.
Let's now create a Service for this particular Deployment.
Using a Service, you can expose a specific port from the Deployment.
Right now we are running three different pods,
and inside each pod there is an nginx container.
By default, the nginx web server is running at port 80.
It means that we have to expose internal port 80 from the containers on some other port outside of the Deployment.
And we could choose, for instance, port 8080 for this particular example. In order to expose an internal port from the Deployment pods, you should use the command expose; let's enter k expose deployment.
Next comes the name of the Deployment we would like to create a Service for: nginx-deployment.
That is the name of our Deployment.
Note that we don't mention pods here; we work with Deployments,
and the pods are managed by the Deployments.
So, let's expose the deployment nginx-deployment. Next comes the --port option; let's specify a port which we would like to use as the external port for our Deployment,
and let's set it to port 8080.
We also have to add here the target port, if the port inside the containers is different from the port which we just specified.
And in our example, the port inside the container is 80.
That's why we have to add here the option --target-port=80. So we expose internal port 80 from the containers on external port 8080,
and we do that for the Deployment; here is the name of the Deployment.
Alright, let's expose the nginx-deployment. The Service nginx-deployment was exposed, and now let's list the Services; for that there is the command k get services.
And now there are two Services. The first Service is the default system Service; it is called kubernetes, its type is ClusterIP, and here is its cluster IP address.
Also, here we see one more Service, nginx-deployment:
here is its name, here is its type, ClusterIP, and here is its cluster IP.
And this IP address is completely different from the IP addresses assigned to the pods; this IP address is even from another network. The IP addresses of the pods start with 172,
while this IP address starts with 10. This is a virtual IP address which was created by Kubernetes, and it is just a single IP address which can be used in order to connect to any of the pods. The type ClusterIP allows you to connect to a specific Deployment, in our example nginx-deployment, only from inside the Kubernetes cluster.
For instance, suppose that in your Kubernetes cluster there is a database Deployment, for instance a MongoDB database or a MySQL database,
and such a database should not be available and accessible from the outside world. You could create a ClusterIP Service for such a database Deployment, and other Deployments inside your Kubernetes cluster will be able to connect to the database Deployment using the cluster IP.
But again, such a Deployment will be hidden from the outside world.
On the other hand, if you create any web service Deployment and it should be available from outside, of course, you have to expose it to the outside world using NodePort or LoadBalancer.
That's what we'll try a bit later.
The cluster IP will be available only inside the cluster.
So here we see the cluster IP address.
Here it is, and here is the port which was exposed by our Deployment.
Also, you could get the list of Services using a short version of this command: simply enter k get svc; svc is the short version of services.
And the result will be the same:
there are now two different Services with those cluster IP addresses, this one for the kubernetes Service and this one for the nginx-deployment Service.
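Creating and listing the ClusterIP Service from this step:

```shell
# Expose the Deployment on Service port 8080, forwarding to
# port 80 inside the nginx containers (the --target-port).
kubectl expose deployment nginx-deployment --port=8080 --target-port=80

# List Services; "svc" is the accepted short name for "services".
kubectl get svc
```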
And now let's try to connect to our Deployment using this Service.
Let's grab this cluster IP,
and let's first try to connect to this cluster IP address from our local computer.
curl, and paste this IP address.
And after a colon, let's also add the port which we exposed.
It is the external port for our Deployment.
Internally, connections will be proxied to port 80 on each container.
Let's try whether this works on our local computer or not: curl...
And I see no response.
That's what I just told you:
the cluster IP address is not available outside of the Kubernetes cluster.
This is the behavior of the ClusterIP: it is just an internally available IP address, but you are able to access the cluster IP from any node.
And let's try that.
Let's SSH to our node.
Let me go back to this command: ssh docker@ and the IP address of the node; the password is tcuser.
And let's now try to connect to the cluster IP from the node.
Here, again, let's specify port 8080.
And I see the result "Welcome to nginx!".
This response was provided by one of the pods in the Deployment.
Of course, from this output I am not able to recognize which pod exactly answered me,
but this answer was provided by one of the pods in the Deployment.
I could try again,
and again, and again, and I get answers from the pods in the Deployment.
That's how it works.
The cluster IP is just a single IP which was created as a Service for a particular Deployment,
and you can use this single IP in order to connect to any of the pods; behind the scenes, Kubernetes does all the proxying of the requests to a particular selected pod, and it balances load across the different pods in the Deployment.
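In the sketch below, <cluster-ip> stands for the CLUSTER-IP value printed by kubectl get svc for nginx-deployment:

```shell
# From the local computer: no response, the address is cluster-internal.
curl <cluster-ip>:8080

# From inside the Minikube node it works; Kubernetes proxies each
# request to one of the three pods behind the Service.
minikube ssh
curl <cluster-ip>:8080    # Welcome to nginx!
exit
```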
Alright, let's close this connection, and let's now also read the details about this particular service. We'll get them with the command k get svc, the short version of get services, and here is the name of the service which we created before.
Let's take this name and run k describe service with the name of the service we would like details about. Here is the cluster IP which we just used in order to connect to this service.
Also, here is the port, and here is the target port.
And here are three different endpoints.
Those endpoints indicate the particular Pods which are used for connections to this particular cluster IP.
So the load will be balanced across those three different Pods.
And notice port 80 everywhere here.
It is the target port for our service; port 80 is the default port where the nginx server runs.
So 80, 80, and 80. Also here you see the type of the service, ClusterIP, and the selector is also present: app=nginx-deployment.
And that's how this service is actually connected to particular Pods.
All those Pods also have the label app=nginx-deployment.
And this label was created by Kubernetes automatically; we did not specify it, because we just created the deployment using the create deployment command.
And we created the service using the expose command.
The namespace here is default, the same as for the Pods and the deployment, which is called nginx-deployment.
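Put together, the inspection commands from this step look roughly like this (assuming the k=kubectl alias from earlier in the course, written out in full here, and the nginx-deployment name used so far):

```shell
# List services; "svc" is the short form of "services"
kubectl get svc

# Show details for the service: cluster IP, port, target port,
# endpoints (one per Pod), and the selector (app=nginx-deployment)
kubectl describe service nginx-deployment
```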
Alright, that's it for this service.
And we exposed our deployment using a ClusterIP service.
And the cluster IP is only available inside of the Kubernetes cluster.
Now you know how to create a port, how to create deployment, how to scale deployment, and how to create a service for a particular deployment.
And we utilized the nginx image, which is available at Docker Hub.
And now it's time to move on to the next step in our plan.
And I promised you that we will create a custom Docker image, push it to Docker Hub, and afterwards create a deployment based on this image.
And that's what we'll do next.
But before doing that, let's remove the existing service and deployment in order to have a clean setup, so to say.
So let's remove the deployment: k delete deployment, and here will be the name of the deployment, nginx-deployment.
And also, let's delete the service: k delete svc nginx-deployment. Like that, the service was deleted as well.
Let's now list deployments with k get deploy, the short version of get deployments: no resources found. And let's list the services with k get svc: just the single default service named kubernetes.
We are good to go.
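In one place, this cleanup is just two delete commands followed by two list commands (using kubectl in full instead of the k alias):

```shell
kubectl delete deployment nginx-deployment   # remove the deployment and its Pods
kubectl delete svc nginx-deployment          # remove the ClusterIP service
kubectl get deploy                           # expect: No resources found
kubectl get svc                              # only the default "kubernetes" service remains
```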
And now let's create a very basic Node.js application.
There we will utilize the Express web server package and create our custom web server.
The plan for this next step is the following: first, we will create a Node.js Express web application, and we will configure the web server in such a way that it responds with the hostname of the server when we connect to it using the root URL.
This way we will be able to distinguish which Pod actually answers a specific request.
We will also dockerize such an application using a Dockerfile and the docker build command.
And afterwards, we will push the built Docker image to Docker Hub.
And finally, we will create a deployment using this custom-built image.
You can find all final project files in the GitHub repository; you'll find a link to it right now here.
But I highly recommend that you create such an application yourself, even if you don't know much about Node.js and Express.
And of course, I recommend that you build your custom image and push it to your Docker Hub account.
Of course, you could utilize the images available under my account.
They are public; it's up to you.
So let's now open Visual Studio Code.
If you don't have it, please install it.
I'll open it up.
And here in Visual Studio Code, please open up the embedded terminal: press the key combination Ctrl+` (backtick). Like that, the terminal was opened.
And now let's create a folder for our project files.
I will cd to the desktop.
And here, using the make directory command, I'll create the folder k8s; k8s stands for Kubernetes.
As you already know, let's cd to the k8s folder, and you can open any folder using the code . command if you correctly installed Visual Studio Code.
If such a command is not available on your computer, you can press the key combination Command+Shift+P (or Ctrl+Shift+P on Windows), enter code here, and select the command Shell Command: Install 'code' command in PATH; afterwards, the code command will be available from any terminal.
I have this command already available in the PATH.
That's why I'll open the k8s folder in the Visual Studio Code editor.
So code . and it will be opened in a new window, like that.
Now this folder is completely empty and has no files.
Let me again open the embedded terminal here.
It will be opened in the k8s folder.
And now, here, let's create a folder for our first project.
And this folder name will be, let's say, k8s-web-hello, because we will build a very basic web application which will simply respond with "Hello from the" and the server name.
So: k8s-web-hello.
And here in this folder we will create the Node.js application.
Here in the terminal, let's cd to the k8s-web-hello folder, like that.
And here, first, we will initialize a Node.js application.
In order to do so, you have to install npm on your computer.
If you don't have Node.js and npm available, please go to the Node.js download page.
Here is the link: download Node.js and install it on your computer; it will be installed along with npm.
Of course, we will not run the application using Node.js on our computers; we will run the application inside of a container in the Kubernetes cluster instead.
But in order to initialize the project locally and install dependencies, you have to use npm.
So please install Node.js along with npm.
Afterwards, npm should become available on your computer.
I'll hide this side panel.
And now, here, let me run npm init, and I'll initialize this project.
And you can add here the option -y, which will skip all the questions during initialization of the new Node.js project.
So npm init -y; like that, the new project was initialized.
And here you should now find the package.json file with such contents: the name of the project, version, and so on.
And now let's install a package called express: npm install express.
It will be downloaded from the npm registry. The package was installed, and the package.json file was updated; here is now a dependency called express.
So now let's create the index.mjs file in our k8s-web-hello folder.
By the way, notice that now there is a node_modules folder, which contains all the dependencies of our project.
But now it is safe to simply remove this folder; we don't need it anymore, because we will not run this project locally with the help of Node.
So simply select this folder and delete it.
All right, now we have only the package.json and package-lock.json files.
Now let's create the file, and we will create it inside of the k8s-web-hello folder.
So, new file, and let's name it index.mjs. Why .mjs? Because we will utilize ES6 modules syntax, which is available in Node.js starting from version 13.
And if you would like to use import statements instead of require statements, you have to name files with the extension .mjs.
So let's create the file index.mjs.
And here I will not type the contents of the entire application; I will simply copy them in order to save some time.
So paste here.
If you would like you could of course write this very tiny application yourself.
So we import express from the express package.
We also import os from os; os is a built-in Node.js module.
And afterwards we create the Express application. Here is port 3000.
You could specify any other port if you would like.
And that's the port where our Express web server will be running inside of the container.
Here we are adding a route handler for the / URL.
It is the root URL.
And we simply answer the client with the text "Hello from the" plus os.hostname(); using the os module we retrieve the hostname of the server.
And in the response from the server, you will simply see the text message "Hello from the" and the name of the server where this application runs.
So with res.send we send such a message to the client, and we also log it to the console.
And finally, we start the Express web server with app.listen; it will be started at this port, 3000, and when the web server starts, we will log to the console the message "web server is listening at port" and here is the port.
I will not dive deeper into those explanations and the syntax of Node.js, because this is not a Node.js course.
But you get an idea.
We create a very basic web server which will respond with such text message.
So let's save the changes in this file.
Press Ctrl+S on Windows or Command+S on Mac. Like that, the changes were saved.
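The exact file is not reproduced in the transcript, but based on the description above, a minimal index.mjs would look roughly like this (string wording reconstructed from the explanation):

```javascript
// index.mjs — ES modules syntax, hence the .mjs extension
import express from 'express';
import os from 'os';           // built-in Node.js module

const app = express();
const port = 3000;             // port the server listens on inside the container

// Route handler for the root URL: reply with the host (Pod) name
app.get('/', (req, res) => {
  const message = `Hello from the ${os.hostname()}`;
  res.send(message);           // send the message to the client
  console.log(message);        // and log it to the console
});

app.listen(port, () => {
  console.log(`Web server is listening at port ${port}`);
});
```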
And now let's dockerize this application, and for that we will create a Dockerfile.
Now, I recommend that you install the extensions called Docker and Kubernetes.
So go to Extensions here, and first enter Docker here.
Select Docker and click Install.
In my case, the Docker extension was already installed.
And afterwards, please also install the Kubernetes extension: find it like that and install it as well.
In my case, the Kubernetes extension was installed earlier.
So now we are good to go.
And also, in order to build the actual Docker image, you have to run Docker on your computer.
Before, we did not need Docker in order to start a Minikube Kubernetes cluster.
But now we need Docker in order to build a Docker image.
That's why, please go ahead and install Docker if it is not installed on your computer. Go to the Docker download page.
Go to this link and install Docker Desktop, which is available for Windows and macOS.
And you can also install Docker directly on Linux-like systems.
So please go ahead and install Docker.
Please run it; I'll run it like that.
I already installed it before.
Now here I see the icon: Docker is starting, and here I see information about that.
I'll wait a bit.
Meanwhile, let's create a Dockerfile inside of the k8s-web-hello folder, because we are creating it for this separate application, not in the k8s root folder.
So let's create a Dockerfile here: new file, and name it Dockerfile, like that.
Notice that Visual Studio Code recognized the type of this file and added the Docker icon.
And here in the Dockerfile we'll add instructions for how we would like to build the custom Docker image.
I'll again grab the contents of this Dockerfile and paste them here in order to save some time.
And there are the following instructions.
We will use node:alpine as the base image for our custom image.
Here we set the working directory to /app.
Next, we expose port 3000.
Afterwards, we copy the files package.json and package-lock.json, which are present here.
And then we run npm install when we build this custom image.
And after npm install, we copy all the remaining files from the folder where our Dockerfile is located.
In our example, it will be just a single file, index.mjs.
And this file will be copied to the /app folder inside of the image.
If you're not familiar with those steps and don't know what FROM, WORKDIR, EXPOSE, and so on mean, I highly recommend that you find a Docker crash course and go through it in order to understand how Docker works and how you're able to build custom images.
This course is not a Docker course.
So, we copy all the remaining application files to the target folder, /app.
And finally, we add the instruction CMD, which stands for command; it indicates which command will be executed when a container based on this custom-built image is started.
And we want to run the npm start command.
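Assembled from the instructions just described, the Dockerfile reads roughly as follows (sketch assuming the node:alpine base image named above):

```dockerfile
FROM node:alpine

WORKDIR /app

EXPOSE 3000

# Copy the manifest files first so the npm install layer can be cached
COPY package.json package-lock.json ./
RUN npm install

# Copy the remaining application files (here just index.mjs)
COPY . .

CMD ["npm", "start"]
```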
But what happens now if we enter npm start in our project folder?
Let's try that.
Let's save those changes in the Dockerfile: save.
And let's open up the embedded terminal again.
I am still in the k8s-web-hello folder.
And here in this folder there is the index.mjs file.
And here, let's enter the npm start command.
And what do I see? I see "missing script: start".
Right now our application will not run anything if you enter the npm start command.
In order to be able to run our application and start the web server, we need to modify the package.json file.
Let's do that.
Let's open up the package.json file, and here you will find the scripts section, and there is just a single default script, test.
Let's actually replace test with start, like that.
And here, in double quotes, type node index.mjs.
So now there will be a start script, and it will essentially run the file index.mjs using node.
And this will actually start our Express web server on port 3000.
That's what we discussed here.
We have saved the changes in the package.json file.
Now npm start is available.
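After this edit, the scripts section of package.json looks like the fragment below (the other fields generated by npm init are omitted here):

```json
{
  "scripts": {
    "start": "node index.mjs"
  }
}
```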
And we are now able to build our image.
If you want to create a custom image, you need to utilize the docker build command.
And usually this command is entered in the folder where the Dockerfile is located.
That's this folder in our example here, and if I list the files and folders here, I will find the Dockerfile.
And the Dockerfile contents are here: FROM, WORKDIR, EXPOSE, and so on, up to CMD. Now, let's build a Docker image.
And we will also add a tag to this custom-built Docker image.
And the tag will include the username of your account at Docker Hub.
If you don't have a Docker Hub account, please go to hub.docker.com and create an account.
I already have an account, and here is my username.
Please create your account and choose your username.
Afterwards, you will be able to push repositories to Docker Hub under your account.
And when you push repositories, you will find them here in the repositories section.
Right now, under my account, there are no repositories in this namespace.
So let's go back to Visual Studio code.
And here let's build a custom image for our application.
Let's use the command docker build.
Next will be the path to the folder with the Dockerfile.
I'll use simply . (dot), because I'm now inside of the folder where the Dockerfile was created.
So: docker build .
Next, let's add a tag to our image using the -t option; then will be the username of your Docker Hub account.
In my case, I'll type mine.
And after a slash, I'll give a name to the repository.
Let's name it the same as we named this folder: k8s-web-hello.
Let's build this image using the docker build command.
It builds the image; notice that in my case the base image was already present in the cache, and the other layers were also cached.
And finally, the image was built.
Now, if I enter the command docker images and add grep k8s-web, I will find the image that is now available in our local images repository.
Here is the name of the image which we just built.
The tag is latest, because in the command which I just entered I did not specify a tag, which you're able to specify after a colon here.
But I built the image with the latest tag; that's why the tag here is latest.
And here is the ID of the built image and the size of the image.
Now let's push this custom-built image to Docker Hub.
In order to do so, you have to first log in to Docker Hub.
For that, please enter the command docker login.
I already logged in before.
That's why Docker utilized the cached credentials, and login succeeded.
In your case, if you enter docker login for the first time, you will be prompted to enter your Docker Hub login and password.
Now I'm able to push this built image to Docker Hub.
For that, I'll utilize the command docker push, and here will be the name of the image which I would like to push to the remote repository hosting service: bstashchuk/k8s-web-hello.
Let's go ahead and push it using the default tag, latest. It pushes the different layers of the image; it takes some time because the image is rather large.
So you see "waiting" for all those layers.
And finally, this image was pushed to Docker Hub.
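The whole build-and-push sequence from this step, in one place (replace bstashchuk with your own Docker Hub username):

```shell
docker build . -t bstashchuk/k8s-web-hello   # build and tag (defaults to :latest)
docker images | grep k8s-web                 # verify the image exists locally
docker login                                 # authenticate against Docker Hub
docker push bstashchuk/k8s-web-hello         # push the :latest tag
```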
Let me verify that it appeared here.
Refresh the page, and now there is the k8s-web-hello image.
This image, by the way, is public, and you're able to utilize it as well for your Kubernetes deployments.
The image is ready and available at Docker Hub.
And now we are able to create a Kubernetes deployment based on this image.
Let's do that.
Let's go to terminal here.
And let's create a new deployment.
By the way, right now there are no deployments: k get deploy.
No deployments. k get svc for services: no services, just a single one, kubernetes.
It is the default one.
And now let's create a new deployment based on the custom-built image.
For that, let's utilize the same command as we used before: k create deployment.
Let's name it k8s-web-hello.
And next will be the option --image.
And the image, in my case, will be bstashchuk/k8s-web-hello: the same name as it appears here when I pushed this image to Docker Hub, and the same name as I see here.
So here is my username, and here is the name of the Docker repository.
Let's go ahead and create a deployment based on this image.
Of course, if you performed all the steps yourself and pushed the custom-built image to Docker Hub under your account, you could utilize your username here.
So let's go ahead and create the new deployment.
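Spelled out, the deployment command from this step is (again, substitute your own Docker Hub username for bstashchuk):

```shell
kubectl create deployment k8s-web-hello --image=bstashchuk/k8s-web-hello
kubectl get pods   # watch the Pod go from ContainerCreating to Running
```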
The deployment was created. With k get pods, I now see the ContainerCreating status; it is the status of the Pod which was created for this deployment.
Let's check again: still ContainerCreating, because it takes some time to pull the image from Docker Hub.
And now I see that the container is running.
Right here is the name of the container.
And here is the hash of the ReplicaSet which was created by the deployment.
And here is the specific hash for this particular Pod.
Now, let's create a service using ClusterIP, and afterwards try to connect to our web server, which is running using Node.js and Express.
So let's create the service.
For that, we will expose the deployment: k expose deployment.
Here will be the name of the deployment, k8s-web-hello.
Next, let's add the port option, and recall that our Express web server is running at port 3000.
Let's go back to our application.
Go to the index.mjs file.
And here is this port, 3000.
And we can basically expose it at the same external port that will be used for connections to the deployment; for that, let's utilize just a single port, and we will not need the --target-port option.
That's why let's add --port=3000, like that.
And this will create a cluster IP for the deployment k8s-web-hello.
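Since the service port and the container port are both 3000, a single --port flag is enough; --target-port then defaults to the same value:

```shell
kubectl expose deployment k8s-web-hello --port=3000
```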
Let's go ahead and create the service, and then list services: k get svc.
And here is the newly created service.
Here is the name of the service, and here is the cluster IP, which was created by Kubernetes.
And again, using such an IP address, we will be able to connect to our deployment; it means that you can connect to any of the Pods. You don't select which Pod you will be connected to; Kubernetes does this job for you.
And here we see that we opened port 3000, the exposed port 3000.
Let's try to connect to this cluster IP at port 3000.
Let me copy this cluster IP; of course, I will not be able to connect to this cluster IP from my local computer.
That's why, as usual, let's quickly connect to our Minikube node: ssh docker@ and this IP. In your case, this IP is different.
Our password is tcuser.
And now, here, let's curl the cluster IP at port 3000.
That's the port which we exposed from the deployment.
So let's go ahead, and I see the message "Hello from the" and the entire name of the server where the Express Node.js application actually runs.
And that's the message which comes from the Express web server here.
That's where we send it back to the client: "Hello from the" and the hostname of the server where this application runs.
And it means that everything works perfectly.
We created a deployment based on the custom-built image which we pushed to Docker Hub.
And here is the result of it: we see the response from the web server application.
By the way, if you enter the curl command like that, the next prompt will appear on the same line as the response.
If you want to move to a new line, you can enter curl, then add a semicolon, and add echo after it, like that.
And now you will be moved to a new line here in the output.
Now we got a response from the same Pod.
Here we see the same hostname as we saw here.
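The semicolon trick works because the response body arrives without a trailing newline, and echo then emits one; the same effect can be seen with printf standing in for curl:

```shell
printf 'Hello from the pod'          # no trailing newline: the prompt lands on the same line
printf 'Hello from the pod'; echo    # echo adds the newline, so the prompt moves down
```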
Now, let's keep this window open here.
And let's increase the quantity of the Pods in our deployment.
Let's scale our deployment.
I'll open up a new tab in the iTerm application (you could open a new terminal window if you're on Windows); I'll use the key combination Command+T in order to open a new tab here.
And here, in the new tab, if I enter k, I will see the error "command not found", because the alias works only in the other tab.
That's why let's create the alias once again: alias k=kubectl.
By the way, you can omit the double quotes here if the command consists of only a single word.
So let me remove those double quotes, like that.
The alias was created.
And now let's run k get pods.
And here I see the same name of the Pod which I saw here in the answer from the web server.
Let's now scale this deployment.
The name of our deployment is k8s-web-hello.
Here's the name of the deployment.
And currently it has just a single Pod.
Let's scale it: k scale deployment k8s-web-hello, and I would like to scale the deployment to, let's say, 4 replicas.
Let's go ahead; the deployment was scaled.
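The scaling command in full:

```shell
kubectl scale deployment k8s-web-hello --replicas=4
kubectl get pods -o wide   # four Pods, each with its own IP address
```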
Let's now get Pods: k get pods.
Now there are four Pods; one is still being created.
Let's check back again.
And now all four Pods are running.
All of them belong to the same deployment, k8s-web-hello.
And each Pod, of course, has its own unique IP address.
Let's get details about the Pods: k get pods -o wide.
And here I see the different IP addresses of each Pod.
And of course, they differ from the cluster IP, which was created for the specific service.
Let's list services: k get svc.
And here, as before, we see the cluster IP for the k8s-web-hello service.
And each of those Pods is now available via this cluster IP, and Kubernetes will decide which Pod to choose for each particular request.
Let's now go back to the tab where we are inside of the Kubernetes node and try to make connections to this cluster IP and this port again.
Let's clear the terminal here and curl.
Now I see a response from the Pod with such a name.
If I repeat the command, there is a response from another Pod.
Let's repeat the command once again, and once again, and so on.
I see that the load is now balanced across the different Pods.
Because here I see responses from different Pods.
Different names appear here in the responses.
Different Pods serve different requests.
And here are the different names of the Pods.
That's how distribution of the load works in action.
And now, using such a custom-built Docker image, we were able to prove that.
Alright, let's continue.
And now let's modify the type of the service which we created for our deployment, because right now it is ClusterIP, which is available only from inside of the cluster.
Let's go here, and let's delete the current service, k8s-web-hello.
Simply take its name, clear the terminal, and enter k delete svc (svc stands for service) and paste the name of the service we would like to delete. The service was deleted; k get svc, and there is no such service anymore.
And if I try to connect again to the deployment using the cluster IP which we used before, I will definitely get an error.
So I see no response.
Let's now create a service again.
But now we will set its type to NodePort.
Let's use the same command: k expose deployment k8s-web-hello.
And here will be the type, which will be set to NodePort.
In Pascal case notation: NodePort.
And let's add here the port option, and port will be set to 3000.
It is the same port as we utilized before; it is the port where our containers actually run the Express Node.js application.
Let's go ahead and create such a service of type NodePort. The deployment was exposed; let's list services: k get svc.
And now there is again a service with the name k8s-web-hello; its type now is NodePort.
Here again is a cluster IP, and a port.
Here I see the port which was specified using the port option.
And also, after the colon, I see the randomly assigned port, 32142.
And now I am able to connect to the deployment using the node IP address. I can get the node IP address by entering minikube ip; recall that we have only a single node.
And here it is, in my case.
And I can take this IP and this port.
And I will connect to one of the Pods in this deployment.
Let's do that.
Let me grab this IP and go to the web browser, for example.
Because now I will be able to connect to the deployment from my computer, let me paste this IP, and after the colon I will add this port, like that.
So in my case, the connection string is here.
So here is the IP address: it is the IP address of the node, and here is the randomly generated port, which was created by Kubernetes itself when we created the service for our deployment.
Let's go ahead and try to connect.
And I see a response from one of the Pods.
Refresh, and I see a response from another Pod with another name; refresh again, and I see a response from yet another Pod here.
That's how we are able to connect to our deployment using a NodePort service.
Also, there is a simpler way to get the URL for connection to this deployment: by using the minikube service command. Type minikube service, and here the name of the service, in our case k8s-web-hello.
Press enter, and a new tab in the web browser will be opened automatically.
And notice the URL here: the same as I just used before, the IP address of the node and the port which was automatically generated by Kubernetes.
That's the NodePort type of the service.
Also, here you can see the actual URL when you enter minikube service k8s-web-hello; here is the actual URL.
And if you want to get only the URL, you can add the --url option here, and you'll see only the URL; you can grab it and paste it anywhere you want.
So that's the NodePort service type.
And it exposes the deployment at a port of the node.
And here is this specific port in my example; in your case, this port will be completely different.
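Condensed, recreating the service as NodePort and fetching its URL looks like this (the node port after the colon in the k get svc output is assigned randomly, so yours will differ):

```shell
kubectl delete svc k8s-web-hello                                  # drop the old ClusterIP service
kubectl expose deployment k8s-web-hello --type=NodePort --port=3000
kubectl get svc                                                   # note the random node port after the colon
minikube service k8s-web-hello --url                              # print http://<node-ip>:<node-port>
```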
And one more option which I would like to show you in terms of creating the service is a LoadBalancer.
Let's create a LoadBalancer service, and for that, first, let's delete the existing service: k delete svc k8s-web-hello.
The service was removed; k get svc, and there is no more such service.
And let's create a service of type LoadBalancer.
k expose deployment k8s-web-hello.
Here let's add the type, LoadBalancer, in Pascal case notation again, and let's add the port option, and its value will be 3000.
The same as before.
Let's create such a service. The service was created; k get svc.
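The LoadBalancer variant differs from the NodePort one only in the --type flag:

```shell
kubectl expose deployment k8s-web-hello --type=LoadBalancer --port=3000
kubectl get svc   # EXTERNAL-IP stays <pending> on Minikube; a cloud provider would assign one
```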
Now I see that there is a service of type LoadBalancer.
Here is the cluster IP, but the external IP here is pending.
And you will see pending here if you're using Minikube, but when you deploy your applications somewhere in the cloud using one of the cloud providers like Amazon, Google Cloud, etc., here you will see the load balancer IP address assigned automatically.
But in our case, using Minikube, this behavior is similar to NodePort.
And here, we will still be able to connect to our deployment using the IP address of the node.
And if I now enter minikube service k8s-web-hello, I will see a connection which is made to the IP address of the node.
And here again is a random port which was generated; here is this random port.
Alright, let's continue.
And let's keep this service and this deployment in place.
Recall that we have a single service here which we created before; it is k8s-web-hello, and its type now is LoadBalancer.
And there is a single deployment: k get deploy.
And its name is k8s-web-hello.
And now this deployment has four different Pods available for us.
We can read the details about this deployment: k describe deployment k8s-web-hello.
And here are the details of this deployment.
By the way, at the end here I see logs related to scaling of the ReplicaSet; initially, the quantity of the replicas was one, and later we scaled it to four.
If you want, you could try to scale it to any other number.
Above, I see that this deployment has the name k8s-web-hello; the namespace is default; then labels, annotations, and selector; replicas: 4 desired, 4 updated, 4 available.
And now let's try the following.
Let's try to update the version of the image in our deployment.
Notice here that the strategy type is RollingUpdate.
What does it mean? When you release a new version of your application, of course, you want to roll out this new version in production smoothly without any interruption of service.
And Kubernetes allows that out of the box.
And this is very easy.
And this strategy type, rolling update, means that new Pods will be created with the new image while the previous Pods are still running.
So Pods will be replaced one by one.
And finally, after some time, all Pods will be running the new, updated image.
And now let's try that in action.
We will update our application a bit and build a new image with a new tag, for instance version 2.0.0.
And afterwards, we will modify the image in our deployment and see what happens and how Kubernetes rolls out this update.
So let's go ahead and do that.
By the way, I don't need this tab anymore, where I created the SSH connection to the node.
Therefore, let me exit from this connection and close this tab, actually, and I'll keep only one tab open.
So let's go back to Visual Studio code.
And here are the contents of the index.mjs file.
And we already have an image available for this version of the application, where the server answers us with such a string.
Now let's modify this string: for instance, let's add here a br tag and the text "version 2", like that.
And let's now build a new image with a new tag, push it to Docker Hub, and afterwards modify the image in our deployment.
Let's save the changes in this file and open up the embedded terminal.
And let's build the image and assign another tag to it.
Right now the image has only a single tag: latest.
So let's build a new image: docker build . with the -t tag, bstashchuk; prepend your name if you want to push this version of the image to your Docker Hub account.
So bstashchuk, and here will be the name of the image, k8s-web-hello.
And after a colon, I'll add the tag, for instance 2.0.0, like that.
Let's build such an image; recall that I modified the index.mjs file and updated this string, so it should build a new image.
Let's go ahead and build it. Building, exporting layers; the image was built, and now let's push it to Docker Hub.
I will copy this name, including the tag.
And now here I enter docker push.
And the new version of my application will be pushed using a separate tag for the same k8s-web-hello image.
So let's go ahead and push. Pushing; I see that some layers already exist, and only one layer was pushed. The image was pushed successfully; let me verify that. Go to my Docker Hub account and refresh: I still see just a single image here.
Let me click on it.
And here I should find two tags: latest and 2.0.0.
Now let's deploy this new version of this image.
And for that, we will utilize the following command.
Let's go to the terminal here.
And let's set a new image for the deployment: k set image.
We are setting a new image for a particular deployment, so next will be deployment and the name of the deployment, k8s-web-hello.
And afterwards, we need to select the container where we would like to set the new image.
Here we will write k8s-web-hello.
And after the equals sign, we will specify the new image.
In my case, it is bstashchuk/k8s-web-hello:2.0.0.
That's the new tag which we assigned before we pushed the new version of the image to Docker Hub.
After entering this command, the image will be changed, and the rollout of the update will be started.
Be ready to enter the following command right after this one: k rollout status deployment k8s-web-hello. You could prepare this command somewhere in a text editor and afterwards quickly paste it.
I'll try to enter it manually.
Let's go ahead and change the image. The image was updated; k rollout status deployment k8s-web-hello: waiting for deployment rollout to finish.
And now I see the deployment successfully rolled out.
And before that, I saw such messages as "3 out of 4 new replicas have been updated", "1 old replica is pending termination", and so on.
Finally, all previous replicas were terminated, and the new replicas were rolled out.
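The two commands of the rolling update, one after the other (the container name defaults to the image name, here k8s-web-hello; substitute your own Docker Hub username):

```shell
kubectl set image deployment k8s-web-hello k8s-web-hello=bstashchuk/k8s-web-hello:2.0.0
kubectl rollout status deployment k8s-web-hello   # blocks until the rollout finishes
```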
Let's now list Pods: k get pods.
And now I see that there are four Pods.
And notice the age of those Pods, from 30 to 40 seconds; it means that the previous Pods were fully terminated and new Pods were created.
And now, in all those four Pods, we are running the new version of our application.
And we can verify that very easily by accessing our deployment using the service. The service is still there: k get svc; its type is LoadBalancer.
And here is the cluster IP, and here is the port.
And by entering the command minikube service k8s-web-hello, I can open a connection to one of the running Pods.
So let's go ahead. The tab was actually opened, and now I see the response, which includes "version 2".
That's how rolling updates work. You can refresh the page here, and then you should see a response from another server.
For instance, this one.
And again, here I see "version 2"; it means that our new application version was successfully deployed to all Pods in the deployment.
Those are rolling updates, and that's how you can check the status of the rolling update: by entering the command k rollout status.
If you would like, you could create one more tag for your image, push it to Docker Hub, and verify how the rollout works again.
Or if you want you could roll back to previous version.
By the way, we could quickly try that together.
Let's go to this command, and here I will remove the tag; it will mean that I would like to utilize the latest tag. And let's check how the image will be modified again.
And we are actually going back to the previous version of our application.
Let's modify the image. Let's check the rollout status of our deployment: new replicas have been updated, waiting for the deployment rollout to finish.
And finally, the deployment successfully rolled out.
And again, I will see four completely new pods: get pods. Here is the age of those four pods.
Let's connect to our deployment again: minikube service k8s-web-hello.
And I again see hello from the server, without version two.
Those were rolling updates.
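The whole round trip from this part of the lesson can be sketched as a short shell script. It only prints the kubectl commands (remove the echo wrappers to run them against a real cluster), and the bstashchuk image name and 2.0.0 tag follow the lesson, so substitute your own:

```shell
#!/bin/sh
# Dry-run sketch of a rolling update and rollback. No cluster is touched:
# each command is printed instead of executed.
DEPLOY=k8s-web-hello
NEW_IMAGE=bstashchuk/k8s-web-hello:2.0.0
OLD_IMAGE=bstashchuk/k8s-web-hello        # no tag means :latest

# Roll forward to the new image, then watch the rollout.
echo "kubectl set image deployment $DEPLOY $DEPLOY=$NEW_IMAGE"
echo "kubectl rollout status deployment $DEPLOY"

# Roll back by pointing the container at the previous (latest) image.
echo "kubectl set image deployment $DEPLOY $DEPLOY=$OLD_IMAGE"
echo "kubectl rollout status deployment $DEPLOY"
```

In `kubectl set image`, the part before the equals sign is the container name inside the pod template, which here happens to match the deployment name.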
Now, let's try the following.
Now we have four pods up and running.
But what happens if I simply delete one of the ports manually? Let's try that.
Let's take, for example, this name of the pod (notice the age of those four pods), then enter k delete pod and paste the name of any pod you like.
And after this command, let's get pods again.
And I will see that a new pod is created automatically: ContainerCreating.
That's because we told Kubernetes that the desired quantity of pods in the deployment is four.
That's why it tries to do its best to keep the desired quantity of replicas in this specific deployment.
And now I will see that there are four pods again, up and running.
And the age of this new pod is 33 seconds.
All right, we tried rolling updates.
And we also tried to delete one of the pods.
And now let me demonstrate to you how to launch the Kubernetes dashboard.
If you're using minikube, it is very easy, just a single command.
But if you're running Kubernetes on your own premises, or somewhere in the cloud, it could be a bit more difficult, because you need to secure web access to the dashboard.
But here, again, it's just a single command: minikube dashboard.
Like that. Enabling dashboard, launching proxy, verifying proxy health.
And the dashboard web page will be opened here.
And as you see, there is no prompt for a user name and password here.
This dashboard was simply launched like that, because we are using minikube.
And here you could observe the details about your Kubernetes cluster.
For example, you could go to Deployments, and you will find a single deployment here. Notice, let me decrease the font size a bit, like that.
So here is metadata for this deployment.
Here are labels.
Strategy: RollingUpdate; that's what we observed.
Pod status: four available, four updated.
Here are the conditions and the new replica set; here is the ID of the replica set which was created by this deployment.
You could click on this hyperlink and read the details about specific replica set.
Notice that the replica set, the deployment and the pods all belong to the same namespace, default, but of course you could assign deployments to different custom namespaces. Here are the labels for this replica set, the pod status (four running, four desired), and here are links to all pods which belong to this particular replica set.
Also, below there is the service which was created for this replica set, for this deployment; its type is LoadBalancer.
And if you click on a specific pod, you could read the details about a particular pod: here is the name of the pod, the namespace, and when the pod was created.
Here is information about the node where this pod was created, the status is Running, and here is the IP address of the pod.
Below you could also find the details about the replica set which actually controls this particular pod.
And notice that the label for this pod is here: app: k8s-web-hello.
And the same label is found here, controlled by the replica set, and here is the label in the replica set.
Also, below you might find the details about events which happened in this particular pod.
For example, here is the step Pulling image bstashchuk/k8s-web-hello.
And here are the details about the containers which are now running inside of the pod.
There is just a single container inside of each pod.
By the way, if I go to Replica Sets, I will find actually two different replica sets, which were created for different images: for this image and for this image.
And now in this replica set there are zero pods, because we rolled back from this image to this image with the tag latest; that's why now here I see four pods, and here, zero pods.
Alright, this is the graphical user interface; you could click on other menu items and observe, for example, which nodes belong to the cluster.
Now there is a single minikube node.
And if you click on it, you could read the details about this particular node.
That was Kubernetes dashboard.
And you could utilize it in order to observe the status of the Kubernetes deployments, services, and so on.
Let me close it.
And now let's continue. Let's press Ctrl+C in order to interrupt this process here.
And now the minikube dashboard will not be available anymore.
Now, an important notice: we just used the imperative approach to creating deployments and services.
We created deployments and services using different kubectl commands.
But usually, that's not the way deployments and services are created.
In most cases, the declarative approach is utilized.
And in the declarative approach, you have to create YAML configuration files, which describe all the details of deployments and services.
And afterwards, you apply those configuration files using kubectl.
And that's actually the declarative approach to creation of different resources and objects in Kubernetes.
And that's what we will do right now.
Let's now delete both deployment and service which we created before.
And for that, you could use the alias command: kubectl delete deployment, and afterwards delete the service.
But there is also a short version for deletion of all resources in the default namespace, and it is k delete all with the option --all. Like that, all the resources will be deleted; you'll see that the pods were deleted, the service was deleted, and the deployment was deleted as well.
Let's get pods.
The pods are still being terminated.
Let's check again.
Still terminating; let's wait a bit.
And now there are no resources found in the default namespace.
All pods were terminated. Let's get services: get svc.
And now there is only a single default system service, kubernetes.
And of course, there are no deployments: get deploy, no resources found.
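The cleanup just shown can be summarized as a dry-run script (commands are only printed; drop the echo wrappers to execute them against a cluster):

```shell
#!/bin/sh
# Dry-run sketch of wiping the default namespace and verifying the result.
WIPE="kubectl delete all --all"
echo "$WIPE"                 # deletes pods, services and deployments in 'default'
echo "kubectl get pods"      # pods show Terminating first, then disappear
echo "kubectl get svc"       # only the built-in 'kubernetes' service remains
echo "kubectl get deploy"    # no resources found
```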
Now let's create a YAML configuration file for the deployment.
And for the service as well; we will create two separate configuration files.
Let's go to Visual Studio Code.
And here, let me, by the way, hide the terminal for the moment.
And let me close those old windows.
And now here, let's collapse this folder, k8s-web-hello, because this folder was dedicated to our Node.js Express web application.
And now here, in the root of the k8s folder, let's create the following file: deployment.yaml.
And also let's create a file service.yaml.
So, two files: deployment.yaml and service.yaml.
Let's first fill in the deployment.yaml file.
If you installed the Kubernetes extension here in VS Code, which I asked you to do before, you could create such configuration files very fast.
Let me show you.
So type here simply deployment, and you will see the suggestion Kubernetes Deployment. Select it, and you will see that a YAML configuration file will be created automatically for you.
And you will see that all appearances of myapp will be highlighted, and you could simply type the name of the deployment which you would like to set. Let's set the name to k8s-web-hello, same as we did before.
But now we are using Yamo configuration file.
So, k8s-web-hello.
The deployment name was set; here is the actual name for this deployment.
Here is apiVersion: apps/v1.
Here is the selector with the matchLabels key.
And by this matchLabels key, we specify which pods will be managed by this deployment.
And below there is a nested template, and this template actually describes a pod. But here you don't need to specify the kind of the template, because this nested template in a deployment is always a pod.
Therefore, kind is not needed here.
Next, for the pods which belong to this deployment, here we set labels.
And the label here is the same as the label here in selector matchLabels.
So this label and this label match.
And next, in the spec key, we specify which containers we want to create in this pod.
And there is just a single container with such a name; the image is to be filled in, and we will replace this image placeholder now with a real image.
And also below, you could specify which ports you would like to open in a specific container.
And below in the ports section, you could specify a list of the container ports which will be open on the pod's IP address.
So let's now fill in the details in this spec section.
The image will be bstashchuk/k8s-web-hello.
And the containerPort here will be 3000.
Also notice that the VS Code Kubernetes extension added a resources section here.
And in this section there is a limits key, which specifies the memory and CPU limits for each pod which will be created by this deployment.
And here we limit the amount of memory and the amount of CPU resources.
500m means half of a CPU core.
If you want, you could modify 500 to 250, for example, which is a quarter of a CPU core; but let's keep it like that.
And that's it for this specification of the deployment.
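Assembled, the deployment.yaml described above looks roughly like this (a sketch following the lesson; the bstashchuk prefix stands in for your own Docker Hub user name, and the memory limit is whatever the VS Code extension generated for you):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-web-hello
spec:
  selector:
    matchLabels:
      app: k8s-web-hello
  template:
    metadata:
      labels:
        app: k8s-web-hello    # must match spec.selector.matchLabels above
    spec:
      containers:
        - name: k8s-web-hello
          image: bstashchuk/k8s-web-hello
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"      # half of a CPU core
          ports:
            - containerPort: 3000   # where the Express server listens
```

It is applied with kubectl apply -f deployment.yaml.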
Let's save changes in this file.
And now let me actually show you how to find documentation on how to create specific YAML configuration files.
Let's go to web browser and here let's go to kubernetes.io.
And go here to documentation.
And here, go to Reference.
And here, select Kubernetes API.
And in the Kubernetes API, you could select, for example, Workload Resources.
And you will find the details of how to describe, for example, pods, or create a pod template, or create a replica set. We just created a deployment.
Therefore, let's click on deployment section here.
And you will find the details of how to create YAML configuration files.
If you want to create a deployment, apiVersion must be set to apps/v1.
And below you will find other keys which must be present in the deployment configuration file.
For example, kind must be set to Deployment.
Then there is the metadata section, spec, and status. Status, by the way, is filled by Kubernetes automatically, and it will be added after you create a deployment based on a specific configuration file.
And if I go back to the configuration file here, I will find such keys as apiVersion, kind, metadata and spec: the same keys which are described here, apiVersion, kind, metadata and spec.
And if you want to get details about the deployment spec, you could click here, and you will find that inside of the spec you have to specify selector, template, replicas, strategy, minReadySeconds and so on.
And if I have a look at the spec here, I will find that inside of spec in this example there are selector and template (selector and template here), and both are required.
If you click on template, you will read the details about the pod template spec.
Let's click on it.
And here you will find that inside of the pod template spec, you have to specify metadata and spec, which is the pod spec.
And that's what we added here in the template section: metadata and spec for the pod.
Let's click on PodSpec and read the details about the pod specification: containers is required, and you could add multiple containers in the pod specification if you want to do so.
And here in this spec we have containers, here it is. Let's click on Container here.
And in this container specification you will find name, which is required.
That's why we set the name like this. Also, you have to set an image for a specific container.
Here is the image; resources and ports are also described here on this documentation page.
So here is our ports section.
Also, you could set for instance, environment variables for a particular container.
I highly recommend you to get familiar with the documentation and with all options available; I just demonstrated to you how you could dig through the documentation and find out how to fill in different sections of the configuration file.
Alright, we are done with creation of the configuration file for deployment.
So kind here is Deployment.
And let's now apply this configuration file using the declarative approach. Having such a file in place, you are no longer required to enter the command kubectl create deployment; you just enter the command kubectl apply.
Let's do that.
Let's open up the embedded terminal here.
And now I'm still inside of the k8s-web-hello folder.
Let me go out of this folder to where this deployment.yaml file is located.
Let's list the files here.
So here is the deployment.yaml configuration file.
And here let's enter the command kubectl apply.
The -f stands for file, and here will be the name of the file, deployment.yaml.
Simple as that: the file was applied and the deployment was created.
By the way, here I don't have the alias yet, so let's quickly add it: alias k=kubectl.
So k get deployments.
And now there is the k8s-web-hello deployment.
And there is a single pod in this deployment: get pods.
Here's this pod, which is up and running.
But how could we now scale this deployment and increase the quantity of the replicas? Very easily, by modification of this specification file.
But how could we increase the quantity of the replicas?
Here at the moment, we don't see any numbers.
Let's try to find out that using documentation.
I assume that I don't know how to change that.
So let's go to Workload Resources, check Deployment and scroll down here.
So spec, status, metadata, kind; no options here.
Let's click on DeploymentSpec.
Here are selector, template, replicas: the number of desired pods.
And that's the option which you need to utilize if you want to modify the quantity of the replicas from the default number one (oh, here is the default number one) to any other number.
And you have to add a replicas key in the deployment spec.
So let's go to deployment spec.
So here is spec.
And here, let's add a new key, replicas.
And its value will be, let's say, the number five.
If I go back, I could read that its value is an integer (int32).
So you don't need to write "five" here as a word.
Write it as a number. Let's save changes.
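With that change, the top of the deployment spec looks like this (a sketch; only the replicas line is new):

```yaml
spec:
  replicas: 5          # desired number of pods; Kubernetes defaults to 1 when omitted
  selector:
    matchLabels:
      app: k8s-web-hello
```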
And let's apply this deployment.yaml configuration file again, using the same command: kubectl apply -f deployment.yaml.
Let's go ahead. Configured. Get pods.
And now I see that four new containers are being created.
And now my deployment has five different pods; four of them are already running, and now all five are running.
That's how we were able to very easily modify our deployment, simply by modification of the deployment.yaml configuration file.
But of course, now there are no services: k get svc. Let's create a new service using a similar approach.
Let's hide the terminal.
And let's now go to the service.yaml configuration file.
And here, let's also speed up the process of creating the service configuration file: type service, and the Kubernetes extension will offer us this option.
And this spec will be created automatically.
Kind is Service; apiVersion is v1, and here it differs from the apiVersion for the deployment. Then kind, metadata, name, which sets the name for this service.
Let's enter the name k8s-web-hello, and also here let's modify the selector: the selector will be app: k8s-web-hello as well.
By the way, if you hover the mouse over any of the keys in this configuration file, you could find hyperlinks to documentation for each particular key; for example, the key name URL is here.
Or if you hover the mouse over kind, you could also find hyperlinks here for apiVersion, for kind, for metadata, for instance.
All right, what is left here in this service is modification of the ports section.
Here I see that port and targetPort were auto-populated.
Before, we actually exposed the same port, 3000, which is the port where our Express web server is running.
And now we could try a different port: expose port 3000, which is the targetPort, via port, let's say, 3030, like that.
And it means that target port 3000 will be exposed as external port 3030.
Also notice that in this specification there is no type for this service.
And let's say that we want to create a LoadBalancer service.
For that, let's go to documentation.
Let's find documentation for service here.
Kubernetes API, Service Resources, Service.
And here, let's click on the service specification.
Here are selector and ports.
And also below you could find the type of the service: either ClusterIP, which is the default, or ExternalName, or NodePort, or LoadBalancer.
Let's set the type to LoadBalancer.
Let's go back to our configuration file.
And here in spec, let's set type to LoadBalancer, like that.
That's it for this service specification.
Let's save changes in this file.
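The finished service.yaml is roughly (a sketch following the lesson):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-web-hello
spec:
  type: LoadBalancer     # default is ClusterIP when type is omitted
  selector:
    app: k8s-web-hello   # must match the pod labels from the deployment
  ports:
    - port: 3030         # external service port
      targetPort: 3000   # containerPort where Express listens
```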
And let's apply this using the same command as before: kubectl
apply -f (f stands for file), and here type service.yaml.
I'm still inside of the k8s folder.
So let's apply the service.yaml file. Applied. k get svc.
And now I see that a new service with type LoadBalancer was created.
And now, similarly as before, I could open a connection to our deployment using the command minikube service k8s-web-hello.
Let's go ahead, and the page will be opened.
Refresh, and I get a response from another pod.
It means that now everything works as before; there is now a service and a deployment.
And we created everything using the declarative approach, using configuration files.
Here is the file for the service, and here's the file for the deployment.
We could also very easily delete deployment and service which we just created using just one command.
Let's do that: go to the terminal and here enter k delete -f; here will be the name of the first file, deployment.yaml.
And here, let's add the -f option once again and type service.yaml.
Let's go ahead and delete both the deployment and the service.
Let's check: k get svc, no service; get deployment, no deployment.
Now let's do following.
Let's try to create two deployments.
And let's connect them with each other.
Because it's a very common task to connect different applications together, for example connect a front end to a back end, or connect a back end to a database service, and so on.
Therefore, now we will do the following. Let me draw a picture.
So, similarly as before, we will create a web deployment and deploy there our Node.js Express web application.
And here let me type web, like that.
And also we will create one more deployment, and we will name it nginx.
This nginx deployment will run the basic nginx image with no adjustments, and this nginx deployment will have a corresponding service with ClusterIP.
So let me draw a line here, and here will be ClusterIP; it starts with a capital C, ClusterIP, like that.
And those two deployments, of course, are located in the Kubernetes cluster.
And this web deployment will also be exposed, but its service will have type LoadBalancer; it means that we will be able to connect to it from outside using the load balancer's IP address. Here is the load balancer.
Now, inside of the web application, we will create two endpoints: the first will be the root endpoint, same as before, which will return simply hello from the web server.
And one more endpoint here will be, let's say, /nginx.
And when a connection is made to /nginx, we will connect to the nginx service. You will see that we do it using the nginx service name, like that.
So from one deployment, we will connect to another deployment and get a response from one of the pods which run the nginx application.
And afterwards, we will return the result back to the client with the contents which we received from the nginx pod.
So that's the plan.
And in such a way, we will actually connect two different deployments: the web deployment and the nginx deployment.
Let's get started with that.
We're inside of the k8s folder.
Let me close those files for the moment.
Let's create a copy of this folder, k8s-web-hello, which contains all files related to our web Express application.
Let's copy this folder and paste it here.
And let's rename it: simply click on this name, press Enter and enter a new name, k8s-web-to-nginx, like that.
Next, let's keep the Dockerfile unmodified; here are the contents of this Dockerfile in this folder, so no modifications here, but we will slightly modify the index.mjs file.
And I will grab a modified file from here.
Let me copy it and paste it here.
So here are the contents of the index.mjs file for this second application.
There is still the root URL here, forward slash; we still create the Express web server.
But we add here one more endpoint, /nginx.
And its route handler is now an async function, because now we will connect to another server, simulating a connection between different deployments.
And notice the URL here: it is simply http://nginx.
And we will create a service called nginx
for the second nginx deployment.
And from this deployment, we will be able to connect to the other deployment using just the name of the corresponding service.
Simple as that.
Of course, it is possible to connect using either the cluster IP or the name, but the cluster IP is dynamic while the name of the service is static.
So we will connect to one of the nginx servers by using the nginx service name.
And here we utilize fetch, which we import from the node-fetch package; by the way, in a moment we will install it in this application as well.
So we are waiting for a response from the nginx server.
And afterwards, we take the body from the response by calling the text method and simply return this body to the client. Essentially, what this endpoint will do is simply proxy the request to the nginx server and return the response from nginx to the client.
Simple as that.
And here we import fetch from node-fetch; therefore, we have to install such an external dependency.
And for that, let's open up another terminal, and here make sure that you cd into this folder, k8s-web-to-nginx.
Web-to-nginx.
And here let's enter npm install node-fetch. Installing the package; you will see that the package.json file will be modified.
So now here are two dependencies, express and node-fetch. The dependency was installed.
And notice again that the node_modules folder has appeared here, but we don't need it here.
Let's simply remove it, because inside of the image which we will build right now, we have an instruction to run npm install.
Here it is.
That's why all necessary dependencies will be installed inside of the Docker image.
All right, now we are good to go.
By the way, this application is also included in the final project files, which you could clone from my GitHub repository.
So, k8s-web-to-nginx.
And here is the index.mjs file.
And now let's again build the Docker image, but it will be a new Docker image from this application.
And now let's build it using the same command as before Docker build.
So let's list the files and folders here; there is the Dockerfile.
The Dockerfile was not modified; it is the same as for the previous application.
And now let's build the Docker image: docker build .
And let's add a tag: bstashchuk/, and let's name it k8s-web-to-nginx.
The same name as for this folder.
And let's keep the tag as latest.
You don't have to add the tag here explicitly.
In this case, it will be added automatically.
So let's build such an image. The image is being built; notice the npm install command. The image was built.
And now let's push it to Docker Hub.
Let me grab this name.
And here will be the name of the image which I want to push to Docker Hub. Pushing; some of the layers will be reused from our previous image.
And finally, the image was pushed, and now we are able to utilize it in a Kubernetes deployment.
Great, let's hide the terminal, and now let's do the following.
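The build-and-push steps can be sketched the same way (bstashchuk is the instructor's Docker Hub user name, so replace it with yours; the script only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch of building and pushing the second custom image.
IMAGE=bstashchuk/k8s-web-to-nginx   # :latest is implied when no tag is given
echo "docker build -t $IMAGE ."
echo "docker push $IMAGE"
```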
Now I would like to demonstrate to you how you could combine the deployment.yaml and service.yaml configuration files into just one file.
Let's close those files.
And let's create a new file, and let's name it k8s-web-to-nginx.yaml.
Now in this configuration file, we will combine instructions for creation of the deployment and service.
Let's first go to service.yaml, take the contents of this file and paste them here into k8s-web-to-nginx.yaml.
Afterwards, please add three dashes, exactly three, and go to the next line.
Now let's go to the YAML file for the deployment, take all its contents and paste them here under those three dashes.
And now, in a single file, we have the specification both for the service and the deployment.
Now let's modify the deployment, because we would like to utilize our other image, and here we will utilize our other name.
By the way, we could select this name here, then right-click and select Change All Occurrences.
And let's name the service and deployment k8s-web-to-nginx.
Let's keep the type for this service as LoadBalancer. The port here could be modified to another one; for instance, let's set it to 3333, like that.
And here in the deployment specification, let's decrease the quantity of the replicas to, let's say, three.
And also let's modify specification for containers.
And the image which we would like to use right now will be k8s-web-to-nginx; it was replaced when I modified the k8s-web-hello name.
So here is the correct image which we just pushed to Docker Hub, and this is the image we would like to utilize for this particular deployment.
And the containerPort will be unchanged, 3000. Alright, that's it for the specification of the deployment and service for the new application.
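Assembled, k8s-web-to-nginx.yaml looks roughly like this (a sketch; the 3333 service port is my reading of the audio, bstashchuk stands in for your own Docker Hub user, and the resources block generated by the extension is omitted for brevity):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-web-to-nginx
spec:
  type: LoadBalancer
  selector:
    app: k8s-web-to-nginx
  ports:
    - port: 3333           # external service port
      targetPort: 3000     # where the Express server listens
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-web-to-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-web-to-nginx
  template:
    metadata:
      labels:
        app: k8s-web-to-nginx
    spec:
      containers:
        - name: k8s-web-to-nginx
          image: bstashchuk/k8s-web-to-nginx   # the image pushed above
          ports:
            - containerPort: 3000
```

The three dashes separate two Kubernetes objects inside one file, so a single kubectl apply -f creates both.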
And now let's create the specification for the nginx deployment and service, because we will now deploy two different applications.
One which is based on our custom image, and a second which is based on the official Docker image called nginx.
Let's simply copy this file and paste here.
And let's name this file nginx.yaml.
And now, in this nginx.yaml file,
let's modify the names.
Once again, select this and Change All Occurrences.
And let's name it simply nginx.
Please type it exactly as I just did.
Exactly nginx, because we will utilize this name for the service inside of our deployment, inside of our containers which are running the Express Node.js web server.
We will connect to this deployment by using the nginx service name.
That's why this name is very important here.
Now, we decided to utilize the ClusterIP service type for this second deployment.
That's why let's remove type LoadBalancer from here.
By default, ClusterIP will be assigned here.
And let's modify the ports section here: let's remove targetPort, keep only port, and set it to 80.
It is the default port where nginx runs its web server.
Also, in the deployment section, we have to modify the image.
And now it will be simply nginx, without a user name, because we would like to utilize the default nginx image.
Let's keep the resource limits here in place, and let's modify the containerPort here to 80, like that.
And the quantity of the replicas will be, for instance, five instead of three.
Let's save changes here, and let me summarize what we did here: we will create a new service called nginx.
And name here is important for our setup.
Here is the ports section with a single key, port.
And it means that service port 80 will be mapped to container port 80.
It is the port where the nginx web server runs by default.
And the service type here is ClusterIP; it means that such a service will not be available outside of the Kubernetes cluster.
And here in this deployment, we utilize the official Docker image called nginx.
Here is the specification for the pod containers, and containerPort is set to 80.
So we created two YAML configuration files: nginx.yaml and k8s-web-to-nginx.yaml, which utilizes the custom-built image.
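The resulting nginx.yaml is roughly (a sketch following the lesson; the service name nginx is exactly the host name the web application resolves):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx            # this exact name is what http://nginx resolves to
spec:
  selector:
    app: nginx
  ports:
    - port: 80           # targetPort defaults to the same value when omitted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx   # official image from Docker Hub, no user prefix
          ports:
            - containerPort: 80
```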
Here's this image name.
And now let's apply both configuration files, this one and this one.
Let's go to the terminal and verify that now we don't have any deployments: get deploy.
No deployments at the moment, and no services.
And now let's deploy everything using just a single command.
Now, I'm still inside of the k8s-web-to-nginx folder.
Let's go one level up, where our YAML configuration files are located.
Here they are, this one and this one.
And let's apply them.
k apply -f, and here will be the first file.
Here is its name.
And we could also apply one more file: -f nginx.yaml.
Let's go ahead and create both deployments and both services.
Service created, deployment created, service created and deployment created.
Let's check the deployments: k get deploy.
There are now two deployments, this one and this one.
They are already ready. k get svc.
Let's read the information about the services.
There are two services which were created: this one, which has type LoadBalancer, and nginx, which has type ClusterIP. Recap that we will connect from this web-to-nginx deployment to the nginx deployment using this name of the service.
Or we could also utilize cluster IP in order to connect from one deployment to another.
But cluster IP is dynamic.
This service name is static.
So let's also check the pods: k get pods.
Some of the pods are running and one is still pending.
And now we are able to test our setup and try to connect from web application from web deployment to nginx deployment.
Let's open a connection to our web deployment, because we used the LoadBalancer type for this service here.
And we could utilize this port here and the IP address of the Kubernetes node.
Recap that in order to quickly get a URL for a specific deployment using minikube, you could enter the command minikube service, and here will be the name of the service, k8s-web-to-nginx.
A web browser page was opened, and we got a response from one of the pods which belongs to the k8s-web-to-nginx deployment.
And here is the name of the corresponding replica set.
Alright, we just triggered a connection to the root URL.
Let's go back quickly to our application.
Let me hide this terminal here.
And let's go to this new application and open index.mjs.
And recap that here we have two endpoints: the root endpoint, and the /nginx endpoint, which will essentially send a request to the nginx URL, this one, and return back to the client the proxied response from the other server.
This server is external relative to this server where we are running this web application.
We just received this string from the server; let's now try to connect to the /nginx endpoint.
Let's go back to our browser and add /nginx here.
And what did I get? I got Welcome to nginx!
It means that the response from the nginx server was proxied successfully through the server which belongs to our web deployment.
And we were able, in such a way, to connect from one deployment to another deployment by the name of the other service.
So nginx here is the name of the other service, as we specified here.
And such a service now has a cluster IP: get svc.
So here is the cluster IP for the nginx service.
And we were able to connect to another deployment by the name of the service.
That's how you could very easily connect different deployments together.
And such resolution of the service name to an IP address is performed by an internal Kubernetes service, which is called DNS.
And we could actually verify that quickly.
Let's go to the terminal for example here.
And recap that we could still get the list of the pods here: k get pods.
And now we could take any of the pods and execute any commands inside of the running container inside of the pod,
using the kubectl exec command.
This command is very similar to the docker exec command.
Let's now take the name of any pod which belongs to the k8s-web-to-nginx deployment.
Let me copy this name; in your case the names will be completely different.
And let me clear our terminal here and enter k exec, then the name of the pod.
Next will be two dashes.
And afterwards, let's type nslookup nginx.
Like that; let's press Enter.
And I got the following response.
What does it mean? Using this command, we tried to resolve the nginx name from inside of the container which belongs to this pod, and such resolution was successful.
And this is the IP address which I got from the DNS server when I made the request nslookup nginx.
But where does this IP come from? Let's write k get svc.
And here I get such a view.
And notice that this IP is the same as this one.
And that's what I told you.
Now Kubernetes is able to resolve the name of the service to the corresponding cluster IP.
And that's what we see here.
And we performed this resolution inside of a pod which belongs to our deployment, resolving the name of the other service.
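To recap the DNS check above as a command sequence (the pod name is a placeholder — copy a real one from your own pod list):

```shell
# List pods and copy the name of one from the k8s-web-to-nginx deployment
kubectl get pods

# Resolve the service name from inside that pod's container
# (replace the pod name with your own)
kubectl exec k8s-web-to-nginx-<hash> -- nslookup nginx

# Inside the cluster the short name "nginx" expands to the full DNS name
# nginx.default.svc.cluster.local; the resolved IP should match
# the ClusterIP printed by:
kubectl get svc nginx
```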
And of course, I could similarly execute a command in one of the pods; for example, I could type here k exec, the pod name, two dashes, and then wget http://nginx.
And this will actually connect to the nginx server, then retrieve its root web page.
Let's try that.
And I successfully received a response from one of the nginx pods.
And here is the nginx web page.
Alright, this is how different deployments can be very easily connected with each other.
Let's go back to Visual Studio Code and let's delete everything we have so far.
If you want to dive deeper and explore details about those deployments, you could of course do that now; you could run, for instance, minikube dashboard, dig deeper, go to different pods, explore what happens inside of them, and so on.
You have all necessary knowledge for that.
Alright, now let's delete both deployments and both services.
k delete -f nginx.yaml and -f k8s-web-to-nginx.yaml.
Let's go ahead; both services and deployments were deleted. k get deploy.
No resources found. k get svc: we see just the single default service.
I think you would agree with me that deployment using YAML configuration files is much more pleasant and faster than deployment using separate kubectl commands.
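For reference, here is a minimal sketch of what a manifest like nginx.yaml could look like — a Deployment plus a ClusterIP Service named nginx. The replica count and image tag below are assumptions for illustration, not necessarily what this course used:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: nginx          # this name is what DNS resolves inside the cluster
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Because the Service is named nginx, other deployments can reach it at http://nginx, which is exactly what the web application's /nginx endpoint relied on.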
We are done with the main practical part of this course.
And one step is left.
I would like to demonstrate to you how to change the container runtime, which is currently Docker in our Kubernetes cluster, to another container runtime, for instance CRI-O or containerd; and in order to modify the container runtime we need to delete the current minikube setup and create a new one.
Let's go to the terminal here.
And let's enter the minikube status command.
It is running: kubelet is running, apiserver is running.
Let's now stop minikube: minikube stop. Stopping node minikube... one node was stopped. And now let's delete the current setup: minikube delete. Deleting minikube in VirtualBox, in my case. Removed all traces of the minikube cluster.
Now let's create the cluster all over again.
minikube start; I will again use the driver option, and its value will be virtualbox in my case.
If you're a Windows user, you should type hyperv here, same as for the previous minikube start.
But please note that the docker option did not work for me; at least, I wasn't able to run CRI-O containers inside of the Docker container.
So for some reason it didn't work.
Therefore I utilized VirtualBox, and I don't recommend you to use the docker driver here.
So the driver is virtualbox in my case.
And next I will add one more option.
It is --container-runtime, and its value will be cri-o; one more option is containerd.
You could try both if you want to do so.
So I will enter cri-o here, like that.
And let's create the minikube cluster from scratch once again.
But now the container runtime is set to CRI-O; let's go ahead. Starting control plane node minikube in cluster minikube... again creating the VirtualBox VM.
It will take a while; now I see the step Preparing Kubernetes on CRI-O... and finally it was done.
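The recreation sequence from the last few steps can be summarized as a short sketch (the driver choice depends on your OS; the flags shown are real minikube options):

```shell
# Tear down the existing minikube cluster
minikube stop
minikube delete

# Recreate it with CRI-O as the container runtime.
# --driver=virtualbox is what worked in this course;
# Windows users may try --driver=hyperv instead.
minikube start --driver=virtualbox --container-runtime=cri-o

# containerd is the other runtime you could try:
#   minikube start --driver=virtualbox --container-runtime=containerd

# Verify everything is up
minikube status
```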
Let's now check the status: minikube status.
Type: Control Plane; host is running, everything is running, same as before.
But now the container runtime inside of the node is CRI-O.
Let's check that.
Let's SSH to the minikube IP address: let's first enter minikube ip, then ssh docker@192.168.59.101 in my case.
Answer yes, and the password is tcuser.
Now I'm inside of the minikube node.
And now let's enter docker ps.
And what do I see? I see "cannot connect to the Docker daemon".
And it is an indication that Docker is not running inside of this node, because we are using the CRI-O container runtime instead.
But how do we check whether there are CRI-O containers?
Very easily, using the following command: sudo crictl ps. And here are a bunch of different containers which are running, like kube-proxy, coredns, storage-provisioner, and so on.
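The runtime check above, sketched as commands run on the node (crictl is the CRI-compatible counterpart of docker ps):

```shell
# Enter the minikube node (equivalent to the manual SSH above)
minikube ssh

# Docker is no longer the runtime, so this fails:
docker ps

# List running containers through the CRI interface instead:
sudo crictl ps

# Images can be listed the same way:
sudo crictl images
```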
Now let's try again to deploy both applications web application and nginx application and create both services and see how it goes.
Let's go back to Visual Studio Code.
And here, let's perform the same task as before.
Simply apply the configuration: k apply -f, and apply both YAML configuration files.
Well, let's go ahead; created deployments and created services. k get deploy.
Not ready yet, but there are two deployments.
Once again... again, not ready; let's get information about services.
There are two services, same as before. k get pods.
Some of the pods are in the ContainerCreating state, one is Pending; let's check again.
Some are already running.
k get deploy: the k8s-web-to-nginx deployment is ready, nginx is not yet ready.
Let's check again.
Now nginx is partially ready.
And now all those containers were created by the CRI-O container runtime.
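As a side note, instead of re-running k get deploy until everything is ready, kubectl can block until a rollout completes (the deployment names below are the ones used in this course):

```shell
# Wait until each deployment finishes rolling out
kubectl rollout status deployment/k8s-web-to-nginx
kubectl rollout status deployment/nginx

# Or watch pods change state live (Ctrl+C to stop)
kubectl get pods --watch
```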
Let's enter minikube service and the name of the service in order to test whether everything works as before.
So here is the response from one of the pods.
And let's add /nginx here.
And same as before, it works.
Now we were able to get the proxied response from the nginx deployment.
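The same browser check can be done from the terminal. The host and port below are placeholders — the first command prints the real URL (the service name is the one from this course):

```shell
# Print the externally reachable URL of the NodePort service
minikube service k8s-web-to-nginx --url

# Then hit both endpoints with curl, substituting the printed URL:
curl http://<minikube-ip>:<node-port>/
curl http://<minikube-ip>:<node-port>/nginx
```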
We just modified the container runtime.
And now it is set to CRI-O; let's go back to the SSH connection where we connected to the node.
And here let's enter again sudo crictl ps.
And let's, for example, grep by the name k8s-web-to-nginx.
And I see now three different containers which belong to this deployment.
And now they are running using CRI-O.
Alright, let's now disconnect from this server.
Alright guys, that's it for this Kubernetes for beginners course.
You could of course keep your minikube cluster running.
You could create other deployments, you could build other Docker images.
In other words, play with Kubernetes as you want and get familiar with all Kubernetes tools which we discussed in this course.
I hope you enjoyed this course and I hope you enjoyed spending time with me.
If so, you can find me on social media; you'll find all the links here on the screen, and I will be more than happy to meet you in other videos. I wish you all the best.
Bye bye.