Kubernetes: The components

In previous articles we have seen some details of how the Kubernetes architecture is built.

Today I will describe the working mechanisms of the Kubernetes engine, naming each component; to stay faithful to the car-engine comparison, we will talk about the camshafts, valves, bearings, … that belong to the Cloud Native world.

Note 1: The installation of k8s in the datacenter, in the cloud, or in a lab will not be discussed here; comprehensive tutorials are already available online.

To familiarize yourself with k8s, I recommend using Minikube (on Linux) or Docker Desktop (on Windows & Mac).
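
For example, once Minikube is installed, a single command brings up a test cluster (a minimal sketch; the driver Minikube picks depends on your environment):

# Start a single-node test cluster, then verify the node is ready
minikube start
kubectl get nodes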

Let’s begin!

Kubernetes Master: it is the main node of the cluster, and it runs three processes that are vital for the existence of the cluster:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

On the master node there is also the etcd database, which stores all the configurations created in the cluster.

The nodes that take care of running the applications, and therefore the services, are called worker nodes. The processes present on a worker node are:

  • kubelet
  • kube-proxy

kubectl: it is the official Kubernetes client (CLI), through which you can manage the cluster (via the kube-apiserver) using the API.

Some simple examples of kubectl commands are:

  • kubectl version (shows the version of k8s installed)
  • kubectl get nodes (lists the nodes in the cluster)
  • kubectl describe nodes nodes-1 (shows the health status of the node, the platform on which k8s is running (Google, AWS, …) and the allocated resources (CPU, RAM))

kube-proxy: it is responsible for managing networking, from routing to load-balancing rules.

Note 2: k8s will try to use all the libraries available at the operating-system level.

Container Runtime: it is the foundation on which the k8s technology rests.

Kubernetes supports several runtimes, among which we can mention containerd, CRI-O, and rktlet.

Note 3: The Docker runtime has been deprecated in favor of runtimes that use the CRI interface; Docker images will nevertheless continue to work in the cluster.
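
To check which runtime each node of a cluster is actually using, a quick read-only command is the following; the CONTAINER-RUNTIME column of the output reports, for example, containerd or cri-o:

kubectl get nodes -o wide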

The basic Kubernetes objects are listed below (a minimal manifest for the first of them is sketched right after the list):

  • Pod
  • Services
  • Volumes
  • Namespace
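
As a minimal sketch of the first object in the list, the hypothetical manifest below (all names are illustrative) defines a Pod running a single nginx container and feeds it to the cluster through kubectl:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod        # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.21     # any public image works here
EOF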

The controllers provide additional functionality; they are listed below (see the sketch after this list):

  • ReplicaSet
  • Deployment
  • StatefulSet
  • DaemonSet
  • Job
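
Among these controllers, the Deployment is the one you will meet most often. A minimal sketch (names and image are illustrative) could look like this:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired number of identical Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
EOF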

Among the Deployments, it is worth mentioning kube-dns, which provides name-resolution services inside the cluster. Since Kubernetes 1.13, CoreDNS has replaced kube-dns as the default DNS server.

Add-ons: they are used to configure further cluster functions and are placed inside the kube-system namespace (examples are kube-proxy, kube-dns, and the Kubernetes Dashboard).

Add-ons are categorized according to their use:

  • Network-policy add-ons (for example, the NSX-T add-on takes care of the communication between the k8s environment and VMware)
  • Infrastructure add-ons (for example KubeVirt, which allows k8s to work with virtual machines)
  • Visualization and control add-ons (for example the Dashboard, a web interface for k8s)

To be put into service, add-ons use the DaemonSet and Deployment controllers.
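
To see the add-ons running in your own cluster, you can list the Pods of the kube-system namespace; on most clusters this shows, among others, coredns and kube-proxy:

kubectl get pods -n kube-system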

The image in figure 1 summarizes what has just been explained.

Figure 1

Take care and see you soon.

Kubernetes: Know the details

A good way to describe cloud-native environments is to refer to the image of your car.

The container is the engine, k8s is the electronic control unit that manages the proper functioning of the vehicle, and the drivers, by indicating the route and the destination, select the type of service to be provided.

Today’s article will reveal some architectural details to understand how “the car” manages to reach its destination in an efficient way.

Containers are of two types:

The first is called System Container. It is the bodywork of the car (I mean everything from the plates to the seats, steering wheel, gear lever, and accessories).

Often, for simplicity of creation, it is a Virtual Machine (VM) with a Linux operating system (it can also be Windows).

The most common services present in the VM are ssh, cron, and syslog; the file system is of type ext3, ext4, etc.

The second type is called Application Container, and it is the place where the image carries out its activities.

Note 1: The image is not a single large file. It is usually made of multiple files which, through an internal cross-pointing system, allow the application to operate correctly.

The Application Container (from now on, simply container) has an operating mode based on a rigid logic, where all the levels (layers) communicate with each other and are interdependent.

Figure 1

This approach is very useful as it is able to manage the changes that may occur over time in an effective and hierarchical way.

Let’s take an example: when a service configuration change occurs and Layer C is updated, Layers A and B are not affected, which means they do NOT have to be modified in turn.

Since Developers like to refine their own images (program files) rather than dependencies, it makes sense to set the service logic in the mode indicated in figure 2 where the dependencies are not affected by a new image.

Figure 2
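
You can observe this layering on any real image: docker history prints one line per layer, together with the command that created it (nginx here is just an example):

docker pull nginx
docker history nginx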

Note 2: The file system on which the images are placed (in the car-engine example we are talking about pistons, connecting rods, shafts, …) is mainly of three types (a quick way to check which driver is in use follows the list):

  • overlay
  • overlay2
  • AUFS
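
To check which storage driver your Docker installation is using, a safe read-only query is:

docker info --format '{{.Driver}}'     # prints, for example, overlay2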

Note 3: A good piece of security advice is not to build the architecture so that passwords are contained in the images ("baked in").
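
A common alternative, sketched here with purely illustrative names, is to inject the secret only when the container starts instead of baking it into the image:

# myapp:1.0 and MY_SECRET are hypothetical; the point is that the
# password lives in the environment, not in the image layers
docker run -d -e DB_PASSWORD="$MY_SECRET" myapp:1.0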

One of the splendid innovations introduced in the container world is the management of images:

In a classic high-reliability environment, the application is installed on every single node of the cluster.

In the container world, the application is downloaded and deployed only when the workload requires more resources, on a new cluster node with a new image.

For this reason, the images are saved in “virtual” warehouses, which can be local or distributed on the internet. They are called registry servers.

The most famous are Docker Hub, Google Container Registry, Amazon Elastic Container Registry, Azure Container Registry.

We conclude this article by talking about the management of resources associated with a service.

The container platform uses two kernel-level features, called cgroups and namespaces, to allocate resources.

The purpose of cgroups is to assign the correct resources (CPU & RAM).

Namespaces have the purpose of grouping the different processes and making sure that they are isolated from each other (multitenancy).

Namespaces can affect all the components of the service, as indicated in the list below.

  • Cgroup
  • PID
  • Users
  • Mount
  • Network
  • IPC (Interprocess communication)
  • UTS (allows a single system to appear with different host and domain names and with different processes, useful in case of migration)

An example of limiting the resources of an application is shown in Figure 3, where the thegable image, downloaded from the registry server grcgp, has a limit on the allocated RAM and CPU resources.

Figure 3
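
As a hedged approximation of the command behind Figure 3 (the exact flags and values are illustrative, not taken from the image), resource limits can be set at startup; both limits are enforced at kernel level through cgroups:

docker run -d --name thegable --memory=512m --cpus=1 grcgp/thegable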

See you soon.

Cloud Native Kubernetes: Flow and Job Opportunities

This new article aims to indicate the new job opportunities created by a cloud-native environment.

Image 1 shows the four main levels required by the architecture to function correctly (the rectangles on the left).

On the right side (the circles) are represented the roles of the operators with respect to each level.

Picture 1

From the bottom up:

1- Storage and Network Operators (SNOs) are responsible for managing the hardware architecture.

The number of activities of this role may decrease if the architecture is deployed in a public cloud or IaaS (Infrastructure as a Service).

2- The Operating System Operator (OSO) works at the level of the operating system where the k8s service is running.

The OSO needs expertise in Linux and Windows. Skills in virtualization architectures such as VMware, RedHat, Nutanix, etc. are often required.

If the architecture is leased from a public cloud, or from an IaaS in general, the skills must cover that architecture as well.

3- The orchestrator operator (OO) works with the core of the cloud-native administration environment. This world needs a lot of new skills.

Automation is the child of orchestration.

The main concept is that the OO should have sufficient skills to be able to follow all the processes of “Continuous Integration” and “Continuous Delivery” (often called CI/CD).

Image 2 gives an idea of this flow.

The central arrows show the flow to allow the delivery of a service.

For every single arrow, there are new tools to learn in order to manage the entire release of the service.

Just a few examples: to test the environment you can work with Cucumber or Cypress.io; to build and deliver you can use Jenkins… and so on.

Image 2

Note 1: There are so many platforms available that choosing the right one can be very challenging.

4- The development operator is the role of the people who write the lines of code. They often use work-management software such as Jira Core and Trello.

Note 2: In my personal opinion, the vendor who creates a software layer capable of centrally managing all these 6 core activities will have a competitive advantage over their competitors.

The big vendors are already in the game: Red Hat has been working on it from the beginning with its platform (OpenShift), VMware has released Tanzu, Nutanix plays with Karbon, and Microsoft will play its role with the new version of Windows Server 2022.

The only good suggestion I can give you is to study this new and fantastic world.

See you soon and take care of yourself

Kubernetes: Why adopt it?

Kubernetes (k8s) is an open-source platform introduced by Google: it started as a simple container-orchestration tool but has become the de facto cloud-native platform.

Why is “k8s” so widely used?

Because it is able to respond effectively to the requests of service users, adapt to the creativity of cloud architects, and be stable enough to meet the demands of operations.

These advantages can be summarized as follows:

  • Reliable
  • Scalable
  • Self-healing
  • Fast
  • Efficient
  • Secure
  • Agile
  • Portable

In this article we will develop the eight points just listed:

1- Reliability means an architecture capable of functioning even if a part of it is no longer available. K8s was conceived with a natively clustered philosophy.

2- Scalability is required to handle any peak workload. In other words, it is capable of responding to requests for new resources on demand.

The architectural model on which it is based is decoupling. Every single component has its own characteristics and can be easily added to the k8s environment.

Through the configuration files, the various objects are authorized to communicate with each other.

The key k8s components are the nodes; the k8s service components are load balancers, namespaces, and so on (see the next items).
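
As a quick sketch of scalability in practice (assuming a Deployment named web already exists; the name is illustrative), scaling out manually or automatically is a one-line operation:

kubectl scale deployment web --replicas=5
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80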

3- Self-healing: k8s acts to ensure that the current state automatically matches the desired state of the architecture.
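
A quick way to watch self-healing at work (again assuming a Deployment whose Pods carry the illustrative label app=web):

kubectl delete pod -l app=web     # kill the Pods behind the label
kubectl get pods -l app=web       # the ReplicaSet immediately recreates them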

4- Fast: k8s is able to deploy components immediately. In this way, it is possible to respond to an overload-management request and/or to the need to quickly implement new services.

5- Efficiency is the ratio between the useful work and the total amount of energy used to obtain the result. Thanks to its architecture, k8s has the best such ratio, because essentiality is in its DNA.

6- Security goes hand in hand with k8s.

a- The images that run in the containers are by definition immutable. This approach has a great advantage because:

– No change is implemented at the system (container) level.

– Nothing can change the core service unless the entire image is deleted and redeployed.

Let’s compare this approach with a standard environment, where there is a Linux VM.

If we wanted to install, modify, or update an application on the latter, we would have to act via apt-get on the necessary packages.

By doing so, we would change the environment, actually opening a potential security breach.

In k8s the image is not modified: it is deleted and recreated.
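
For example (deployment and image names are illustrative), updating a service means pointing the Deployment at a new image; k8s then replaces the old containers with fresh ones created from the new, immutable image:

kubectl set image deployment/web web=nginx:1.22
kubectl rollout status deployment/web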

b- Another big advantage is that configuration changes are managed through declarative files (configuration files). These files contain a description of the final state of the system; the result is that they can show the effect of a configuration before it is applied.
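
kubectl even lets you preview what a declarative file would change before applying it (deployment.yaml is a placeholder for your own manifest):

kubectl diff -f deployment.yaml      # difference between desired and current state
kubectl apply -f deployment.yaml     # only then make it real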

7- Agility means greater ease and efficiency in creating container images compared to VM images. The developer can write code without worrying about compatibility issues.

8- Portability: k8s sets the standard for the software-development life cycle, since the same images and configuration files can run on any platform that supports it.

Note 1: The first version of Kubernetes dates back to 2014 and was created to respond to the need to implement a solid cluster solution for container environments.

Take care and see you soon

Modern Applications – Episode 4: Docker Compose – YAML

The topic of this article is understanding how to automate the delivery of a micro-service.

In the previous one, I showed the flow process of a service. This flow requires typing a lot of commands to launch every single container.

Is there a way to automate the entire process, making it easier?

Yes: Docker Compose is a tool for defining and running a multi-container environment.

Docker Compose works with a description file that includes all the configurations. The file uses the YAML format (a human-readable data-serialization language).

After writing the YAML file (in this example it is named mypersonalfile.yaml), the syntax of the command is:

docker-compose -f mypersonalfile.yaml up

Let’s see an easy example using the article I wrote in the last episode as a source.

I had to type all these commands to implement the service:

a. Mongo DB commands

b. Mongo Express commands

In their place, it’s possible to use the following YAML file.

mypersonalfile.yaml
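
The original file was published as an image; what follows is a hedged reconstruction based on the standard variables of the official mongo and mongo-express images (credentials are purely illustrative):

# Write the compose file, then bring the whole environment up
cat > mypersonalfile.yaml <<'EOF'
version: "3"
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    depends_on:
      - mongodb
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
EOF
docker-compose -f mypersonalfile.yaml up -d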

We will meet YAML files again when writing about the Kubernetes architecture and its protection.

That’s all for now, take care guys

Modern Applications – Episode 3: Service workflow

The first two articles explained what a container is (article 1) and how containers can talk to each other (article 2).

In this third article, I’m going to show how to deploy a service through this new and amazing container technology.

Note 1: I won’t cover the image-flow deployment part (Git, Jenkins, Docker repository, and so on) because my goal is to explain how to implement a service, not how to write lines of code.

Main Point:

  • As many of you already know, a service is a logical group of applications that talk to each other
  • Every single application can run as an image
  • Any image can run in a container
  • Conclusion: by deploying container technology it is possible to build up any service

An example could clarify the concept.

Example: Web application

A classical web application is composed of a Front-End, a Back-End, and a DB.

In the traditional world, every single application runs on a single server (virtual or physical, it doesn’t matter).

This old scenario requires working with every single brick of the wall. It means that, to design the service correctly, deployers and engineers have to pay attention to all the objects of the stack, starting from the OS, drivers, networks, firewalls, and so on.

Why?

Because they are separate groups of objects that need a compatibility and feasibility study to work properly together, and they also require strong security competencies.

Furthermore, when the service is deployed and every single application is installed, it often happens that remote support from the developer team is required. The reason is that some deployment steps are not clear enough, simply because they are not well documented (developers are not as good at writing documentation as they are at writing code). The result is that opening a ticket to customer service is quite normal.

Someone could object and propose deploying the service on just one server. Unfortunately, that doesn’t solve the issue; it actually amplifies it, because in that scenario it’s common to run into scalability problems.

Let’s continue our example by talking about the architecture design and the components needed (Picture 1).

Picture 1

Note 2: In the next lines, I will skip how to deploy the front-end and back-end architecture, as well as the Docker technology itself, because:

  • Writing HTML and JavaScript files to create a website is quite easy; on the Internet you can find a lot of examples that will meet your needs.
  • Node.js is a very powerful open-source product, downloadable from its official website, where it’s also possible to get all the documentation needed to work with it.
  • Docker is open-source software; it too can be downloaded from its official website. The installation is a piece of cake.

My focus here is explaining how to deploy and work with Docker images. Today’s example uses the MongoDB and Mongo Express applications.

I wrapped up the steps in four main stages:

a. Creating a network

It allows communication to and from the running images.

In our example, the network will be named “thegable-network”.

From the console (terminal, PuTTY, …), just run the following command:
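
The command itself was originally published as an image; the standard syntax for creating the network named above is:

docker network create thegable-network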

b. Downloading the needed images from Docker Hub
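
The pull commands were also shown as an image; with the two official images used in this example they are:

docker pull mongo
docker pull mongo-express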

c. Running the MongoDB image with the correct settings:
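
A hedged sketch of the run command, using the standard variables of the official mongo image (credentials are illustrative):

docker run -d --name mongodb --network thegable-network \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  mongo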

d. Running the Mongo Express image with the correct settings:
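
Again a hedged sketch with illustrative credentials; ME_CONFIG_MONGODB_SERVER must point at the container name chosen above:

docker run -d --name mongo-express --network thegable-network \
  -p 8081:8081 \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  mongo-express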

Note 3: The configurable settings are documented directly on the Docker image pages.

For example, for mongo-express, Picture 2 shows the common settings (https://hub.docker.com/_/mongo-express).

Picture 2

Connecting to the main web page of mongo-express (localhost:port), the default Mongo databases should appear, as shown in Picture 3.

Picture 3

Now, by creating new Mongo DBs through the Mongo Express web interface (for example, create a DB named “my-GPDB“) and managing your JavaScript file, it’s possible to build up your own web application.

In the JavaScript file (normally named server.js), the main points to connect to the DB are:

(Please refer to a JavaScript specialist to get all the details needed.)

Is it easy? Yes, and this approach allows for a fast and secure deployment.

In just a word, it is amazing!

That’s all folks for now.

The last article of this first series on modern applications is about Docker Compose.

See you soon and take care