Cloud Native Kubernetes: Flow and Job Opportunities

This article outlines the new job opportunities created by a cloud-native environment.

Image 1 shows the four main layers required for the architecture to function correctly (the rectangles on the left).

On the right side (the circles) are the operator roles associated with each layer.

Picture 1

Bottom up:

1- Storage and Network Operators (SNOs) are responsible for managing the hardware architecture.

The scope of this role may shrink if the architecture is deployed in a public cloud or as IaaS (Infrastructure as a Service).

2- The Operating System Operator (OSO) works at the level of the operating system where the k8s service is running.

The OSO needs expertise in Linux and Windows. Skills in virtualization platforms such as VMware, Red Hat, Nutanix, etc. are often required.

If the architecture is leased from a public cloud, or from an IaaS provider in general, the skills must cover that architecture as well.

3- The orchestrator operator (OO) works with the core of the cloud-native administration environment. This world requires many new skills.

Automation is the child of orchestration.

The main concept is that the OO should have sufficient skills to follow all the processes of Continuous Integration and Continuous Delivery (often called CI/CD).

Image 2 gives an idea of this flow.

The central arrows show the flow to allow the delivery of a service.

For every single arrow, there are new tools to learn in order to manage the entire release of the service.

Just a few examples: to test the environment you can work with Cucumber or Cypress.io; for building and distribution you can use Jenkins, and so on.

Image 2
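As an illustration only (the stage and job names below are hypothetical, not taken from Image 2), a minimal CI/CD pipeline touching build, test, and delivery could be sketched as a GitLab CI file:

```yaml
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:latest .   # hypothetical registry
    - docker push registry.example.com/myapp:latest

run-tests:
  stage: test
  script:
    - npx cypress run          # end-to-end tests, as mentioned above

deploy-service:
  stage: deploy
  script:
    - kubectl apply -f deployment.yaml   # roll the new version out to the cluster
```

Jenkins, GitHub Actions, and many other platforms express the same flow with their own syntax; the point is that the OO must know the tool behind every arrow.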

Note 1: There are so many platforms available that choosing the right one can be very challenging.

4- The development operator is the role of the people who write the code. They often use project-management software such as Jira Core and Trello.

Note 2: In my personal opinion, the vendor that creates a software layer capable of centrally managing all these six core activities will have a competitive advantage over its competitors.

The big vendors are already in the game: Red Hat has been working on it from the beginning with its platform (OpenShift), VMware has released Tanzu, Nutanix has Karbon, and Microsoft will play its part with the new version of Windows Server 2022.

The only good suggestion I can give you is to study this new and fantastic world.

See you soon and take care of yourself

Kubernetes: Why adopt it?

Kubernetes (k8s) is an open-source platform introduced by Google. It started as a simple container-orchestration tool but has become the de facto cloud-native platform.

Why is k8s so widely used?

Because it responds effectively to the requests of service users, adapts to the creativity of cloud architects, and is stable enough to meet the demands of operations.

These advantages can be summarized as follows:

  • Reliable
  • Scalable
  • Self-healing
  • Fast
  • Efficient
  • Secure
  • Agile
  • Portable

In this article we will develop the eight points just listed:

1- Reliability means an architecture capable of functioning even if part of it is no longer available. K8s was conceived with a natively clustered philosophy.

2- Scalability is required to handle any peak workload. In other words, k8s is capable of responding to requests for new resources on demand.

The architectural model on which it is based is decoupling: every single component has its own characteristics and can be easily added to the k8s environment.

Through the configuration files, the various objects are authorized to communicate with each other.

The key K8s components are the nodes; the K8s service components are load balancers, namespaces, and so on (see the following items).

3- Self-healing: K8s acts to ensure that the current state of the architecture automatically matches the desired state.

4- Fast: K8s is able to deploy components immediately. In this way, it is possible to respond to an overload and/or to the need to quickly roll out new services.
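A quick sketch of points 2, 3, and 4 from the command line (the Deployment name web and the pod name are hypothetical, and a running cluster is assumed):

```shell
# Scalability: ask for more replicas on demand
kubectl scale deployment web --replicas=5

# Self-healing: delete a pod and watch the controller recreate it
kubectl delete pod web-7d4b9c-abcde   # hypothetical pod name
kubectl get pods                      # a replacement pod appears shortly

# Fast: roll out a new image version immediately
kubectl set image deployment/web web=nginx:1.25
```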

5- Efficiency is the ratio between useful work and the total amount of energy used to obtain the result. Thanks to its architecture, K8s achieves an excellent ratio, because essentiality is in its DNA.

6- Security is tightly integrated with K8s.

a- The images that run in the containers are by definition immutable. This approach has a great advantage because:

– No changes are implemented at the system (container) level.

– Nothing can change the core service unless the entire image is deleted and redeployed.

Let’s compare this approach with a standard environment, where there is a Linux VM.

If we wanted to install, modify, or update an application on the latter, we would have to act via apt-get on the necessary packages.

By doing so, we would change the environment, effectively opening a potential security breach.

In K8s the image is not modified: it is deleted and recreated.

b- Another big advantage is that configuration changes are managed through declarative files (configuration files). These files describe the final state of the system, so the effect of a change can be reviewed before it is applied.
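To make point b concrete, here is a minimal, hypothetical declarative file describing a desired final state (three replicas of an nginx-based Deployment; all names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical name
spec:
  replicas: 3          # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

With kubectl diff -f you can preview the effect of a change to this file before applying it with kubectl apply -f.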

7- Agility means greater ease and efficiency in creating container images compared to VM images. The developer can write code without worrying about compatibility issues.

8- Portability: the same image runs unchanged across environments, which sets a standard for the software development life cycle.

Note 1: The first version of Kubernetes dates back to 2014 and was created to respond to the need to implement a solid cluster solution for container environments.

Take care and see you soon

Modern Applications – Episode 4: Docker Compose – YAML

The topic of this article is understanding how to automate the delivery of a micro-service.

In the previous one, I showed the delivery flow of a service. That flow requires typing a lot of commands to launch every single container.

Is there a way to automate the entire process and make it easier?

Yes: Docker Compose is a tool for defining and running a multi-container environment.

Docker Compose works with a descriptor file that includes all the configurations. The file is written in YAML (a human-readable data-serialization language).

After writing the YAML file (in this example it is named mypersonalfile.yaml), the syntax of the command is:

docker-compose -f mypersonalfile.yaml up

Let’s see an easy example using the article I wrote in the last episode as a source.

I had to type all these commands to implement the service:

a. Mongo DB commands

b. Mongo Express commands

In their place, it's possible to use the following YAML file.

mypersonalfile.yaml
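The original file is not reproduced here; as a sketch, a compose file replacing the MongoDB and Mongo Express commands of the previous episode might look like this (the credentials are placeholders):

```yaml
version: "3"
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
  mongo-express:
    image: mongo-express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: admin
      ME_CONFIG_MONGODB_ADMINPASSWORD: password
      ME_CONFIG_MONGODB_SERVER: mongodb
```

A single docker-compose -f mypersonalfile.yaml up then starts both services together.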

We will find yaml files again when writing about Kubernetes architecture and its protection.

That’s all for now, take care guys

Modern Applications – Episode 2: Ports & Networking

As written in the last article, a container host can run multiple images.

Picture 1 shows an example of three different workloads running on a single container host.

Picture 1

It's also possible to work with different versions of the same image.

For example, MySQL has several image tags that can be installed and run on the same host.

Note 1: At the time of writing, the available MySQL image tags are:

  • 8.0.25, 8.0, 8, latest
  • 5.7.34, 5.7, 5
  • 5.6.51, 5.6

Picture 2 shows a host where three different images run, with two different versions of one application.

Picture 2

Let's digress slightly to talk about how a service is built.

Most of the time it is made by grouping applications, which means grouping several types of images.

The question is: How do images talk to each other?

The answer is quite easy: they talk through the network, where IP addresses and ports are in charge of the communication to and from the applications (Picture 3).

Picture 3

There is just a simple rule to remember when a container network architecture is deployed.

As shown in Picture 4, while the port used by a running image can be the same for different applications (in the example, 161616), the host port assigned to each back-end must always be different (4000, 4001, 4002).

Note 2: The port numbers are just examples; note also that the highest possible port number is 2^16 − 1 = 65535.

Picture 4

Wrap-up: this binding network architecture is completely allowed, but the host back-end cannot expose the same port number for more than one service.
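As a sketch of the rule above (the image and port numbers are illustrative), the same container port can be reused as long as each service gets a distinct host port:

```shell
# Three instances of the same image: identical container port (80),
# but each one bound to a different host port
docker run -d -p 4000:80 nginx
docker run -d -p 4001:80 nginx
docker run -d -p 4002:80 nginx

# This would fail: host port 4000 is already taken
# docker run -d -p 4000:80 nginx
```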

Let’s go deeper into networking in the Container environment:

The network topology is defined by the drivers in use.

They can be:

1. Host

When the container comes up, it attaches its ports directly to the host network.

In this way, it shares the host's TCP/IP stack and namespace.

Segregation is guaranteed by Docker technology (Picture 5).

Picture 5

2. Bridge

This is the default network mode.

It creates an isolated bridge network where the containers run inside a range of IP addresses.

In this scenario, the containers can talk to each other, but no connection is allowed from the outside.

To allow communication with external services in Docker, it's necessary to start the container with the -p option.

docker run -p <hostport>:<containerport> servicename (e.g.: docker run -p 2400:2451 mysql)

Host port 2400 is now mapped to container port 2451.

From a security point of view, this is excellent: you can monitor and select which ports are exposed for a service (Picture 6).

Picture 6

3. Overlay

While the previous technologies are single-host networking topologies, Overlay allows communication among containers hosted on different hosts.

This scenario requires cluster intelligence to manage the traffic and guarantee segregation; it could be Swarm or Kubernetes (Picture 7).

The core technology that makes this possible is VXLAN, which creates a tunnel on top of the underlay network and is part of the operating system.

The traffic can be encrypted (AES) with rotating keys.

When a service is exposed (the -p option described before), all traffic is automatically routed, no matter where the service is running.

One more interesting detail: each container has two IP addresses. The first sits on the overlay network and is used by the containers to talk to each other (internal); the second belongs to the VXLAN and allows traffic to the outside.

Picture 7
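A minimal sketch of the overlay scenario, assuming Docker Swarm as the cluster intelligence (the network and service names are hypothetical):

```shell
# Initialise the swarm on the first host
docker swarm init

# Create an encrypted, attachable overlay network
docker network create -d overlay --opt encrypted --attachable mynet

# Expose a service on it; traffic is routed to wherever the task runs
docker service create --name web --network mynet -p 8080:80 nginx
```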

4. Null (Black box)

No network connection

5. MacVLan

It's possible to implement a MacVLAN through a driver. The aim is to give the container network the behaviour of a traditional physical network. The physical network must accept promiscuous mode for this to work.
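A sketch of how such a network could be created (the subnet, gateway, and parent interface are examples for a hypothetical LAN):

```shell
# The parent NIC must allow promiscuous mode for this to work
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macnet

# The container now behaves like a host on the physical network
docker run -d --network macnet --ip 192.168.1.50 nginx
```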

That’s all for now. Take care and see you soon.

Modern Applications – Episode 1: Fundamentals

Introduction

This is the first of a group of articles about the technologies that can modernize applications.

The aim is to help the reader understand the potential of this new way of doing business, allowing companies to be more competitive.

These articles follow my personal approach to, and study of, Kubernetes.

I'm paying attention to how to make services available and protected by exploiting internal and external native technologies.

Let’s start !!!

What is a container?

It's a way to package an application with its pertinent dependencies and configurations in a single block.

There are at least two big advantages of this approach:

  • The container, thanks to its native architecture, is portable. This means you can run it on any architecture, wherever it is located (please read the article Digital Transformation & Data Mobility for all the details).
  • Deploying services proves easier and more efficient than in the traditional world, because there are already plenty of software images ready to be used.

Where can I download images to run in containers?

There are public and private registries (please don't confuse them with a VBR repository).

The most famous container technology is Docker, whose public registry is called Docker Hub.

What is a container exactly?

A container allows isolated images to run on an operating system.

Container vs Virtual Machine

The difference between the two architectures seems very tiny, but they actually represent two different worlds.

Both technologies are virtualization tools, but while Docker focuses on the application layer (Picture 1), the VM pays its attention to the kernel as well as the application (Picture 2).

Picture 1

Picture 2

What are the main advantages of this new approach?

  • The container has a small footprint (a few MB compared to GB).
  • The boot is faster.
  • Easier compatibility list.
  • It can run on all common operating systems, such as Windows, macOS, and Linux.

Container vs Image

It's crucial for the next articles to have a very clear idea of the difference between a container and an image.

Let's help ourselves with Picture 3, which shows the application composition.

There are four main elements:

  1. Image: It’s the code written by developers. It is downloaded from Repositories.
  2. Configuration: It represents the setup created to allow the application to run.
  3. File System: It’s the place where the application and its data are stored.
  4. Network: It allows all components to talk to each other.

The container is where the application runs.

Picture 3

Note 1: Images are part of the container environment. Think of the container host as a multitasking OS specialized in running applications simultaneously.

Note 2: To get info about Docker, please refer to the official website. E.g., to run an image just launch the following command: docker run image-name
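To see the difference in practice, here is a short, illustrative Docker session: the image is the downloaded artifact, while the container is a running instance created from it.

```shell
docker pull nginx                # download the image from Docker Hub
docker images                    # the image is now stored locally

docker run -d --name web nginx   # create and start a container from it
docker ps                        # the container is the running instance

# Many containers can run from the same single image
docker run -d --name web2 nginx
```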

Note 3: There are more Container technologies; the most common are:

  • rkt (CoreOS)
  • LXC
  • LXD (Canonical)
  • Linux VServer
  • OpenVZ/Virtuozzo 7
  • runC

That’s all for now,  see you soon and take care.

Digital Transformation & Data Mobility

If Cloud has been the most used word in the last five years, the words that have been buzzing around the IT world in the last five months are Digital Transformation.

From Wikipedia:

“Digital Transformation (DT or DX) is the adoption of digital technology to transform services and businesses, through replacing non-digital or manual processes with digital processes or replacing older digital technology with newer digital technology.”

Or: Digital Transformation must help companies become more competitive through the fast deployment of new services, always aligned with business needs.

Note 1: Digital transformation is the basket, technologies to be used are the apples, services are the means of transport, shops are clients/customers.

1. Can all the already existing architectures work for Digital Transformation?

  • I prefer to answer by rephrasing the question with more appropriate words:

2. Does Digital transformation require that data, applications, and services move from and to different architectures?

  • Yes, this is a must, and it is called Data Mobility.

Note 2: Data mobility regards the innovative technologies able to move data and services among different architectures, wherever they are located.

3. Does Data Mobility mean that the services can be independent of the underlying infrastructure?

  • Actually, that is not completely true. It means that, although nowadays there is no standard language allowing different architectures/infrastructures to talk to each other, Data Mobility is able to get over this limitation.

4. Is it independent of any vendor?

  • When a standard is released, all vendors want to implement it ASAP, because they are sure these features will improve their revenue. Currently, this standard doesn't exist yet.

    Note 3: I think the reason is that there are so many objects to count, analyze, and develop that the economic effort to do it is at the moment not justified.

5. Is a ready “Data Mobility” technology already available?

The answer could be quite long but, to make a long story short, I wrote the following article, which is composed of two main parts:

  • Application Layer (Container – Kubernetes)
  • Data Layer (Backup, Replica)

Application Layer – Container – Kubernetes

In the modern world, services are running in a virtual environment (VMware, Hyper-V, KVM, etc).

There are still old services that run on legacy architectures (mainframe, AS/400, …); old doesn't mean that they are not updated, just that they have a very long history.

In the coming years, services will run in a special “area” called a “container”.

The container runs on an operating system and can be hosted on a virtual, physical, or cloud architecture.

Why are containers, and skills in using them, in such demand?

There are many reasons, and I'm listing them in the next rows.

  1. IT managers need to move data among architectures in order to improve resilience and lower costs.
  2. Container technology simplifies the developers' code writing because it uses a standard, widely adopted language.
  3. The services that run in containers are fast to develop, update, and change.
  4. The container is de facto a new standard, which brings a great advantage: it gets over the obstacle of missing standards among architectures (private, hybrid, and public cloud).

A deep dive into point 4.

Any company has its own core business and in the majority of cases, it needs IT technology.

For any size of company?
Yes. Just think about your personal use of your mobile phone, maybe to book a table at a restaurant or buy a ticket for a movie. I'm also quite sure it will help us get over the Covid threat.

This is the reason why I still think that IT is not a “cost” but a way to get more business and money by improving efficiency in any company.

Are there specific features to allow data mobility in the Kubernetes environment?

Yes. One example is Kasten K10, because it has many advanced workload-migration features (the topic will be covered in depth in the next articles).

Data Layer

What about services that can’t be containerized yet?

Is there a simple way to move data among different architectures?

Yes, it's possible using copies of the data of VMs and physical servers.

In this business scenario, it’s important that the software can create Backup/Replicas wherever the workloads are located.

Is it enough? No: the software must also be able to restore data across architectures.

For example, a customer may need to restore some on-premises workloads of his VMware architecture to a public cloud, or restore a backup of a VM located in a public cloud to an on-premises Hyper-V environment.

In other words, working with Backup/Replica and restore in a multi-cloud environment.

The next pictures show the data process.

I called it “The Cycle of Data” because, leveraging a copy, it is possible to freely move data from and to any infrastructure (public, hybrid, private cloud).

Pictures 1 and 2 are just examples of the data-mobility concept. They can be modified by adding more platforms.

The starting point of Picture 1 is an on-premises backup that can be restored on-premises or in the cloud. Picture 2 shows a backup of a public-cloud workload restored in the cloud or on-premises.

It’s a circle where data can be moved around the platforms.

Note 4: A good suggestion is to use a data-mobility architecture to set up a cold disaster-recovery site (cold because the data used to restore the site are backups).

Picture 1

Picture 2

There is one last point to complete this article: the replication features.

Note 5: By replica I mean a mirror of the production workload. Compared to a backup, in this scenario the workload can be switched on without any restore operation, because it is already written in the language of the host hypervisor.

The main scope of replica technology is to create a hot Disaster Recovery site.

More details about how to orchestrate DR are available on this site under Veeam Availability Orchestrator (now Veeam Disaster Recovery Orchestrator).

The replica can be implemented with three different technologies:

  • Lun/Storage replication
  • I/O split
  • Snapshot based

I’m going to cover those scenarios and Kasten k10 business cases in future articles.

That’s all for today folks.

See you soon, and take care.