ESXi v.7: host patching

In this article, I will explain the procedure to upgrade the ESXi Host when the VMware environment consists of only one server.

Note 1: The first task is to update vCenter (VCSA), after checking which ESXi versions it supports.

Note 2: The traditional method of updating ESXi Hosts uses the automated update process managed by the vCenter console.

Note 3: The DR site of my laboratory consists of a single VMware ESXi host running the secondary vCenter (VCSA). In this scenario, the method described in note 2 cannot be used: during the update, the ESXi host is placed in maintenance mode, and in that state all the VMs on it are powered off (including the VCSA).

The solution is to use the procedure described on the VMware ESXi Patch Tracker site, which consists of the following steps:

1- Selection of the software version that will be installed on the host at the end of the process (see image 1)

Picture 1

2- Determine the CLI commands to use during the update procedure:

The procedure is illustrated in the pop-up that appears when you click on the selected package (see image 2)

Picture 2

3- Enable SSH access on the ESXi host (image 3)

Picture 3

4- Connect to the ESXi host via SSH and run the commands previously shown in the pop-up.

In my case:

  1. esxcli network firewall ruleset set -e true -r httpClient
  2. esxcli software profile update -p ESXi-7.0U3d-19482537-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
  3. esxcli network firewall ruleset set -e false -r httpClient

5- Put the ESXi host in maintenance mode and restart it.
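This step can also be done from the same SSH session; a minimal sketch (the reboot reason string is just an example, and remember that entering maintenance mode requires all VMs, including the VCSA, to be powered off first):

  esxcli system maintenanceMode set -e true
  esxcli system shutdown reboot -r "Upgrade to ESXi 7.0U3d"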

6- At the end, check that the update was successful (images 4 and 5)

Picture 4 – Pre Update

Picture 5 – Post Update
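The installed version can also be verified from the shell (assuming SSH is still enabled after the reboot):

  vmware -vl
  esxcli system version get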

Note 4: If the hardware is not in the compatibility matrix, the advice is to use the --no-hardware-warning option. In my case, the second command became:

esxcli software profile update -p ESXi-7.0U3d-19482537-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --no-hardware-warning

See you soon

VMware Tips – Module MonitorLoop power on failed

During laboratory maintenance operations, a Virtual Machine suddenly refused to start.

The vCenter console reported an error initializing the virtual machine's swap file.

Like any good system engineer, before making any changes to the environment, I tried to back up the aforementioned VM.

The backup job stopped with the following error: "An error occurred while taking a snapshot: Invalid change tracker error code".

Troubleshooting:

  1. Since the swap file handles memory over-commitment (its size equals the VM's configured RAM minus its memory reservation), I tried changing the amount of RAM allocated.
  2. I added space to the datastore on which the VM resided, to make sure VMware had enough room to manage the swap file.
  3. I compared the VM's configuration file (.vmx) against the configuration of the other VMs, looking for differences.

None of these tests and changes solved the problem.

Aware that I would have to change the VM's configuration, I adopted a simple strategy:

  • Back up the VM with the Veeam Agent for Linux (the VAL operates at the guest-OS level, not at the hypervisor level).
  • Write down every change I made to the VM (editor's note: like Hop-o'-My-Thumb leaving a trail, so I could return to the initial configuration in a short time).

The methodical "change, note, check, and power on" approach allowed me to discover that the problem was related to the CPU configuration of the Virtual Machine.

In fact, resetting the "CPU reservation" value to zero and "CPU shares" to Normal (see image 1) made the problem go away, allowing me to start the VM and back it up.

Sapiens nihil affirmat quod non probet (A wise man says nothing that he cannot prove)

Picture 1

Veeam Disaster Recovery Orchestrator v.5: Components verification

This article explains how to configure the Veeam Disaster Recovery Orchestrator (VDrO) administration menu.

Before proceeding to the administration phase, it is essential to have already tagged the resources that will be part of the Disaster Recovery plans.

Tagging was illustrated in the previous article, available at the following link: VDrO – VOne – Tagging.

Note 1: To access the administration menu, select the item called "Administration" (see image 1).

Picture 1

The configuration of the administration menu is divided into three main areas:

In the first area, the following are set:

  • The name of the VDrO server and the contact name (image 2).
  • The connections to the Veeam Backup & Replication (VBR) servers (image 3).
  • The connections to the vCenters (image 4).
  • The optional connection to the storage (image 5) (refer to this article for the details).

Picture 2

Picture 3

Picture 4

Picture 5

The second area identifies the resources to be added to the DR plans through tagging:

  • The recovery location (image 6).
  • The datastores in the recovery location where the VM files will reside (image 7).
  • The network mapping (image 8).
  • The IP address remapping (image 9).

Note 2: The operations described above are possible only if all the necessary resources have been tagged.

Note 3: Automatic remapping of IP addresses when starting a DR plan is only available for Windows VMs.

Picture 6

Picture 7

Picture 8

Picture 9

In the third area, the following are identified:

  • User profiling. In simple terms, VDrO lets you create users who can administer only specific workloads, grouped into "scopes" (image 10).
  • The assignment of the DataLabs to the scopes. Remember that DataLabs let you verify that a DR plan is actually usable (image 11).

Picture 10

Picture 11

The last configuration links the groups of VMs replicated or saved via backup (called VM Groups) to the users' scopes.

For example, image 12 shows that the VM Group “B&R Job – Replication VAO Win 10” is assigned (included) to both the Admin and Linux scopes.

Picture 12

In the next and last article, we will find out how to create and verify a DR plan.

See you soon

Kubernetes: The components

In previous articles we have seen some details of how the Kubernetes architecture is built.

Today I will describe the working mechanisms of the Kubernetes engine, naming each component; to stay faithful to the car-engine comparison, we will talk about the camshafts, valves, bearings, and so on, that belong to the cloud-native world.

Note 1: The installation of k8s in the datacenter, in the cloud, or in a laboratory will not be discussed; comprehensive tutorials are already available online.

To familiarize yourself with k8s, I recommend using Minikube (Linux) or Docker Desktop (Windows & Mac).

Let’s begin!

Kubernetes master: it is the main node of the cluster, on which three processes vital to the existence of the cluster run:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

The master node also hosts the etcd database, which stores all the configurations created in the cluster.

The nodes that run the applications, and therefore the services, are called worker nodes. The processes present on a worker node are:

  • Kubelet
  • kube-proxy

kubectl: it is the official Kubernetes client (CLI), through which you can manage the cluster (via the kube-apiserver) using the API.

Some simple examples of kubectl commands are:

  • kubectl version (shows the installed k8s version)
  • kubectl get nodes (lists the nodes in the cluster)
  • kubectl describe node node-1 (shows the health status of the node, the platform on which k8s is running (Google, AWS, …) and the allocated resources (CPU, RAM))

kube-proxy: it is responsible for managing networking, from routing to load-balancing rules.

Note 2: k8s will try to use all the libraries available at the operating-system level.

Container runtime: it is the foundation on which the k8s technology rests.

Kubernetes supports several runtimes, among which we can mention containerd, CRI-O, and rktlet.

Note 3: The Docker runtime has been deprecated in favor of runtimes that use the CRI interface; Docker images will still continue to work in the cluster.

The basic Kubernetes objects are the following (a short CLI sketch follows the list):

  • Pod
  • Services
  • Volumes
  • Namespace
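A minimal sketch of these objects in action, assuming a working cluster and a configured kubectl (the nginx image is just an example):

  kubectl run nginx --image=nginx (creates a Pod)
  kubectl expose pod nginx --port=80 (creates a Service in front of the Pod)
  kubectl get pods,services (lists the objects just created)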

The controllers provide additional functionality; they are listed below, with another short CLI sketch after the list:

  • ReplicaSet
  • Deployment
  • StatefulSet
  • DaemonSet
  • Job
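For example, a Deployment (which in turn creates a ReplicaSet) can be created and scaled entirely from the CLI; a sketch, again with nginx as a stand-in for a real application:

  kubectl create deployment web --image=nginx (creates a Deployment and its ReplicaSet)
  kubectl scale deployment web --replicas=3 (the ReplicaSet now maintains 3 Pods)
  kubectl get deployments,replicasets,pods (shows the whole hierarchy)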

Among the Deployments, it is imperative to mention Kube-DNS, which provides name-resolution services. Since Kubernetes 1.11 it has been replaced by CoreDNS as the default DNS.

Add-ons: they are used to configure additional cluster functions and are placed inside the kube-system namespace (for example kube-proxy, Kube-DNS, and the Kubernetes Dashboard).

Add-ons are categorized according to their use:

  • Network-policy add-ons (for example, the NSX-T add-on handles the communication between the K8s environment and VMware).
  • Infrastructure add-ons (for example, KubeVirt, which allows connection with virtual architectures).
  • Visualization and control add-ons (for example, Dashboard, a web interface for K8s).

For their commissioning, add-ons use the DaemonSet and Deployment controllers.
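You can see which add-ons are running, and which controller manages each of them, by listing the kube-system namespace:

  kubectl get pods -n kube-system (kube-proxy, CoreDNS, …)
  kubectl get daemonsets,deployments -n kube-system (the controllers behind them)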

The image in figure 1 summarizes what has just been explained.

Figure 1

Take care and see you soon.

Kubernetes: Know the details

A good way to describe cloud-native environments is to refer to the image of your car.

The container is the engine; k8s is the electronic control unit that manages the proper functioning of the vehicle; the drivers, choosing the route and the destination, select the type of service to be provided.

Today’s article will reveal some architectural details to understand how “the car” manages to reach its destination in an efficient way.

Containers are of two types:

The first is called a System Container. It is the bodywork of the car (everything from the plates to the seats, steering wheel, gear lever, and accessories).

Often, for simplicity of creation, it is a Virtual Machine (VM) with a Linux operating system (it can also be Windows).

The most common services present in the VM are ssh, cron, and syslog; the file system is of type ext3, ext4, etc.

The second type is called an Application Container, and it is where the image carries out its activities.

Note 1: The image is not a single large file. It is usually multiple files which, through an internal cross-pointing system, allow the application to operate correctly.

The application container (from now on simply "container") operates according to a strict layered logic, in which all the levels (layers) communicate with each other and are interdependent.

Figure 1

This approach is very useful as it is able to manage the changes that may occur over time in an effective and hierarchical way.

Let's take an example: when a service configuration change occurs that updates layer C, layers A and B are not affected, which means they do NOT have to be modified in turn.

Since developers prefer to refine their own images (program files) rather than the dependencies, it makes sense to set up the service logic as shown in figure 2, where the dependencies are not affected by a new image.

Figure 2
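You can inspect this layering on any image: assuming Docker is installed and the nginx image has already been pulled, each line of the output below corresponds to one layer:

  docker history nginx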

Note 2: The file system on which the images are placed (in the car-engine example we are talking about pistons, connecting rods, shafts, …) is mainly of three types:

  • overlay
  • overlay2
  • AUFS

Note 3: Good security advice is not to build the architecture so that passwords are contained ("baked in") in the images.

One of the splendid innovations introduced by the container world is image management:

In a classic high-availability environment, the application is installed on every single node of the cluster.

In the container world, the application is downloaded and deployed only when the workload requires more resources, on a new cluster node with a fresh copy of the image.

For this reason, the images are saved in "virtual" warehouses, which can be local or distributed on the internet. They are called "registry servers".

The most famous are Docker Hub, Google Container Registry, Amazon Elastic Container Registry, and Azure Container Registry.
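Pulling an image from a registry server is a single command; a sketch assuming Docker (the image names are just examples):

  docker pull nginx:latest (Docker Hub is the default registry)
  docker pull gcr.io/google-samples/hello-app:1.0 (an explicit registry prefix)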

We conclude this article by talking about the management of resources associated with a service.

To allocate resources, the container platform uses two features that work at the kernel level, called cgroups and namespaces.

The purpose of cgroups is to assign the correct resources (CPU & RAM).

Namespaces group the different processes and make sure they are isolated from each other (multitenancy).

The type of namespace can affect all the components of the service, as indicated in the list below (a small isolation demo follows the list).

  • Cgroup
  • PID
  • Users
  • Mount
  • Network
  • IPC (Interprocess communication)
  • UTS (allows a single system to appear with different host and domain names and with different processes, useful in case of migration)
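A quick way to see namespace isolation at work, assuming Docker and the alpine image: inside the container, the PID namespace hides every host process, so ps reports only the container's own process tree:

  docker run --rm alpine ps aux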

An example of limiting the resources of an application is shown in figure 3, where the gable image, downloaded from the registry server grcgp, has a limit on the RAM and CPU resources allocated.

Figure 3
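Outside of Kubernetes, the same kind of cgroup limit can be sketched with plain Docker (the image name and the values are placeholders, not those in the figure):

  docker run -d --name web --memory=512m --cpus=1 nginx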

See you soon

Cloud Native Kubernetes: Flow and Job Opportunities

This new article aims to indicate the new job opportunities created by a cloud-native environment.

Image 1 shows the four main levels required by the architecture to function correctly (the rectangles on the left).

On the right side (the circles) are the roles of the operators with respect to each level.

Picture 1

From the bottom up:

1- Storage and Network Operators (SNOs) are responsible for managing the hardware architecture.

The activities of this role may decrease if the architecture is deployed in a public cloud or as IaaS (Infrastructure as a Service).

2- The Operating System Operator (OSO) works at the level of the operating system on which the k8s service runs.

The OSO needs expertise in Linux and Windows. Skills in virtualization architectures such as VMware, RedHat, Nutanix, etc. are often required.

If the architecture has been leased from the public cloud, or is an IaaS in general, the skills must cover this new architecture.

3- The orchestrator operator (OO) works with the core of the cloud-native administration environment. This world needs a lot of new skills.

Automation is the child of orchestration.

The main concept is that the OO should have sufficient skills to be able to follow all the processes of "Continuous Integration" and "Continuous Delivery" (often called CI/CD).

Image 2 gives an idea of this:

The central arrows show the flow to allow the delivery of a service.

For every single arrow, there are new tools to know to manage the entire release of the service.

Just a few examples: to test the environment you can work with Cucumber or Cypress.io; for building and deployment you can use Jenkins, and so on.

Image 2

Note 1: There are so many platforms available that choosing the right one can be very challenging.

4- The Development Operator is the role of the people who write the lines of code. They often use project-management software such as Jira Core and Trello.

Note 2: In my personal opinion, the vendor who creates a software layer capable of centrally managing all these six core activities will have a competitive advantage over its competitors.

The big vendors are already in the game: RedHat has been working on it from the beginning with its platform (OpenShift), VMware has released Tanzu, Nutanix has Karbon, and Microsoft will play its part with the new version of Windows Server 2022.

The only good suggestion I can give you is to study this new and fantastic world.

See you soon and take care of yourself