Veeam DR Orchestrator v.5: VONE – Tagging

Today we will show how to tell Veeam Disaster Recovery Orchestrator which resources to use to start a Disaster Recovery plan.

Before reading this article, we suggest reading the previous one, which explains how to check the status of the VDrO server.

The main tool for tagging assets is Veeam ONE, which is installed by default with Veeam Disaster Recovery Orchestrator v.5.

The procedure is very simple:

After connecting via RDP to the VDrO server, select the Veeam ONE Client on the desktop (see Figure 1).

Figure 1

After selecting the Business View item (bottom left), the resources to be tagged are:

  1. Clusters: this item identifies the Disaster Recovery and production vCenter resources (Figure 2)
  2. Datastores: this item identifies the disk areas where the VMs will reside once powered on (Figure 3)
  3. Virtual Machines: this item identifies the VMs that guarantee service continuity in the event of a Disaster (Figures 4 and 5).

Figure 2

Figure 3

Figure 4

Figure 5

Note 1: The replication jobs have been configured on the embedded VBR instance of the VDrO server (see Figure 6).

Figure 6

Note 2: The tagging operation is discussed in a previous post available at the following link:

https://lnx.gable.it/home-page/veeam-availability-orchestrator-v-3-0-dr-from-replicas/

That’s all for today, see you soon!

Kubernetes: The components

In previous articles we have seen some details of how the Kubernetes architecture is built.

Today we will describe the working mechanisms of the Kubernetes engine, naming each of its components; to stay faithful to the car-engine comparison, we will talk about the camshafts, valves, bearings, and so on that belong to the Cloud Native world.

Note 1: The installation of k8s in the datacenter, in the cloud, or in a lab will not be covered; comprehensive tutorials are already available online.

To familiarize yourself with k8s, I recommend using Minikube (Linux) or Docker Desktop (Windows and macOS).

Let’s begin!

Kubernetes Master: it is the main node of the cluster, on which three processes that are vital to the existence of the cluster run:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

The master node also hosts the etcd database, which stores all the configurations created in the cluster.
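A quick way to verify these control-plane components on a running cluster is to list the pods in the kube-system namespace. This is only a sketch: pod names and the way the control plane is packaged vary with the distribution (Minikube, kubeadm, managed clouds, ...).

    # List the control-plane pods (kube-apiserver, kube-controller-manager,
    # kube-scheduler, etcd) running in the kube-system namespace
    kubectl get pods -n kube-system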

The nodes that run the applications, and therefore the services, are called worker nodes. The processes present on a worker node are (a quick check is sketched after the list):

  • kubelet
  • kube-proxy
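A quick check of the worker-node components, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet and the kubelet runs as a systemd service on each node:

    # kube-proxy usually runs as a DaemonSet in the kube-system namespace
    kubectl get daemonset kube-proxy -n kube-system

    # the kubelet runs directly on the node as a system service
    systemctl status kubelet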

kubectl : AND’ The official Kubernetes client ( CLI ) through which you can manage the cluster ( Kube-apiserver ) using the API.

Some simple examples of kubectl commands are:

  • kubectl version (shows the version of k8s installed)
  • kubectl get nodes (lists the nodes in the cluster)
  • kubectl describe nodes nodes-1 (shows the health status of the node, the platform on which k8s is running (Google, AWS, ...), and the allocated resources (CPU, RAM))

kube-proxy: it is responsible for managing networking, from routing to load-balancing rules.

Note 2: k8s will try to use all the libraries available at the operating-system level.

Container Runtime: it is the foundation on which the k8s technology rests.

Kubernetes supports several runtimes, among which we can mention containerd, CRI-O, and rktlet.

Note 3: The Docker runtime has been deprecated in favor of runtimes that use the CRI interface; Docker images will continue to work in the cluster.

The basic Kubernetes objects are (a minimal example follows the list):

  • Pod
  • Services
  • Volumes
  • Namespace

The controllers provide additional functionality and are:

  • ReplicaSet
  • Deployment
  • StatefulSet
  • DaemonSet
  • Job

Among the Deployments it is worth mentioning kube-dns, which provides name-resolution services inside the cluster. In recent Kubernetes releases it has been replaced by CoreDNS, the default cluster DNS since version 1.13.
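A small sketch that ties these controllers together: creating a Deployment (which in turn creates a ReplicaSet and its Pods) and checking the DNS Deployment in kube-system. The names "webapp" and "coredns" are assumptions; "coredns" is the usual name in kubeadm-based clusters and may differ in your distribution.

    # A Deployment creates and manages a ReplicaSet, which manages the Pods
    kubectl create deployment webapp --image=nginx --replicas=2
    kubectl get deployments,replicasets,pods

    # The cluster DNS itself runs as a Deployment in the kube-system namespace
    kubectl get deployment coredns -n kube-system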

Add-ons: they are used to configure additional cluster functions and are placed inside the kube-system namespace (examples are kube-proxy, kube-dns, and the Kubernetes Dashboard).

Add-ons are categorized according to their use:

  • Network-policy add-ons (for example, the NSX-T add-on takes care of the communication between the k8s environment and VMware)
  • Infrastructure add-ons (for example KubeVirt, which allows integration with virtual architectures)
  • Visualization and control add-ons (for example the Dashboard, a web interface for k8s).

To be put into service, add-ons rely on the DaemonSet and Deployment controllers, as shown in the sketch below.
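A quick way to see this on a running cluster (the exact output depends on which add-ons are actually installed):

    # Add-ons typically appear as DaemonSets and Deployments in kube-system
    kubectl get daemonsets,deployments -n kube-system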

The image in Figure 1 summarizes what has just been explained.

Figure 1

Take care and see you soon.

Kubernetes: Know the details

A good way to describe cloud-native environments is to refer to the image of your car.

The container is the engine; k8s is the electronic control unit that manages the proper functioning of the vehicle; the drivers, by indicating the route and the destination, select the type of service to be provided.

Today’s article will reveal some architectural details to understand how “the car” manages to reach its destination in an efficient way.

Containers are of two types:

The first is called a System Container. It is the bodywork of the car (everything from the plates to the seats, steering wheel, gear lever, and accessories).

Often, for simplicity of creation, it is a Virtual Machine (VM) with a Linux operating system (it can also be Windows).

The most common services present in the VM are ssh, cron, and syslog; the file system is of type ext3, ext4, etc.

The second type is called an Application Container and is the place where the image carries out its activities.

Note 1: The image is not a single large file. It is usually made up of multiple files (layers) which, through an internal cross-referencing system, allow the application to operate correctly.

The Application Container (from now on simply "container") operates according to a strict logic, in which all the levels (layers) communicate with each other and are interdependent.

Figure 1

This approach is very useful because it manages the changes that may occur over time in an effective and hierarchical way.

Let's take an example: when a service configuration change updates Layer C, Layers A and B are not affected, which means they do NOT have to be modified in turn.

Since developers prefer to refine their own images (program files) rather than the dependencies, it makes sense to structure the service logic as shown in Figure 2, where the dependencies are not affected by a new image.

Figure 2
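You can see this layered structure directly with the Docker CLI; a minimal sketch using the public nginx image as an example:

    # Download an image and show its layers (roughly one line per build step)
    docker pull nginx:latest
    docker history nginx:latest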

Note 2: The file system on which the images are placed (in the car-engine example we are talking about pistons, connecting rods, shafts, ...) is mainly of three different types (a quick way to check yours is shown after the list):

  • overlay
  • overlay2
  • AUFS
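A quick way to check which storage driver your Docker host is using (overlay2 is the most common default on modern hosts, but the value depends on your installation):

    # Print the storage driver in use
    docker info --format '{{.Driver}}'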

Note 3: Good security advice is not to build the architecture so that passwords are contained ("baked") in the images; a sketch of the alternative follows.
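A minimal sketch of the idea: instead of writing the password into the image at build time, inject it only when the container starts. The image name myapp:1.0 and the file app.env are hypothetical placeholders.

    # Avoid: ENV DB_PASSWORD=... in the Dockerfile (it gets baked into an image layer)
    # Prefer: inject the credentials only at run time, e.g. from an env file
    docker run -d --env-file ./app.env myapp:1.0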

One of the splendid innovations introduced in the container world is the management of images:

In a classic high-reliability environment, the application is installed on every single node of the cluster.

With containers, instead, the application is downloaded and deployed only when the workload requires more resources: a new cluster node simply pulls a fresh copy of the image.

For this reason, the images are saved in "virtual" warehouses, which can be local or distributed over the internet. They are called "Register Servers" (registries).

The most famous are Docker Hub, Google Container Registry, Amazon Elastic Container Registry, Azure Container Registry.
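A minimal sketch of how a Register Server is used from the Docker CLI; the private registry address registry.example.com is a placeholder:

    # Pull an image from Docker Hub, the default Register Server
    docker pull nginx:latest

    # Re-tag the image and push it to a private registry
    docker tag nginx:latest registry.example.com/myteam/nginx:latest
    docker push registry.example.com/myteam/nginx:latest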

We conclude this article by talking about the management of resources associated with a service.

The container platform uses two kernel-level features, called cgroups and namespaces, to allocate resources.

The purpose of cgroups is to assign the correct resources (CPU and RAM).

Namespaces have the purpose of grouping the different processes and making sure they are isolated from each other (multitenancy).

A namespace can affect all the components of the service, as indicated in the list below (a quick demonstration follows the list).

  • Cgroup
  • PID
  • Users
  • Mount
  • Network
  • IPC (Interprocess communication)
  • UTS (allows a single system to appear with different host and domain names and with different processes, useful in case of migration)
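A quick demonstration of PID-namespace isolation: inside a container the process list starts again from PID 1, because the container cannot see the host's processes.

    # Inside the container, the only visible process is its own entrypoint (PID 1)
    docker run --rm alpine ps

    # On the host, lsns (from util-linux) lists the namespaces in use
    lsns -t pid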

An example of limiting the resources of an application is shown in Figure 3, where the thegable image, downloaded from the Register Server grcgp, has a limit on the RAM and CPU resources allocated to it.

Figure 3
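The flags below sketch the same idea as Figure 3 with the Docker CLI; the exact limit values and the image reference are assumptions reconstructed from the text, not read from the figure:

    # Cap the container at 512 MB of RAM and one CPU core
    docker run -d --memory=512m --cpus=1 grcgp/thegable:latest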

See you soon.

VDrO v.4 – Run a DR plan

This is the last article about how to integrate Continuous Data Protection (CDP) technology (available from VBR v.11) with VDrO v.4 (formerly VAO).

In this part, we are going to see what happens when an orchestration plan is launched.

Yes, I wrote the word "see" because I created a short video showing the tasks that are automatically completed when a Disaster Recovery occurs.

If you need more details about how to set up the environment, please read the previous articles.

Let me know if videos on the YouTube platform are a good way to present valuable technological topics.

Thanks for reading and watching, and take care.

Veeam Disaster Recovery Orchestrator v.4 – How to Upgrade

Veeam Availability Orchestrator, commonly called VAO, has also changed its name with this new release.

The new name is Veeam Disaster Recovery Orchestrator (VDrO).

The main news in this version is support for the Continuous Data Protection (CDP) technology introduced in VBR v.11.

What are the main benefits of this new feature?

  • New readiness checks now including RPO and SLA.
  • Recovery Point Objective (RPO) close to real time.
  • Detailed reports to track and audit the Disaster Recovery plan of your company.

The next article will explain how to implement a DR plan using CDP.

Before starting the upgrade procedure, please:

  1. Perform a backup of all existing databases (VAO, VBR, ONE)
  2. Make sure there is enough space for the upgrade of the Microsoft SQL Server configuration database
  3. Make sure there are no orchestration plans being tested or executed
  4. Make sure there are no orchestration plans scheduled to run during the upgrade.
  5. Read the user guide carefully.

Before proceeding, please check that the current VAO version on the server is 3.0 (Picture 1).

Picture 1

After downloading the ISO file from the Veeam website and mounting it (picture 2)

Picture 2

just select the "Setup" item; the wizard immediately begins the upgrade (Picture 3).

Picture 3

Please check that the previous version of VAO has been detected. If so, the Upgrade button becomes available (Picture 4).

Picture 4

The setup checks whether the Visual C++ 2019 Redistributable package is already installed. If not, it will automatically be deployed. This procedure requires a server reboot (Pictures 5 and 6).

Picture 5

Picture 6

After the reboot is completed, relaunch the setup. The wizard will show which components will be automatically upgraded (Picture 7).

Picture 7

Now the wizard will ask for a valid license (picture 8) and will install the missing components (Pictures 9 and 10).

Picture 8

Picture 9

Picture 10

The next steps are about the Veeam Databases.
The wizard will ask to connect to them and update the VBR one if necessary (pictures 11 and 12).

Picture 11

Picture 12

The main point of the upgrade procedure is the certification step.
As shown in Picture 13, the wizard will ask the VAO administrator which certificate to use: either a self-signed, autogenerated certificate or a custom certificate issued by an external certificate authority.
My suggestion is to ask your security specialist which is the best choice for your company.

Picture 13

Picture 14

Clicking the Install button completes the upgrade wizard, as shown in Pictures 15 and 16.

Picture 15

Picture 16

After upgrading, please check the versions now installed: VAO (4.0.0.2088), VBR (11.0.0.837), and ONE (11.0.0.1379).

Just a note before ending the article: as already said, VAO (Veeam Availability Orchestrator) has changed its name to VDrO (Veeam Disaster Recovery Orchestrator).
The web pages of the product still show the old name; it will be updated in the next release.

That’s all for now guys. Take care

VDrO-Baseline 2

Let's continue the description of the VDrO features by talking about scopes (Picture 1).

Picture 1

VDrO controls access to its functionality through scopes.

A scope defines which operations users can perform.

Let's go back to my example: I created a SQL Production scope where only the users belonging to the SQL administrator group can manage and launch the DR process.

The plan components are probably the main VDrO point of attention (Picture 2).

Picture 3

From this menu, it's possible to group into a single entity all the objects you need to create a Disaster Recovery strategy.

I'm talking about the scope (the first to select), VMs (applications and services), recovery locations, plan steps, credentials, and job templates.

To be clearer, it's like taking a picnic basket and putting different dishes inside it.

Now you just have to lay the table.

How to do it? (Which dishes do I have to put into the basket?)

Just select the scope (Picture 4), then from VM Groups include the needed source VMs (Picture 5), from Recovery Locations select the DR site (Picture 6), and at the end select the plan steps, credentials, and template job.

Picture 4

Picture 5

Picture 6

The last point is the DataLabs assignment, but I'm sure you can now include them in the right scopes.

Exit from the Administrator menu and move to the main menu to create the first Recovery Plan.

The wizard is very easy to use:

Picture 7

Picture 8 shows how to select the Scope.

Picture 8

Picture 9 shows the detailed plan info and Picture 10 the plan type (the next articles will go deeper into how to set them up).

Picture 9

Picture 10

Pictures 11, 12, and 13 show how it's possible to discover the VMs that belong to the group by selecting the VM group.

Picture 11

Picture 12

Picture 13

Picture 14 shows the control options for the DR plan: if something goes wrong, the plan can be halted or allowed to continue.

Picture 14

Picture 15 shows the steps, Picture 16 the option to protect VMs powered on after the failover has been completed, and Picture 17 the RPO and RTO that the plan has to respect.

Picture 15

Picture 16

Picture 17

Picture 18 shows the template documents that will be used, while Picture 19 shows an option I find particularly interesting: the mandatory readiness check.

Before any new activity, the readiness check verifies that all components are correctly set up.

Picture 18

Picture 19

In my next article, I will cover two examples: DR-plan from Replica and DR-plan from backup. Keep in touch!