Veeam & Google Cloud Platform – Part 2

In the previous article, we saw how to use Veeam Backup & Replication (VBR) as a framework to protect the instances (VMs) running in the Google Cloud Platform (GCP).

The integrated component of VBR that automates backup and restore processes is VBGP (Veeam Backup for Google Platform), now in its second version (January 2022).

VBGP can back up Google instances at the image level but, to date, it cannot perform granular, application-level restores.

Note 1: VBGP can create “application-consistent” backups of the instances through:

  • VSS (Volume Shadow Copy Service) for Microsoft Windows operating systems.
  • Customizable scripts for Linux operating systems (see the sketch after this list).
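
As a concrete illustration, here is a minimal sketch of what a Linux pre-freeze script might look like. The data volume path and the use of fsfreeze to quiesce it are assumptions for the example, not Veeam requirements; a matching post-thaw script would unfreeze the volume with `fsfreeze -u`.

```python
#!/usr/bin/env python3
# Hypothetical pre-freeze script: quiesce a data volume before the snapshot.
# The mount point is an assumption for this example.
import subprocess
import sys

MOUNT_POINT = "/var/lib/app-data"

def main() -> int:
    try:
        # Flush pending writes, then freeze the filesystem so the snapshot
        # captures a consistent state; the post-thaw script runs `fsfreeze -u`.
        subprocess.run(["sync"], check=True)
        subprocess.run(["fsfreeze", "-f", MOUNT_POINT], check=True)
    except subprocess.CalledProcessError as exc:
        print(f"pre-freeze failed: {exc}", file=sys.stderr)
        return 1  # a non-zero exit code signals that quiescing failed
    return 0

if __name__ == "__main__":
    sys.exit(main())
```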

In cases where transaction log backup or granular recovery of application objects is required, the Veeam Agent (VA) must be used.

Note 2: At www.gable.it you will find many articles detailing how to implement Veeam Agents.

Note 3: The VBR backup server can be installed either in the cloud (for example, as a GCP instance) or on-premises. In all scenarios, correct connectivity between the components must be ensured.

Note 4: VBR version 12 (due out in 2022) will add a number of cloud enhancements, for example the ability to manage the deployment of Veeam Agent components without first having to create a VPN between the on-premises VBR server and the instances to be protected.

Let’s now look at the two main phases of backing up an instance:

The first phase performs discovery and deployment of the agent on the instance (see picture 1): from the Inventory menu, create a Protection Group.

Picture 1

The second phase is the creation of the backup job, selecting Veeam Agent for Windows (picture 2).

Picture 2

During the wizard, select Entire Computer under Backup Mode (picture 3) and the backup repository under Storage (picture 4).

Picture 3

Picture 4

The focus of this article is securing the application (in this scenario, MS SQL).

After enabling application-aware processing (picture 5), it is possible to operate at the transaction-log level, choosing whether to truncate the logs after each backup operation or to back up the transaction logs themselves (pictures 6-8). A sketch of what this means at the SQL Server level follows the screenshots.

Picture 5

Picture 6

Picture 7

Picture 8
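
To make the log options more concrete, here is a minimal sketch of what “back up and truncate the transaction log” boils down to at the SQL Server level. Veeam drives this internally through application-aware processing; the connection string, database name, and backup path below are hypothetical.

```python
# Illustrative sketch only: the T-SQL that log backup/truncation boils down to.
# Server, database, and path are hypothetical; Veeam handles this internally.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01;DATABASE=master;Trusted_Connection=yes;"
)

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    # In the FULL recovery model, BACKUP LOG writes the transaction log to a
    # backup file and then truncates its inactive portion, which is what
    # prevents the log file from growing indefinitely between full backups.
    conn.cursor().execute(
        "BACKUP LOG [AppDB] TO DISK = N'D:\\Backups\\AppDB_log.trn'"
    )
```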

After starting the job, we check that at least one restore point is present under the Disk node (see picture 9).

Picture 9

We conclude this article by describing the recovery options of Veeam Agent for Windows (picture 10):

  • Towards VMware & Hyper-V virtual architectures:
    • Instant Recovery
    • Volume restore
    • Disk export (VMDK, VHD, VHDX)
  • Towards public cloud architectures:
    • AWS
    • Azure
    • GCP
  • Creation of a recovery media to perform a bare-metal restore
  • File and folder recovery (picture 10; also available with VBGP)
  • Application object recovery (pictures 11 & 12; available only via VA)

Picture 10

Picture 11

Picture 12

All the recovery options using Veeam Explorer for SQL are described at the following site.

Note 5: In the example, a Scale-Out Backup Repository was chosen, which has the advantage of copying data to Google Object Storage (see picture 13). Version 12 of VBR will allow writing directly to object storage.

Picture 13

See you soon

Veeam & Google Cloud Platform – Part 1

The first article of 2022 is dedicated to securing Google Cloud Platform (GCP) instances.

The protection flow and architecture are shown in picture 1, where two Veeam components appear.

  1. The Veeam Backup for Google Platform (VBGP) instance is responsible for performing backups and restores of GCP instances.
  2. Veeam Backup & Replication (VBR) centrally manages the movement of backup data to and from the cloud (Data Mobility).

Picture 1

  • Note 1: VBGP can be installed in stand-alone mode or through the VBR wizard.
  • Note 2: This article will show how to connect VBR to a VBGP instance already deployed in GCP.

Let’s see the steps in detail:

From the VBR console, we choose the Backup Infrastructure item.

Right-click, select Add Server, and then Google Cloud Platform (see picture 2).

Picture 2

The next step is to enter the credentials of the Google service account (picture 3). A quick way to sanity-check the key outside Veeam is sketched just after the screenshot.

Picture 3
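
Before running the wizard, it can be useful to verify that the service account key works and can read Compute resources. The following sketch uses the google-auth and google-api-python-client libraries; the key file path, project, and zone are placeholders, not values required by Veeam.

```python
# Quick sanity check of a GCP service account key outside Veeam.
# Key file path, project, and zone below are placeholders.
from google.oauth2 import service_account
from googleapiclient import discovery

creds = service_account.Credentials.from_service_account_file(
    "veeam-sa-key.json",
    scopes=["https://www.googleapis.com/auth/compute.readonly"],
)
compute = discovery.build("compute", "v1", credentials=creds)

# Listing instances proves the key is valid and can read Compute resources.
resp = compute.instances().list(
    project="my-project", zone="europe-west1-b"
).execute()
for inst in resp.get("items", []):
    print(inst["name"], inst["status"])
```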

The wizard continues by asking for the name of the previously created VBGP server (picture 4).

Picture 4

After selecting the type of network in place (picture 5), the next step is to enter the credentials to access the repository (picture 6).

Remember that the protection best practice is to back up the instance as a snapshot and then copy the snapshot to Google Cloud Object Storage.

This respects the 3-2-1 rule: three copies of the data (production + snapshot + object storage) on two different media (primary storage + object storage), with one copy offsite (the object storage should reside in another region).

Picture 5

Picture 6

Once the wizard is finished, still from the VBR console, we can connect to the VBGP server console (picture 7) to start creating protection policies.

Picture 7

After entering the login credentials (picture 8),

Picture 8

it is possible to monitor the environment through an overview of the instances present and of those protected (pictures 9 & 10).

Picture 9

Picture 10

Protection policies are managed through the creation of backup policies (picture 11): indicate the name (picture 12), then select the project (picture 13), the region (picture 14), the resources (picture 15), the backup target (picture 16), and the schedule and backup type (pictures 17 to 19).

Picture 11

Picture 12

Picture 13

Picture 14

Picture 15

Picture 16

Picture 17

Picture 18

Picture 19

The last two steps show the estimated monthly cost of implementing the backup policy (picture 20) and the settings for retries and notifications (picture 21).

Picture 20

Picture 21

Once the configuration is complete and monitoring confirms that the policy has run successfully, it is possible to proceed with recovery (picture 22).

Picture 22

The available options are:

  • Entire Instance
  • Files and Folders

The next pictures (23-25) show the key steps to restore the entire instance.

Picture 23

Picture 24

Picture 25

In the next article, we will see how to protect and restore a SQL database hosted in a GCP instance.

See you soon

Veeam Backup & Replication: License count

Starting July 1, 2022, the sale of perpetual per-socket licenses of Veeam Backup & Replication™, Veeam Availability Suite™, Veeam Backup Essentials™, and Veeam ONE™ will cease for both new and existing customers.

Products currently in operation will continue to work, but it will not be possible to purchase new socket licenses to expand them.

The licenses available for purchase are Veeam Universal Licenses (VUL), which use the single workload as the unit of measure.

The most important advantages of the VUL model can be summarized as follows:

  1. Ability to protect any supported workload (such as instances in AWS, Azure, and GCP) and not just VMware and Hyper-V virtual machines.
  2. Freedom to move licenses as needed between all supported workloads.

Note 1: Each instance can be used to protect 500 GB of source data on a NAS.

Note 2: Let’s take an example to simplify the count. Assume we need to protect an environment made up of 50 Hyper-V VMs, 30 instances in Azure (or AWS or GCP), 10 physical servers, and 5 TB of NAS data.

The total number of instances is the algebraic sum of:

a. 50 (Hyper-V VMs) + 30 (Azure) + 10 (servers) + 10 (NAS) = 100 instances = 10 VUL

If 20 Hyper-V VMs are migrated to Azure, the count becomes:

b. 30 (Hyper-V VMs) + 50 (Azure) + 10 (servers) + 10 (NAS) = 100 instances = 10 VUL

As you can see, the total number of instances does not change.
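
The same arithmetic can be expressed as a small script. This is just a sketch of the count used above: following the article, one NAS instance covers 500 GB of source data (Note 1) and the totals work out to one VUL per 10 instances; check the actual terms with your Veeam contact.

```python
# Back-of-the-envelope VUL count for the example above.
# Counting rules follow the article: 1 NAS instance per 500 GB of data,
# and the totals above equate 10 instances to 1 VUL.
import math

def vul_count(vms: int, cloud: int, physical: int, nas_gb: int) -> int:
    nas_instances = math.ceil(nas_gb / 500)          # Note 1
    total = vms + cloud + physical + nas_instances   # algebraic sum of instances
    return math.ceil(total / 10)                     # instances -> VUL

print(vul_count(50, 30, 10, 5000))  # a. before migration -> 10
print(vul_count(30, 50, 10, 5000))  # b. after moving 20 VMs to Azure -> 10
```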

The good news is that Veeam has a plan available to help customers migrate their licenses.

Your Veeam Sales Representative will be able to advise you on the best options available.

Note 3: In this scenario, it is essential to provide your Veeam contact with the log files.

The one describing license usage is called VMC.log.

See you soon

Digital Transformation & Data Mobility

If Cloud has been the most used word of the last five years, the words buzzing around the IT world in the last five months are Digital Transformation.

From Wikipedia:

“Digital Transformation (DT or DX) is the adoption of digital technology to transform services and businesses, through replacing non-digital or manual processes with digital processes or replacing older digital technology with newer digital technology.”

Or: Digital Transformation must help companies become more competitive through the fast deployment of new services that are always aligned with business needs.

Note 1: Digital Transformation is the basket, the technologies to be used are the apples, the services are the means of transport, and the shops are the clients/customers.

1. Can all the already-existing architectures work for Digital Transformation?

  • I prefer to answer by rephrasing the question in more appropriate words:

2. Does Digital Transformation require that data, applications, and services move from and to different architectures?

  • Yes, this is a must, and it is called Data Mobility.

Note 2: Data Mobility refers to the innovative technologies able to move data and services among different architectures, wherever they are located.

3. Does Data Mobility mean that services can be independent of the underlying infrastructure?

  • Not completely; it means that, even though there is not yet a standard language allowing different architectures/infrastructures to talk to each other, Data Mobility is able to overcome this limitation.

4. Is it independent of any vendor?

  • When a standard is released, all vendors want to implement it as soon as possible because they are sure those features will improve their revenue. Currently, this standard does not yet exist.

Note 3: I think the reason is that there are so many objects to count, analyze, and develop that the economic effort is, at the moment, not justified.

5. Is a ready “Data Mobility” technology already there?

  • The answer could be quite long but, to make a long story short, the rest of this article is composed of two main parts:

  • Application Layer (Container – Kubernetes)
  • Data Layer (Backup, Replica)

Application Layer – Container – Kubernetes

In the modern world, services run in virtual environments (VMware, Hyper-V, KVM, etc.).

There are still old services that run on legacy architectures (mainframe, AS/400, …); old does not mean they are not updated, just that they have a very long history.

In the coming years, services will run in a special “area” called a “container”.

A container runs on an operating system and can be hosted on virtual, physical, or cloud architectures.

Why are containers, and the skills around them, in such demand?

There are many reasons; I list them below.

  1. IT managers need to move data among architectures in order to improve resilience and lower costs.
  2. Container technology simplifies the developers’ work because it relies on a standard, widely used language.
  3. Services running in containers are fast to develop, update, and change.
  4. The container is de facto a new standard with a great advantage: it overcomes the obstacle of missing standards among architectures (private, hybrid, and public cloud).

A deep dive on point 4.

Every company has its own core business, and in the majority of cases it needs IT technology.

Companies of any size?
Yes: just think about your personal use of your mobile phone, perhaps to book a table at a restaurant or buy a movie ticket. I’m also quite sure it will help us get over the Covid threat.

This is the reason why I still think IT is not a “cost” but a way to gain more business and revenue by improving efficiency in any company.

Are there specific features that enable data mobility in a Kubernetes environment?

Yes; one example is Kasten K10, which has many advanced workload-migration features (the topic will be covered in depth in future articles). A minimal sketch follows.
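
As a taste of what working with K10 looks like, here is a minimal sketch that creates a daily backup policy through the Kubernetes API. The policy fields follow Kasten’s Policy CRD as I understand it, and the application namespace and policy name are placeholders.

```python
# Sketch: create a daily Kasten K10 backup policy via its Policy CRD.
# The app namespace and policy name are placeholders.
from kubernetes import client, config

config.load_kube_config()

policy = {
    "apiVersion": "config.kio.kasten.io/v1alpha1",
    "kind": "Policy",
    "metadata": {"name": "sample-app-daily", "namespace": "kasten-io"},
    "spec": {
        "frequency": "@daily",
        "actions": [{"action": "backup"}],
        # Select the application (namespace) the policy protects.
        "selector": {
            "matchLabels": {"k10.kasten.io/appNamespace": "sample-app"}
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="config.kio.kasten.io",
    version="v1alpha1",
    namespace="kasten-io",
    plural="policies",
    body=policy,
)
```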

Data Layer

What about services that can’t be containerized yet?

Is there a simple way to move data among different architectures?

Yes: it is possible using copies of the data of VMs and physical servers.

In this business scenario, it is important that the software can create backups/replicas wherever the workloads are located.

Is that enough? No: the software must also be able to restore data across architectures.

For example, a customer may need to restore some on-premises workloads of its VMware architecture to a public cloud, or restore the backup of a VM located in a public cloud to an on-premises Hyper-V environment.

In other words, it means working with backup/replica and restore in a multi-cloud environment.

The next pictures show the Data Process.

I call it “the cycle of data” because, starting from a copy, it is possible to freely move data to and from any infrastructure (public, hybrid, or private cloud).

Pictures 1 and 2 are just examples of the data-mobility concept; they can be extended by adding more platforms.

The starting point of picture 1 is an on-premises backup that can be restored either on-premises or in the cloud. Picture 2 shows the backup of a public-cloud workload restored in the cloud or on-premises.

It’s a circle where data can be moved around the platforms.

Note 4: A good suggestion is to use a data-mobility architecture to set up a cold disaster-recovery site (cold because the data used to restore the site come from backups).

Picture 1

Picture 2

One last point completes this article: the replication features.

Note 5: By replica, I mean a way to create a mirror of the production workload. Compared to a backup, in this scenario the workload can be switched on without any restore operation, because it is already written in the language of the host hypervisor.

The main purpose of replica technology is to create a hot disaster-recovery site.

More details about how to orchestrate DR are available on this site under Veeam Availability Orchestrator (now Veeam Disaster Recovery Orchestrator).

Replication can be implemented with three different technologies:

  • LUN/storage replication
  • I/O splitting
  • Snapshot-based replication

I’m going to cover these scenarios and Kasten K10 business cases in future articles.

That’s all for today folks.

See you soon, and take care.