VMware Broadcom – What To Do?

Broadcom's Acquisition of VMware Closed: What Now?

Broadcom has given 2024 a big shake-up.

According to the latest rumors, it seems that the licensing of the newly acquired VMware will be heavily revised.

One of the first consequences is that many customers are wondering what the right thing to do is.

I do not claim to know the correct answer.

I have given it some thought, which has led me to four future scenarios that I will discuss in this article.

  1. The customer continues the partnership with VMware/Broadcom.
  2. The customer replaces the hypervisor technology.
  3. The customer migrates its datacenter to a hyperscaler or a local cloud service provider.
  4. The customer transforms its datacenter into a "Datacenter as a Software".

For each scenario I will now describe the main pros and cons.

1. I believe Broadcom's goal is to simplify licensing as much as possible in order to have a portfolio of lean solutions that are easy to propose.

This implies eliminating some of the current solutions in order to focus energy exclusively on those with the greatest usage and revenue relevance.

Now I ask you: are the VMware solutions currently in your datacenter also the strategic ones for Broadcom?

Furthermore, are we sure that Broadcom's optimization will not touch the VMware R&D department that develops the solutions that have now become strategic?

And last but not least, what will be the price of remaining in the Broadcom-VMware ecosystem?

2. The first challenge is to equip yourself with tools capable of migrating VMs from one hypervisor technology to another.

The second is to protect them.

(Editor's note: luckily, Veeam Backup & Replication lets you do both with a single tool 🙂 )

I would add that you will also need to be lucky enough to choose a vendor that is not in the crosshairs of a new Broadcom, because ending up in the same Dantean circle would be diabolical.

Should you consider an open-source-based technology?

3. The hyperscaler model is to provide a set of customizable services.

I often hear people claim that there are "hidden costs". That is not true: they are all well documented, but understanding them in advance is often very difficult.

The design phase of the migration project will therefore be particularly important, and it must be especially accurate in order to avoid nasty surprises at the end of the month.

4. Datacenter as a Software is synonymous with a cloud-native architecture.

This implies rewriting applications and services so that they are independent of the hypervisor.

It is the new approach that over the years will become a common standard for writing code.

On this site you will find a series of articles on the container and Kubernetes world, to which I refer you.

Many questions, only one right answer?

No, I believe the best strategy is to look for the best balance in the use of the different options available, in order to end up with the solution that best suits your company's needs, with the right trade-off between costs and benefits.

One last note: never forget to include staff training plans, because on-the-job training in particularly complex scenarios is NEVER the best way to keep your systems secure, wherever they are.

Veeam & Google Cloud Platform – Part 1

The first article of 2022 is dedicated to how to protect Google Cloud Platform (GCP) instances.

The data flow and protection architecture are shown in Picture 1, which includes two Veeam components.

  1. The Veeam Backup for Google Cloud Platform (VBGP) instance is responsible for backing up and restoring GCP instances.
  2. Veeam Backup & Replication (VBR) is responsible for centrally managing the movement of backup data to and from the cloud (Data Mobility).

Picture 1

  • Note 1: VBGP can be installed in stand-alone mode or through the VBR wizard.
  • Note 2: This article will show how to connect, from VBR, to a VBGP instance already deployed in GCP.

Let’s see the steps in detail:

From the VBR console, we choose the Backup Infrastructure item.

Right-click, select Add Server, and then Google Cloud Platform (see Picture 2).

Picture 2

The next step is to enter the login credentials of the Google service account (Picture 3).

Picture 3

The wizard continues by asking for the name of the VBGP server already created (Picture 4).

Picture 4

After selecting the type of network available (Picture 5), the next step is to enter the credentials to access the repository (Picture 6).

Remember that the best protection practice is to back up the instance as a snapshot and then copy the snapshot to Google Cloud Object Storage.

Thus the 3-2-1 rule is respected: 3 copies of the data (production + snapshot + object storage) on 2 different media (primary storage + object storage), with 1 off-site copy (the object storage should belong to another region).

Picture 5

Picture 6

Once the wizard is finished, still from the VBR console, we can connect to the VBGP server console (Picture 7) to start creating protection policies.

Picture 7

After entering the login credentials (Image 8),

Image 8

it is possible to monitor the environment through an overview of the existing instances and of the protected ones (Images 9 & 10).

Image 9

Image 10

Protection policies are managed through the creation of backup policies: indicating the name (Image 12), selecting the project (Image 13), the region (Image 14), the resources (Image 15), the backup target (Image 16), the schedule, and the backup type (Images 17 to 19).

Image 11

Image 12

Image 13

Image 14

Image 15

Image 16

Image 17

Image 18

Image 19

The last two steps show the estimated monthly cost of implementing the backup policy (Image 20) and the settings for retries and notifications (Image 21).

Image 20

Image 21

Once the configuration is complete and monitoring has verified that the policy has run successfully, it is possible to proceed with the recovery (Image 22).

Image 22

The available options are:

  • Entire Instance
  • Files and Folders

The next images (23, 24, and 25) show the key steps to restore the entire instance.

Image 23

Image 24

Image 25

In the next article, we will see how to protect and restore a SQL database hosted in a GCP instance.

See you soon

Kubernetes: The components

In previous articles we have seen some details of how the Kubernetes architecture is built.

Today the working mechanisms of the Kubernetes engine will be described, indicating the name of each component; to stay faithful to the car-engine comparison, we will talk about the camshafts, valves, bearings, … that belong to the cloud-native world.

Note 1: The installation of k8s in the datacenter, in the cloud, or in a lab will not be covered here; comprehensive tutorials are already available online.

To familiarize yourself with k8s, I recommend using Minikube (on Linux) or Docker Desktop (on Windows & Mac).

Let’s begin!

Kubernetes Master: it is the main node of the cluster, on which three processes that are vital for the existence of the cluster run:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

On the master node there is also the etcd database, which stores all the configurations created in the cluster.

The nodes that take care of running the applications, and therefore the services, are called worker nodes. The processes present on a worker node are:

  • Kubelet
  • kube-proxy

kubectl: it is the official Kubernetes client (CLI), through which you can manage the cluster (via the kube-apiserver) using the API.

Some simple examples of kubectl commands are:

  • kubectl version (shows the version of k8s installed)
  • kubectl get nodes (lists the nodes in the cluster)
  • kubectl describe nodes nodes-1 (shows the health status of the node, the platform on which k8s is running (Google, AWS, …), and the allocated resources (CPU, RAM))

kube-proxy: it is responsible for managing networking, from routing to load-balancing rules.

Note 2: k8s will try to use all the libraries available at the operating-system level.

Container Runtime: it is the foundation on which the k8s technology rests.

Kubernetes supports several runtimes, among which we can mention containerd, CRI-O, and rktlet.

Note 3: The Docker runtime has been deprecated in favor of runtimes that use the CRI interface; Docker images will still continue to work in the cluster.

The basic Kubernetes objects are the following (a minimal example is sketched right after the list):

  • Pod
  • Services
  • Volumes
  • Namespace
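To make these objects more concrete, here is a minimal sketch of a Pod manifest; the name nginx-pod and the nginx image are illustrative assumptions, not taken from a real cluster. It can be applied with kubectl apply -f pod.yaml:

# pod.yaml – minimal Pod manifest (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod            # hypothetical Pod name
  namespace: default         # Namespace is one of the base objects listed above
  labels:
    app: nginx
spec:
  containers:
    - name: nginx            # container running inside the Pod
      image: nginx:1.25      # image pulled from a registry server
      ports:
        - containerPort: 80  # port exposed by the container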

The controllers provide additional functionality and are:

  • ReplicaSet
  • Deployment
  • StatefulSet
  • DaemonSet
  • Job

Among the Deployments, it is worth mentioning kube-dns, which provides name-resolution services inside the cluster; in more recent Kubernetes versions it has been replaced by CoreDNS as the default DNS server.

Add-ons: they are used to configure additional cluster functions and are placed inside the kube-system namespace (such as kube-proxy, kube-dns, and the Kubernetes Dashboard).

Add-ons are categorized according to their use:

  • Network-policy add-ons (for example, the NSX-T add-on takes care of the communication between the k8s environment and VMware)
  • Infrastructure add-ons (for example, KubeVirt, which allows integration with virtual architectures)
  • Visualization and control add-ons (for example, Dashboard, a web interface for k8s)

To be deployed, add-ons rely on the DaemonSet and Deployment controllers, as sketched below.
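As an illustration of this last point, here is a minimal sketch of a DaemonSet of the kind an add-on could use to run one copy of its agent on every node of the cluster; the name monitoring-agent and its image are purely hypothetical.

# daemonset.yaml – minimal DaemonSet sketch (hypothetical add-on agent)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent         # hypothetical add-on agent
  namespace: kube-system         # add-ons live in the kube-system namespace
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
        - name: agent
          image: example/monitoring-agent:1.0   # hypothetical image name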

The image in figure 1 summarizes what has just been explained.

Figure 1

Take care and see you soon.

Kubernetes: Know the details

A good way to describe cloud-native environments is to refer to the image of your car.

The container is the engine, k8s is the electronic control unit that manages the proper functioning of the vehicle, and the drivers, by indicating the route and the destination, select the type of service to be provided.

Today’s article will reveal some architectural details to understand how “the car” manages to reach its destination in an efficient way.

Containers are of two types:

The first is called a System Container. It is the bodywork of the car (I mean everything from the plates to the seats, steering wheel, gear lever, and accessories).

Often, for simplicity of creation, it is a Virtual Machine (VM) with a Linux operating system (it can also be Windows).

The most common services present in the VM are ssh, cron, and syslog; the file system is of type ext3, ext4, etc.

The second type is called an Application Container and is the place where the image carries out its activities.

Note 1: The image is not a single large file. It is usually made up of multiple files which, through an internal cross-referencing system, allow the application to operate correctly.

The Application Container (from now on simply container) has an operating mode based on a strict logic, where all the levels (layers) communicate with each other and are interdependent.

Figure 1

This approach is very useful as it is able to manage the changes that may occur over time in an effective and hierarchical way.

Let's take an example: when a service configuration change occurs and Layer C is updated, Layers A and B are not affected, which means they do NOT have to be modified in turn.

Since developers like to refine their own images (program files) rather than the dependencies, it makes sense to set up the service logic as indicated in Figure 2, where the dependencies are not affected by a new image.

Figure 2

Note 2: The file system on which the images are placed (in the car-engine example, we are talking about pistons, connecting rods, shafts, …) is mainly of three different types:

  • Overlay
  • Overlay2
  • AUFS

Note 3: A good piece of security advice is not to build the architecture so that passwords are contained in the images (baked in).

One of the great innovations introduced by the container world is the management of images:

In a classic high-reliability environment, the application is installed on every single node of the cluster.

With containers, the application is downloaded and deployed only when the workload requires more resources, for example when a new cluster node is added and pulls a fresh copy of the image.

For this reason, images are stored in "virtual" warehouses, which can be local or distributed on the internet: they are called registry servers.

The most famous are Docker Hub, Google Container Registry, Amazon Elastic Container Registry, Azure Container Registry.

We conclude this article by talking about the management of resources associated with a service.

The container platform uses two kernel-level features, called cgroups and namespaces, to allocate resources.

The purpose of cgroups is to assign the correct resources (CPU & RAM).

Namespaces have the purpose of grouping the different processes and making sure that they are isolated from each other (multitenancy).

The namespace type can affect all the components of the service, as indicated in the list below:

  • Cgroup
  • PID
  • Users
  • Mount
  • Network
  • IPC (Interprocess communication)
  • UTS (allows a single system to appear with different host and domain names and with different processes, useful in case of migration)

An example of limiting the resources of an application is shown in Figure 3, where the gable image, downloaded from the registry server grcgp, has a limit on the allocated RAM and CPU resources.

Figure 3
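As a rough sketch of the same idea expressed in Kubernetes manifest form, CPU and RAM limits (which cgroups then enforce at the kernel level) can be declared as follows; the Pod name gable-app and the image reference are illustrative assumptions, not the actual values used in Figure 3.

# resource-limits.yaml – declaring CPU and RAM limits (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: gable-app                               # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/gable:latest  # hypothetical registry server and image
      resources:
        requests:            # minimum resources reserved for the container
          memory: "128Mi"
          cpu: "250m"
        limits:              # hard caps enforced through cgroups
          memory: "256Mi"
          cpu: "500m"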

See you soon

Modern Applications – Episode 4: Docker Compose – YAML

The topic of this article is understanding how to automate the delivery of a microservice.

In the previous one, I showed the process flow of a service. That flow requires typing a lot of commands to launch every single container.

Is there a way to automate the entire process, making it easier?

Yes: Docker Compose is a tool for defining and running a multi-container environment.

Docker Compose works with a descriptor file that includes all the configurations. The file is written in YAML (a human-readable data-serialization language).

After writing the YAML file (in this example it is named mypersonalfile.yaml), the syntax of the command to bring the environment up is:

docker-compose -f mypersonalfile.yaml up

Let’s see an easy example using the article I wrote in the last episode as a source.

I had to type all these commands to implement the service:

a. Mongo DB commands

b. Mongo Express commands

In their place, it's possible to use the following YAML file.

mypersonalfile.yaml
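As a rough sketch of what such a file could look like for the MongoDB + Mongo Express environment of the previous episode (image tags, ports, and credentials below are illustrative assumptions, not the exact values of the original file):

# mypersonalfile.yaml – illustrative sketch of a two-container environment
version: "3.8"
services:
  mongodb:
    image: mongo:6.0                        # hypothetical image tag
    ports:
      - "27017:27017"                       # default MongoDB port
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin     # illustrative credentials only
      MONGO_INITDB_ROOT_PASSWORD: password
  mongo-express:
    image: mongo-express:latest
    ports:
      - "8081:8081"                         # Mongo Express web UI
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: admin
      ME_CONFIG_MONGODB_ADMINPASSWORD: password
      ME_CONFIG_MONGODB_SERVER: mongodb     # service name used as hostname
    depends_on:
      - mongodb

With a file like this in place, the whole environment can be started with docker-compose -f mypersonalfile.yaml up -d and stopped with docker-compose -f mypersonalfile.yaml down.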

We will encounter YAML files again when writing about the Kubernetes architecture and its protection.

That’s all for now, take care guys

Veeam Backup Office 365 & Cloud Connect

In the last few days, I have been contacted by a Service Provider to design a solution to back up the Microsoft Office 365 environment.

Actually, four months ago, I wrote three articles showing how to set up the environment using the great work of Niels and Timothy, creators and developers of the Martini project.

All the details are available by clicking Veeam Backup Office 365 & Cloud Connect:

https://lnx.gable.it/home-page/2020/11/02/vbo-365-portal-a-nice-project-just-behind-the-corner/

Why does the Service Provider need a different way to implement this service?
I think the two main reasons are:

1) The SP already has a Cloud Connect architecture and wants to use it in all possible scenarios.
2) The SP always requires official vendor support before implementing any project, and Martini does not have it. To be clearer, the RESTful API technology inside VBO is fully supported, while the Martini portal isn't, because it is not a Veeam product.

Before continuing, there is one requirement to respect: VBR Cloud Connect and VBO-365 have to be installed on the same server (a Windows Server).

Let’s start!

Picture 1 shows the high-level architecture.

Picture 1

The service provider architecture, composed of the VBO-365 and Cloud Connect architectures, is shown on the right side of Picture 1, while the left side shows the tenant architecture, where the VBR server has been installed.

Which actions can be performed by the tenant?

Backup: the tenant can't access the VBO-365 console. This means the tenant can't set up or launch any sort of backup. In other words, the backup tasks are a managed service.

Restore: the tasks can be driven by the administrator of the Microsoft Office 365 organization through the Veeam Explorers. The Cloud Connect technology creates the tunnel that connects the two entities.

Note 1: When VBR is installed, all the Veeam Explorers are installed by default.

This means that not only the traditional Veeam Explorers (for Active Directory, SQL, Oracle, Exchange, SharePoint) are installed, but also the Explorers for OneDrive and Teams, which are specific to the Microsoft 365 technology.

Note 2: Does this scenario require a VBR license?

Yes, but you can use the free community edition.

The point to highlight during the setup is the authentication step that allows the Explorers to communicate with VBO-365:

From the VBO-365 console, select “General Options” (Picture 2) and, in the Authentication tab, enable tenant authentication (for security reasons, please use your own certificate) (Picture 3).

Picture 2

Picture 3

Let’s switch to my demo environment:

1. The Service Provider's VBO-365 console has three Microsoft 365 organizations, each with a backup job (Picture 4). Two of them use modern authentication, the third uses basic authentication.

Picture 4

2. The Cloud Connect architecture has been set up in order to create a tenant called Demo-VBO (Picture 5).

Picture 5

3. The VBR tenant console shows how the connection to the service provider has been set up (Picture 6).

Picture 6

The following video shows the tasks performed by the tenant to restore their data (Exchange/SharePoint/OneDrive/Teams items) located at the Service Provider site.

Video 1

That’s all for now, take care and see you soon