How to find strings with PowerShell

A short article showing how easy it is to answer some everyday work needs using Microsoft PowerShell.

In my job, I often need to search for data written inside files.

Three classic requests:

1) I need to remember some info about a meeting (I always take notes during meetings)

2) I need to get a statistic about how many customers asked for a particular feature

3) I need to search for some errors in application logs

In this short article, I show you how to answer all three.

In my example, I need to find the string “find me” inside my Documents folder.

The PowerShell command is:

Get-ChildItem -Recurse -Path "C:\Users\VBR\Documents\" | Select-String -Pattern "find me"

It is composed of two parts separated by a pipe character (|).

In the first part, Get-ChildItem lists all the files under the path C:\Users\VBR\Documents\ (the -Recurse switch includes all subfolders).

In the second part, Select-String scans the content of those files and returns the lines that match the text given to -Pattern.

I also like to save the results of the command to a file and to keep just the path of each file containing the string I searched for.

The command then becomes:

Get-ChildItem -Recurse -Path "C:\Users\VBR\Documents\Test-Find" | Select-String -Pattern "find me" | Select-Object -Property Path | Out-File C:\Scripts\Results\search_script_out.txt
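
As a side note, the same approach can answer request number 2 above (counting how many customers asked for a particular feature). Here is a quick sketch, where the Requests folder and the “feature X” pattern are just hypothetical examples:

# Count the files that mention a given feature request (hypothetical folder and pattern)
(Get-ChildItem -Recurse -Path "C:\Users\VBR\Documents\Requests\" | Select-String -Pattern "feature X" -List | Measure-Object).Count

The -List switch returns at most one match per file, so the number you get is the number of files (for example, one note per customer) rather than the number of matching lines.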

To remember:
Most PowerShell path and name parameters support wildcards (*, ?, [ ]), which means you can easily widen or narrow the search to any set of files in your environment.
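
For example, the -Include parameter accepts those wildcards, so you can limit the search to specific file types; the extensions below are just an example:

# Search only text and log files under Documents (hypothetical extensions)
Get-ChildItem -Recurse -Path "C:\Users\VBR\Documents\" -Include *.txt,*.log | Select-String -Pattern "find me"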

Object Storage Integration – Wasabi

Object Storage is probably the Backup & Replication feature most used by Veeam customers since its release (9.5 U4).

Today I’m going to cover the improvement now available in version 10, and I’ll show you how it works when it is coupled with Wasabi Object Storage.

Why Wasabi?
The reason is quite simple.
Every Veeam SE has 1 TB of Wasabi storage available to work with, and this is a much appreciated gift because it lets me test VBR features in my personal lab.

So, thank you in advance, Wasabi guys.

This is the first of three articles where I’m going to show how to implement the Object Storage integration with VBR:

  1. Configuring Wasabi Bucket
  2. Implementing Backup and Replication
  3. Performing backup and restore tests

Let’s start with the first point!!!

After registering on the Wasabi site (https://wasabi.com/), sign in and explore the main menu. What surprised me immediately is how easily you can work with the platform.

From “Users”, just create a user by following the wizard: type a name (Picture 1), optionally create a group (Picture 2), and select the right permissions on page three (Picture 3).

Picture 1

Picture 2

Picture 3

Now move on to the Access Keys menu and create the two keys. A good suggestion is to save the keys to your PC by downloading them (Picture 4).

Picture 4

Now it’s time to work with the Bucket menu and see how easy it is to create a new container (Picture 5).

Picture 5

Now we are ready to use it with VBR (Veeam Backup & Replication).
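
If you want to double-check the keys and the bucket before opening the VBR console, and you happen to have the AWS CLI installed, a quick optional test against Wasabi’s S3-compatible endpoint looks like this (the profile name is just an example):

# Store the access key and secret key from Picture 4 in a dedicated profile
aws configure --profile wasabi

# List the buckets on Wasabi; the one created in Picture 5 should appear
aws s3 ls --profile wasabi --endpoint-url=https://s3.wasabisys.com

If the bucket shows up, the credentials and the container are ready for VBR.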

See you soon

XFS – Performance

In the previous two articles, I explained how to configure and set up an XFS repository with Veeam Backup & Replication v.10 (VBR).

In this new article, I’m going to cover why this is a very useful technology and should be adopted as soon as possible.

The main reason is:

“XFS linked-clone technology helps VBR to transform the backup chain” 

Let’s see what happens with Synthetic Full.

What is a Synthetic Full?

It’s a smart way for VBR to create a full restore point while transferring only an incremental backup from production.

The process is composed of two phases.

First, it creates a normal incremental backup.

Then it builds a full backup file by consolidating all the previous backups (full and incrementals).

This process normally requires a lot of work, because VBR asks the repository to copy, rewrite, and delete data blocks.

The XFS integration allows the system to avoid moving any blocks at all: the filesystem simply re-points its metadata, creating a full backup in one shot.

The result is super-fast full backup creation.
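
VBR triggers this block cloning through the XFS reflink capability. You can see the same effect at the shell level with GNU cp; the file names and the mount point below are just examples, not what VBR runs internally:

# Clone a large backup file: only metadata is written, the data blocks are shared
cp --reflink=always full-backup.vbk synthetic-full.vbk

# Free space on the repository volume barely changes, even though both files report the full size
df -h /backup/xfs-01

The copy completes almost instantly regardless of the file size, which is exactly why the synthetic full below is so fast.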

Let’s see with an example:

A classic full backup took 7 minutes (Picture 1).

Picture 1

An incremental backup took 2 minutes and 30 seconds (Picture 2).

Picture 2

What about a Synthetic Full?

Picture 3 shows that it needs less than 30 seconds (plus the time needed to transfer the incremental data).

So Amazing technology and Veeamzing integration!!!

Picture 3

That’s all for now, guys. See you soon and take care.

How to add an XFS Repository to Veeam

This is the second article about how to set up a Linux Veeam repository using the XFS technology.

In my last article, I wrote about how to create an XFS disk, and now we are going to cover how to integrate it with VBR.

There are just two steps: 

1. Adding the new Linux server to the VBR managed servers.

2. Creating the repository and enabling the XFS integration.

1. Before working with the VBR console, it’s necessary to check the firewall status and, more precisely, whether the required ports are open so the system can work properly.

In this lab the firewall is managed with the ufw command:

sudo ufw status (to check the status) 

If the firewall is disabled, please change its status with the command:

sudo ufw enable  (corrected on 8th May 2021)

Then open the ports with the following command:

sudo ufw allow #port/protocol

In my example I launched the following two commands:

sudo ufw allow 22/tcp

sudo ufw allow 2500:3300/tcp

as shown in the Veeam user guide (Picture 1).

Picture 1

A final check verifies that SSH is actually listening on port 22:

sudo lsof -i:22

The output is:

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 915 root 3u IPv4 27288 0t0 TCP *:ssh (LISTEN)
sshd 915 root 4u IPv6 27290 0t0 TCP *:ssh (LISTEN)

2. Now we are ready to create the new XFS repository:

  • From the VBR console, add a new Linux server (Picture 2)

Picture 2

  • Click on the Advanced button and check that the ports match the ones opened on the firewall (Pictures 3 and 4)

Picture 3

Picture 4

  • Add a new repository, choosing the server you just added (in my case its name is cento01).

In the repository options, browse the server folders, select the XFS one, and tick the Use fast cloning option (Pictures 5 and 6).

Picture 5

Picture 6

Complete the task with some more clicks.

Note 1: If you need more details about how to set up the firewall, please have a look at the following site:

Linux Firewall

The next article will talk about performance. See you soon and take care.

XFS & Veeam Repository

Today I’m going to talk about how to create a new Veeam repository using the XFS file system.

As you probably already know, v.10 of Backup & Replication loves Linux. There are three top features that attest to it:

  • XFS integration
  • Proxy Linux
  • Direct NFS Repository

This first article covers the XFS integration and the steps you should follow to use this smart technology with Veeam repositories.

We will have three major steps:

  1. Adding New Disk and formatting it as XFS
  2. Adding a Backup Repository
  3. Working and testing with XFS integration

So, let’s start with point 1 by recalling how to add a new disk to a Linux server (we assume you have already attached a disk to your physical or virtual server).

The first command is lsblk, which shows which disks have been recognized by the operating system (in my case the new disk appears as sdc):

 sda           8:0    0   16G  0 disk
 ├─sda1        8:1    0  600M  0 part /boot/efi
 ├─sda2        8:2    0    1G  0 part /boot
 └─sda3        8:3    0 14.4G  0 part
   ├─cl-root 253:0    0 12.8G  0 lvm  /
   └─cl-swap 253:1    0  1.6G  0 lvm  [SWAP]
 sdb           8:16   0  200G  0 disk
 └─sdb1        8:17   0  200G  0 part /media/RepoXFS1
 sdc           8:32   0   16G  0 disk
 sr0          11:0    1    7G  0 rom

Running the command fdisk -l /dev/sdc, it’s possible to confirm the correct size of the disk:

 Disk /dev/sdc: 16 GiB, 17179869184 bytes, 33554432 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes

Now it’s time to create a new partition (this procedure deletes any file system already present on the disk) with the command fdisk /dev/sdc.

Just follow the steps below inside fdisk to create the new partition:

 n (to create a new partition)

 p (to create a primary partition)

 1 (default)

 First sector (default)

 Last sector (default) (if you want to use all the disk capacity)

 w (to write the partition table to disk and exit)

Relaunching the lsblk command, it’s possible to see that the sdc1 partition has appeared.

lsblk /dev/sdc

 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
 sdc 8:32 0 16G 0 disk
 └─sdc1 8:33 0 16G 0 part /media/RepoXFS2

Three more steps to complete the first phase: 

1. Creating an XFS file system with Data-Block Sharing enabled (reflink=1):

mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdc1

2. Creating the mount point on your server with the command:

mkdir -p /backup/xfs-01

3. Mounting the file system by adding the following line to the /etc/fstab file:

 /dev/sdc1           /backup/xfs-01             xfs          defaults     0   0

If you know the UUID of the partition (blkid /dev/sdc1), you can also use the following line instead of the previous one:

 UUID=UUID  /backup/xfs-01   xfs defaults 0 0
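
To retrieve the UUID, run blkid against the partition; the value below is just an example of what the output looks like:

sudo blkid /dev/sdc1
/dev/sdc1: UUID="2f1a63c4-98d1-4bfa-9c3b-5a0e7d6f1c22" TYPE="xfs"

Copy the UUID value into the fstab line above.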

Reboot the server and everything should work.
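
If you prefer to verify everything without rebooting, a quick check (assuming the fstab line above) is:

# Mount everything listed in /etc/fstab, including the new entry
sudo mount -a

# The output should contain reflink=1, confirming that fast clone will be available
sudo xfs_info /backup/xfs-01 | grep reflink

# Confirm the file system is mounted with the expected size
df -h /backup/xfs-01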

See you soon with the second phase, and take care.

Happy 2020 – Reports from Lecco

I spent the last hours of 2019 in Lecco, a nice town on Lake Como (BTW, the correct name of the lake is Lario 🙂).

The atmosphere was very good: people walking around the town centre while a Norwegian music group played, using ice inside buckets as drums.

The pubs were very crowded, but after a few minutes in a queue we got some good glasses of wine, seated in front of the lake (it’s probably the best way to feel the holiday spirit). The most important moment was the fireworks at midnight to celebrate the new year. Here you can find some pics I took with my mobile phone. So if next year you want a relaxed way to end the year, please visit Lecco.