Storage performance

VDI generates a considerable workload on disk storage. This workload depends largely on the OS, applications, and usage patterns of the guests, so there is no fixed pattern. But there are common worst cases that can be used to size the storage. The most typical one consists of a group of desktops booting or opening large applications at the same time. This situation generates bursts of random (not sequential) reads and writes of 4KB blocks. To ensure good performance, we recommend sizing the storage so that it can serve 40 of these IO operations per second for each desktop it stores (40 IOPS per desktop).
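For example, under this rule a storage device holding 100 desktops should be able to sustain 100 × 40 = 4000 random 4KB IOPS.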

Measuring IOPS

If you want to test a storage device to find out how many virtual desktops it can serve (performance-wise), you can follow these steps:

Identify mountpoints

If you create a flexVDI volume called "my_volume" in the flexVDI image storage "my_image_storage", the host will mount it at /var/lib/flexvdi/image_storages/my_image_storage/my_volume. Note down the mount point of the volume, as you will need it later.
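For instance, you can confirm the mount point with df (the path below assumes the example names above):

# df -h /var/lib/flexvdi/image_storages/my_image_storage/my_volume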

Install fio on the flexVDI host

Fio is a versatile IO workload generator that can also report the performance of the workload it generates; we will use it to stress test our system.

The simplest way to install fio is with yum, as it is included in the EPEL repository. On CentOS 7, just install the epel-release package:

# yum install epel-release

On RHEL 7, you have to download the epel-release package first:

# wget https://mirrors.n-ix.net/fedora-epel/epel-release-latest-7.noarch.rpm
# yum localinstall epel-release-latest-7.noarch.rpm

Then, just install fio:

# yum install fio
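
You can check that the installation succeeded with:

# fio --version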

Use fio

To get good data, it is very important to test the full disk. Otherwise, you are very likely to get (falsely) higher values, because the test would miss the delays caused by:

  • different speeds achieved by different disk areas.
  • longer head movements (in mechanical disks).
  • saturation of host and disk caches when accessing large amounts of data.

Now you can do some simple math: calculate the number of 10GB blocks that are available (it should be the full storage size). You can get the amount of free space with:

# df -h

Or let df do the division for you with:

# df -B 10G
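
If your df supports the --output option (GNU coreutils 8.21 or later, as shipped with CentOS/RHEL 7), you can also print just the available space of the test volume:

# df -B 10G --output=avail /var/lib/flexvdi/image_storages/my_image_storage/my_volume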

Now subtract 1 block to avoid problems caused by rounding up. That is the number of blocks that you will create, which we will pass to fio in the NUM_JOBS environment variable. Assuming a 10TB disk, you can accommodate 999 blocks of 10GB each.
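
The fio script referenced in the next step is provided with this guide. As an illustration only (the option values here are assumptions, not the contents of the provided script), a minimal flexVDI.ini implementing the workload described above could look like this:

; flexVDI.ini - illustrative sketch, not the script provided with this guide
[flexVDI]
; one job per 10GB block, each writing its own file in the tested volume
directory=${MOUNT_POINT}
numjobs=${NUM_JOBS}
size=10g
; bursts of random (not sequential) 4KB reads and writes
rw=randrw
; assumed mix of roughly 30% reads, as in the sample output below
rwmixread=30
blocksize=4k
; bypass the page cache so the device is measured, not host RAM
direct=1
ioengine=libaio
; measure for a fixed time once the files have been laid out
time_based=1
runtime=120
group_reporting=1

fio expands ${MOUNT_POINT} and ${NUM_JOBS} from the environment, which is why the command in the next step sets them.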

Execute the fio script provided here, with a command like the following (replacing 999 and the path with your values). It will take a long time (hours) to fill the disk:

# MOUNT_POINT='/var/lib/flexvdi/image_storages/my_image_storage/my_volume' NUM_JOBS=999 fio flexVDI.ini | tee flexVDI.fio.out

When fio finishes, extract the measured IOPS from its output:

# grep -i iop flexVDI.fio.out
read: IOPS=59, BW=238KiB/s (243kB/s)(28.3MiB/121739msec)
write: IOPS=134, BW=539KiB/s (552kB/s)(64.1MiB/121739msec)

Now add the read and write IOPS (in our example, 59 + 134 = 193 IOPS) and divide the result by 40. This gives a good estimate of the number of desktops that the measured device can handle: here, 193 / 40 ≈ 4 desktops.
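
Using the example figures, the shell can do the math (integer division):

# echo $(( (59 + 134) / 40 ))
4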

Clean up

The fio test created files that filled the disk. We have to delete them now with:

# rm /var/lib/flexvdi/image_storages/my_image_storage/my_volume/flexVDI.*.0
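
You can verify that the space has been freed with:

# df -h /var/lib/flexvdi/image_storages/my_image_storage/my_volume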

Throughput

Storage throughput can also limit performance, but it is less likely to be the bottleneck than the required number of IOPS. Guests typically do not need high throughput at the same time; what they need are high IOPS values when they boot or when users log in, which commonly happens in bursts.
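To put numbers on this, the 40 IOPS per desktop of the sizing rule above amount to only 40 × 4KB = 160KBps of throughput per desktop, so even 100 desktops generating their worst-case random load require just around 16MBps.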

Nevertheless, a regular 7200rpm mechanical disk can read data at about 150MBps, and the caches of the flexVDI Hosts and of the disk controller will increase the speed perceived by the guests, while iSCSI over 1Gbps Ethernet is limited to 123MBps. So, even for small installations, anything below 10Gbps iSCSI (or the better 8Gbps FC) is not recommended.