Shared storage
Accessing shared storage
flexVDI 3.1 supports three types of storage: OCFS2, Gluster and External. All three of them make it possible to access a shared storage space among the hosts of your cluster. Usually, you will first need to make one or more disk devices available to the host. These disks can be, for instance, physical SAS disks or iSCSI targets. SAS disks are autodetected during boot; iSCSI targets must first be configured with the iscsiadm tool. To discover and configure the different types of shared storage that can be made available in a flexVDI cluster, read the Red Hat storage management documentation.
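For instance, discovering and logging into an iSCSI target with iscsiadm looks roughly like this (the portal address and target name below are placeholders; replace them with your own values):
# iscsiadm -m discovery -t sendtargets -p 192.168.1.100
# iscsiadm -m node -T iqn.2019-01.com.example:storage.lun1 -p 192.168.1.100 --login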
Next, flexVDI Agent must be aware of these disks. It periodically rescans the system to discover newly plugged or unplugged disks. Furthermore, it will detect multipath disks, if you configured them, and use them instead of one of their components. By default, it discovers new disk devices every 10 seconds. If you want to change this period, edit /etc/flexvdi/flexvdi-agent.conf; e.g., to set a period of 30 seconds:
[monitors]
...
disks = 30
Once your hosts have access to shared storage, read the section on setting up flexVDI storage with flexVDI Dashboard. The following sections give additional details on setting up the different storage technologies supported by flexVDI, and on moving your flexVDI Manager instance to a shared storage Volume to provide high availability.
Storage performance
VDI generates considerable workload on disk storage. This workload depends largely on the OS, applications, and usage patterns of the guests, so there is no fixed pattern. But there are common worst cases that can be used to size the storage. The most typical one consists of a group of desktops booting or opening large applications at the same time. This situation generates bursts of random (not sequential) reads and writes of 4KB blocks. To ensure good performance, we recommend sizing the storage to be able to serve 40 of these IO operations per second for each desktop it stores (40 IOPS per desktop).
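As a quick sizing example (the desktop count is just an illustrative figure): a storage device that hosts 100 desktops should be able to sustain
100 desktops x 40 IOPS per desktop = 4,000 random 4KB IOPS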
Measuring IOPS
If you want to test a storage device to find out how many virtual desktops it can serve (performance-wise), you can follow these steps:
Identify mountpoints
If you create a flexVDI volume called "my_volume" in the flexVDI image storage "my_image_storage", the host will mount it in /var/lib/flexvdi/image_storages/my_image_storage/my_volume. Note down the mount point of the storage, as you will need it in the next step.
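For instance, you can list the mounted flexVDI volumes with standard tools (the grep pattern simply matches the default path layout shown above):
# mount | grep image_storages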
Install fio on the flexVDI host
fio is a versatile IO workload generator that can also report the performance of the workload it generates; we will use it to stress-test our system.
The simplest way to install fio is with yum. Fio is included in the EPEL repository. On CentOS 7, just install the epel-release package:
# yum install epel-release
On RHEL 7, you have to download the epel-release package first:
# wget https://mirrors.n-ix.net/fedora-epel/epel-release-latest-7.noarch.rpm
# yum localinstall epel-release-latest-7.noarch.rpm
Then, just install fio:
# yum install fio
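You can verify the installation by checking the installed fio version:
# fio --version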
Use fio
To get good data, it is very important to test the full disk. Otherwise, you are very likely to get (falsely) higher values, because the test would miss the delays caused by:
- different speeds achieved by different disk areas.
- longer head movements (in mechanical disks).
- saturation of host and disk caches when accessing large amounts of data.
Now you can do some simple math: calculate the number of 10GB blocks that are available (they should cover the full storage size). You can get the amount of free space with:
# df -h
Or let df do the division for you with:
# df -B 10G
Now subtract 1 block to avoid problems caused by rounding up. That is the number of blocks that you will create. Assuming a 10TB disk, you can accommodate 999 blocks of 10GB each.
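For that 10TB example, the arithmetic is:
10TB / 10GB per block = 1,000 blocks; 1,000 - 1 = 999 blocks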
Execute the fio script provided here, with a command like (replacing 999 and the path with your values):
# MOUNT_POINT='/flexvdi/image_storages/my_image_storage/my_volume' NUM_JOBS=999 fio flexVDI.ini > flexVDI.fio.out
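The contents of the linked flexVDI.ini script are not reproduced here. As a rough illustration only, a fio job file that generates the kind of random 4KB workload described above could look like the following sketch; every option value in it is an assumption, not the real script:
[global]
; write the test files on the shared volume passed through the environment
directory=${MOUNT_POINT}
; one 10GB file per job, so that the test covers the whole disk
size=10G
; random 4KB reads and writes, bypassing the page cache
blocksize=4k
rw=randrw
direct=1
ioengine=libaio
; measure for a fixed time once the files have been laid out
runtime=120
time_based
group_reporting

[flexVDI]
; one job per 10GB block, as computed in the previous step
numjobs=${NUM_JOBS}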
Execute fio; it will take a long time (hours) to fill the disk. You can also pipe its output through tee to follow the progress; when it finishes, extract the IOPS figures with grep:
# fio flexVDI.ini | tee flexVDI.fio.out
# grep -i iop flexVDI.fio.out
  read: IOPS=59, BW=238KiB/s (243kB/s)(28.3MiB/121739msec)
  write: IOPS=134, BW=539KiB/s (552kB/s)(64.1MiB/121739msec)
Now add the read and write figures (in our example, 59 + 134 = 193 IOPS) and divide the sum by 40. You will get a good estimate of the number of desktops that the measured device can handle.
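With the example figures above:
193 IOPS / 40 IOPS per desktop ≈ 4.8, so the measured device can handle about 4 desktops.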
Clean up
During the execution of the fio test we created test files that filled the disk. We have to delete them now with:
# rm /flexvdi/image_storages/my_image_storage/my_volume/flexVDI.*.0
Throughput
Other characteristics of the storage, such as throughput, can also limit performance, but they are less likely to be the bottleneck than the required number of IOPS. Typically, guests do not need high throughput at the same time; rather, they need high IOPS values when they boot or when users log in, events which commonly happen in bursts.
Nevertheless, note that a regular 7200rpm mechanical disk can read data at about 150MBps, and the caches of the flexVDI Hosts and of the disk controller will further increase the speed perceived by the guests, while iSCSI over 1Gbps Ethernet is limited to about 123MBps. So, even for small installations, anything below 10Gbps iSCSI (or the even better 8Gbps FC) is not recommended.
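As a sanity check of the 1Gbps figure (the overhead estimate is approximate): 1Gbps ÷ 8 bits per byte = 125MBps of raw line rate, and subtracting Ethernet, TCP and iSCSI framing overhead leaves roughly 123MBps of usable payload.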