Clustering and shared storage

A flexVDI platform can easily scale up by adding new Hosts. The flexVDI Manager balances the resources allocated to Pools among all the available Hosts, and runs Guests on any of them as needed. flexVDI uses the OCFS2 clustered filesystem to grant each Host exclusive access to the Guest images it is using, and to keep the cluster consistent in case of communication failures. This chapter explains how to configure and use a cluster of flexVDI Hosts.

Adding a new Host to the cluster

Adding a new Host to a flexVDI cluster takes just two steps:

  1. Install the flexVDI distribution on the new Host, and configure it as you did with the other Hosts, as explained in the Getting Started guide. Make sure to create the same virtual bridges as on the other Hosts, so that they map to the same subnets.
  2. Register the Host with your flexVDI Manager instance. Remember that this is done with flexVDI Config: in the main menu, go to "Manager" and select "Register". This time, flexVDI Config will ask you for the Manager IP address first, and then for the Manager password. After the Host is registered, the SSH keys will be updated on all the Hosts.

Once this is done, the new Host will appear in flexVDI Dashboard, and you can start using it right away.
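Bridge names must match across Hosts so that Guests attach to the same subnets wherever they run. As a quick, illustrative sanity check (the bridge names themselves are deployment-specific), you can list the bridges defined on a Host and compare the output between machines:

```shell
# List the network bridges defined on this Host.
# Run it on every Host and compare: the bridge names must match
# so that they map to the same subnets everywhere.
for iface in /sys/class/net/*; do
    [ -d "$iface/bridge" ] && echo "bridge: $(basename "$iface")"
done
true  # do not fail when no bridges are defined yet
```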

Replacing a failed Host

If you are adding a Host to replace a failed one, remember to give it the same domain name and IP address as the failed Host. Otherwise, the Manager will not recognize it as a replacement for the previous Host.

Configuring OCFS2

Once the Hosts have been registered, we have to configure the storage they are going to use. This is done with flexVDI Config, in the option "OCFS2" of the main menu.

OCFS2 is the file system used by flexVDI to manage storage shared by two or more flexVDI Hosts. It is an open-source cluster file system developed by Oracle that provides high availability and performance. flexVDI takes advantage of its cache-coherent parallel I/O, which provides higher performance, and of its error handling (nodes that lose contact with the cluster are fenced), which improves availability.

This step is required to access shared storage devices (which will later be configured as an Internal Volume in an Image Storage), even if they are used by only one flexVDI Host.

Configuring the first node

Log into one of the Hosts and run flexvdi-config. In the OCFS2 submenu, select the First_Node entry.

config_ocfs2.png

Next, flexvdi-config asks for the number of nodes that will take part in the cluster. You must know in advance the maximum number of nodes that will compose the cluster, and what their IP addresses will be. Adding or removing nodes in an OCFS2 cluster once it has been created is a complex operation. However, OCFS2 works perfectly well when only some of the nodes defined in this step are up. So if you plan a possible expansion of the cluster in the future, it is highly recommended to select a larger number of nodes now and reserve a set of IP addresses for later use. Enter the maximum number of nodes that will form this flexVDI infrastructure.

config_ocfs_nodes.png

Now, enter the hostname and IP address of each node in the cluster.

config_ocfs2_addr.png

Finally, flexVDI Config shows a summary of the cluster components. If the settings are correct, select Yes.

config_ocfs2_confirm.png

Once you confirm that the data is correct, the first node is ready.
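For reference, OCFS2 keeps the cluster layout in /etc/ocfs2/cluster.conf, which flexvdi-config manages for you. As an illustration, a two-node cluster in the standard cluster.conf format looks like this (the hostnames, addresses and cluster name below are examples; the exact file generated by flexvdi-config may differ):

```
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.10
        number = 0
        name = flexvdi1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 1
        name = flexvdi2
        cluster = ocfs2
```

Note that every node, including the reserved ones for future expansion, must appear in this file, which is why the node count is fixed when the cluster is created.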

Configuring the other nodes

After you have configured the first node, you must configure the rest of the nodes. To do this, after registering the flexVDI Hosts with the Manager instance, follow these steps on each of them:

  1. Select OCFS2 in the main menu of flexvdi-config.
  2. Select the Other_Node option.
  3. Enter the IP address of the first node in the cluster and press OK:

config_ocfs2_other.png

After doing this on every Host, the cluster will be ready to use OCFS2.
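With every node configured, you can review the resulting node list by inspecting the cluster configuration. The following is only a sketch: it inlines a sample file under /tmp so the snippet is self-contained; on a real Host you would point the awk command at /etc/ocfs2/cluster.conf instead.

```shell
# Sample OCFS2 cluster description in the standard cluster.conf format
# (on a real Host, read /etc/ocfs2/cluster.conf instead).
cat > /tmp/cluster.conf <<'EOF'
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.10
        number = 0
        name = flexvdi1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 1
        name = flexvdi2
        cluster = ocfs2
EOF

# Print "name address" for every declared node.
awk '/^node:/ {innode=1; n=""; a=""}
     innode && $1=="ip_address" {a=$3}
     innode && $1=="name"       {n=$3}
     n != "" && a != ""         {print n, a; innode=0; n=""; a=""}' /tmp/cluster.conf
# prints:
# flexvdi1 192.168.1.10
# flexvdi2 192.168.1.11
```

Every Host in the cluster should report the same node list; a mismatch means one of the nodes was configured against the wrong first node.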

Accessing shared storage

The following sections explain how to use shared storage as an Image Storage and access its Volumes, and how to move your flexVDI Manager instance to a shared storage Volume to provide high availability.