VMware Windows Cluster Shared Disk


Windows Server Failover Clustering (WSFC) can be a great fit for clustering applications and services on Microsoft Windows. Application and service availability with WSFC is just as important today as it was when Microsoft Cluster Service (MSCS) was first introduced. For quite some time, VMware vSphere has provided additional flexibility and choice that augment and extend the capabilities of WSFC. VMware KB article 2147661 details the supported configurations when using vSphere and WSFC.

We need to expand our current quorum disk by about 500 GB. The disk is currently shared by two VMs on two different ESXi hosts and is connected to our iSCSI SAN via a physical compatibility mode RDM in VMware; it backs our SQL databases managed with Failover Cluster Manager on Windows Server 2012. Storage Spaces Direct, meanwhile, is a new shared-nothing, scale-out storage solution developed by Microsoft that will soon be available as part of Windows Server 2016. If you want to learn more about it, this is a really good place to start. To test this solution using Windows Server 2016 Technical Preview 5, I decided to run through the entire solution.
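For the quorum-disk expansion mentioned above, once the underlying LUN or RDM has been grown on the SAN and rescanned in vSphere, the in-guest part can be scripted with the built-in Windows Storage cmdlets. This is a minimal sketch, assuming the quorum volume is mounted as Q: (a placeholder letter) and that it is run on the node that currently owns the disk resource.

```powershell
# Minimal sketch: extend the quorum volume inside the guest after the
# underlying LUN/RDM has already been grown on the SAN and in vSphere.
# Assumes the quorum volume is Q: (placeholder); adjust to your environment.

Update-HostStorageCache                                     # rescan so Windows sees the new disk size

$max = (Get-PartitionSupportedSize -DriveLetter Q).SizeMax  # largest size the partition can grow to
Resize-Partition -DriveLetter Q -Size $max                  # extend the partition to use the new space
```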

About Setup for Windows Server Failover Clustering on VMware vSphere: the Setup for Windows Server Failover Clustering guide describes the supported configurations for a WSFC with shared disk resources that you can implement using virtual machines with Failover Clustering for Windows. In vSphere 6.0, you can configure two or more VMs running Windows Server Failover Clustering (or MSCS for older Windows releases) using common shared virtual disks (RDMs) among them, and still be able to vMotion any of the clustered nodes without inducing a failure in WSFC or the clustered application.
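As an illustration of the shared-disk configuration described above, here is a rough PowerCLI sketch of presenting one physical compatibility mode RDM to two cluster nodes on bus-sharing SCSI controllers. The VM names, device name, and controller type are placeholders, and the supported combinations are defined in KB 2147661, so treat this as a sketch rather than a definitive procedure.

```powershell
# Rough PowerCLI sketch (placeholder names throughout) for presenting one
# physical compatibility mode RDM to both WSFC nodes.

$node1 = Get-VM -Name "wsfc-node1"            # assumed VM names
$node2 = Get-VM -Name "wsfc-node2"
$lun   = "/vmfs/devices/disks/naa.XXXXXXXX"   # placeholder device name of the shared LUN

# Attach the RDM to the first node on a new SCSI controller with physical bus sharing.
$rdm = New-HardDisk -VM $node1 -DiskType RawPhysical -DeviceName $lun
New-ScsiController -HardDisk $rdm -Type VirtualLsiLogicSAS -BusSharingMode Physical

# Point the second node at the same RDM mapping file, again on a bus-sharing controller.
$rdm2 = New-HardDisk -VM $node2 -DiskPath $rdm.Filename
New-ScsiController -HardDisk $rdm2 -Type VirtualLsiLogicSAS -BusSharingMode Physical
```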

One of the most recent updates to KB article 2147661 added native vSAN support for WSFC deployments that require shared disks when using vSAN 6.7 Update 3.

What about vSAN Stretched Clusters?

This question has come up several times since VMware announced native VMDK support with SCSI-3 Persistent Reservations (SCSI-3 PR) for WSFC. It is an important question because it could extend the capability of WSFC across sites very easily.

vSAN Stretched Clusters already provide a proven and cost-effective solution for active-active data across sites that is easy to deploy and operate without any additional hardware or appliances. WSFC shared disks can easily be shared by WSFC nodes residing in completely different sites when using vSAN Stretched Clusters.


It is important to remember that vSAN handles data availability in Stretched Clusters based on Storage Policy, while vSphere HA/DRS handle virtual machine availability.

  • With each node of a WSFC cluster residing in a different site (see the PowerCLI sketch after this list):
    • VM/Host rules can be used to ensure anti-site-affinity for these nodes
    • A Storage Policy can be configured for protection across sites
  • If it is desirable for a WSFC cluster to run only in a single site:
    • VM/Host and VM anti-affinity rules could be used
    • A Storage Policy can be configured for protection in each site or only in a single site
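As referenced above, here is a minimal PowerCLI sketch of the VM/Host group and rule configuration for the first scenario (one WSFC node pinned to each site). The cluster, host, and VM names are assumptions, and the Storage Policy itself would be created and assigned separately through SPBM.

```powershell
# Minimal PowerCLI sketch (assumed names) for keeping one WSFC node in each
# site of a vSAN Stretched Cluster using DRS groups and "should run" rules.

$cluster = Get-Cluster -Name "StretchedCluster"       # assumed cluster name

# Host groups representing the two sites / fault domains.
$siteAHosts = New-DrsClusterGroup -Name "SiteA-Hosts" -Cluster $cluster -VMHost (Get-VMHost -Name "esx-a*")
$siteBHosts = New-DrsClusterGroup -Name "SiteB-Hosts" -Cluster $cluster -VMHost (Get-VMHost -Name "esx-b*")

# VM groups, one per WSFC node.
$node1Group = New-DrsClusterGroup -Name "WSFC-Node1" -Cluster $cluster -VM (Get-VM -Name "wsfc-node1")
$node2Group = New-DrsClusterGroup -Name "WSFC-Node2" -Cluster $cluster -VM (Get-VM -Name "wsfc-node2")

# "Should run" rules keep each node in its preferred site while still allowing
# vSphere HA to restart a node in the surviving site after a site failure.
New-DrsVMHostRule -Name "Node1-SiteA" -Cluster $cluster -VMGroup $node1Group -VMHostGroup $siteAHosts -Type "ShouldRunOn"
New-DrsVMHostRule -Name "Node2-SiteB" -Cluster $cluster -VMGroup $node2Group -VMHostGroup $siteBHosts -Type "ShouldRunOn"

# Optional VM anti-affinity so the two nodes never share a host within a site.
New-DrsRule -Name "WSFC-AntiAffinity" -Cluster $cluster -KeepTogether $false -VM (Get-VM -Name "wsfc-node1","wsfc-node2")
```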


Summary


Shared disks for WSFC on vSAN are supported in traditional, Stretched Cluster, and 2 Node vSAN configurations. When deploying WSFC on vSAN, stretched or otherwise, it is important to consider each potential failure scenario at the virtualization, guest OS, and clustered application layers, and to select the most appropriate combination for the required use case.

Take our vSAN 6.7 Hands-On Lab and our vSAN 6.7 Advanced Hands-On Lab!

Continuing along from the previous installment of this series (Using iSCSI to connect to Shared Storage) we’ll take a look at configuring the disks you have attached to your prospective cluster nodes.

In the previous article we used one of several methods for connecting to shared storage. Regardless of the method employed, you will continue preparing for your cluster by configuring the disks in the OS of each node. We should all know the steps for initializing disks and creating partitions, but I’ll go through them here, with an additional note or two about the things you’ll need to do in order to share these partitions between multiple cluster nodes.

Open DISKMGMT.MSC from one of the prospective cluster nodes. You will see the shared disk displayed as Unknown, Offline, and Unallocated.
Bring the disk online by right clicking in the area to the left of the disk representation and selecting Online from the context menu.
The disk now shows up as Unknown, Not Initialized, and Unallocated.
Right click on the disk representation and select Initialize Disk from the context menu.
Select the disk(s) to initialize, select the partition style, and click OK. Important note: the MBR style will not allocate the full size of partitions exceeding 2 TB. If you have a partition greater than 2 TB in size, you must use the GPT partition style to use the entire partition. If you use MBR on a partition greater than 2 TB, the partitioning will complete without any error message, but you will only be able to access 2 TB of the partition; the rest will remain inaccessible.
Next, create a volume and format it. In this example we are using a simple volume. This is basic Windows administration that we probably all know, but I have the screenshots, so we’ll just run through the process. Right click on the disk representation and select New Simple Volume… from the context menu.
Click Next at the Welcome screen.
Click Next to accept the default maximum size.
Assign the desired drive letter by choosing it from the dropdown box. In this example we are using L. Click Next to continue.
Format the volume as NTFS and provide a volume label as desired. It is good practice to provide a descriptive label, as we will see further on in the process. You can do a Quick Format or a Full Format as desired. A full format takes longer to complete, but it checks the disk more thoroughly and marks any bad blocks so they are not used during normal operation of the system. Click Next to continue.
Click Finish to complete the creation of the volume.
When the disk(s) are configured, you will see them in Disk Management just as you would any other disk. In our example you see the L: drive. You will most likely have multiple disks, so repeat the initializing and formatting process for all of the partitions you want shared between your cluster nodes.
Once the disks are configured on the first node, take them offline. Right click the area to the left of the disk representation and select Offline from the context menu. This may seem odd, but shortly we will be bringing this disk up on the other cluster node, and if both hosts have simultaneous access to the disk there will be conflicts and the nodes will fail the cluster validation tests, preventing the initial creation of the cluster.
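For reference, the first-node steps above can also be scripted with the built-in Storage cmdlets. This is a minimal sketch assuming the shared disk appears as disk number 1 and using the L: drive letter from the example; the volume label "ClusterData" is just a placeholder.

```powershell
# Minimal sketch of the first-node steps above, assuming the shared disk is
# disk number 1; L: comes from the example and "ClusterData" is a placeholder label.

Set-Disk -Number 1 -IsOffline $false                        # bring the disk online
Initialize-Disk -Number 1 -PartitionStyle GPT               # GPT avoids the 2 TB MBR limit
New-Partition -DiskNumber 1 -DriveLetter L -UseMaximumSize  # create a partition using the whole disk
Format-Volume -DriveLetter L -FileSystem NTFS -NewFileSystemLabel "ClusterData" -Confirm:$false

# Take the disk offline again before configuring the second node, as described above.
Set-Disk -Number 1 -IsOffline $true
```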
On the second cluster node candidate, open DISKMGMT.MSC.
Bring the disks online as you did with the first node by right clicking the area to the left of the disk representation and selecting Online from the context menu.
The disk(s) appear, formatted but with a missing drive letter.
Select Rescan Disks from the Action menu as shown.
The disk(s) now appear with drive letters, but they will be different from the ones you configured on the first node. They need to match.
Right click each disk representation and select Change Drive Letter and Paths… from the context menu.
Click the drive icon and click Change.
Change the drive letter for the partition to match what you had configured on the first node by selecting it from the dropdown box. This is where you will realize the importance of clearly labeling your partitions. Such a practice will keep you from becoming confused when you are working with multiple partitions, especially if you have more than one with the same size. Click OK when you are done.
Now the second node will have drive letters that match the ones on the first node.
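The second-node steps can be scripted the same way. Again, this is a sketch that assumes the shared disk is disk number 1 and that L: is the letter used on the first node; adjust both to your environment.

```powershell
# Minimal sketch of the second-node steps above, assuming the shared disk is
# disk number 1 and that L: matches the letter assigned on the first node.

Set-Disk -Number 1 -IsOffline $false          # bring the already-formatted disk online

# Reassign the drive letter so it matches node 1. This targets the data (Basic)
# partition on the GPT disk created earlier.
Get-Partition -DiskNumber 1 |
    Where-Object { $_.Type -eq 'Basic' } |
    Set-Partition -NewDriveLetter L
```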

Now the disks are ready for the creation of the cluster, which will be the next installment of the series on Creating a Windows Cluster. Until then, happy trails!