When VMware was founded in 1998, its products were deployed primarily in the non-production environments of data centers.
Now, VMware often commands 50% or more of the production workloads. The proliferation of VMware has created new challenges for the underlying storage subsystems to effectively manage and protect VMware datastores. One of the early challenges with VMware was the storage limitation of its Virtual Machine File System (VMFS). As a result, customers would often use Raw Device Mapping (RDM) to provision storage to VMs. RDMs enable the use of native SAN features, for example offloading backups to the array's built-in copy functions. But VMware has continued to improve VMFS: VMFS5 eliminated the 2TB datastore size limit, raising it to 64TB. This change, in concert with the vStorage APIs for Array Integration (VAAI), has allowed designers to rely less on RDMs in favor of VMFS. VAAI, introduced with vSphere 4.1, allows specific storage management operations to be offloaded to the storage hardware. There are four VAAI block primitives, one of which is particularly helpful for data protection operations.
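For designers who want to confirm which datastores are still on an older VMFS version, or how close they sit to the old 2TB ceiling, the following minimal sketch using the open-source pyVmomi SDK lists each VMFS datastore's on-disk version and capacity. The vCenter hostname and credentials are placeholders, and the exact properties shown are those exposed by pyVmomi, not a VMware-prescribed procedure.

```python
# Minimal sketch (pyVmomi): list VMFS datastores with their on-disk
# version and capacity. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        info = ds.info
        # Only VMFS datastores expose a .vmfs property (NFS/vVol datastores do not)
        if isinstance(info, vim.host.VmfsDatastoreInfo):
            cap_tb = ds.summary.capacity / (1024 ** 4)
            print(f"{ds.name}: VMFS {info.vmfs.version}, {cap_tb:.1f} TB")
    view.DestroyView()
finally:
    Disconnect(si)
```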
VAAI ATS (Atomic Test and Set) helps in shared storage environments where multiple hosts access the same VMFS datastore. ATS is an alternative to SCSI reservations: instead of locking the entire LUN for every metadata update, it locks only the individual blocks being modified, which can greatly improve disk performance and, in turn, help with backup and replication operations. This matters because VMware snapshots, both their creation and deletion, require metadata updates that can cause performance issues on large datastores.
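Whether ATS and the other VAAI primitives are actually in effect for a given device can be checked from the ESXi shell with esxcli. The sketch below simply runs that esxcli command over SSH from Python; the host name and credentials are placeholders, SSH must be enabled on the host, and the same output can be read directly in the ESXi shell or via PowerCLI.

```python
# Minimal sketch: query VAAI primitive status (including ATS) on an ESXi
# host by running esxcli over SSH. Host and credentials are placeholders.
import paramiko

HOST = "esxi01.example.com"        # placeholder ESXi host
CMD = "esxcli storage core device vaai status get"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username="root", password="********")
try:
    _, stdout, _ = client.exec_command(CMD)
    # Output lists each device with its ATS, Clone, Zero, and Delete status,
    # e.g. "ATS Status: supported" when the array honors Atomic Test and Set.
    print(stdout.read().decode())
finally:
    client.close()
```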
Another API that has enhanced data protection operations between VMware and SAN storage arrays is the latest release of the vSphere APIs for Storage Awareness (VASA). VASA enables vCenter to recognize the capabilities of storage arrays, and these capabilities have mostly been associated with storage management and visibility. Now, with vSphere 6 and VASA 2.0, customers have the option of using Virtual Volumes (vVols) if they are supported by their storage array. The new construct introduced as part of vVols is the storage container. Storage containers provide a logical abstraction that lets vVols be mapped and stored within the array as individual volumes, which in turn allows the storage array to be carved up into more useful segments. This is helpful as it relates to SAN snapshots and replication. When multiple VMs occupy one datastore that is mapped to one LUN, all of the VMs must be copied during a SAN snapshot or replication job. This can be undesirable, since some of the VMs may not require advanced SAN copy functions like replication, wasting valuable bandwidth and storage capacity at the DR site. vVols take a VM with five VMDK files that would normally occupy one LUN and separate them into five volumes, providing increased granularity for the SAN copy functions. For instance, VMware recommends placing the VM page (swap) file on a separate LUN so it is not replicated; vVols can accommodate this configuration requirement easily. But this can come at a cost, because one VM now consumes more storage volumes than it would without vVols. Certain SAN storage arrays have limits on the number of volumes they can support, which is why it is important to know how many VMs are planned for the array and whether it can handle vVols at that scale, even if it is enabled for VASA 2.0.
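Because each vVol-backed VM typically consumes several array volumes (a config vVol, one data vVol per virtual disk, a swap vVol while powered on, and additional vVols for snapshots), it is worth doing the arithmetic before enabling vVols. The rough sketch below uses illustrative numbers only; substitute your own VM inventory and the volume maximum documented for your specific array.

```python
# Back-of-the-envelope sketch: estimate how many array volumes a vVol
# deployment would consume. All numbers below are illustrative; use your
# own inventory and the volume limit documented for your array.

def vvols_per_vm(vmdk_count, snapshots=0, powered_on=True):
    """Rough vVol count for one VM: 1 config vVol, one data vVol per
    virtual disk, a swap vVol while powered on, and roughly one extra
    vVol per disk for each snapshot."""
    count = 1 + vmdk_count                  # config vVol + data vVols
    if powered_on:
        count += 1                          # swap vVol
    count += snapshots * vmdk_count         # snapshot vVols
    return count

vm_count = 500                              # planned VMs (assumption)
avg_vmdks = 5                               # e.g. the five-VMDK VM described above
avg_snapshots = 1
array_volume_limit = 8192                   # check your array's documented maximum

needed = vm_count * vvols_per_vm(avg_vmdks, avg_snapshots)
print(f"Estimated vVols: {needed} of {array_volume_limit} "
      f"({needed / array_volume_limit:.0%} of the array's volume limit)")
```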
In summary, VMware is rapidly consuming the physical server footprint, and it is similarly consuming a great deal of storage through its VMDK files, snapshots, and replication, in addition to the VM cloning that goes on in the background, so it is important to select a SAN storage array that supports the latest vSphere APIs. One storage vendor that has been at the forefront of vStorage API integration is IBM, with its Storwize family and XIV storage systems. In fact, IBM was one of the first to support vVols with VASA 2.0 integration, ahead of EMC, and Spectrum Control Base provides VASA 2.0 integration for the entire IBM storage product line at no charge.