Senior Systems Engineer – Storage Solutions
The IBM Spectrum Virtualize 8.3.1 release was chock-full of improvements, and you'll notice changes to the product names. IBM is retiring the 'Storwize' family name, including 'Storwize V7000' and 'Storwize V5000'. All new models are part of the IBM FlashSystem family, or 'FS' for short: the FS5000, FS5100, FS7200, FS9200, and FS9200R. They can be all-flash arrays, or you can still add spinning drives. There have been minor hardware changes across these models, including newer CPUs, more cache, and new Flash Core Modules (FCMs), now available in a larger 38.4 TB capacity. That's a large flash drive, especially after compression and deduplication! Storage Class Memory (SCM) is now recognized as a storage tier by Easy Tier in 8.3.1, and a good use case for SCM is Data Reduction Pool (DRP) metadata. The IBM Storage Tier Advisor Tool (STAT) has also been integrated into the GUI, which means you no longer have to export the heat files, run a utility, and then import the results into an Excel sheet, though it currently works only with standard pools.
IBM Spectrum Virtualize for Public Cloud
IBM Spectrum Virtualize for Public Cloud (SV4PC) on IBM Cloud & AWS also had good news for customers currently licensed for Virtual Storage Center (VSC). You can now use that extra VSC capacity you’re licensed for in SV4PC with a one-time registration license.
The key differences between SV4PC and traditional IBM Spectrum Virtualize are the deployment, back-end virtualization, shared network infrastructure, and clustering/failover. You can also deploy up to a 4-node cluster on AWS for improved performance.
Three-site coordinated Metro/Global Mirror was also included with 8.3.1, but only by RPQ, so it has to go through a few approval hoops, and there are some lengthy prerequisites and limitations. The full function at GA should be coming soon. Coordinated Metro/Global Mirror is orchestrated by a small RHEL 7.5 (or later) instance running on a standalone server or VM and can be deployed in a star or cascade configuration. The image below, from the IBM presentation, should clarify the replication coordination.
Finally, release V8.3.1 introduces the ability to expand DRAID arrays. This was probably the most requested feature at the IBM Spectrum Virtualize conference and has been a thorn in the side of many Mainline customers over the past several years. Let's dive into what you need to know:
- Just like expanding a traditional RAID array, use devices (drives) of the same size and performance. 1
- Dynamically add 1-12 drives per operation; to add more than 12, simply repeat the process. If you're adding more than 12 drives at once, it might be faster to create a new array, migrate the data to it, and then use the original drives to expand the new array. Did that confuse you enough?
- You can increase the number of spare areas, but you can't change the stripe width.
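Since each expansion operation tops out at 12 drives, a larger upgrade is just a series of smaller ones. The sketch below illustrates that batching logic in Python (a hypothetical helper for planning purposes only; it does not call the actual Spectrum Virtualize CLI):

```python
def expansion_batches(new_drive_ids, max_per_expand=12):
    """Split a list of new drive IDs into batches of at most 12,
    one batch per expansion operation (12 drives is the per-operation
    limit described above)."""
    return [new_drive_ids[i:i + max_per_expand]
            for i in range(0, len(new_drive_ids), max_per_expand)]

# Planning to add 30 drives? That's three expansion operations:
# two batches of 12, then one batch of 6.
batches = expansion_batches(list(range(1, 31)))
print([len(b) for b in batches])  # [12, 12, 6]
```

Remember that each batch kicks off its own background expansion job, so a large addition done this way will take correspondingly longer than a single operation.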
Choosing the Best Drive Size
You might be wondering what that means for the end result. Let's take the smallest DRAID 6 array size, which is six drives. Half of the total capacity goes to data, and the rest goes to spare space and parity. Because the stripe width is fixed when the array is created, no matter how many drives you add later, you'll only get 50% of each drive's size as usable capacity. This has a profound impact on customers that start with six drives and plan to grow in the future. If you're deciding between six larger drives and twelve smaller drives, go smaller, so that when you add drives in the future you aren't sacrificing capacity to the initial stripe width. You also need to take this into consideration when calculating capacity for an MES that includes DRAID expansion.
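As a rough illustration of why the starting drive count matters, here is a simplified capacity model (an assumption for illustration only, not IBM's exact allocation math): usable capacity ≈ (drives − spares) × drive size × (width − parity) / width, where the stripe width is locked in at array creation.

```python
def draid6_usable_tib(total_drives, drive_tib, stripe_width,
                      spares=1, parity=2):
    """Simplified DRAID 6 usable-capacity estimate.

    Assumptions (for illustration): one distributed spare modeled as a
    whole-drive equivalent, two parity strips per stripe, and a stripe
    width that is fixed when the array is created and unchanged by
    expansion.
    """
    data_fraction = (stripe_width - parity) / stripe_width
    return (total_drives - spares) * drive_tib * data_fraction

# Six large drives: the minimum-size array forces a narrow stripe
# (width 5 here), so only ~50% of raw capacity is usable -- and every
# drive added later inherits that same ratio.
print(draid6_usable_tib(6, 15.36, stripe_width=5))    # 46.08 TiB of 92.16 raw

# Twelve smaller drives with the same raw capacity allow a wider
# initial stripe (width 10 here), so a far larger share is usable.
print(draid6_usable_tib(12, 7.68, stripe_width=10))   # ~67.58 TiB of 92.16 raw
```

The exact stripe widths and spare counts a real system picks depend on the configuration, but the takeaway holds: the narrow stripe chosen for a small initial array permanently caps the usable fraction of every drive you add later.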
- Like any other background task, the goal is to avoid impacting performance or end users. How long an expansion takes depends on factors such as the number of drives, their speed, and their density; it could take anywhere from hours to weeks. You should see pool capacity gradually increase until the job is complete. The GUI and CLI show an estimated completion time, and the expansion appears as a long-running task.
- ACK! I had a drive fail during expansion. Will the rebuild take longer? The simple answer is "no." Expansion is suspended, because the priority is the overall health of your system, and a rebuild is crucial to that. Copy-back is the final step after you replace a failed drive, when data is copied back from the distributed spare space; that step runs at a lower priority than the expansion.
- There is no “undo” button, so make sure you’re committed to expanding.
- There was an issue with DRAID expansion on FCMs (possible loss of access to data) that was resolved in a subsequent firmware level, so make sure you're current before expanding.
What are you waiting for? Go expand to your heart’s content!
With hundreds of storage certifications, Mainline’s architects and system engineers help our customers select, architect, and implement efficient, secure storage systems that improve their bottom line. For more information on storage solutions, contact your Mainline Account Representative directly, or reach out to us with any questions.
1 There are exceptions to this. Check out the IBM announcement deck for Spectrum Virtualize V8.3.1 or IBM Barry Whyte’s blog. Contact email@example.com if you’d like links to either article.
You may be interested in:
VLOG: IBM DRAID Explained and Best Practices (3:36)