Technical Support Services Blog
Discover the latest trends in technology and the technical issues customers are overcoming with the aid of HP Technology Services.

Hierarchical Storage Performance: Phase #2 of 3 – Spreading the Data over the Spindles (Pools)

First, let’s have a quick review of some of the examples I shared last week, starting with Figures #1 and #2.

Figure #1: Typical four-disk RAID set, with LDEVs of variable size and quantity.

Simple_RAID_Group.PNG

Figure #2: Typical eight-disk RAID group concept.

Simple_RAID_Group_8_HDD.PNG

Figures 1 and 2 suggest that all enterprise arrays use the terms PDEV and LDEV, with an LDEV spanning multiple PDEVs. For the most part, this is true. However, as I alluded to last week, the StoreServ 10000 does not function in this manner.

I assure you that both strengths and weaknesses exist in both examples, which depict the legacy RAID-group and LDEV provisioning standards. One of the biggest strengths of the older methodology was the precision of control: you knew exactly where your data would reside within the array at all times. This was a plus for the experts who manage System V, Linux, and mainframe environments. However, the flip side is also true: having this much control requires constant management and performance monitoring to make sure loading issues do not creep into the environment. Large production environments demand constant observation and, at times, manual data movement. Yes, it can be scripted, but let’s be realistic. :smileyhappy:
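For the curious, here is a minimal sketch of what such a “hot spot” script might look like; the get_raid_group_iops() collector and the 2x-average threshold are hypothetical stand-ins of my own, not part of any HP tool.

```python
# Minimal sketch of a hot RAID-group check (hypothetical data source).
# Assumes per-RAID-group IOPS have already been collected, e.g. from an
# array performance export; get_raid_group_iops() is a made-up stand-in.

def get_raid_group_iops():
    # Stand-in for a real collector; returns {raid_group: IOPS}.
    return {"RG-01": 1200, "RG-02": 350, "RG-03": 4100, "RG-04": 420}

def find_hot_groups(iops_by_group, factor=2.0):
    """Flag RAID groups running hotter than `factor` x the pool average."""
    average = sum(iops_by_group.values()) / len(iops_by_group)
    return {rg: io for rg, io in iops_by_group.items() if io > factor * average}

if __name__ == "__main__":
    hot = find_hot_groups(get_raid_group_iops())
    for rg, io in sorted(hot.items()):
        print(f"{rg} is hot: {io} IOPS -- candidate for manual data movement")
```

Even a toy check like this has to run constantly and be followed up with manual data movement, which is exactly the management burden pools were designed to remove.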

 

 

In order to tackle this issue, the concept of “disk pools” was established. The concept of disk pools is not new; in fact, some of the oldest arrays had it too early for their own good (HP AutoRAID). Nevertheless, the concept of “pools” now allows many large enterprise arrays to easily interleave data across hundreds or even thousands of disk drives – much like the EVA (HP P6000) has done for years. As explained earlier, LDEVs are allocations of space within an array group. But now, with the concept of pools, two new benefits appear: thin provisioning and load dispersion of the LDEV across all the PDEVs in the pool. Though the new feature of thin provisioning is fantastic, the simple ability to spread the load over the entire array – removing the constant fear of a hot array group slowing down the entire enterprise – is one awesome benefit.

Here are some new terms to add to your glossary:

  • Pool

Composed of a large number of RAID groups (physical spindles).

  • V-Vol

An LDEV referred to as a virtual volume (V-LDEV) to differentiate it from legacy LDEVs. This is the host-presented device, which is interleaved across all the spindles within a pool.
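To tie these two terms together, here is a tiny Python sketch of the relationships; the class and field names are my own, purely for illustration.

```python
# Illustrative model of the glossary terms above; names are my own.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RaidGroup:
    name: str
    pdevs: List[str]          # the physical spindles in this RAID group

@dataclass
class Pool:
    raid_groups: List[RaidGroup] = field(default_factory=list)

    def spindles(self) -> List[str]:
        # Every PDEV in every member RAID group belongs to the pool.
        return [p for rg in self.raid_groups for p in rg.pdevs]

@dataclass
class VVol:
    name: str
    pool: Pool                # a V-Vol is interleaved across all pool spindles
```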

 

 

In Figure #3, we depict two separate array groups, each containing four physical drives. Each array group has been carved into equal slices of logical devices that consume all free space within the array group. The RAID protection level does not change; as you can see in the diagram below, the concept of a pool is simply a combination of multiple array groups. The new “virtual volume” will be interleaved across each LDEV within the pool.
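To make the interleaving in Figure #3 concrete, here is a rough sketch that stripes a virtual volume’s extents round-robin across the pool’s LDEVs; the extent numbering and LDEV names are hypothetical, chosen only to show the striping pattern.

```python
# Rough sketch of round-robin interleaving of a V-Vol across pool LDEVs.
# The pool below matches Figure #3: two array groups, four LDEVs each
# (the 1:1 LDEV:PDEV ratio described in the note that follows).

LDEVS = [f"AG{ag}-LDEV{i}" for ag in (1, 2) for i in range(4)]

def ldev_for_extent(extent_number: int) -> str:
    """Map a V-Vol extent to an LDEV, striping across the whole pool."""
    return LDEVS[extent_number % len(LDEVS)]

# The first 16 extents of a V-Vol land on every LDEV exactly twice:
for extent in range(16):
    print(f"extent {extent:2d} -> {ldev_for_extent(extent)}")
```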

 

 

 

Figure #3

P9000_Storage_Pool_W_V-VOL.png

Note:

For best performance, each RAID group should have an equal number of LDEVs, and the best approach would be to utilize a 1:1 subscription ratio so that you can maintain a nice average queue depth across all physical disks. Much more exists on this topic, but it is outside the scope of this blog. Figure #3 should help with this visualization, as I am depicting a ratio of 1:1 (LDEV : PDEV).
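As a quick back-of-the-envelope illustration of why that ratio matters, consider the sketch below; the queue-depth math is a simplification of my own, not an HP sizing formula, and the host queue depth is a made-up number.

```python
# Back-of-the-envelope check of the 1:1 (LDEV : PDEV) ratio in Figure #3.
# Simplification of my own: assume each LDEV carries roughly the same
# outstanding I/O, so per-spindle queue depth scales with LDEVs per PDEV.

pdevs_per_group = 4
ldevs_per_group = 4            # 1:1 subscription, as in Figure #3
queue_depth_per_ldev = 8       # hypothetical host queue depth

ratio = ldevs_per_group / pdevs_per_group
avg_queue_depth_per_pdev = ratio * queue_depth_per_ldev
print(f"subscription ratio {ratio:.0f}:1 -> ~{avg_queue_depth_per_pdev:.0f} "
      "outstanding I/Os per spindle")
```

Doubling the LDEV count without adding spindles would double that per-spindle figure, which is exactly the hot-spot behavior we want to avoid.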

 

Now let’s take it one step further with thin provisioning. When creating a new virtual volume within the pool – be it HP P9000, HP StoreServ 10000 (3PAR), or HP P6000 (EVA) – does the array allocate all the physical space up front?

The answer? No.

It allocates space in chunklets – the exact size of the chunklet varies depending on the array vendor, make, and model. Suffice it to say that when a host performs a write() to an LBA that has not already been initialized, the array will initialize new space somewhere else in the pool. This keeps a nice load distribution throughout the entire pool, with adequate RAID protection.
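Here is a minimal sketch of that allocate-on-first-write behavior, assuming a toy four-spindle pool and a fixed chunklet size; real arrays track this in on-array metadata, and the names and sizes below are made up for illustration.

```python
# Toy model of thin provisioning: a chunklet is allocated only on the
# first write to an uninitialized LBA, round-robin across the pool.
# Chunklet size and spindle names are made up for illustration.

CHUNKLET_SIZE = 256 * 2**20          # 256 MB; per-vendor in practice
SPINDLES = ["PDEV-0", "PDEV-1", "PDEV-2", "PDEV-3"]

class ThinVolume:
    def __init__(self):
        self.map = {}                # chunklet index -> backing spindle
        self.next_spindle = 0

    def write(self, lba_bytes: int):
        chunk = lba_bytes // CHUNKLET_SIZE
        if chunk not in self.map:    # first touch: allocate a chunklet
            self.map[chunk] = SPINDLES[self.next_spindle % len(SPINDLES)]
            self.next_spindle += 1
        return self.map[chunk]

vol = ThinVolume()
print(vol.write(0))                  # allocates chunklet 0 -> PDEV-0
print(vol.write(300 * 2**20))        # new chunklet 1 -> PDEV-1
print(vol.write(10))                 # already allocated -> PDEV-0
```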

 

So just how different is the HP StoreServ concept? Basically, the HP StoreServ was designed from the ground up with virtualization in mind: no traditional RAID groups, logical devices, or partitions on disk. Instead, the HP 3PAR simply carves every physical disk into small chunklets (C), 256 MB in size. When a 100 TB V-LUN – or any size LUN, for that matter – is created, the end user must specify the RAID set and disk types, such as RAID 5 and Fibre Channel (FC) or Nearline (NL). The new virtual volume will be provisioned using LDs, which will grow as needed using chunklets from all spindles in the array, or from a common provisioning group (CPG) defined by the customer. At the beginning of the V-Vol creation, the minimum size of the thin volume is used to define the canister (V-Vol). As the host performs new writes to the V-Vol, the array will continue to initialize and assign more chunklets to the LD or LDs, which will continue to spread the load evenly over all the spindles – refer to Figure #4.
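To visualize the chunklet-to-LD-to-V-Vol layering, here is a simplified sketch; the up-front carving and the round-robin growth policy shown are my own simplifications, not the actual 3PAR allocation algorithm.

```python
# Simplified sketch of the 3PAR layering: physical disks are carved into
# 256 MB chunklets up front; logical disks (LDs) grow by taking free
# chunklets spread across all spindles; the V-Vol is built on the LDs.
# The interleaved free list and growth policy are my simplifications.

from collections import deque

CHUNKLET_MB = 256

def carve_disks(num_disks: int, disk_gb: int):
    """Pre-carve every disk into chunklets, interleaved across spindles."""
    per_disk = (disk_gb * 1024) // CHUNKLET_MB
    free = deque()
    for c in range(per_disk):               # one chunklet per disk per pass
        for d in range(num_disks):
            free.append((f"disk{d}", c))
    return free

def grow_ld(free_chunklets, needed_mb: int):
    """Assign enough free chunklets to an LD to cover needed_mb."""
    count = -(-needed_mb // CHUNKLET_MB)    # ceiling division
    return [free_chunklets.popleft() for _ in range(count)]

free = carve_disks(num_disks=8, disk_gb=600)
ld = grow_ld(free, needed_mb=1024)          # 1 GB -> 4 chunklets, 4 disks
print(ld)                                   # spread across disk0..disk3
```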

 

Figure #4: A high-level diagram.

3PAR-Chunklet_LD_VVol.png

I have now outlined Phases #1 and #2. Next week, I’ll review the automated movement of data through HP StoreServ Adaptive Optimization, or P9000 SmartTier, as well as just how Data Density Reports work.

So, check in next week: same time, same blog. In the meantime, I look forward to answering any of your questions!

By the way, do you have a tough technical question or a scenario you would like to discuss? Come see Chris and me at HP Discover, where we will be presenting hands-on labs and technical briefs, giving technical interviews with the media, and taking the main stage at the Innovation Theater!

HP Discover 2013.jpg

Comments
TurnerStorage | ‎05-03-2013 09:34 PM

Actually, the newer 3PAR 10000 and 7000 series arrays do 1 GB chunklets... Great article, keep them coming.

GregTinker | ‎05-05-2013 01:29 AM

Turner – yes, thanks for the comment. I mentioned it in the video but forgot to cover it in the write-up... Great catch!! Thanks for reading!
