
Reclaiming space with HP 3PAR StoreServ

By Calvin Zito, @HPStorageGuy

Eventually I want to do a ChalkTalk on HP 3PAR StoreServ Thin Provisioning, and in fact the topic would probably take several ChalkTalks. I got an email from a reader with some questions about Thin Provisioning and reclaiming space with VMware, and since I don't have those ChalkTalks yet, I thought it was worth sharing the info with you.

 

Here's the question from the customer:

 

We have a question about our HP 3PAR 7200 array. We have created 4 virtual volumes and used the storage for 2 VMs. Now we have moved the 2 VMs and all datastores are 0% used, but the space isn't reclaimed on our 3PAR array. Where can we enable thin copy reclamation? (Using vSphere 5.1.)

 

And the answer is....
One of the VAAI primitives is built around the UNMAP command. From a post I did last year talking about all the vSphere 5.0 primitives: UNMAP is used for space reclamation (rather than WRITE_SAME), and it reclaims space after a VMDK is deleted within the VMFS environment. There were a few issues with how UNMAP worked, and it's something you have to do manually by issuing a command: run vmkfstools -y <VMFS datastore>.
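
If you want to try this yourself, here's a minimal sketch of both pieces: disabling the automatic UNMAP setting the KB articles quoted below talk about, and running the manual reclaim. The datastore name datastore1 and the 60% value are just examples, not recommendations.

# on ESXi 5.0 GA, turn off automatic UNMAP (see the KB articles quoted below)
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 0

# on ESXi 5.0 U1 or later, reclaim manually from inside the datastore
cd /vmfs/volumes/datastore1
vmkfstools -y 60    # ask VMFS to reclaim 60% of the free space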

 

From our HP 3PAR with VMware implementation guide, here are a few notes that explain things:

 

UNMAP (Space Reclaim) Storage Hardware Support for ESXi 5.x

HP 3PAR OS 3.1.1 or later supports the UNMAP storage primitive for space reclaim, which is supported starting with ESXi 5.0 update 1 with the default VMware T10 VAAI plugin. Installation of the HP 3PAR VAAI plugin is not required.

 

To avoid possible issues described in VMware KB #2007427 and #2014849, automatic VAAI Thin Provisioning Block Space Reclamation (UNMAP) should be disabled on ESXi 5.0 GA. ESXi 5.0 update 1 and later includes an updated version of vmkfstools that provides an option [-y] to send the UNMAP command regardless of the ESXi host's global setting. You can use the [-y] option as follows:

# cd /vmfs/volumes/<volume-name>
# vmkfstools -y <percentage of deleted blocks to reclaim>

 

The vmkfstools -y option does not work in ESXi 5.0 GA.

UNMAP will also free up space if files are deleted on UNMAP-supported VMs such as Red Hat Enterprise Linux 6, provided it is an RDM LUN on a TPVV storage volume; for example, RDM volumes on a Red Hat VM using the ext4 filesystem and mounted using the discard option.
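
To make that last note concrete, here's a minimal sketch for a RHEL 6 VM with an RDM on a TPVV; the device name /dev/sdb1 and the mount point /mnt/data are examples:

# mount ext4 with online discard so deletes turn into UNMAPs
mount -o discard /dev/sdb1 /mnt/data

# or make it persistent with an /etc/fstab entry like this:
# /dev/sdb1  /mnt/data  ext4  defaults,discard  0 0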

 

For your reading pleasure, more HP 3PAR Thin Provisioning papers


There are a couple of key technical resources I want to point you to: one does a great job describing the HP 3PAR Thin Suite, and another talks about what differentiates it from the rest of the competition.

I'm now motivated to work on some Thin Provisioning ChalkTalks and hopefully will have them in the near future.

Comments
John White | 07-23-2013 07:22 PM

Is there a performance hit on the storage controller while processing the UNMAP? In the past there have been reports of such.

Josh S | 07-24-2013 11:24 AM

The last line of your blog post says "I'm not motivated to work on some Thin Provisioning ChalkTalks and hopefully will have them in the near future."

 

I'm going to assume that you meant to say I'm "now" motivated ha ha

Calvin Zito | 07-24-2013 05:29 PM

@Josh - thanks for catching my typo!  Yes, I meant I'm NOW motivated.  I fixed it. 

 

 

Calvin Zito | 07-24-2013 06:22 PM

@John - the premise behind many of the VAAI primitives was to move tasks to the array that it can complete more efficiently (taking them off the VMware host and off the network), so yes, it has an impact. How much really depends on several things:

 

  1. How big is the datastore? Cormac Hogan (technical marketing architect for storage at VMware) wrote a blog post last year that talked a bit about what you should be concerned about, but basically said how long it takes "depends" on lots of factors.
  2. With all the goodness in Cormac's article, the thing it doesn't address is the architecture of the array - that's what I'll cover in future ChalkTalks, but it's a really CRITICAL point so I'll make it here and now.

Thin Provisioning was built into the 3PAR architecture - not something that was bolted on afterward. As an example, the HP 3PAR ASIC does a ton of heavy lifting and offloads the CPU. When 3PAR sees a string of zeros (like for an EZT thin volume), we don't need to write them to disk, and the work is largely done by the ASIC.
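
Zero detect is a per-volume policy on the array. Here's a rough sketch of how you'd turn it on from the 3PAR CLI (the CPG and volume names are made up; verify the exact options against the CLI reference):

# create a thin volume with the zero detect policy enabled
createvv -tpvv -pol zero_detect FC_CPG vv_vmware_ds1 1024G

# or enable the policy on an existing volume
setvv -pol zero_detect vv_vmware_ds1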

 

I was just talking to one of our architects and he said he's almost never seen a 3PAR system CPU constrained - maybe that will change as we drive over 500,000 IOPS with an all-flash 7450. We don't get CPU-constrained because the ASIC does a lot of work that in traditional arrays is handled by the CPU.

 

Does UNMAP take CPU cycles to complete? Yes, but with our 3PAR architecture we can handle that. With traditional arrays that were designed 15-20 years ago, the impact of UNMAP is far greater. To see what I mean, check out the Edison Report that I pointed to in my blog post - specifically page 15 (which shows the test methodology) and the results on page 17. The test wasn't specific to a VMware reclamation, but you can see that EMC VMAX and Dell Compellent degraded over 40%, VNX nearly 30%, Hitachi VSP 23%, and HP 3PAR showed 0% degradation.

nate | 08-07-2013 01:07 AM

Be sure to be on code rev 3.1.2 or later (MU1 is good - MU2 is a big feature release so not as critical for this purpose) for some important space-reclaiming fixes.

 

Hopefully at some point soon I'll have some stuff in place that can do UNMAP natively. At the moment the bulk of my operating systems are too old to support it, so I just use the occasional script that writes zeros. My first (and only, thus far) UNMAP experiment was with RHEL6 about a year ago, and it wasn't pretty: it overran the little F-class controllers pretty quickly before I knew what was going on, followed by frantic ^Cs to abort the operation (which was formatting 3 file systems in parallel; by default the OS attempted to unmap all blocks first). Part of the fixes in 3.1.2 is supposed to address that to some extent (I have not tested that function since).
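
The zero-writing script is nothing fancy, roughly this (the mount point is an example, and keep an eye on free space while it runs):

# fill free space with zeros so the array's zero detect can reclaim it;
# dd exits with an error once the filesystem is full - that's expected
dd if=/dev/zero of=/mnt/data/zerofill bs=1M
sync
rm -f /mnt/data/zerofill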

 

For me I see a bigger benefit in more intelligent space management. It takes more time up front to plan a bit; I use volume managers, strict log rotation policies, and that kind of thing. I've learned over the years how to efficiently manage several of the common Linux-based web stacks with 3PAR. My boss recently asked me if I had to go to the 3PAR to expand a volume. I said no: any raw device map from 3PAR to a system is ~2TB (a VMware 4.x limit), so up to a file system of that size the expand operation is lvextend plus a file system resize. In the past 16 months this situation (going beyond 2TB in a single file system) has come up once - for our splunk server (now at ~2.3TB); all other systems are far below 1TB apiece (most under 400GB).
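
The expand itself is just a couple of commands, something like this (the VG and LV names here are examples, not anything standard):

# grow the logical volume by 50GB, then grow the ext4 filesystem online
lvextend -L +50G /dev/vg_data/lv_splunk
resize2fs /dev/vg_data/lv_splunk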

 

He asked me if I had to go to VMware to extend VMDKs that were created on VMs. I said no: by default, any system with more than a minimal disk space requirement gets a 100GB thick provision lazy zero VMDK with LVM on top. The bootstrap process provisions a 10GB file system in that 100GB volume, and we don't need to touch 3PAR or VMware to extend that volume up to 100GB. Again, I can think of only a single time (recently) when a request came in to resize such a volume to 250GB (I added a 2nd VMDK and extended the volume group to encompass both disks). All other systems are well under 50GB apiece (most under 20GB). One VM out of roughly 400.
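
That one resize looked roughly like this (device and LVM names are examples):

# the 2nd VMDK shows up as /dev/sdc; add it to the VG and grow the LV
pvcreate /dev/sdc
vgextend vg_data /dev/sdc
lvextend -L 250G /dev/vg_data/lv_app
resize2fs /dev/vg_data/lv_app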

 

I specifically do not use VMware's thin provisioning feature in conjunction with 3PAR thin provisioning. I use thick on VMware's side so I can provision the volume to whatever level I want and know with 100% confidence that I will never overrun that volume's capacity, even if every VM on the disk grew to 100% of its size (very unlikely). Just one less thing to have to think about planning for. Running out of physical space is another issue of course, but that's pretty much the same no matter what strategy you have with regard to over-provisioning. It also sort of provides a limit on the max # of VMs on a volume - not that I've ever had a problem with SCSI reservations, even before sub-LUN locking came out.
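
Creating those thick VMDKs from the ESXi shell is a one-liner (the path is an example):

# create a 100GB lazy-zeroed thick VMDK
vmkfstools -c 100G -d zeroedthick /vmfs/volumes/datastore1/vm01/vm01_data.vmdk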

 

All of my volumes that need a lot of space (databases mainly) run off of RDMs, not VMFS - mainly for snapshot purposes. I do snapshotting all over the place and have ~30 MySQL VMs that run exclusively off of snapshots.
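
The snapshots are plain 3PAR virtual copies, roughly like this with example names (check the CLI reference for the full option list):

# create a read-only virtual copy (snapshot) of a MySQL data volume
createsv -ro snap_mysql01 vv_mysql01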

 

It works quite well, and I am very happy with the results. It did take a few years to get this strategy right, long before thin reclaiming was a possibility (and even after thin reclaiming came out, there were still some major issues with it in the early days).

 

3PAR used to say, many years ago, to do away with volume management. I learned long ago that advice was not good. I don't think they say that anymore.

Calvin Zito | 08-07-2013 06:36 AM

Great insight, Nate! Not sure if you have any Windows hosts, but have you looked at tools like Raxco PerfectDisk? I don't know a lot about it but have heard a few good things about it. Great seeing you last week!

jonodsparks | 11-13-2013 12:31 PM

We just recently put in a new 3PAR 7200 and have been going through our various tests and benchmarks from our VMware and Oracle environments.  I am really impressed with the array, but I still have a question on space reclamation.

 

Taking the example from your blog post above, I want to change a few details. You have a Windows VM with two Eager-Zeroed Thick VMDKs (30GB OS and 200GB data) residing on a 1024GB TPVV volume on the 3PAR (45GB used). The VM has an error that causes its hard disk to fill up, so the 3PAR console is now showing 230GB of user data on the volume and Windows reports no free space. You correct the issue and delete all the bad files, restoring the original total usage (15GB OS and 20GB data), but the 3PAR console still shows 230GB used. I know that Windows doesn't really delete the data, but I thought that VMware Tools was supposed to mark this data with zeros and then the 3PAR zero detect takes over from there? If I run the vmkfstools space reclaim, it says it is trying to reclaim the free space only (770GB), which would not touch the space vacated by the VM because they are EZT VMDKs. Do I need to run SDelete on the Windows VM? The only advice I can find is to move the VM to a new volume to reclaim the space... and this isn't always a possibility. I was under the impression that much of this process should be automated.

 

Thanks for any insight you can provide!

Calvin Zito | 11-14-2013 12:08 AM

@jonodsparks - the issue you're facing is that Windows OSes older than Windows Server 2012 don't know how to reclaim space. In fact, you have to do it manually with all pre-WS2012 OSes. One way of doing it is with SDelete. I've also heard that fsutil is an alternative.
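
If you try SDelete, it's a single command run inside the Windows VM; the C: drive here is just an example, and as always, test before running it in production:

sdelete.exe -z C:

The -z option zeros the free space, which the array's zero detect can then reclaim.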

 

I'd recommend you look at the HP 3PAR Thin Technologies technical white paper - lots of technical information there. 

Lastly, I'm not very technical and have not ever done anything myself with reclaiming space. You might consider posting your specific questions in our HP 3PAR StoreServ Community Support Forum.

Mohammed | 05-08-2014 10:30 AM

Hi,

 

We have an HP 3PAR F400 and we have used almost 93% of our NL drives. Since we reached critical usage, we migrated the data to another storage system. After this I deleted the virtual volume from the Windows server and VMware, then performed compactcpg to reclaim the space from the storage/CPG.
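
Roughly the sequence I ran from the 3PAR CLI (the volume and CPG names here are examples, not my real ones):

# remove the now-unused virtual volume, then compact the CPG
removevv vv_old_data
compactcpg cpg_nl_r6

# check the CPG's space afterwards
showcpg cpg_nl_r6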

 

But the results are not what I expected: even after a few days, nothing has come back...?

 

What should I do now? I opened a support request with the HP 3PAR team; they spent nearly 3 hours on it... and no outcome yet.

About the Author
25+ years of experience around HP Storage. The go-to guy for news and views on all things storage.