Around the Storage Block Blog
Find out about all things data storage from Around the Storage Block at HP Communities.

What will EVA customers get from the new NetApp program? Zip! Zilch!

  By Calvin Zito, @HPStorageGuy


In my last post, I included a couple of summary videos from our recent HP StorageWorks Tech Day.  The hands-on lab really stirred up a few folks over at NetApp.  The week after our Tech Day, they did a WebEx session to try to address some of the comments made by the bloggers about how difficult the management of their FAS system was.  I won't go into details about that but I counted at least three different GUIs that they showed during that demo.  The HP StorageWorks EVA has one - Command View.  But, that's not the topic I want to cover today.


The old "switch-a-roonie"


During their online demo, Vaughn Stewart from NetApp also discussed the NetApp vSeries - a network-based storage virtualization product - and suggested that HP and NetApp were partnering to help EVA customers.  Vaughn thanked me for attending the demo, and I thought he was trying to position my attendance as an HP endorsement of the vSeries.  It wasn't.  I made it very clear that HP isn't working with NetApp to put vSeries products in front of our EVAs.  I told the demo audience that HP has our own network-based SAN virtualization product, the SAN Virtualization Services Platform (SVSP), that competes with the vSeries, and in no way do we recommend EVA customers use the vSeries to virtualize a pool of EVAs.  We have talked about the SVSP several times on this blog, and it will be discussed in a podcast later this week. 


I initially didn't understand why Vaughn brought the vSeries into the demo.  But a week or two after the NetApp demo, NetApp announced a new marketing program targeting EMC CX series and HP EVA installed-base customers with their vSeries.  Now it's pretty clear to me what was going on.  This new NetApp marketing program asks customers to consider putting a vSeries in front of an EVA or CX. 


So I want to spend a few minutes now discussing why I think it would be a bad decision for any EVA customer to consider such a thing.  It's my belief that the benefits to an EVA customer would be zip (and interestingly that is the name of NetApp's program).


Improve storage efficiency at what price?


The claim NetApp is making is that EVA customers are not efficiently using their storage capacity.  That's a bit laughable given that with an EVA, every spindle is used for data, and unless you're using tiering, we recommend a single disk group, which gives incredible storage capacity efficiency.  To be in the program, NetApp has to approve the customer's application.  The customer is basically signing up to purchase the vSeries in 90 days if it delivers what NetApp stipulates.  Let's hope NetApp also stipulates the performance hit you'll take - but I'm getting ahead of myself on that one. 


My guess is the number of customers that actually get into the program will be rather small, and maybe that's an objective of NetApp's marketing program.  I suspect that NetApp's real motive is to develop a list of CX and EVA customers that they can continue to call on.


The NetApp claims of improved storage efficiency come from three categories of services that the vSeries provides for the arrays attached to it: 



  1. Thin provisioning

  2. Data replication (snapshot, clones, etc)

  3. Deduplication


The EVA customer already enjoys capacity efficiencies with the first two categories, thin provisioning and data replication.  (Note that the EVA doesn't use traditional thin provisioning today but uses a product called Dynamic Capacity Manager that accomplishes similar results by integrating with the OS.) 
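To make the thin-provisioning idea concrete, here's a minimal sketch (illustrative Python, not HP or NetApp code - the `ThinLUN` class and its methods are hypothetical) of the basic mechanism: a LUN advertises a large logical size but consumes physical capacity only for blocks that have actually been written.

```python
class ThinLUN:
    """Hypothetical thin-provisioned LUN; the block map is sparse."""

    def __init__(self, logical_blocks, block_size=4096):
        self.logical_blocks = logical_blocks
        self.block_size = block_size
        self.block_map = {}  # logical block number -> data; allocated lazily

    def write(self, lbn, data):
        if not 0 <= lbn < self.logical_blocks:
            raise IndexError("block out of range")
        self.block_map[lbn] = data  # physical space allocated on first write

    def read(self, lbn):
        # unwritten blocks read back as zeros and cost no physical space
        return self.block_map.get(lbn, b"\x00" * self.block_size)

    def logical_bytes(self):
        return self.logical_blocks * self.block_size

    def physical_bytes(self):
        return len(self.block_map) * self.block_size


lun = ThinLUN(logical_blocks=1_000_000)  # ~3.8 GB advertised to the host
lun.write(0, b"boot" + b"\x00" * 4092)
lun.write(42, b"data" + b"\x00" * 4092)
print(lun.logical_bytes(), lun.physical_bytes())  # huge logical, tiny physical
```

The host sees the full logical size, but only two blocks of real capacity are consumed - which is the efficiency argument for thin provisioning in the first place.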


For that matter, if customers wanted to pool the capacity of multiple EVAs and manage it as one pool, the SVSP offers thin provisioning and replication services too.  What we don't offer today is primary LUN deduplication.  But should customers running an EVA rush to deduplicate their block-based mission-critical storage?  I think the answer is absolutely not, and here are a few things to consider:



  • NetApp has recently claimed 37,000 deployments of deduplication (via their PR department), but in a recent earnings call, their executives said 37,000 downloads.  I don't know about you, but there's a big difference between the number of customers who download some free software and the number actually using it, especially in production environments.

  • There's a trade-off to implementing deduplication with primary, block-based storage - and that trade-off is performance.  Data I've seen from a few different sources says that a top customer concern in a virtualized environment is performance.  I've seen throughput testing results showing that the performance degradation on a FAS system with dedup can be as high as 65%.  NetApp's own recommendations say to run it during low activity and not all of the time.  NetApp also makes you sign a waiver stating you understand the risks of lower performance. 

  • We haven't tested to see what the actual deduplication capacity savings would be and frankly there are a lot of factors that would play into that. Since the controllers in the vSeries are the same controllers in the FAS system, it's worth noting that we have found that the percentage of capacity savings is roughly equal to the percent of slowdown. 

  • And that performance hit from deduplication doesn't include any other latencies that the vSeries introduces because of their in-line architecture. 
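To illustrate the trade-off in the bullets above, here's a minimal sketch (illustrative Python, not any vendor's implementation - the `dedupe` function is hypothetical) of block-level deduplication: every block must be hashed and looked up, which is where the CPU and latency cost comes from, while the savings depend entirely on how much of the data is actually duplicated.

```python
import hashlib

def dedupe(blocks):
    """Hash each block; store only unique blocks plus a per-block reference map."""
    store = {}  # digest -> block data (unique blocks only)
    refs = []   # the logical layout: one digest per original block
    for blk in blocks:
        digest = hashlib.sha256(blk).hexdigest()  # hashing = CPU cost per block
        store.setdefault(digest, blk)             # lookup = extra latency per block
        refs.append(digest)
    return store, refs

# 100 identical OS-image blocks (a VM-clone-like workload) plus 10 unique ones
blocks = [b"A" * 4096] * 100 + [bytes([i]) * 4096 for i in range(10)]
store, refs = dedupe(blocks)
logical = len(blocks) * 4096
physical = len(store) * 4096
print(f"savings: {1 - physical / logical:.0%}")  # -> savings: 90%
```

The point of the sketch: the hashing and lookup work is paid on every block regardless of the data, but the 90% savings here comes only from a deliberately duplicate-heavy workload - random or high-turnover block data gets the cost without the benefit.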


So should an EVA customer put their arrays behind a NetApp vSeries for a potentially small capacity savings when the potential performance penalty is high?  And keep in mind the vSeries is based on the FAS controller.  We've shown in a recent blog post that, based on our testing, its performance degrades rapidly.   


There are some good reasons to implement SAN-based virtualization with a product like the StorageWorks SVSP or the NetApp vSeries.   However, for an EVA customer, NetApp's value proposition of getting better capacity efficiency of the physical storage just isn't one of them.  A far better answer for the EVA customer is the StorageWorks SVSP.  We'll cover this topic in a podcast later this week, so stay tuned for that.



Labels: EVA| NetApp| storage| SVSP
Comments
Anonymous(anon) | ‎11-04-2009 09:10 AM

We currently have a NetApp FAS system after considering both HP EVA and NetApp FAS gear, and I can identify with the annoyances of having multiple GUIs for management of our FAS.  One GUI seems to be better for some tasks, while another seems better for other tasks.  It looks like things will move primarily to the MMC-based NetApp System Manager, which is not bad, but is still lacking as it's a 1.0 version.

About dedup, we saw space savings of 75% after moving 400 GB in our primary VMs to NetApp iSCSI LUNs, and then about 50% savings after moving most of our 5-6 TB in VMs to our NetApp.  Obviously, performance was not the best while the dedup jobs were running, and we are aware of the potential hit in having shared blocks, but we did not notice any performance hit while running our VMs on dedup’d storage after the dedup jobs completed.

And while we don't like some of the ways that NetApp works with disks and we can't fully take advantage of all of our raw space (in part because of the way that an active/active FAS is basically like running 2 storage systems on shared hardware), we still think it's a preferable situation to needing to pay HP licensing on the amount of data that we have (at least that's the way it was when we last looked at HP EVA storage).  We are a backup and recovery services provider, so we felt like we were being penalized for being successful and growing our business.  We already pay for shelves and disks, so why should we also have to pay on a per TB basis or even an unlimited storage basis just to put our data on those disks...?  I believe there is licensing associated with EVA Command View as well – seriously, licensing on the management software…?

The other thing that makes FAS systems really appealing is that they are multi-protocol systems; we really like being able to use pretty much whatever protocol that's needed, and they're all in the same box.  
NAS is a big part of what we do as well, and we didn't feel like we'd get the best NAS performance by continuing with our existing practice of placing regular file servers in front of SAN storage, as is the case with the HP EVA File Services option.

Anonymous(anon) | ‎11-04-2009 06:16 PM

We are running two vSeries in front of two EVAs. For me (the EVA admin) the vSeries (managed as a NAS by the windows team) look just like another pair of hosts, but with the following drawbacks:

- only 60% of the space I allocate is usable

- no support for EVA 4400

On the other hand, Command View EVA is far from perfect. When it works, it's great. But just this week both of our CV servers were totally unusable because one operation to one EVA (of a total of 9) hung, blocking ALL management of ALL EVAs. So what's the solution? Reboot the master controller of the misbehaving EVA... not very elegant.

Anonymous(anon) | ‎11-05-2009 02:08 AM

"The HP StorageWorks EVA has one - Command View"

Being an HP customer running multiple EVAs, all generations since its release, this marketing statement always annoys me. HP knows very well that in order to fully use and manage the EVA you must use multiple user interfaces: CV, RSM, and to some extent SSSU.

Anonymous(anon) | ‎11-05-2009 03:49 AM

While I agree that during the execution of the job that does the deduplication of data on a FAS it is slower, I have yet to see any performance impact on my FAS3140s for deduplicated data at any other time.  The initial job takes a while (~2 hours for 2 TB of VMs), after that it takes minutes a day to run.  I'm willing to make the trade off when it means that I can have > 250 virtual machines (each with a 20GB operating system HDD) on a single 2TB volume, and we still have room to spare.

The best part...since we moved to OnTAP 7.3, which makes the cache dedupe aware, we've seen a huge reduction in the number of disk operations for our virtual machines...and because the amount of VM data has been reduced in the cache, more of the application data sits in cache, which makes response times for apps better as well (we keep the operating system drives for the VMs on one volume and the application data on multiple others).

Yes, we are aware that cache saturation could occur, at which point we could suffer from the dedupe, but it's a lot cheaper to buy a PAM than more spindles when capacity isn't the issue.

Don't misunderstand me, I like the EVA, it does a tremendous job for some of our Oracle databases (which have an extremely random IO pattern) because we are getting the benefit of having all of the disk's IOPs available to us, but for VMs, and the majority of our applications, using a FAS is what works best for us.

And, I'm very sorry, but no matter how much you try to convince me, DCM is not the same as thin provisioning.  Anytime someone tries to convince me that a LUN can be thin provisioned I'm skeptical...even NetApp.  What NetApp can do is true thin provisioning of NFS and CIFS datastores, and that is extremely helpful...especially for our user home directories and profiles.

| ‎11-07-2009 02:46 PM

Thanks for all of the comments.  A few things:



We had a very painful experience with NetApp A-SIS; the admins did not read the fine print and just enabled the dedupe "to save space". After a month, NFS latency spiked to 50k ms, rendering the filer completely useless. After numerous perfstat collections, NetApp tech support still had no clue. It was one of our senior storage admins who spotted the problem.


We saw the huge performance penalty on engineering data: it took 14 days to finish one run of A-SIS, and while A-SIS was running, it consumed more than 30% of the CPU cycles. And because of the nature of the data (high turnover rate), it really is a terrible idea to run dedupe on it. The second lesson we learned was that even on a volume with moderate modification, there is still a price to pay for large sequential reads. Just as the best practice (TR-3505) suggests, if one is sensitive to read/write performance, one should be cautious with dedupe. There is no free lunch.



  • DeltaFord - RSM is not a required GUI; Command View does everything that RSM does though it might take a few more steps.  SSSU is a command line interface to Command View and is provided for those who prefer a command line.  Again, it isn't required but is provided as a convenience.

  • With the new Command View SVSP that we just announced this week, anyone looking to pool the capacity of multiple EVAs has even more reason to look at SVSP.  Take a listen to the podcast that I just published as we hit on this topic: www.communities.hp.com/.../converged-infrastructure-block-based-storage-virtualization-podcast.aspx

  • For any customer considering the ZIP program, I'd highly recommend that you talk to your HP rep first and learn more about the SVSP.  I am very confident that 95% of our EVA customers who would consider the vSeries would be better served with the StorageWorks SVSP.

  • Michael - I'll pass on your comments to our R&D team.  I've not heard of that issue before but I'm also not directly in the loop with our support team.  Feel free to use the "Contact" link at the top right side of the page if you'd like to give me more details.


Thanks for all the comments.

Anonymous(anon) | ‎11-09-2009 08:01 PM

One thing often not mentioned is that the implementation of a vSeries is completely disruptive to a customer's environment. You have to serve up LUNs from an EVA or any other block storage device to the vSeries. But before the vSeries can make use of these LUNs, they have to be formatted with the WAFL file system. So either a fair amount of swing capacity is required, or a large amount of downtime to restore the data into the WAFL file system.

On the subject of SSSU and RSM, I completely agree the two are provided as a convenience at no extra cost. There's nothing in either tool that can't be completed through the Command View GUI. They just offer additional flexibility around hands-off scheduling and batch configuration processes.

Anonymous(anon) | ‎11-10-2009 08:54 PM

@CalvinZ

Marketing people amaze me...because one customer has a use case that doesn't fit the model, but they tried it anyway and it didn't work, then all of a sudden a competitor's product features must be bad and perform terribly.

I'm sure there are use cases where the EVA and XP don't shine either, but that doesn't make the product as a whole an underperformer or not worth having.  I really think you people wake up in the morning and think to yourselves, "How little of the truth can I tell so that my company comes out looking like the winner?".  And no, I don't limit that to just you and HP...NetApp does it, 3PAR does it, and EMC probably does it the worst.

And to that customer...who in their right mind blindly enables a feature that has the potential to be disruptive on production data?  Your admin(s) need to find a new career.

Anonymous(anon) | ‎02-19-2010 02:59 AM

You need to qualify the environment for deduplication, and you cannot make blanket statements, as the author did, that deduplication will cause performance problems. As a person who deals with server virtualization day in and day out, I can say it's a perfect fit. For some other environments it may not be.

Nobody from NetApp claims that we will take the biggest, fastest, most mission-critical database and dedup the heck out of it. What we claim is that we have the technology, which is part of our architecture and not some disparate add-on solution, to address cases that are a good fit.

For every customer case Calvin provides who had performance issues with dedup, I can provide 10 who didn't, and that's why we can un-dedup the data in place. The technology works, and works extremely well. While 37k downloaded licenses does not necessarily mean that every one of them is in production, it does point to the fact that customers are looking to space-efficient techniques to address rapid data growth, and we provide the opportunity, free of charge, for those interested.

Furthermore, claiming that Dynamic Capacity Manager provides similar results to thin provisioning is quite a misleading statement. This is nothing more than dynamic LUN expansion, with shrinking capability IF the OS supports it - e.g., Windows 2003 does not. Aside from Windows 2008, not many other OSes support shrinking a FS.

Thin provisioning is OS independent, provides fine-grained allocation, is not allocated in large chunks, lets you pay for storage as you need it and not upfront, doesn't make you pay to power and cool storage you don't need, and is non-disruptive to the host.

V-Series...Unbeknownst to a lot of people, V-Series actually helps the performance of 3rd-party arrays, because a) an additional caching layer is introduced, so cache hits can occur both in the V-Series cache and in the array's cache, and b) ONTAP has a Tetris-like capability for laying down blocks intelligently.

mark3241(anon) | ‎12-06-2010 04:53 PM

"NetApp also makes you sign a waiver stating you understand the risks of lower performance. "

 

Maybe when deduplication was first announced this was a requirement, but no waiver is required now.

Also, deduplication seems to provide an advantage in VDI and VM architectures, as the dedup'd blocks get loaded into buffers once for multiple VDIs and VMs.

 

At least, we are experiencing a performance gain.

ralfaro(anon) | ‎11-03-2011 03:33 PM

EVA is dead! As is Fibre Channel. The smart move is to go with NAS, and I mean real NAS, not a "gateway" in front of some old-school FC/iSCSI array.

| ‎11-04-2011 12:25 AM

LOL - if I had a dollar for every EVA we've sold, I could buy new cars for everyone in my family and have a lot of money left to take them all on a cruise around the world.  You are so wrong about the EVA being dead.  I recently visited an EVA customer who had an EVA5000 and recently bought a P6500.  They would say you are wrong. I did a blog post based on the visit I had with that customer.  And speaking of the P6500, that's one of the latest EVAs that we announced recently.  I have several blog posts that talk about the P6000 EVA.

 

Your comment about what real NAS is, is also kind of silly and not even worth addressing - but hey, thanks for stopping by.

About the Author
25+ years of experience around HP Storage. The go-to guy for news and views on all things storage.


The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation