By Calvin Zito, @HPStorageGuy
In my last post, I included a couple of summary videos from our recent HP StorageWorks Tech Day. The hands-on lab really stirred up a few folks over at NetApp. The week after our Tech Day, they held a WebEx session to try to address the bloggers' comments about how difficult it was to manage their FAS system. I won't go into the details, but I counted at least three different GUIs during that demo. The HP StorageWorks EVA has one: Command View. But that's not the topic I want to cover today.
The old "switch-a-roonie"
During their online demo, Vaughn Stewart from NetApp also discussed the NetApp vSeries - a network-based storage virtualization product - and suggested that HP and NetApp were partnering to help EVA customers. Vaughn thanked me for attending the demo and talked about partnering with HP. He seemed to be positioning my attendance as an HP endorsement of the vSeries. It wasn't. I made it very clear that HP isn't working with NetApp to put vSeries products in front of our EVAs. I told the demo audience that HP has our own network-based SAN virtualization product, the SAN Virtualization Services Platform (SVSP), that competes with the vSeries, and that in no way do we recommend EVA customers use the vSeries to virtualize a pool of EVAs. We have talked about the SVSP several times on this blog, and it will be discussed in a podcast later this week.
I initially didn't understand why Vaughn brought the vSeries into the demo. But a week or two later, NetApp announced a new marketing program targeting EMC CX series and HP EVA installed-base customers with their vSeries. Now it's pretty clear to me what was going on. This new NetApp marketing program asks customers to consider putting a vSeries in front of an EVA or CX.
So I want to spend a few minutes now discussing why I think it would be a bad decision for any EVA customer to consider such a thing. It's my belief that the benefits to an EVA customer would be zip (and interestingly that is the name of NetApp's program).
Improve storage efficiency at what price?
The claim NetApp is making is that EVA customers are not using their storage capacity efficiently. That's a bit laughable given that with an EVA, every spindle is used for data, and unless you're tiering, we recommend a single disk group, which gives incredible capacity efficiency. To get into the program, NetApp has to approve the customer's application. The customer is essentially signing up to purchase the vSeries within 90 days if it delivers what NetApp stipulates. Make sure NetApp also stipulates the performance hit you'll take - but I'm getting ahead of myself on that one.
My guess is the number of customers that actually get into the program will be rather small and maybe that's a NetApp objective of their marketing program. I suspect that NetApp's real motive is to develop a list of CX and EVA customers that they can continue to call on.
The NetApp claims of improved storage efficiency will come from a couple of categories of services that the vSeries provides for the arrays attached to it.
- Thin provisioning
- Data replication (snapshot, clones, etc)
The EVA customer already enjoys capacity efficiencies in both of these categories: thin provisioning and data replication. (Note that the EVA doesn't use traditional thin provisioning today but instead uses a product called Dynamic Capacity Manager, which accomplishes similar results by integrating with the OS.)
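For readers newer to the concept, here's a minimal sketch of the general idea behind thin provisioning - a LUN advertises a large virtual size but consumes physical capacity only for blocks that are actually written. This is purely illustrative; the class and block size are hypothetical and do not reflect how the EVA, Dynamic Capacity Manager, or any shipping array actually implements it.

```python
class ThinLUN:
    """Illustrative thin-provisioned LUN: physical space is
    allocated lazily, only when a block is first written."""

    BLOCK = 4096  # hypothetical block size in bytes

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks  # advertised size
        self.allocated = {}                   # block number -> data

    def write(self, block_no, data):
        if block_no >= self.virtual_blocks:
            raise IndexError("write beyond advertised LUN size")
        # Physical capacity is consumed only at this point.
        self.allocated[block_no] = data

    def physical_bytes(self):
        return len(self.allocated) * self.BLOCK

lun = ThinLUN(virtual_blocks=1_000_000)  # ~4 GB advertised
lun.write(0, b"x" * 4096)
lun.write(42, b"y" * 4096)
print(lun.physical_bytes())  # only 8192 bytes actually consumed
```

The point of the sketch is simply that advertised capacity and consumed capacity are decoupled, which is the efficiency NetApp's program claims to add - and which EVA customers already get.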
For that matter, if customers wanted to pool the capacity of multiple EVAs and manage it as one pool, the SVSP offers thin provisioning and replication services too. What we don't offer today is primary LUN deduplication. But should customers running an EVA rush to deduplicate their block-based, mission-critical storage? I think the answer is absolutely not, and here are a few things to consider:
- NetApp has recently claimed 37,000 deployments of deduplication (via their PR department), but in a recent earnings call, their executives said 37,000 downloads. I don't know about you, but there's a big difference between the number of customers who download some free software and the number actually using it, especially in production environments.
- There's a trade-off to implementing deduplication with primary, block-based storage - and that trade-off is performance. Data I've seen from a few different sources says that a top customer concern in a virtualized environment is performance. I've seen throughput testing results showing that the performance degradation on a FAS system with dedup can be as high as 65%. NetApp's own recommendations say to run it during low activity and not all of the time, and they also make you sign a waiver stating you understand the risks of lower performance.
- We haven't tested to see what the actual deduplication capacity savings would be and frankly there are a lot of factors that would play into that. Since the controllers in the vSeries are the same controllers in the FAS system, it's worth noting that we have found that the percentage of capacity savings is roughly equal to the percent of slowdown.
- And that performance hit from deduplication doesn't include any other latencies that the vSeries introduces because of their in-line architecture.
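To make the trade-off in the bullets above concrete, here's a minimal, hypothetical sketch of how block-level deduplication generally works: every write is fingerprinted with a hash, and identical blocks are stored only once. This is not NetApp's implementation; it just shows where the capacity savings come from, and why every single write pays extra work (hashing plus an index lookup) that shows up as a performance hit.

```python
import hashlib

def dedupe(blocks):
    """Store each unique block once.

    blocks: list of equal-sized bytes objects.
    Returns (store, savings) where savings is the fraction of
    capacity saved. Note every block costs a hash computation and
    an index lookup - that overhead is the performance trade-off.
    """
    store = {}  # fingerprint -> block data
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # only the first copy is kept
    savings = 1 - len(store) / len(blocks)
    return store, savings

# Example: 8 logical blocks, but only 3 unique patterns.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"C" * 4096,
          b"A" * 4096, b"B" * 4096, b"A" * 4096, b"B" * 4096]
store, savings = dedupe(blocks)
print(len(store), f"{savings:.0%}")  # 3 unique blocks, 62% savings
```

How much is actually saved depends entirely on how much duplicate data the workload contains - which is why, as noted above, real-world savings vary so widely.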
So should an EVA customer put their arrays behind a NetApp vSeries for a potentially small capacity savings when the potential performance penalty is high? And keep in mind the vSeries is based on the FAS controller. We've shown in a recent blog post that, based on our testing, its performance degrades rapidly.
There are some good reasons to implement SAN-based virtualization with a product like the StorageWorks SVSP or the NetApp vSeries. However, for an EVA customer, NetApp's value proposition of better capacity efficiency on the physical storage just isn't one of them. A far better answer for the EVA customer is the StorageWorks SVSP. We'll cover this topic in a podcast later this week, so stay tuned for that.