By Calvin Zito, @HPStorageGuy
In my last post, I included a couple of summary videos from our recent HP StorageWorks Tech Day. The hands-on lab really stirred up a few folks over at NetApp. The week after our Tech Day, they did a WebEx session to try to address some of the bloggers' comments about how difficult their FAS system was to manage. I won't go into details about that, but I counted at least three different GUIs that they showed during that demo. The HP StorageWorks EVA has one - Command View. But that's not the topic I want to cover today.
The old "switch-a-roonie"
During their online demo, Vaughn Stewart from NetApp also discussed the NetApp vSeries - a network-based storage virtualization product - and suggested that HP and NetApp were partnering to help EVA customers. Vaughn thanked me for attending the demo and talked about partnering with HP. I thought he was trying to frame my attendance as an HP endorsement of the vSeries. It wasn't. I made it very clear that HP wasn't working with NetApp to put vSeries products in front of our EVAs. I told the demo audience that HP has our own network-based SAN virtualization product, the SAN Virtualization Services Platform (SVSP), that competes with the vSeries, and in no way do we recommend EVA customers use the vSeries to virtualize a pool of EVAs. We have talked about the SVSP several times on this blog and it will be discussed in a podcast later this week.
I initially didn't understand why Vaughn brought the vSeries into the demo. But a week or two after the NetApp demo, they announced a new marketing program targeting EMC CX series and HP EVA installed base customers with their vSeries. Now it's pretty clear to me what was going on then. This new NetApp marketing program asks customers to consider putting a vSeries in front of an EVA or CX.
So I want to spend a few minutes now discussing why I think it would be a bad decision for any EVA customer to consider such a thing. It's my belief that the benefits to an EVA customer would be zip (and interestingly that is the name of NetApp's program).
Improve storage efficiency at what price?
The claim NetApp is making is that EVA customers are not using their storage capacity efficiently. That's a bit laughable given that with an EVA, every spindle is used for data, and unless you're using tiering, we recommend a single disk group, which gives incredible storage capacity efficiency. To be in the program, NetApp has to approve the customer's application. The customer is basically signing up to purchase the vSeries in 90 days if it delivers what NetApp will stipulate. Make sure NetApp also stipulates the performance hit you'll take - but I'm getting ahead of myself on that one.
My guess is the number of customers that actually get into the program will be rather small, and maybe that's an objective of NetApp's marketing program. I suspect that NetApp's real motive is to develop a list of CX and EVA customers that they can continue to call on.
The NetApp claims of improved storage efficiency come from a few categories of services that the vSeries provides for the arrays attached to it:
- Thin provisioning
- Data replication (snapshots, clones, etc.)
- Deduplication
The EVA customer already enjoys capacity efficiencies in the first two categories, thin provisioning and data replication. (Note that the EVA doesn't use traditional thin provisioning today but uses a product called Dynamic Capacity Manager that accomplishes similar results by integrating with the OS.)
For that matter, if customers wanted to pool the capacity of multiple EVAs and manage it as one pool, the SVSP offers thin provisioning and replication services too. What we don't offer today is primary LUN deduplication. But should customers running an EVA rush to deduplicate their block-based, mission-critical storage? I think the answer is absolutely not, and here are a few things to consider:
- NetApp has recently claimed 37,000 deployments of deduplication (via their PR department), but in a recent earnings call, their executives said 37,000 downloads. I don't know about you, but there's a big difference between the number of customers who download some free software and the number who are actually using it, especially in production environments.
- There's a trade-off to implementing deduplication with primary, block-based storage - and that trade-off is performance. Data I've seen from a few different sources says that a top customer concern in a virtualized environment is performance. I've seen throughput testing results showing that the performance degradation on a FAS system with dedup can be as high as 65%. NetApp's own recommendations say to run it during periods of low activity and not all of the time. NetApp also makes you sign a waiver stating you understand the risks of lower performance.
- We haven't tested to see what the actual deduplication capacity savings would be, and frankly there are a lot of factors that would play into that. Since the controllers in the vSeries are the same controllers used in the FAS system, it's worth noting that we have found the percentage of capacity savings to be roughly equal to the percentage of slowdown (see the sketch after this list).
- And that performance hit from deduplication doesn't include any other latencies that the vSeries introduces because of their in-line architecture.
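To make that savings-versus-slowdown point concrete, here's a minimal back-of-the-envelope sketch in Python. The capacity, IOPS, and percentage figures in it are invented placeholders purely for illustration - not measurements from any FAS, vSeries, or EVA.

```python
# Back-of-the-envelope model of the dedup trade-off described above.
# All numbers below are illustrative placeholders, not measured data.

raw_capacity_tb = 100.0        # usable capacity before deduplication
baseline_iops = 50_000         # throughput with deduplication disabled

def effective_value(dedup_savings_pct, perf_penalty_pct):
    """Return (effective TB, effective IOPS, IOPS per effective TB)."""
    effective_tb = raw_capacity_tb / (1 - dedup_savings_pct / 100.0)
    effective_iops = baseline_iops * (1 - perf_penalty_pct / 100.0)
    return effective_tb, effective_iops, effective_iops / effective_tb

no_dedup = effective_value(0, 0)
# Scenario echoing the observation that savings % roughly tracks slowdown %:
with_dedup = effective_value(30, 30)

print("No dedup:   %.1f TB effective, %.0f IOPS, %.0f IOPS/TB" % no_dedup)
print("With dedup: %.1f TB effective, %.0f IOPS, %.0f IOPS/TB" % with_dedup)
```

The exact numbers don't matter; the point is that if the capacity you gain is roughly matched by the throughput you give up, the net benefit on a busy block workload can easily be a wash or worse.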
So should an EVA customer put their arrays behind a NetApp vSeries for a potentially small capacity savings when the potential performance penalty is high? And keep in mind the vSeries is based on the FAS controller; we've shown in a recent blog post that, based on our testing, its performance degrades rapidly.
There are some good reasons to implement SAN-based virtualization with a product like the StorageWorks SVSP or the NetApp vSeries. However, for an EVA customer, NetApp's value proposition of getting better capacity efficiency out of the physical storage just isn't one of them. A far better answer for the EVA customer is the StorageWorks SVSP. We'll cover this topic in a podcast later this week, so stay tuned for that.
By Karl Dohm, Storage Architect
Welcome back to the next in a series of posts where we take a closer look at NetApp and its FAS series of storage arrays. The discussion topic today is Microsoft's Exchange Solution Reviewed Program (ESRP) and its tie to FAS throughput.
The FAS has some controversial history with regard to performance. From time to time the issue comes up, and in response NetApp has generally denied that the problems exist. Often we find the opposite stance in posts from NetApp lauding their performance - for example, Kostadis Roussos' post where he refers to WAFL write performance as 'surreal'. But, as I have said in previous posts, there are some justifiable reasons this controversial subject keeps surfacing.
First of all, let's touch on why an average storage consumer should care about array throughput. An array with better throughput - i.e., the ability to service more I/Os from a given set of spindles - can require less hardware to do the same job. The bigger the throughput difference, the more to be saved on purchase price, warranty cost, power consumption, floor space, cooling, etc. Array throughput statistics can be meaningful when evaluating value in a storage array. It seems NetApp also finds this attribute important, given the number of blog posts and papers they have on the topic of performance.
Recently in the comments section of a blog post on understanding WAFL, NetApp's John Martin and I had a small debate as to whether a synthetic load generator like IOMeter could be used to characterize how an array will perform in production scenarios. I made the argument that this type of tool can be used to circle the wagons around the I/O characteristics of a real world application, and that through multiple point tests of the load components of the application one could get a reasonable assessment of how well the box will behave. John's opinion was more along the line that synthetic workload tests were not suitable to provide an indication of how well an array would run with a production application ("Synthetic workloads in isolation lead to non typical results"). He referenced Jetstress as a more accurate indicator.
I took his cue and had a look at the FAS2050 ESRP results paper, which describes MS Exchange-like throughput of the FAS2050 array. Even though ESRP isn't intended to be a benchmark, a scan of ESRP results tells me that many vendors seem to use the forum of ESRP as a way to post throughput results showing how their array handles MS Exchange load. It kind of makes sense, since there seems to be no Exchange-related benchmark out there, and ESRP is the closest controlled thing the industry has to work with.
The NetApp ESRP paper provides insight into how NetApp would recommend setting up the 2050 for Exchange loads, and it shows throughput results in a heavily loaded 10,000 mailbox Jetstress test. This paper sparked our interest because the described results seemed good and did not correlate with results from synthetic load generators that produce a similar pattern as Jetstress. Maybe John was right. We decided to peel back the onion a bit and take a look under the covers of this ESRP test to figure out what was going on.
We happened to have access to a FAS2050 and decided to try and recreate the ESRP results as published. It turns out that the IOPs value that NetApp published was in fact roughly re-creatable given the data in the paper. On the surface this can be viewed as NetApp having made an honest submission to ESRP, and within the letter of the law one could reasonably argue that they did. But we also learned that NetApp found a way to make their results come across as favorably as possible, meaning the results have little relevance as to how well the FAS will run MS Exchange.
After a rather lengthy setup experience, we finally configured the aggregates, volumes, servers, LUNs, MPIO, and HBA attributes as described in the ESRP paper. We even set the diagnostic switch "wafl_downgrade_target" to a value of 0 in accordance with the recommendations in the paper.
One might ask, as we did, what does "wafl_downgrade_target" do? In its TR-3647 paper, NetApp describes the switch as follows: "The 'downgrade_target' command changes the priority of a process within Data ONTAP that handles incoming SCSI requests. This process is used by both FC SAN and iSCSI. If your system is not also running NAS workloads, then this priority shift improves response time."
I think this description is telling us that the NAS process consumes bandwidth even when there is no NAS work to do. Also, given the NetApp messaging around a unified storage architecture, a recommendation to use this switch seems like a bit of a contradiction. Would you consider it normal to be asked to set a switch that generates the following response? "Warning: These diagnostic commands are for use by Network Appliance personnel only". Last but not least, this switch resets itself if the array reboots. I'll leave it to the audience to draw their own conclusions as to whether use of this switch is truly a recommended practice in customer environments.
Once the array was freshly initialized and everything was set up, we ran the test and observed roughly 2200 average database disk transfers/second per host. Within noise levels, this recreated the results posted in their ESRP paper.
The main problem we have with how NetApp did this testing is that after the initial run, each subsequent run of the test is slower than the one before. The 2nd run showed results of approximately 1980 transfers/second per server, about an 11% drop. By the fifth run throughput had dropped to approximately 1555 transfers/second per server - a 30% drop. After a couple more runs we were down to 1450, 34% slower than the first run.
I didn't have the patience to run enough times to figure out where this decay curve flattens out.
At this point I decided to run a "reallocate measure" against one of the database LUNs, and the FAS reported the value to be 17. According to the NetApp man page for reallocate measure: "The threshold when a LUN, file, or volume is considered unoptimized enough that a reallocation should be performed is given as a number from 3 (moderately optimized) to 10 (very unoptimized)". Allow me to translate - the database LUNs are very fragmented. For those who might be confused by the use of the word fragmentation in this context, this is not NTFS fragmentation - it's WAFL fragmentation.
Now things were starting to make sense. We were seeing the same sort of decay curve as shown in the IOMeter results posted in Making Sense of WAFL - Part 4. Every time the test is run, the random component of the Jetstress database accesses fragments the LUN further and the throughput numbers get worse. An array like the EMC CX or HP EVA won't undergo this sort of decay curve, since these arrays do not have internal WAFL-fragmentation problems like the FAS does.
That's not all. After the throughput test, Jetstress executes a checksum test of the databases to be sure the array did not corrupt any data. After a few runs I noticed an interesting pattern. On the FAS, the length of time needed for the checksum calculation also degraded as the database LUNs went through their WAFL fragmentation. When the LUNs were fresh and unfragmented, the checksum calculation took about 2 hours. By the fifth run, when the database LUNs had a WAFL-fragmentation measure of 17, the checksum calculation took over 10 hours - roughly five times longer. To summarize: we saw a 34% slowdown in database throughput and a fivefold slowdown in checksum calculation just by letting the ESRP test run for about 48 hours before taking measurements.
So, drawing this to a close, I think there is a reasonable argument that NetApp should have posted results more like 1450 (or fewer) disk transfers/second/host, as opposed to the 2220 transfers/second/host they did post. Most would expect that results in a test as visible as ESRP are measured after a reasonable burn-in period. After all, when someone runs MS Exchange, they usually run it for longer than 2 hours.
By Jim Haberkorn
I had hoped that my last post regarding NetApp performance claims would end gracefully, as a courageous NetApp employee has apparently now agreed to work with us to find out what we may be doing wrong, if anything, to be getting such poor performance out of our NetApp filer. FYI: that discussion has now moved to engineer Karl Dohm's blog post, where a civil discussion on the subject is taking place.
But alas, a graceful ending was not to be. I've been informed that a certain NetApp employee has now moved to Twitter to assert that I have called NetApp a liar in my blog post.
So, in the interest of setting the record straight on this important point, let me make it clear: I have never referred to NetApp or its bloggers as liars, though I have said, and still believe, that some of their claims and arguments are illogical, both in regards to claims they make about themselves and claims they make about the competition.
If you check my previous blog post, you will see that the word 'liar' was used only once and that was by a NetApp blogger, in a moment of excessive sensitivity. But now another NetApp employee has picked it up and twittered about it. Ah! A new NetApp blogging tactic: One NetApp blogger exaggerates a competitor claim, then the other one attacks the competitor for it. Hmm....I must add that one to my list.
Now, here are just three examples of NetApp illogic that surfaced in the previous post:
- Using blog references to convince me that WAFL is not a file system (Kostadis, Geert, are you reading this?) when every NetApp white paper on the NetApp website, including one just published in July 2009, still refers to WAFL as a file system - http://media.netapp.com/documents/wp-7079.pdf. Logically, why would you insist your competitors accept your point when you haven't even convinced your own company?
- Telling me it is 'dangerous' for a competitor to even attempt to accurately performance test another vendor's array, when NetApp has actually gone to the extent of publishing two SPC benchmarks on EMC arrays. Okay, maybe 'illogical' is not the right word here - perhaps 'contradictory' would have been more precise. But then again, wouldn't you think it illogical to state an obvious contradiction on a public blog? I mean, the idea of a debate is to win the argument, not hand your competition a stick to beat you with. Note to NetApp bloggers: I am not threatening to beat NetApp employees with a stick.
- Claiming in their 21 page Wyman/Mercer cost-of-ownership white paper that after a thorough and meticulous analysis of EVA, CLARiiON, and DMX usable capacity, it was found that all those arrays used exactly the same amount of usable capacity for a 4TB database, down to the tenth of a terabyte (and by the way, the number NetApp came up with in its painstakingly precise calculation was 30.7TB for each, as opposed to their own 15.0TB for a FAS system.) If 'illogical' is not the right word here, which word would you prefer? Would you find 'ridiculous' less offensive?
But my point is: when your claims are illogical, it's illogical to take offense. Rather, reworking your arguments and getting back into the game is the best option. Also, I think everyone realizes that being illogical and lying are two entirely different things.
As far as blogging is concerned, I consider myself one of the least thin-skinned people you'll ever blog with. Any tendency towards hyper-sensitivity was beaten right out of me during six years in the Marines. When someone now tells me that 'my claims are illogical', I don't get personally worked up about it. In fact, I find myself marveling at their gracious language and self-restraint. Heck, I didn't even get angry when a NetApp blogger published one of my HP Confidential slides and called it 'nonsense' and 'dipstickery' (see this post).
So, here is my final piece of advice to my honorable NetApp colleagues: Lighten up, guys! Nobody in the blogging world minds a well phrased repartee now and again, but all this teeth-grinding is so Cold War. Within the industry, you're the only bloggers I know that carry on the way you do. Your company's doing well. Relax. Engage in the blogs if you feel so moved, but try to have a good time while you're doing it.
By Jim Haberkorn
Before I begin, I don't know how other readers react to it, but whenever I see someone get frustrated and lose their temper on a blog, it's usually because deep inside they know they are losing. Maybe that's not how it is portrayed in movies and books, but in real life, that's how it works.
For those who have come in late to this discussion (see the discussion on our previous post titled "Are NetApp performance claims logical"), this entire exchange I am having with Alex from NetApp actually started last November when I analyzed a NetApp white paper attacking the HP StorageWorks EVA and pointed out its inconsistencies and highly questionable tactics. At the same time, an HP engineer started a blog in which he discussed performance testing he'd completed on a NetApp filer vs. an EVA. I'm pointing everyone now to the last of the four posts our engineer published, titled "Making sense of WAFL Part 4", as it most specifically addresses the performance issue Alex raised in the blog comments I referenced above. I have said several times that HP won that discussion. I stand by that, but in the end it's up to every reader to decide for themselves.
Now, Alex asked if I would explain why NetApp's having a file system optimized for writes has any bearing on NetApp storage performance. Here is the answer: the NetApp file system - WAFL - is optimized for writes. Basically, WAFL will write to the nearest available free space to wherever the disk head is. I believe Alex conceded that. The rest of the arrays we have tested in our lab take a different tack; they optimize for reads by updating blocks in place. And why? Because in block environments most applications are read intensive rather than write intensive, and you want to keep the blocks for databases as contiguous as possible so they can be read faster with a minimum of disk head movement. Oracle, for example, depends heavily on locality of reference for its performance. This NetApp write optimization was designed into the NetApp file system many years before they ever contemplated going into block storage, and the trade-off is that their read performance is impacted relative to the competition. Now, I don't consider this single point to be the final word on NetApp filer performance - all storage systems have their peculiarities - but coupled with some of NetApp's other design decisions and some of the testing we've presented, I think we are building a pretty good case in regard to their block performance behavior.
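To illustrate the locality argument, here's a small toy simulation in Python - my own sketch, not NetApp's or HP's code, and not a model of any real allocator. One layout updates blocks in place, keeping a logically sequential file contiguous; the other writes every update to the next free block, write-anywhere style. After a burst of random overwrites, we count how many contiguous extents a sequential read of the file has to visit - a rough proxy for disk head movement.

```python
import random

# Toy model: a 1,000-block "file" starts out contiguous on disk. We apply
# random overwrites two ways and compare how fragmented a sequential read
# of the file becomes. Purely illustrative.

random.seed(42)
FILE_BLOCKS, OVERWRITES = 1_000, 5_000

def contiguous_extents(layout):
    """Count the contiguous extents a sequential read must visit."""
    return 1 + sum(1 for prev, cur in zip(layout, layout[1:]) if cur != prev + 1)

# Update-in-place: logical block N always stays at physical block N.
in_place = list(range(FILE_BLOCKS))

# Write-anywhere: each overwrite relocates the logical block to the next
# free physical block, leaving its old location behind.
write_anywhere = list(range(FILE_BLOCKS))
next_free = FILE_BLOCKS
for _ in range(OVERWRITES):
    write_anywhere[random.randrange(FILE_BLOCKS)] = next_free
    next_free += 1

print("Update-in-place extents:", contiguous_extents(in_place))       # stays 1
print("Write-anywhere extents: ", contiguous_extents(write_anywhere))  # roughly 1,000
```

In this toy model the update-in-place file stays in a single extent, while the write-anywhere file ends up scattered across roughly a thousand extents, so a logically sequential read starts to look like random I/O. Real arrays are far more sophisticated than this, but it captures why a write-optimized layout tends to pay a read-locality price on block workloads.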
So, forget for a second the tests that each vendor runs and publishes on their own product, forget that all of us are paid to ensure our company's success in the market, forget that we all like to win debates; the point I keep hammering home in my blog posts through various arguments, some technical and some logical, is that the block performance, usable capacity, and cost of ownership claims NetApp makes about its technology do not add up. They don't add up in our lab when we test them, and they don't add up logically when we analyze their technology. Oh, every vendor puts its best foot forward - we all expect that, but, in my opinion, some of NetApp's claims deserve special attention.
Finally, there are two things that NetApp has done over the past several years that, in my opinion, have seriously hurt their credibility among people who know storage. One was the capacity guarantee program that I have previously blogged about, and the second was their Wyman/Mercer cost-of-ownership white paper in which they attacked the HP EVA, and which I also blogged about.
If any reader wants to understand why I question some of NetApp's claims about itself, those two blogs would be a good place to start.
By Jim Haberkorn
We at HP like to test competitor arrays and see if we can match their published results. In our internal testing, only the NetApp results are so out of whack that we are left scratching our heads over what the heck they did to achieve them. So for NetApp, we have published our results and challenged them on it. And in fact, the NetApp folks at first put up a vigorous defense which included a fair number of attacks on the integrity of our engineer who did the testing, but then as the discussion progressed, the NetApp enthusiasts went quiet. See the blog post titled Making sense of WAFL.
So, we've argued against NetApp performance with numbers and the argument got very technical, and we won. But recently, NetApp, on one of its blogs, waited till the discussion had died down and then, to our surprise, acted as if they had won the discussion and that the issue now was totally resolved and that they had once again defeated the forces of evil. So, now NetApp has left us no choice but to try a new tactic: Logic.
The NetApp counter-arguments on the subject of their performance are becoming more and more like the man who admits it is raining, admits he is standing outside, admits he doesn't have an umbrella, but then denies he is getting wet. If you ask a NetApp engineer these performance-related questions, you will get the answer 'yes' to all of them.
- Do you build your LUNs on top of a file system?
- Does your file system write only to free block space rather than updating the old blocks?
- Do you ship a de-fragmentation tool on every NetApp filer?
- Do you use software RAID?
- Does your file system spread the metadata over the entire disk system?
- Does WAFL require you to build secondary inode trees if the file is bigger than 64KB?
- Can IOs to a particular file be serviced by only a single controller in a NetApp cluster?
- Do NetApp snaps reside in the same disk group as the primary volume?
- Is only a quarter of your NVRAM usable?
- Is the processing power of your largest filer, rated by NetApp to handle 1176 disks, based on a maximum of 8 x 2.6 GHz AMD dual-core Opteron processors - about the same processing power as a ProLiant DL785?
- Is your file system optimized for writes?
But if you then ask a NetApp blogger whether, taken together, these features negatively impact NetApp performance, you'll get an answer to the effect of, "What, are you crazy? How did you ever come up with that conclusion?"
Now, I am sure that every one of those eleven engineering decisions was made by NetApp for a logical reason from their perspective. But I am also sure that every one of them has a negative impact on NetApp performance, and the cumulative effect is pretty significant in most real-world environments. And those features put NetApp at a huge performance disadvantage in a storage world dominated by vendors who don't require the extra overhead of having to build LUNs on top of file systems, or don't need to ship a de-fragmentation tool with their array, or don't use software RAID.
Conclusion: Is every NetApp NAS customer unhappy with NetApp performance? Answer: No, of course not. But, of the storage arrays we've tested, is NetApp block performance the worst by a pretty wide margin? Yeah, looks that way to us.
An oldie but a goodie! Not long after HP bought LeftHand Networks, NetApp came out guns blazing with a FUD campaign against our LeftHand technology. It's been over two years since I wrote this blog post and it's interesting to look back at it now. Alex has a new job at NetApp, and our P4000 LeftHand has proven to be a great solution in the marketplace. I want to personally thank Alex for drawing so much attention to HP LeftHand!
By Calvin Zito
Alex McDonald, who I think is a competitive analyst over at NetApp, has been trying his best to make our HP LeftHand solutions look bad compared to his products. It's gone back and forth with lots of comments and posts on both this blog and Alex's. This has me thinking about the value of these public cat fights and whether or not they are helping our customers better sort fact from fiction. I'm watching the "page view" metrics on our blog, and clearly a lot of you like to read these types of blog posts.
Every storage vendor (and really, every product team) makes engineering choices that result in strengths and weaknesses in what they offer. Let me give a simplified theoretical example to help bring what I'm saying to life. Let's say that Company A's product costs 10% more than Company B's, but Company A's widget has a total cost of ownership that is 1/3 of Company B's even with the higher purchase price. Would you as a customer care about a 10% price difference knowing that the TCO is so much better? You would certainly want to know, but I don't think you'd make it the deciding factor (unless the extra 10% means you go over budget). So, to bring it back to something concrete, is it helpful for you to hear these discussions about a specific product detail without the broader context of the benefits? In the current case with NetApp, I tried to bring the discussion back to that broader context on Alex's blog, but he and his colleagues were focused on nothing other than capacity utilization and ignored my raising the benefits of our virtualization bundles (and I guess I shouldn't be surprised about that, since NetApp only has storage - no servers, networking, virtualization software, etc.).
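To put hypothetical numbers on that simplified example (both the prices and the TCO figure below are invented purely for illustration):

```python
# Hypothetical figures for the Company A vs. Company B example above.
purchase_b = 100_000                 # Company B's purchase price
purchase_a = purchase_b * 1.10       # Company A costs 10% more up front

tco_b = 600_000                      # Company B's total cost of ownership
tco_a = tco_b / 3                    # Company A's TCO is one third of B's

print(f"Up-front premium for A:  ${purchase_a - purchase_b:,.0f}")   # $10,000
print(f"Lifetime savings with A: ${tco_b - tco_a:,.0f}")             # $400,000
```

A $10,000 premium against $400,000 of lifetime savings is exactly the kind of broader context that a single-metric comparison leaves out.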
I'm not sure I can answer whether or not customers find value in these types of squabbles yet but I'm sure it's something I'll talk about in a future blog post. I'm interested in what you think - is it helping you better understand the products, creating more confusion, or is it just fun to watch a vendor debate in the blogosphere? My intent really is to make our blog a place where customers can learn more about storage, HP StorageWorks, and infrastructure convergence. Feel free to drop me an email by clicking on the "Contact" link on the right-hand navigation of this blog or click on this text.
In my next post, I'll summarize where the specific discussion with NetApp has gone over the last several days - that is unless I get an overwhelming response that you don't see any value in these debates.
By Jasen Baker, Storage Architect
I once heard that in communicating your opinion or differences, you should not use absolute phrases such as "you always", "you never", "every time", etc. These make broad, sweeping assumptions which seldom reflect the truth, especially when communicating differences between products. Skipping the details - such as naming your source when quoting an argument, or consolidating many options into a single unified calculation - oversimplifies and often misleads readers who are looking for educated facts instead of uninformed guesses.
As an example, take a look at online value calculators. There are calculators for mortgages, calculators for the national debt, ROI calculators, and even storage capacity calculators. They do their best to point you in a certain direction and narrow down the scope of what you will be working with, but they don't truly take in all the factors - hence the infamous asterisk!
I'm responding to a recent blog post referencing a "LeftHand Capacity Calculator" crafted to demonstrate usable percentages of available capacity - percentages that, supposedly, ALWAYS come out the same way.
To quote the blog, "That's because, regardless of how small or how large your LHN SAN, it's always:"
There's that word again, always.
In storage, there is one capacity loss that always occurs: the space you lose as a result of hardware RAID - well, unless it's hardware RAID 0.
First, the HP LeftHand storage nodes can be configured in RAID 5, RAID 6, or RAID 10, all with different usable capacities. This calculator only offers RAID 5. Why choose? The reasons are many, but the most common are performance, protection, and capacity. You choose - it's no different with us or any other vendor's solution.
Second, "Disk rightsizing", a term used to explain why that 1TB hard drive you bought only shows ~932GB useable. Why? Well, that's because the hard drive vendors view 1MB as 1000 kbytes while your Operating system views 1MB as 1024 kbytes. That extra 24 bytes adds up which is why you truly don't get the actual hard size useable (this is before formatting it with your favorite file system as well). Again, nothing specific to use or any other vendor solution.
Third, Network RAID. What is Network RAID? Network RAID is a unique feature of the HP LeftHand SAN that allows you to CHOOSE, on a per-volume basis, how many replicated copies of your LUN/volume are distributed across the SAN. What is unique about this is that it's DYNAMIC. You get to choose which volumes have it and which do not. What it offers is the ability to survive entire node failures; if your nodes are physically separated and you lose an entire physical site, your data remains online and available.
Hardware RAID is usually a set-it-and-forget-it configuration, and it's seldom changed. Network RAID is dynamic, because you as the customer choose to turn it on or off depending on the application's protection needs. With that choice, you opt to use additional capacity to protect your data in a manner superior to standard hardware RAID. The key point here is choice. You have the choice, and if you change your mind, the system is dynamic and allows you to change the level of data protection on a per-volume basis as often as desired. Unfortunately, a calculator without options isn't very reflective of real life. In essence, the calculator above was missing the infamous *Your mileage may vary...
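As a contrast to a calculator with no inputs, here's a minimal sketch of what a capacity estimate looks like once the choices are exposed. This is my own illustrative example - the overhead factors are deliberately simplified, and it is not an official HP LeftHand sizing tool (real sizing also has to account for rightsizing, sparing, metadata, and so on).

```python
# Simplified usable-capacity estimate: a hardware RAID choice per node,
# then a Network RAID replica count chosen per volume. Illustrative only.

def usable_tb(nodes, drives_per_node, drive_tb, hw_raid, nr_replicas):
    raid_factor = {
        "raid0": 1.0,
        "raid5": (drives_per_node - 1) / drives_per_node,  # one drive of parity
        "raid6": (drives_per_node - 2) / drives_per_node,  # two drives of parity
        "raid10": 0.5,                                      # mirrored pairs
    }[hw_raid]
    per_node = drives_per_node * drive_tb * raid_factor
    return nodes * per_node / nr_replicas   # Network RAID keeps nr_replicas copies

# Same hardware, two different sets of choices:
print(usable_tb(nodes=4, drives_per_node=12, drive_tb=1.0,
                hw_raid="raid5", nr_replicas=1))   # capacity-first: 44.0 TB
print(usable_tb(nodes=4, drives_per_node=12, drive_tb=1.0,
                hw_raid="raid10", nr_replicas=2))  # protection-first: 12.0 TB
```

The spread between those two answers is the whole point: the "right" number depends on choices the administrator makes per volume, which is exactly what a calculator with no input fields can't show.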
To quote our previous blog poster "Unlike NetApp's space efficiency calculator, the LHN Duplication Calculator I've designed doesn't have any input fields or buttons..."
The HP Centralized Management interface WE designed does have buttons, and even drop-downs, allowing you to choose how your capacity is used - ALWAYS.
By John Spiers (Former CTO and a founder of LeftHand Networks, now working for HP StorageWorks)
If you haven't taken a look at the new HP virtualization bundles, you definitely should. The virtualization bundles provide an end-to-end solution that delivers application high availability without external storage. Sounds like a contradiction? Read on!
It has been characterized as a "mini-Matrix system", offering server, virtualization, storage, and networking products configured and tested to reach the full potential of server virtualization. These bundles include ProLiant G6 servers, VMware's vSphere 4 virtualization software, ProCurve networking switches, HP LeftHand P4000 storage, and HP's Insight Control Suite (ICE) for management.
My colleague recently wrote about the virtualization bundles and it got me thinking about how unique these bundles are for SMBs and other companies looking to improve application availability and cost savings through virtualization.
Allow me to elaborate...
HP positions these easy-to-buy bundles as solutions that reduce the complexity and uncertainty of virtualization. You have rack or tower servers, highly available shared storage and networking with simple, centralized management.
Looking at the value of the bundles, Illuminata recently stated, "A major contributor to the value of what HP offers in these bundles is the fact that the requirement for acquiring, integrating and testing external SAN storage can be avoided."
We address one of the main hurdles on the way to server virtualization: the need for shared storage. Without shared, highly available storage, VMotion, VMware HA, and VMware FT cannot happen automatically, virtual machines cannot be moved, and application users cannot be shielded from the outage of a physical server. If all you do is consolidate many virtual machines onto one physical server, you have basically placed all your bets in one basket (traditionally known as putting all your eggs in one basket). Some workloads and business requirements can tolerate this scenario, yet many cannot. With the HP LeftHand Virtual SAN Appliance, the internal server disks (and directly attached disks) can be pooled in a VMware environment and act like a pool of storage - a virtual SAN. The IT administrator ends up using a SAN without ever having bought a SAN - a physical SAN, that is!
The majority of these virtualization bundles include physical SAN nodes. And another secret is that the servers and the SAN can not only fail over, but automatically fail back and incrementally re-sync the data on the primary SAN without manual intervention and with complete application data consistency, using VMware's vSphere Fault Tolerance capability. Not to mention that these meaty bundles include software for snapshots, cloning, remote replication, thin provisioning, multi-site synchronous replication, and advanced performance monitoring - at no additional charge.
With this new technology combination, HP is clearly establishing a new paradigm in server and storage virtualization. This is what customers have been asking for, for years and they can finally get it. I don't know of a single vendor server and SAN solution in the market today that is comparable.
By Calvin Zito
Several weeks ago, I wrote about NetApp claiming to win the CRN Channel Champion Award in the Network Storage category. The part they kind of ignored was that they did not win the overall award - HP StorageWorks did. What they won was the Financial Performance trophy, not the overall Channel Champion. But you wouldn't have known that from their press release.
Well, their press team is at it again. This time NetApp put out a press release regarding their participation in a SNIA Special Interest Group (SIG). Here's what they said in their release: "NetApp is also the founding member of the SNIA Database Information Management Special Interest Group." A bit later in the paragraph where they say this, they give a link to the SNIA website: http://www.snia.org/forums/dmf/programs/ltacsi/dim_sig/. At this website, you'll find that NetApp is not the founding member but a founding member of this SIG along with IBM.
I'd like to think this is just an oversight, but given how NetApp stretched the truth with the CRN Channel Champion Award, this is looking like a dangerous trend. This may seem a bit trivial, but really, at its core, it's still just plain dishonest.