Around the Storage Block Blog
Find out about all things data storage from Around the Storage Block at HP Communities.

Converged Infrastructure: Block based storage virtualization podcast

  By Calvin Zito, @HPStorageGuy


This is the third in a series of three podcasts focusing on our HP Converged Infrastructure announcement. In part 1, I spoke with Sr. VP and GM of HP StorageWorks Dave Roberson.  In part 2, I talked to the former CEO of IBRIX Milan Shetti and Marketing Director Lee Johns about the new X9000. 


In today's podcast, we talk about block-based storage virtualization.  Storage virtualization is often a confusing topic because there are different types of storage virtualization.  Today's podcast discusses two types of block-based storage virtualization: within a controller array (like our HP StorageWorks EVA) and network or SAN based (our StorageWorks SAN Virtualization Services Platform or SVSP).  Each of these products had enhancements that were announced with the November 4th Converged Infrastructure announcement.


With the SVSP (link goes to the product page), we announced a "new" management tool - Command View SVSP.  This tool will be very familiar to EVA customers as Command View SVSP is very consistent with Command View EVA.  But it also simplifies and automates the task of provisioning a LUN.  For the details, listen to the podcast.


The advancement with the StorageWorks EVA (link goes to the EVA family page) is with our Cluster Extension EVA.  This software manages failover and failback between EVAs in a cluster.  What's new with the software is support for Microsoft Hyper-V Live Migration.  The EVA is the first array to support the new capabilities and again, there's more about this in the podcast.  I also have a guest blog below from one of our engineers that goes into more detail.


So with that, here's the podcast:





If your browser has issues with the embedded player, click here to listen to the podcast with a different player. Here's a link to download the MP3 (right-click and save the file).


By Matthias Popp, HP StorageWorks Architect, Storage Systems Integration


Migrating a running server across data centers, servers and storage?


Are you tired of planning weekend downtime for storage system upgrades, server patches or network changes in your data center?  Are you getting the same "Not this weekend ..." response from your business managers and users?


HP worked with Microsoft to enable Live Migration of virtual machines (VMs) in Hyper-V R2 - not just between servers but also between your storage systems and hence between your data centers!


The latest release of HP StorageWorks Cluster Extension EVA orchestrates the interaction between Microsoft's System Center Virtual Machine Manager, Windows Failover Clustering and Hyper-V Live Migration to move running Server VMs between servers and storage in one single step.


The newest version of Cluster Extension checks the disk array replication process, prepares for a Live Migration, and swaps the replication direction when the VM's target server is connected to the remote disk array. All of this is automatic, with no further administrator interaction. You decide when to Live Migrate, and Cluster Extension makes sure the data can be accessed. Simple as that.
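The sequence described above can be sketched in code. This is a hypothetical illustration in Python; the class, function and array names are invented for this sketch and are not the product's actual API:

```python
# Hypothetical sketch of the orchestration logic Cluster Extension automates.
# All names here are invented for illustration, not part of any HP product.

class ReplicationPair:
    """A source/target EVA pair replicating one VM's LUNs."""
    def __init__(self, source_array, target_array):
        self.source = source_array   # array currently serving I/O
        self.target = target_array   # array receiving replicated writes
        self.in_sync = True

    def swap_direction(self):
        """Reverse replication so the former target serves I/O."""
        self.source, self.target = self.target, self.source


def live_migrate(vm, dest_server, pair, server_to_array):
    """Move a running VM in one step: verify replication, swap the
    replication direction only when the destination server is attached
    to the remote (target) array, then perform the migration."""
    if not pair.in_sync:
        raise RuntimeError("replication not in sync; refusing to migrate")
    if server_to_array[dest_server] == pair.target:
        pair.swap_direction()        # destination now reads from its local array
    vm["host"] = dest_server         # stands in for the Hyper-V Live Migration
    return vm
```

The point of the sketch is the ordering: the replication swap happens before the VM lands on the destination server, so the data is always accessible from the server that owns the VM.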


Since your servers and storage are distributed between data centers, the same configuration and software is used for disaster protection. No need to learn additional tools. Use the ones you have! 


Finally a simple solution to proactive maintenance with no downtime!


The new Live Migration support in Cluster Extension configurations will have a significant impact on your IT and business teams.  Clustering software manages unexpected failures at any time, and Live Migration enables maintenance during the work day.  The IT team can now do server and storage maintenance during working hours.  They no longer have to plan for downtime long ahead of a change.  IT management doesn't have to budget for expensive weekend and night working hours.  Get your servers and storage patched now, because you can!


The following paper explains the configuration and will soon be updated for Windows Server 2008 R2 and Hyper-V R2 Live Migration support: Disaster Tolerant Virtualization Architecture with HP StorageWorks Cluster Extension and Microsoft Hyper-V™ white paper


Visit the HP booth at Microsoft's Tech·Ed Europe in Berlin next week for a demonstration, and visit Microsoft's blog for more info about the webcast Building Effective and Highly Available Disaster Recovery Solutions Using Microsoft Virtualization.

Tags: BVid

What will EVA customers gain from the new NetApp program? Zip! Zilch!

  By Calvin Zito, @HPStorageGuy

In my last post, I included a couple of summary videos from our recent HP StorageWorks Tech Day.  The hands-on lab really stirred up a few folks over at NetApp.  The week after our Tech Day, they did a WebEx session to try to address some of the comments the bloggers made about how difficult the management of their FAS system was.  I won't go into details about that, but I counted at least three different GUIs that they showed during that demo.  The HP StorageWorks EVA has one - Command View.  But that's not the topic I want to cover today.

The old "switch-a-roonie"

During their online demo, Vaughn Stewart from NetApp also discussed the NetApp vSeries - a network-based storage virtualization product - and suggested that HP and NetApp were partnering to help EVA customers.  Vaughn thanked me for attending the demo and talked about partnering with HP.  I thought he was trying to spin the fact that I attended into an HP endorsement of the vSeries.  It wasn't.  I made it very clear that HP isn't working with NetApp to put vSeries products in front of our EVAs.  I told the demo audience that HP has our own network-based SAN virtualization product called the SAN Virtualization Services Platform (SVSP) that competes with the vSeries, and in no way do we recommend EVA customers use the vSeries to virtualize a pool of EVAs.  We have talked about the SVSP several times on this blog and it will be discussed in a podcast later this week.

I initially didn't understand why Vaughn brought the vSeries into the demo.  But a week or two after the NetApp demo, they announced a new marketing program targeting EMC CX series and HP EVA installed base customers with their vSeries.  Now it's pretty clear to me what was going on then.  This new NetApp marketing program asks customers to consider putting a vSeries in front of an EVA or CX. 

So I want to spend a few minutes now discussing why I think it would be a bad decision for any EVA customer to consider such a thing.  It's my belief that the benefits to an EVA customer would be zip (and interestingly that is the name of NetApp's program).

Improve storage efficiency at what price?

The claim NetApp is making is that EVA customers are not efficiently using their storage capacity.  That's a bit laughable given that with an EVA, every spindle is used for data and, unless tiering is used, we recommend a single disk group, which gives incredible storage capacity efficiency.  To be in the program, NetApp has to approve the customer's application.  The customer is basically signing up to purchase the vSeries in 90 days if it delivers what NetApp stipulates.  Be sure NetApp also spells out the performance hit you'll take - but I'm getting ahead of myself on that one.

My guess is the number of customers that actually get into the program will be rather small and maybe that's a NetApp objective of their marketing program.  I suspect that NetApp's real motive is to develop a list of CX and EVA customers that they can continue to call on.

The NetApp claims of improved storage efficiency will come from a couple of categories of services that the vSeries provides for the arrays attached to it. 

  1. Thin provisioning

  2. Data replication (snapshot, clones, etc)

  3. Deduplication

The EVA customer already enjoys capacity efficiencies in the first two categories, thin provisioning and data replication.  (Note that the EVA doesn't use traditional thin provisioning today but uses a product called Dynamic Capacity Manager that accomplishes similar results by integrating with the OS.)
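For readers less familiar with the first of those categories, here is what thin provisioning does conceptually: a volume advertises a large logical size but draws physical capacity from a shared pool only when blocks are first written. This is a minimal Python sketch for illustration only; the class and names are invented and don't describe how any specific HP or NetApp product is implemented:

```python
# Minimal sketch of thin provisioning, for illustration only.
# A volume presents a large logical size, but physical blocks come out of
# a shared pool only on first write to each logical block.

class ThinVolume:
    def __init__(self, logical_blocks, pool):
        self.logical_blocks = logical_blocks   # advertised size
        self.pool = pool                       # shared free-block counter
        self.mapped = {}                       # logical block -> data

    def write(self, block_id, data):
        if block_id not in self.mapped:
            if self.pool["free"] == 0:
                raise RuntimeError("pool exhausted")
            self.pool["free"] -= 1             # physical block drawn on first write only
        self.mapped[block_id] = data

    def physical_used(self):
        """Physical blocks actually consumed, regardless of logical size."""
        return len(self.mapped)
```

The efficiency comes from the gap between the advertised size and `physical_used()`: unwritten capacity costs nothing until applications actually use it.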

For that matter, if customers wanted to pool the capacity of multiple EVAs and manage it as one pool, the SVSP offers thin provisioning and replication services too.  What we don't offer today is primary LUN deduplication.  But should customers running an EVA rush to deduplicate their block-based, mission-critical storage?  I think the answer is absolutely not, and here are a few things to consider:

  • NetApp has recently claimed 37,000 deployments of deduplication (via their PR department) but in a recent earnings call, their executives said 37,000 downloads.  I don't know about you, but there's a big difference between the number of customers who download some free software and the number actually using it, especially in production environments.

  • There's a trade-off to implementing deduplication with primary, block-based storage - and that trade-off is performance.  Data that I've seen from a few different sources says that a top customer concern in a virtualized environment is performance.  I've seen throughput testing results that say the performance degradation on a FAS system with dedup can be as high as 65%.  Their own recommendations say to run it during low activity and not all of the time.  NetApp also makes you sign a waiver stating you understand the risks of lower performance.

  • We haven't tested to see what the actual deduplication capacity savings would be, and frankly there are a lot of factors that would play into that. Since the controllers in the vSeries are the same controllers in the FAS system, it's worth noting that we have found that the percentage of capacity savings is roughly equal to the percentage of slowdown.

  • And that performance hit from deduplication doesn't include any other latencies that the vSeries introduces because of their in-line architecture. 

So should an EVA customer put their arrays behind a NetApp vSeries for a potentially small capacity savings when the potential performance penalty is high?  And keep in mind the vSeries is based on the FAS controller.  We've shown in a recent blog post that, based on our testing, its performance degrades rapidly.

There are some good reasons to implement SAN-based virtualization with a product like the StorageWorks SVSP or the NetApp vSeries.  However, for an EVA customer, NetApp's value proposition of getting better capacity efficiency out of the physical storage just isn't one of them.  A far better answer for the EVA customer is the StorageWorks SVSP.  We'll cover this topic in a podcast later this week, so stay tuned.


Labels: EVA| NetApp| storage| SVSP

StorageWorks Tech Day starting now

By Calvin Zito


If you don't follow me on Twitter, you may not be aware that we have a blogger event going on in Colorado Springs.  We've brought in a number of prominent storage and virtualization bloggers, and over the next day and a half we have a packed agenda.  The topics we'll cover include:



  • Storage virtualization for enterprise customers - virtualize infrastructure, not just servers

  • Shared storage for virtual servers (SMB-focused)

  • Unified storage

  • Deduplication

  • Converged Infrastructure


We'll also have hands-on sessions and demos of the HP StorageWorks Enterprise Virtual Array (EVA), SAN Virtualization Services Platform (SVSP), and HP LeftHand.  Here's a list of who is here (the "@name" is their Twitter user name):



I'm grateful to all of them for taking time out of their busy schedules to learn more about HP StorageWorks. 


I'll be blogging here about what is going on and hope to have a few podcasts later in the week. If you want to follow the conversation in real time, we'll be using the hashtag #HPTechDay on Twitter.  If you don't use Twitter, here's a URL where you can see all of the "tweets" using this hashtag:



SVSP use case #3: Turning higher availability & data integrity into cost effective DR solutions

By Edgardo Lopez, Product Manager

Turning higher availability and data integrity into cost effective DR solutions that decrease business risk and improve productivity

In our most recent discussion, we talked about how non-disruptive data migration with the HP StorageWorks SAN Virtualization Services Platform (SVSP) can significantly help firms improve data mobility and asset management with the potential of significant savings in capital expenses.  In this new discussion, we want to turn to the issues of business continuity and disaster recovery.  

Many people today confuse business continuity with disaster recovery. Business continuity ensures that enough resilience (or high availability) has been built into the infrastructure to eliminate single points of failure within a single site, keeping availability at "business as usual." Disaster recovery adds recovery after the loss of the primary site. Both need to ensure that the Recovery Point Objective (RPO), the point in time to which application data can be successfully recovered, and the Recovery Time Objective (RTO), the time by which the application must be live again, are within the guidelines of the business.
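The two objectives can be checked with simple arithmetic. The helper below is a hypothetical illustration (the function and parameter names are invented here): the RPO bounds how old the newest usable copy of the data may be, and the RTO bounds how long the application may stay down.

```python
# Hypothetical helper showing how RPO and RTO bound a recovery plan.
# Names are invented for this sketch; times are in minutes.

def meets_objectives(last_replica_age_min, recovery_duration_min,
                     rpo_min, rto_min):
    """A plan is acceptable only if the newest recoverable copy is no
    older than the RPO AND the application is back within the RTO."""
    return (last_replica_age_min <= rpo_min
            and recovery_duration_min <= rto_min)
```

For example, a 15-minute RPO with hourly replication fails immediately, no matter how fast the failover itself is; the two objectives have to be met independently.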

So, what is the problem?

In the U.S., the 9/11 events made disaster recovery a major concern for businesses of all sizes. But while large businesses can afford the consultants and testing required to build a complete disaster recovery plan, small and medium-size businesses are often challenged by the finances and the choices available. Even though the business risk involves loss of data and applications, not all businesses are able to deploy an adequate DR solution.

Traditional choices include clustering, server-based data mirroring solutions, and controller-based mirroring software.   However, all of them introduce shortcomings (see HP SVSP: Technology Overview and Use Cases - Note #1 below - for additional details) that negatively impact the complexity and cost of the solutions.  

Finally, DR solutions and schemes should conveniently allow for non-disruptive testing of the recovery scheme, to make sure the business is not in jeopardy should it actually need to use the DR solution.  Many organizations, however, will give up DR testing or perform minimal testing simply because of the disruption, complexity and cost of these tests.

It is in this context that we want to introduce the value of a DR solution with an SVSP-based storage infrastructure.   

The SVSP provides both synchronous and asynchronous data mirroring capabilities that facilitate rapid service resumption after storage, site or regional disasters.  Its ability to support heterogeneous storage makes it ideal for enabling greater flexibility in the choice of storage hardware at the primary and remote sites, to better optimize solution costs.  It also provides space-efficient copy services, including snapshots, which keep the system online and fully available during backups, allowing the entire application to be restored in minutes rather than hours.  Centralized management of the recovery operations reduces administrative tasks, improves productivity and minimizes errors.

The SVSP remote mirroring solution uses a snapshot-based technique, where only the differences between the snapshots are transmitted. This avoids the need for very expensive communication lines between the locations. It is possible to use common (or readily available) and low-cost communication lines, rather than higher-bandwidth higher-cost lines, further improving the economics for small and medium size businesses.
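The bandwidth savings described above come from shipping only the blocks that differ between two successive snapshots. Here is a minimal Python sketch of that idea; the function names and block layout are invented for illustration and are not the SVSP's actual implementation:

```python
# Sketch of snapshot-difference replication: only blocks that changed
# since the last replicated snapshot cross the link. Illustrative only;
# not the SVSP's real data structures or protocol.

def changed_blocks(prev_snapshot, curr_snapshot):
    """Return {block_id: data} for blocks that differ between snapshots,
    including blocks that are new in the current snapshot."""
    return {bid: data for bid, data in curr_snapshot.items()
            if prev_snapshot.get(bid) != data}

def replicate(prev_snapshot, curr_snapshot, remote_volume):
    """Apply just the delta to the remote copy; report bytes transmitted."""
    delta = changed_blocks(prev_snapshot, curr_snapshot)
    remote_volume.update(delta)
    return sum(len(data) for data in delta.values())
```

If only a small fraction of blocks change between snapshots, the link only ever carries that fraction, which is why modest-bandwidth lines can keep a remote copy current.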


About Data Integrity (Note 2):

Data Integrity is another important requirement for remote replication solutions.  There are multiple aspects:

  1. Consistent copies when multiple volumes are involved

  2. Handling intermittent failures. Disasters often unfold over a period of time

  3. Impact of data corruption on the total recovery time. 

If the wrong recovery choice is made, the problem could be compounded or a total data loss could result.  The SVSP provides a very flexible set of capabilities to handle these issues and enable firms to minimize business risk.

Let's talk about the first issue.  The SVSP asynchronous mirroring supports consistency groups.  These are groupings of volumes to enable the creation of consistent Point in Time (PIT) Copies across multiple volumes. This function is important because it aids good application design without sacrificing data consistency requirements. For example, in a database environment, the database and log files can be placed on separate volumes, and the consistency group function will allow the creation of consistent PIT Copies across all databases, log files, and volumes. This is critical because a failure to capture all of these components at the same point in time would leave the database application in an inconsistent and therefore unusable state. Consistency groups are therefore essential in large application environments. They can also contribute to more efficient management of application servers and enable PIT Copy automation through scripting.
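The database-and-log example above can be sketched in a few lines. This is an illustrative Python model only (the function and field names are invented): the essential property is that every member volume is frozen before any snapshot is taken, so all the copies share one point in time.

```python
import time

# Hypothetical sketch of a consistency group PIT Copy. Invented names;
# the point is that all member volumes are captured at one instant, so a
# database volume and its log volume stay mutually consistent.

def consistent_pit_copy(volumes):
    """Freeze writes on every member volume, snapshot them all against a
    single timestamp, then release writes."""
    for v in volumes:                      # quiesce the whole group first
        v["frozen"] = True
    stamp = time.time()                    # one point in time for the group
    snaps = {v["name"]: {"time": stamp, "blocks": dict(v["blocks"])}
             for v in volumes}
    for v in volumes:                      # resume application writes
        v["frozen"] = False
    return snaps
```

Snapshotting the volumes one at a time without the group-wide freeze would allow a log write to land between two snapshots, which is exactly the inconsistent state the paragraph above warns about.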

Now let's turn to issues 2 and 3.  The HP SVSP provides solutions to both problems. By using PIT Copies, there are many points in time in which to check for consistent data sets. Once a consistent state is found, the database can be brought online and, if possible, rolled forward. Application-integrated PIT Copies are replicated to the DR site, and the consistent data can be brought online at either the DR or primary site very quickly without the need to recover from tapes. To enhance rapid recovery, asynchronous mirror failback (HP SVSP Continuous Access) facilitates true DR testing and enables replication of just the block changes from the DR site back to the primary site without a complete block-level synchronization of all data. This saves money otherwise spent on communications link bandwidth to support data change rates.

For more information on the SVSP, please visit us at:

Note #1: "HP SVSP: Technology Overview and Use Cases", an HP white paper. Paper URL:

Note #2: "Rapid Application Recovery with the SVSP: Using PITs and DR to meet today's stringent recovery objectives", an HP white paper. Paper URL:


Labels: storage| SVSP

Storage virtualization webcast on TechTarget

By Calvin Zito

Over the last couple of weeks, we’ve had a few posts from Edgardo Lopez talking about the HP StorageWorks SAN Virtualization Services Platform.  Here’s a summary of the posts to date:

But the main reason for today's post is that I'd like to call your attention to a webinar featuring Richard Villars, Storage Systems Vice President at IDC, and Kyle Fitze, StorageWorks Storage Platforms Marketing Director.  Here's a brief description of the webinar:

Simply defined, storage virtualization lets you pool and share storage resources to help you make sure that supply meets business demand. This Mediacast explores how to choose a storage virtualization solution that will help you tackle data growth requirements and spending constraints. View the Webcast or download the companion Podcast to learn more about adaptive storage infrastructures and the business value they deliver.

This is available on SearchStorage, and you have two ways to get it (NOTE: I think you have to be a registered SearchStorage user to view these):

View Webcast

Download Podcast

As always, let me know if you have any comments.



Follow HP StorageWorks on Twitter:

Labels: storage| SVSP

About the Author(s)
  • 25+ years of experience around HP Storage. The go-to guy for news and views on all things storage.
  • This profile is for team blog articles. See the byline of the article to see who specifically wrote it.
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.