Around the Storage Block Blog
Find out about all things data storage from Around the Storage Block at HP Communities.

The complexity of choice

Editor's note:  The last three posts have focused on the storage consolidation for SMB announcement we did last week.  We have posts from Carol Kemp, Lee Johns, and Charles Vallhonrat.  Here are links to those posts if you want to read today's post in context:



So with that context, here's today's wrap-up on the announcement, focusing on the new HP Virtualization Bundles.


The Complexity of Choice


By Lee Johns


Last week, before going on a business trip to the Middle East, I wanted to get a small digital camera. My wife said those immortal words: "That should be easy - there is so much choice today." Two days of web research and four store visits later, I had it narrowed down to six possibilities from four different manufacturers, and none had all the features I now wanted after seeing all of the different specifications.


Why am I writing about this here? Well, I was thinking how tough the IT industry makes it for customers with all of the choice we offer. It's not just that we have so many different makes and models from so many manufacturers; on top of that, you have to make them all work together in support of your business goals. With that said, I do see a move toward converging different elements of the infrastructure to deliver IT infrastructure solutions that are easier to consume. Last week, HP took another step in this direction with the announcement of HP Virtualization Bundles for SMB customers starting out with virtualization. The solutions bring together networking, servers, storage, and both server and storage virtualization software into pre-tested configurations that can scale with additional building blocks as the business grows.


SMBs don't always choose the elements of the solutions they deploy; a reseller partner or integrator will often recommend configurations to them. Regardless, anything that makes it easier for a reseller to recommend, or a customer to buy, fully featured solutions with built-in investment protection is a step in the right direction. There may be other features available if you want to spend the time on integration and configuration work, but, believe me, the HP solutions are well rounded and have unique features; you will be happy with the solution and save some time and money.


By the way, I ended up buying the first camera I was drawn to. I could have saved a lot of my own time if I had just purchased the product that met my top three criteria (size, price, and battery life). I'm sure I will get great pictures, and I doubt I will miss having blink recognition.



New StorageWorks consolidation solutions for SMBs

By Carol Kemp


In case you missed it, HP came out with several new products and solutions designed for SMBs to help them survive and thrive during this tough economy. To read the full press release, click here.  Headlining the announcement are new storage consolidation solutions designed to help SMBs do more with less - less money and less time...


Think of the advantages of having one network storage device that doesn't care what kind of data you're storing, whether that's more files than you can handle or block-based application data, including gigantic databases. Think of the convenience of having one device to help you manage, protect, and grow your business as your data grows. The new HP StorageWorks X1000 and X3000 Network Storage Systems enable customers to easily manage and optimize their storage capacity by consolidating file and block data into a unified storage solution. These new solutions increase performance by up to 30 percent over previous HP file and unified storage solutions. As a result, companies can reduce costs and simplify data management by improving capacity optimization of their storage resources.


For SMBs making their first move from direct-attached storage into shared storage, the requirements won't be the same as those of enterprise customers; in fact, most SMBs will seek to reduce the level of complexity usually associated with installing their first SAN. The HP StorageWorks 2000sa and 2000i G2 Modular Smart Arrays are designed to provide low-cost, high-performance entry-level SAN storage. These solutions deliver 33 percent more storage capacity per unit of rack space with SFF SAS drives than with 3.5" drives. The ability to use small or large form factor SAS or SATA drives, combined with scalability of up to 99 SFF drives, is unmatched by ANY entry-level array product in the market. The MSA2000sa G2, with its four SAS ports per controller (eight per dual-controller system), allows a customer who is moving from a single-host DAS environment to enjoy the benefits and economies of a SAN without building a switched infrastructure. Four dual-path or eight single-path hosts can directly access a single MSA2000sa G2.


What if HP combined everything you need to realize the full potential of virtualization into one package? The HP Virtualization Bundles are the industry's first integrated solutions that allow SMBs to more easily and cost-effectively deploy virtualization by transforming existing server disk drives into highly available shared storage. The bundles allow customers to reduce costs by up to 35 percent by eliminating the need for external storage to support application high availability. Each bundle includes HP server, storage, and networking technology as well as integrated virtualization software from HP and VMware. To take advantage of some of the most critical features in vSphere (HA, VMotion, SRM, live migration), customers need to implement highly available storage area networks (SANs). For many midsized customers this is a challenge, as adding external storage increases their hardware costs and impacts overall IT management. The HP Virtualization Bundles allow an organization to pool all of the storage that is either inside or directly attached to HP ProLiant G6 servers into one virtualized pool of storage. Combined with HP ProCurve data center networking solutions, this allows an organization to quickly implement a fully virtual infrastructure, with all of its benefits, using pre-tested building blocks.


To get more information, go to our announcement day page on hp.com.


(Editor's note: Carol is my colleague and is the category manager for StorageWorks with an SMB focus)



SVSP use case #1: Consolidation and centralized management

By Edgardo Lopez, Product Marketing Manager

 

Recently, I talked about how the HP StorageWorks SAN Virtualization Services Platform (SVSP), a network-based storage virtualization solution, forms a solid foundation for an adaptive storage infrastructure that can significantly boost storage asset utilization while simplifying operations and improving efficiency.

 

It has been true for many years that the best way to drive cost out of any infrastructure solution is by implementing common methodologies, tools, standards, and techniques. An SVSP-based storage infrastructure offers such a common set of tools and methodologies, enabling organizations to address an important set of use cases that streamline operations and improve productivity. They include:



    • Storage Consolidation and Centralized Management
    • Non-disruptive Data Migrations
    • Dynamic Tiered Storage infrastructures
    • Rapid Application Recovery
    • Efficient Application Testing & Development
    • Cost Effective Disaster Recovery
       

Today, I'd like to explore the "Storage Consolidation & Centralized Management" use case with the SVSP, review how it actually delivers value within the context of simplified storage management, and look at how to evaluate this use case economically.

 

Many customers today have storage arrays from different vendors, or at least different generations and models of arrays from the same vendor. Except in very rare instances, the customer is forced to recall the capabilities of each type of storage and use a different set of management tools from each vendor, including data services options, in order to manage each array separately. As the number of servers (virtual and physical) and storage devices grows, hundreds or thousands of volumes must be individually monitored and managed by storage administrators, a daunting task to say the least. Complexity, cost, and capacity utilization rates are all negatively impacted in these traditional deployments, particularly when server virtualization is introduced.

 

A better approach is to use the SVSP to develop a consolidated storage infrastructure (see Figure 1) that drives better economics. The SVSP:



    • Aggregates capacity via network-based virtualization from HP and non-HP arrays to form centrally managed pools of virtual storage
    • Dynamically provisions storage, reclaims unused space, and maximizes application uptime by expanding capacity on a "just-in-time" basis
    • Centrally manages multiple arrays and multiple data services with a consistent set of tools
    • Creates storage tiers and easily migrates data among the tiers to optimize storage costs
    • Improves capacity utilization with thin provisioning for all capacity under management, irrespective of array type (see the sketch after this list)
    • Maintains unused capacity in the pool for future requirements
    • In some instances, improves performance by striping LUNs across RAID arrays
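To make the thin-provisioning point concrete, here is a minimal, purely illustrative sketch in Python. It is a toy model, not SVSP code or any SVSP API; the class, names, and numbers are all hypothetical. It shows how a pool can advertise more virtual capacity than it physically holds and only consume physical space as data is actually written:

    # Toy model of a thin-provisioned pool; hypothetical names and numbers, not SVSP code.
    class ThinPool:
        def __init__(self, physical_tb):
            self.physical_tb = physical_tb   # real capacity aggregated from the arrays
            self.written_tb = 0.0            # capacity actually consumed so far
            self.volumes = {}                # virtual volume name -> advertised size (TB)

        def create_volume(self, name, virtual_tb):
            # Advertise the full size up front; consume nothing yet.
            self.volumes[name] = virtual_tb

        def write(self, name, tb):
            # Physical capacity is consumed "just in time", only as data lands.
            if self.written_tb + tb > self.physical_tb:
                raise RuntimeError("pool exhausted - add capacity to the pool")
            self.written_tb += tb

    pool = ThinPool(physical_tb=20)
    pool.create_volume("exchange", 10)
    pool.create_volume("sql", 10)
    pool.create_volume("fileshare", 10)   # 30 TB advertised on 20 TB of physical capacity
    pool.write("exchange", 2)
    pool.write("sql", 3)
    print(f"advertised: {sum(pool.volumes.values())} TB, consumed: {pool.written_tb} TB")

The point is simply that volumes can be sized for future growth while the pool only has to hold what has actually been written, which is where much of the utilization gain comes from.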

[Figure 1: Consolidated storage infrastructure built on the SVSP]

Consolidation and management of multiple arrays from various vendors is significantly simplified using the SVSP. According to IDC (see footnote 1), solutions like the SVSP can increase storage utilization by as much as 300% compared to typical SAN deployments. They can also triple the amount of storage that a person can manage. Overall, an SVSP deployment, as illustrated in Figure 1, can have a significant customer impact in three areas: business, operational, and financial. I've summarized these as follows:

 

Business impact:



    • Provides a more flexible storage infrastructure, enabling faster responses to changing business conditions, particularly when server virtualization is part of the deployment
    • Maximizes application availability for revenue-producing activity
    • Increases asset utilization

Operational Impact:



    • Prevents server downtime due to out-of-space conditions
    • Reduces the time needed to respond to requests for additional servers, applications, and users
    • Increases the amount of storage each administrator can manage
    • Reduces administrative errors by using one management console for all storage devices
    • Improves performance by striping volumes across RAID arrays
       

Financial Impact:



    • Reduces storage costs
    • Reclaims unused storage
    • Increases storage utilization
    • Lowers TCO, CAPEX, and OPEX
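As a quick illustration of how the utilization gains cited above translate into financial impact, here is a back-of-the-envelope calculation. The data size and the before/after utilization rates are hypothetical assumptions for the example, not IDC figures; only the "triple the utilization" idea comes from the study.

    # Hypothetical example: what roughly tripling utilization does to raw capacity needs.
    data_tb = 60          # data you actually need to store (assumed)
    util_before = 0.25    # typical utilization in a traditional SAN deployment (assumed)
    util_after = 0.75     # utilization after consolidation and thin provisioning (assumed)

    raw_before = data_tb / util_before   # 240 TB of raw capacity to purchase and manage
    raw_after = data_tb / util_after     # 80 TB of raw capacity to purchase and manage
    print(f"raw capacity needed: {raw_before:.0f} TB before vs {raw_after:.0f} TB after")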

For more details, see www.hp.com/go/svsp.

 

Footnote 1: "The Business Value of Storage Virtualization: Scaling the Storage Solution; Leveraging the Storage Investments," IDC, Richard Villars and Randy Perry, February 2009.

 


More on SPC-2

Last month we announced a world-record SPC-2 result for the XP24000. At the same time we extended yet another challenge to EMC to join the rest of the world in publishing benchmarks. They continue to decline the offer, arguing “representativeness”. I thought I’d clear up the “representativeness” question.

 

EMC’s argument that this XP is too costly starts from the assumption that SPC-2 represents only a video streaming workload. To quote: “128 146GB drive pairs in your 1152 drive box? A pure video streaming workload?” We actually see a widely diverse set of workloads run on the XP. The power of having both SPC-1 and SPC-2 benchmark results is that they provide audited data that applies to almost any workload mix a customer might have. But if one had to pick the most common workload, it would probably be database transaction processing by day, with backup and data mining workloads joining the transaction processing by night. SPC-2 models the backup and data mining aspects, with SPC-1 representing the transaction processing. SPC-2 is about a lot more than video streaming.

 

When people need bulletproof availability and high performance for transaction processing, they turn to high-end arrays like the XP24000. That’s probably the most common use for a high-end array. Our data indicates that, on average, the number of disks in an initial XP purchase is right around the 265 in our SPC-2 configuration. Some of those won’t have as many controllers as the SPC-2 configuration. But an increasing number use thin provisioning; in those cases customers will often get all the controllers they’ll need up front, delaying the disk purchases as you’d expect with thin provisioning. So the configuration and workloads look pretty representative.

 

Then consider a real use of the benchmark. A maximum number is key in assessing an array’s performance. Below that maximum you can adjust disks, controllers, and cache to get fairly linear performance changes, but when you reach an array’s limit, all you can do is add another array. So once you know an array’s maximum number, you know its whole range of performance. By maxing out controllers we provide that top-end number, giving the most useful result. For sequential workloads like backup and data mining, maxing out the disk count isn’t necessary, whereas it generally is for random workloads like transaction processing.

 

Now let’s discuss how one might use the XP’s SPC-2 results. Let’s say you need a high-end array for transaction processing. The most common case we see also requires backup and data mining operations at night in a limited time window. Since the XP’s SPC-2 result is twice that of the next closest high-end array, you can expect it to get the backup and data mining done with half the resources of the next fastest array. But with SPC-2 you can go further. You can look up the specific results for backup and data mining workloads, which are around 10GB/s for the XP24000. Knowing how much data you need to back up and mine, you can estimate how much of the system’s resources you’ll need to get those things done in your time window, and therefore what’s still left for transaction processing during that window. You can scale that for the size of array you need for transaction processing. And you can compare to other arrays that have posted results. All using audited data, before you get sales reps involved.
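To show how that estimate might work in practice, here is a purely hypothetical sizing sketch in Python. The nightly data volume and window length are made-up assumptions; only the roughly 10GB/s sequential figure comes from the published result.

    # Hypothetical sizing example using the ~10 GB/s SPC-2 figure quoted above.
    backup_tb = 50        # data to back up and mine each night (assumed)
    window_hours = 6      # nightly window available (assumed)
    array_max_gbps = 10   # approximate XP24000 SPC-2 sequential throughput

    required_gbps = backup_tb * 1024 / (window_hours * 3600)
    share_of_array = required_gbps / array_max_gbps
    print(f"required throughput: {required_gbps:.1f} GB/s, "
          f"about {share_of_array:.0%} of the array's sequential headroom, "
          f"leaving {1 - share_of_array:.0%} for other work during the window")

With these example numbers, the nightly work needs roughly a quarter of the array’s sequential capability, leaving the rest for transaction processing during the window; plug in your own data volumes and window to get your estimate.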

 

SPC benchmarks are all about empowering the storage customer. The XP24000’s SPC-2 result is relevant to the most common uses for high-end arrays, as well as to less common uses like video editing. The configuration we used looks pretty typical, with choices made to make the result most useful to customers. The cost is pretty typical for this kind of need. At HP we expect to continue providing this kind of useful data for customers. And our challenge to EMC to publish a benchmark result still stands, though they’ll probably continue inventing reasons not to.

EMC, We Challenge You!

EMC, we're once again throwing down the gauntlet. Today the XP24000 put up the highest SPC-2 benchmark result in the world. The top spot for demanding workloads such as video streaming goes to the XP. Once again, your DMX is a no-show. And once again we challenge you, this time to put up an SPC-2 number. Every other major storage vendor is now posting SPC results. Every other major storage vendor is now starting to give customers standard, open, audited performance results to show what they've got. You remain the only vendor keeping your product performance behind a smoke screen of mysterious numbers and internal testing. We challenge you to join us in the world of openness and let customers quit guessing at how low the DMX's performance really is!
