LaMills Garrett sent a few tweets wanting to clarify some confusion he's seeing around 3PAR thin provisioning. LaMills is our HP 3PAR Americas product manager and I've known him for a long time - going back more than 10 years to when he was working in R&D. After seeing his tweets, I thought it would be a good excuse for a short, Sunday-night, getting-back-into-the-swing-of-things blog post. So here's what LaMills said:
This is another podcast covering our August 23 storage announcement; this edition covers the new P10000 3PAR Storage System with Beth Joseph. I've known Beth for a while and she's one of our top product managers. We cover a lot of topics in this podcast, focusing on what's new with our 3PAR platform. Here's the podcast with Beth:
My blog post today is a video demo of the P6000 EVA Thin Provisioning software. I recommend you go to the post to see everything, but you can click this link to open the video on YouTube.
Thin provisioning can help you get thin and stay thin. Today, I talk about a new program, the HP 3PAR Get Thin Guarantee*. There are a lot of details in my post today, so click on the title or the "read more" link to get them all.
By Tilman Walker, HP Enterprise Storage Consultant
Imagine a customer that is the world leader in their market (no specifics, but they are in the environmentally friendly energy business). The customer's environment is very (!) dynamic, mostly due to their fast business growth, which makes proper capacity planning hard.
More than 12 months ago the customer had outgrown their EVA installed base and was looking at some "big iron" - so the account team offered the HP StorageWorks XP24000 with Thin Provisioning (THP) on the complete frame. Both HP and the customer felt that THP was the ideal approach for this environment:
Allowing on-demand, fast creation/allocation of additional LUNs (change comes unpredictably in their environment)
Ensuring optimal performance by leaving data distribution to the array (largely offloading the data layout work from the storage administrators)
Bringing EVA ease-of-use to the high-end environment
Said and done: THP was implemented on ~400TB spread across two XP24000 arrays with ~300 servers (mainly Windows, some VMware, some Unix). Data was migrated from the EVAs using External Storage/AutoLUN. What originally worried the customer was running out of space in one of their THP pools. Since then, their worries have ceased and space utilization is under control - of course, the first capacity upgrades are already under way.
The net result is that the customer is very happy with THP (just confirmed in a meeting!). THP is a perfect fit to allow them to survive in their dynamic environment. Performance measurements show consistently good average response times across all LUNs (as you would expect from THP). But, of course, a big part of this superb customer satisfaction is owed to the great HP Services personnel on-site.
So all in all, a smoothly running, stress-free XP Disk Array environment with THP. All I wonder is: why don't we use THP in considerably more customer environments?
By Edgardo Lopez
In these hard economic times, there are already many articles about how to bring down the cost of storage infrastructures, but few of them actually describe how to design a flexible and adaptive infrastructure that facilitates these potential savings and streamlines operations to improve productivity.
A recently published research paper by IDC titled "The Business Value of Storage Virtualization: Scaling the Storage Solution; Leveraging the Storage Investments" offers a refreshing new perspective. The authors present an IT evolution framework, describing how organizations can evolve their infrastructure to get more out of what they already have. They show how, by integrating server and storage virtualization technologies, IT organizations can significantly boost storage asset utilization while simplifying operations and improving efficiency. The findings show that asset utilization can be increased by up to 70%, that the cost of future system purchases can be decreased by as much as 50%, and that provisioning and capacity expansion across heterogeneous systems can be significantly simplified.
Today, I would like to begin a discussion of how the HP StorageWorks SAN Virtualization Services Platform, or SVSP, can be used to implement a storage infrastructure that produces the benefits described in the paper.
The SVSP is a network-based storage virtualization solution that aggregates capacity from multiple SAN-attached FC arrays (HP or non-HP) and creates pools of virtual storage that can be easily provisioned to satisfy the dynamic demands of virtual and physical servers. In addition, the SVSP provides a comprehensive set of data services that help firms streamline operations and improve efficiency, including volume management, thin provisioning, non-disruptive data migration, copy services (clones and snapshots), and local and remote mirroring. These services are key to addressing a set of important use cases that we will describe in more detail in future postings. They are:
- Storage Consolidation and Centralized Management
- Non-disruptive Data Migrations
- Dynamic Tiered Storage infrastructures
- Rapid Application Recovery
- Efficient Application Testing & Development
- Cost Effective Disaster Recovery
For more details see: www.hp.com/go/svsp
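To make the pooling idea a little more concrete, here's a minimal conceptual sketch in Python. The class and method names are invented for illustration only and don't correspond to any actual SVSP interface; the sketch simply shows capacity from several back-end arrays being aggregated into one pool from which virtual volumes are carved.

    # Conceptual sketch only: capacity from several back-end arrays is
    # aggregated into one pool, and virtual volumes are carved from the pool
    # regardless of which physical array the blocks come from.
    class StoragePool:
        def __init__(self):
            self.arrays = {}            # array name -> free capacity (GB)
            self.volumes = {}           # volume name -> size (GB)

        def add_array(self, name, free_gb):
            self.arrays[name] = free_gb  # a back-end array joins the pool

        def free_capacity(self):
            return sum(self.arrays.values())

        def create_volume(self, name, size_gb):
            if size_gb > self.free_capacity():
                raise ValueError("pool exhausted")
            # Spread the allocation across arrays; real placement logic is
            # the virtualization layer's job and is not modeled here.
            remaining = size_gb
            for array in self.arrays:
                take = min(remaining, self.arrays[array])
                self.arrays[array] -= take
                remaining -= take
            self.volumes[name] = size_gb

    pool = StoragePool()
    pool.add_array("EVA-1", 4000)       # hypothetical back-end arrays
    pool.add_array("XP-1", 8000)
    pool.create_volume("vm_datastore_01", 5000)
    print("Free capacity left in pool: %d GB" % pool.free_capacity())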
(Editor's note: Edgardo is the Product Marketing Manager for the SVSP. He will be talking about the HP StorageWorks SVSP and its use cases in several upcoming posts. Don't forget that you can follow HP StorageWorks on Twitter at http://twitter.com/HPstorageGuy).
By Calvin Zito
Yesterday I talked about the announcement our Technology Solutions Group made and briefly mentioned the part HP StorageWorks had in it. Today, I'll drill down a bit more into the StorageWorks news.
The current economic conditions are affecting everyone, but we all know that the information explosion we've been talking about for over a decade doesn't seem to care much about the economy. Many customers are attempting to take costs out to free up capital for their core business processes, but the continued information explosion creates specific challenges for IT to efficiently store, protect, optimize and manage data. Adding to this, many data centers are not optimized for agility; a good portion of the IT budget is spent on maintenance and operations. IT is expected to help the business take advantage of opportunities that arise in this new economic era by reacting quickly to deliver new services that help drive growth. Really nothing new here, but I wanted to set the context.
We believe that the next generation data center is core to meeting these challenges. We call this the Adaptive Infrastructure. One of the core tenets of the Adaptive Infrastructure is helping customers move from their current state of high-cost IT islands and siloed people resources to low-cost pooled assets with more predictable service levels. Virtualization is key to that. Many customers have already virtualized their servers, and as a result there have been improvements in utilization, service provisioning and disaster recovery/availability of those servers. If the rest of your infrastructure (e.g. storage, network, etc.) isn't virtualized, then you still have limited flexibility. I just saw a post by my colleague in BladeSystem, Jason Newton, diving deeper into this topic, and it's worth a read.
These virtual server environments have unique storage challenges around capacity management, storage provisioning, and data protection/management. And that gets me to the heart of what the announcement this week is about.
Our goal is to reduce the complexities and inhibitors of virtual server environments through the intelligent use of storage virtualization. We're making investments in this technology to optimize capacity, simplify storage provisioning and improve data management across virtual IT environments.
This week's announcement was focused on Fibre Channel storage networks. But we're not suggesting this is the answer for every application or customer environment. We have a very deep (and I know at times confusing) portfolio of products and solutions. But you really don't need an infrastructure vendor who only has a hammer because then every problem looks like a nail. You need an infrastructure vendor who has the breadth of portfolio to match the solution to your specific problem and data types at the lowest cost possible. So again, this announcement is focused on Fibre Channel based solutions - as we continue to integrate LeftHand Networks into our portfolio, we'll have more to say about storage virtualization with other storage networks (Shared SAS, iSCSI, etc.).
So that brings me to the news. There were three new or updated solutions we announced:
HP StorageWorks EVA6400 and EVA8400 virtual storage arrays help customers save up to 50% in storage management costs for common storage administrative tasks compared to competitive traditional arrays
HP StorageWorks SAN Virtualization Services Platform (SVSP) can lower TCO by pooling and sharing heterogeneous storage resources. You can improve your capacity utilization by 300% and manage 3X the storage capacity per administrator.
The new Data Protector 6.1 software combined with the EVA offers the industry's best (and we think only) replication-based Zero Downtime Backup and recovery for VMware environments and is up to 70% less expensive than other enterprise backup products.
I'll go into more details over the next several days but let me leave you with a pointer to a video by our VP of Marketing, Stephan Schmitt. Stephan is at FOSE this week and was interviewed at the event just yesterday. This video is a nice overview of the announcement.
One last footnote I have to make, as I can't wait until tomorrow's post where I'll drill down on the EVA6400 and EVA8400. One of our competitors has tried to position their pre-announcement of solid state drives a year ago as a proof point of their innovation. The funny thing is that we source those drives from the same OEM partner. This competitor made bold and frankly ridiculous predictions that we would not have SSDs until late this year or maybe in 2010. Well, I've got news for you Chuck - we have SSDs in the EVA now and have had them in the XP Disk Array for a few months and in our BladeSystem for even longer.
Last month we announced a world record SPC-2 number by the XP24000. At the same time we extended yet another challenge to EMC to join the rest of the world in publishing benchmarks. They continue to decline the offer, arguing “representativeness”. I thought I’d clear up the “representativeness” question.
EMC’s argument that this XP is too costly starts from the assumption that SPC-2 represents only a video streaming workload. To quote, “128 146GB drive pairs in your 1152 drive box? A pure video streaming workload?” We actually see a widely diverse set of workloads on the XP. The power of having both SPC-1 and SPC-2 benchmark results is that they provide audited data that applies to almost any workload mix a customer might have. But if one had to pick the most common workload, it would probably be database transaction processing by day, with backup and data mining workloads joining the transaction processing by night. SPC-2 models the backup and data mining aspects, with SPC-1 representing the transaction processing. SPC-2 is about a lot more than video streaming.
When people need bulletproof availability and high performance for transaction processing, they turn to high-end arrays like the XP24000. It’s probably the most common use for a high-end array. Our data indicates that, on average, the number of disks in an initial XP purchase is right around the 265 in our SPC-2 configuration. Some of those won’t have as many controllers as the SPC-2 configuration. But an increasing number use thin provisioning. In those cases they will often get all the controllers they’ll need up front, delaying the disk purchases as you’d expect with thin provisioning. So the configuration and workloads look pretty representative.
Then consider a real use of the benchmark. A maximum number is key in assessing an array’s performance. Below that limit you can adjust disks, controllers, and cache to get fairly linear performance changes. But when you reach an array’s limit, all you can do is add another array. So once you know an array’s maximum number, you know its whole range of performance. By maxing out controllers we provide that top-end number, giving the most useful result. For sequential workloads like backup and data mining, maxing out disk count isn’t necessary, whereas it generally is for random workloads like transaction processing.
Now let’s discuss how one might use the XP’s SPC-2 results. Let’s say you need a high-end array for transaction processing. The most common case we see requires backup and data mining operations at night in a limited time window. Since the XP’s SPC-2 result is twice that of the next closest high-end array, you can expect it to get the backup and data mining done with half the resources of the next fastest array. But with SPC-2 you can go further. You can look up the specific results for backup and data mining workloads, which are around 10GB/s for the XP24000. Knowing how much data you need to back up and mine, you can estimate how much of the system’s resources you’ll need to get those things done in your time window, and therefore what’s still left for transaction processing during that window. You can scale that for the size of array you need for transaction processing. And you can compare to other arrays that have posted results. All using audited data before you get sales reps involved.
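To make that back-of-the-envelope sizing concrete, here's a minimal sketch of the arithmetic in Python. The ~10GB/s figure comes from the SPC-2 result mentioned above; the 20TB of nightly data and the 4-hour window are purely hypothetical placeholders, not a sizing recommendation for any particular environment.

    # Rough sizing sketch: how much of the array's sequential bandwidth does
    # the nightly backup/data-mining run consume, and what is left over for
    # transaction processing during the window? Inputs are hypothetical.
    data_to_process_tb = 20          # assumed nightly backup + mining volume
    spc2_throughput_gbps = 10        # ~10 GB/s SPC-2 result for the XP24000
    window_hours = 4                 # assumed nightly batch window

    data_gb = data_to_process_tb * 1024
    hours_at_full_speed = data_gb / spc2_throughput_gbps / 3600.0
    fraction_of_window = hours_at_full_speed / window_hours

    print("Time at full throughput: %.2f hours" % hours_at_full_speed)
    print("Share of the window used: %.0f%%" % (fraction_of_window * 100))
    print("Bandwidth left for transaction processing during the window: %.0f%%"
          % ((1 - fraction_of_window) * 100))

In this hypothetical case the sequential work finishes in well under an hour, leaving most of the array's bandwidth free for transaction processing during the rest of the window; you can run the same arithmetic against any other array's published SPC-2 result.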
SPC benchmarks are all about empowering the storage customer. The XP24000’s SPC-2 result is relevant to the most common uses for high-end arrays, as well as to less common uses like video editing. The configuration we used looks pretty typical, with choices made to make the result most useful to customers. The cost is pretty typical for this kind of need. At HP we expect to continue providing this kind of useful data for customers. And our challenge to EMC to publish a benchmark result still stands, though they’ll probably continue inventing reasons not to.
Wow - it's been a busy week at StorageWorks, and Lee still can't post to his own blog, so I'll post this for him on today's news that we have signed a definitive agreement to acquire LeftHand Networks, Inc. Here is Lee's post:
As the dust settles on the announcement of the HP & Oracle Exadata machine, along comes another interesting announcement. HP has signed a definitive agreement to acquire LeftHand Networks (see press release). I'm sure a lot of press and analyst opinion will focus on the benefits to HP of beefing up our iSCSI portfolio, gaining access to some of the strong server and storage virtualization capabilities of the LeftHand Networks solution, the opportunity to sell HP storage to more mid-market customers, or even the channel synergies and worldwide expansion opportunities of the acquisition.
These are all true, but what are the customer benefits? An interesting common thread is software that builds storage products from industry-standard servers. The HP Extreme Data Storage System, the HP & Oracle Exadata Machine and LeftHand Networks are all very different in their target markets, and yet all are built from industry-standard ProLiant and/or BladeSystem components. They all layer on strong software capabilities that turn industry-standard servers into robust storage solutions. LeftHand Networks delivers a very strong software stack featuring Snap & Clone technology, thin provisioning, remote replication and SmartClone capabilities, and each of these has considerable customer benefit. LeftHand Networks even offers the capability to run storage services in a virtual machine. Imagine that - building a SAN without having to buy storage hardware. Now that really sounds like a customer benefit!
Lee Johns, Director StorageWorks Marketing
Our own Patrick Eitenbichler recently discussed a list of five things customers should focus on to reduce the costs of managing and protecting data. The full article is in Processor Magazine and you can read it at this URL: http://tinyurl.com/593qrm
Here's a summary of the five points Patrick made:
1. Consolidation. Moving data onto centralized storage systems can help administrators avoid the fragmented capacity that leads to extra maintenance work, low disk utilization, and huge backup headaches.
2. Centralized backup. SMEs should look at disk-to-disk-to-tape backup solutions that initially store data on disk drives and eventually migrate it to tape for long-term data retention. Ensuring successful backups on a nightly basis is "mission-critical," says Eitenbichler.
3. Deduplication. Using deduplication allows administrators to drastically reduce the amount of data stored on disk-based backup systems, with data reduction ratios of 20:1 or even 40:1 (see the sketch after this list).
4. Thin provisioning. This technique eliminates wasted capacity by allocating physical storage only as applications actually need it, rather than reserving the full requested capacity up front.
5. Data life cycle management. It sounds simple, but keeping an inventory of storage devices onsite, available capacity, and growth trends can allow administrators to delay additional purchases for several months.
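To put rough numbers behind points 3 and 4, here's a minimal sketch of the arithmetic in Python. The 20:1 ratio comes from the range Patrick quotes; the data volumes are hypothetical placeholders, not measurements from any product.

    # Hypothetical illustration of points 3 and 4 above.
    # Deduplication: physical disk needed for a given amount of backup data.
    backup_data_tb = 40              # assumed retained backup data
    dedupe_ratio = 20                # 20:1, from the range quoted above
    physical_backup_tb = backup_data_tb / float(dedupe_ratio)
    print("Backup data on disk after 20:1 dedupe: %.1f TB" % physical_backup_tb)

    # Thin provisioning: capacity presented to hosts vs. capacity consumed.
    provisioned_tb = 100             # assumed total size of thin LUNs presented
    written_tb = 35                  # assumed data actually written by hosts
    print("Physical capacity consumed: %d TB of %d TB presented"
          % (written_tb, provisioned_tb))
    print("Capacity purchase deferred: %d TB" % (provisioned_tb - written_tb))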