By Calvin Zito, aka @HPStorageGuy
During our Tech Day at the end of March, we took a tour of three different labs at our Houston campus. First up is the event lab - timely, as Chris Hornauer talked about getting equipment ready for Storage Networking World and the National Association of Broadcasters, both happening this week. The other interesting thing is a closer look at the HP StorageWorks MDS600 - Chris pulls out the enclosure to show the drives, and you'll get a look at the mechanism that allows the enclosure to slide out of the rack. The MDS600 gives you shared storage at the price of direct attached storage, and I think a deeper dive on it is warranted here on the blog. Here's that video:
If you have trouble with the embedded video, click here to launch the viewer.
The next stop was the Enterprise Backup Solutions Lab. This is the lab where we test our backup hardware (HP tape storage and virtual tape systems like the D2D Backup System and Virtual Library System) with leading ISV backup software, including our own HP Data Protector. Phil Lang mentioned the EBS compatibility matrix - it's one of the most downloaded pieces of content on hp.com. If you want to take a look, you can click here.
Again, if you have any issues with the embedded video, click here.
The last stop was the Integrated Systems Test: Automated Test and Verification Lab. There's a lot of interesting stuff here:
- David Sheffield (you can call him Sheff) was our host here - you can tell he loves his job! The bloggers were also interested in how "clean" an environment this test lab was - not your typical lab with pieces and parts of systems all over.
- The bloggers were very interested in how we collect data from our array controllers in the test lab (you'll see the "diving board" that is used to grab data off a controller).
One last time: if you have any issues with the embedded video, click here.
There are still more videos to come from the StorageWorks Tech Day and I'll post them over the next several days. A special thanks to all of the guys in Houston (and Yvette!!!) that helped make the lab tours possible.
By Calvin Zito
An often overlooked gem in HP's storage portfolio is HP Data Protector. I sat down to talk about data protection for virtual machines - and more specifically for VMware - with Billy Naples, the HP Data Protector Product Marketing Manager. Billy knows data protection, and this was a fun discussion for me to have, especially with VMworld next week.
If you have any issues with the embedded player, click here to listen to the podcast.
Here are a few links where you can learn more about HP Data Protector:
- HP Data Protector software product page: www.hp.com/go/DataProtector (now, that one is easy to remember)
- Assuring Business Continuity in Virtualised Environments paper (someone is obviously British!)
- Complete Protection for VMware environments with HP Data Protector software webcast (requires registration but it's well worth it - it's 16 minutes)
- HP Data Protector 6.1 software VMware Integration Installation Best Practice technical white paper (a must-read if you're implementing Data Protector with VMware)
By Calvin Zito
Yesterday I talked about the announcement that our Technology Solutions Group made and briefly mentioned the part HP StorageWorks had in it. Today, I'll drill down a bit more into the StorageWorks news.
The current economic conditions are affecting everyone, but the information explosion we've all been talking about for over a decade doesn't seem to care much about the economy. Many customers are trying to take costs out to free up capital for their core business processes, but the continued information explosion creates specific challenges for IT: efficiently storing, protecting, optimizing and managing data. Adding to this, many data centers are not optimized for agility, and a good portion of the IT budget is spent on maintenance and operations. IT is expected to help the business take advantage of opportunities in this new economic era by reacting quickly to deliver new services that drive growth. Nothing really new here, but I wanted to set the context.
We believe the next generation data center is core to meeting these challenges. We call this the Adaptive Infrastructure. One of the core tenets of the Adaptive Infrastructure is helping customers move from their current state of high cost IT islands and siloed people resources to low cost pooled assets with more predictable service levels. Virtualization is key to that. Many customers have already virtualized their servers, and as a result there have been improvements in utilization, service provisioning and disaster recovery/availability of those servers. But if the rest of your infrastructure (storage, network, etc.) isn't virtualized, you still have limited flexibility. I just saw a post by my BladeSystem colleague Jason Newton diving deeper on this topic and it's worth a read.
These virtual server environments have unique storage challenges around capacity management, storage provisioning, and data protection/management. And that gets me to the heart of what the announcement this week is about.
Our goal is to reduce the complexities and inhibitors of virtual server environments through the intelligent use of storage virtualization. We're making investments in this technology to optimize capacity, simplify storage provisioning and improve data management across virtual IT environments.
This week's announcement was focused on Fibre Channel storage networks. But we're not suggesting this is the answer for every application or customer environment. We have a very deep (and I know at times confusing) portfolio of products and solutions. But you really don't need an infrastructure vendor who only has a hammer because then every problem looks like a nail. You need an infrastructure vendor who has the breadth of portfolio to match the solution to your specific problem and data types at the lowest cost possible. So again, this announcement is focused on Fibre Channel based solutions - as we continue to integrate LeftHand Networks into our portfolio, we'll have more to say about storage virtualization with other storage networks (Shared SAS, iSCSI, etc.).
So that brings me to the news. There were three new or updated solutions we announced:
HP StorageWorks EVA6400 and EVA8400 virtual storage arrays help customers save up to 50% in storage management costs for common storage administrative tasks compared to competitive traditional arrays
HP StorageWorks SAN Virtualization Services Platform (SVSP) can lower TCO by pooling and sharing heterogeneous storage resources. You can improve your capacity utilization by up to 300% and manage 3X the storage capacity per administrator.
The new Data Protector 6.1 software combined with the EVA offers the industry's best (and, we think, only) replication-based Zero Downtime Backup and recovery for VMware environments, and is up to 70% less expensive than other enterprise backup products.
I'll go into more details over the next several days but let me leave you with a pointer to a video by our VP of Marketing, Stephan Schmitt. Stephan is at FOSE this week and was interviewed at the event just yesterday. This video is a nice overview of the announcement.
One last footnote, as I can't wait until tomorrow's post where I'll drill down on the EVA6400 and EVA8400. One of our competitors has tried to use their pre-announcement of solid state drives a year ago as a proof point of their innovation. The funny thing is that we source those drives from the same OEM partner. This competitor made bold and frankly ridiculous predictions that we would not have SSDs until late this year or maybe 2010. Well, I've got news for you, Chuck - we have SSDs in the EVA now, have had them in the XP Disk Array for a few months, and in our BladeSystem for even longer.
Last month we announced a world record SPC-2 result for the XP24000. At the same time, we extended yet another challenge to EMC to join the rest of the world in publishing benchmarks. They continue to decline, arguing "representativeness". I thought I'd clear up the "representativeness" question.
EMC's argument that this XP is too costly starts from the assumption that SPC-2 only represents a video streaming workload. To quote, "128 146GB drive pairs in your 1152 drive box? A pure video streaming workload?" We actually see a widely diverse set of workloads on the XP. The power of having both SPC-1 and SPC-2 benchmark results is that they provide audited data that applies to almost any workload mix a customer might have. But if one had to pick a most common workload, it would probably be database transaction processing by day, with backup and data mining workloads joining the transaction processing by night. SPC-2 models the backup and data mining aspects, with SPC-1 representing the transaction processing. SPC-2 is about a lot more than video streaming.
When people need bulletproof availability and high performance for transaction processing, they turn to high end arrays like the XP24000. It's probably the most common use for a high end array. Our data indicates that the average number of disks in an initial XP purchase is right around the 256 in our SPC-2 configuration. Some of those won't have the level of controllers in the SPC-2 configuration. But an increasing number use thin provisioning, and in those cases customers will often get all the controllers they'll need up front, delaying the disk purchases as you'd expect with thin provisioning. So the configuration and workloads look pretty representative.
Then consider a real use of the benchmark. A maximum number is key in assessing an array's performance. Below that maximum you can adjust disks, controllers, and cache to get fairly linear performance changes, but when you reach an array's limit, all you can do is add another array. So once you know an array's maximum number, you know its whole range of performance. By maxing controllers we provide that top-end number, giving the most useful result. For sequential workloads like backup and data mining, maxing disk count isn't necessary, whereas it generally is for random workloads like transaction processing.
Now let's discuss how one might use the XP's SPC-2 results. Let's say you need a high end array for transaction processing. The most common case we see also requires backup and data mining operations at night in a limited time window. Since the XP's SPC-2 result is twice that of the next closest high end array, you can expect it to get the backup and data mining done with half the resources of the next fastest array. But with SPC-2 you can go further. You can look up the specific results for the backup and data mining workloads, which are around 10 GB/s for the XP24000. Knowing how much data you need to back up and mine, you can estimate how much of the system's resources you'll need to get those jobs done in your time window, and therefore what's left for transaction processing during that window. You can scale that for the size of array you need for transaction processing. And you can compare to other arrays that have posted results. All using audited data, before you get sales reps involved.
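To make that sizing exercise concrete, here's a minimal back-of-the-envelope sketch in Python. The ~10 GB/s throughput is the XP24000 figure cited above; the nightly window and data volumes are hypothetical placeholders you'd swap for your own numbers.

    # Back-of-the-envelope sizing from an SPC-2 result.
    SPC2_THROUGHPUT_GBPS = 10.0   # GB/s - the XP24000 figure cited above
    WINDOW_HOURS = 6.0            # nightly window available (assumed)
    BACKUP_TB = 50.0              # data backed up each night (assumed)
    MINING_TB = 30.0              # data scanned by mining jobs (assumed)

    # Time the sequential jobs need at the array's full sequential rate:
    sequential_gb = (BACKUP_TB + MINING_TB) * 1024
    hours_needed = sequential_gb / SPC2_THROUGHPUT_GBPS / 3600

    fraction = hours_needed / WINDOW_HOURS
    print(f"Sequential jobs take {hours_needed:.1f} h of the {WINDOW_HOURS:.0f} h window "
          f"({fraction:.0%}); the rest is headroom for transaction processing")

With these assumed numbers, 80 TB of sequential work takes about 2.3 hours of the 6 hour window, and you can run the same arithmetic against any other array's published SPC-2 result.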
SPC benchmarks are all about empowering the storage customer. XP24000’s SPC-2 result is important to the most common uses for high end arrays, as well as for less common uses like video editing. The configuration we used looks pretty typical, with choices made to make the result most useful to customers. The cost is pretty typical for this kind of need. At HP we expect to continue providing this kind of useful data for customers. And our challenge to EMC to publish a benchmark result still stands, though they’ll probably continue inventing reasons not to.
By Lee Johns
I live in Houston, and this week we had an event that really brought home the importance of a good disaster recovery and backup strategy for storage. Hurricane Ike came through and took down the power to more than two million people. In Galveston, businesses had over nine feet of water in them, and even 70 miles inland people are still without power five days after the storm. A friend of mine has been sitting guard over the generator powering the datacenter for his small business. The whole office has been standing guard to ensure it doesn't get stolen and keeps their datacenter up and running. They are a services business with over 30 customers relying on them. You never know when a disaster will hit. Now may be a good time to reevaluate your strategy.
Yesterday I received a letter in the mail at home that started off:
Dear Sir or Madam,
We are writing to let you know that computer tapes containing some of your personal information were lost while being transported to an off-site storage facility by our archive services vendor. While we have no reason to believe that this information has been accessed or used inappropriately, we deeply regret that this incident occurred....
So the first question I have is: how does an archive vendor lose tapes? How hard can it be to take the tapes from your customer, put them in a secure truck, and drive them to the storage facility? Isn't that your whole business model - that you will pick up, transport and store these tapes safely and securely 100% of the time?
Now I understand that any activity with humans involved cannot be guaranteed to work 100% of the time. So what really happened? A bit more of an explanation would have been helpful - for example, that the truck was in an accident and its contents were spilled into a river or all over the highway and could not all be recovered. Without more details I'm left wondering: did someone make off with the tapes, by accident or on purpose? Or was this just sloppy work by the company?
Anyway, I hope this is a call to action for this company to do at least two things to prevent such an incident in the future.
1. Look into tape encryption, such as LTO-4 offers. I would have been much more pleased if that second sentence read "While the tapes were physically lost, the data they contained cannot be accessed or read by anyone because the data on the tapes is securely encrypted with sophisticated technology requiring encryption keys to make the data readable. Our security policy ensures that these keys are always stored in or transported to physically separate locations from the computer tapes." (I've sketched the principle in code just after this list.)
2. Consider the use of replication and electronic vaulting for moving data off-site for archiving. With new technologies such as deduplication and low-bandwidth replication, this company could perhaps reduce the amount of data that is stored on tapes and physically transported to archive storage. Again, I don't know the specifics here, but as an example let's say this company had four sites where they were backing up data to tape and transporting those tapes to off-site archives. With replication and electronic vaulting, they could replicate data from three of their sites to just one site for backup to tape, and then only have to move tapes from that one site to archive storage, reducing their risk exposure by 75%. (The arithmetic is sketched after this list, too.)
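A quick aside on that first point: LTO-4 drives do the encryption in hardware, so you'd never hand-roll this, but the principle is easy to show in software. Here's a minimal Python sketch (using the third-party cryptography package, with hypothetical file names) of the one thing that matters: the ciphertext and the key live in separate places, so losing the tape alone discloses nothing.

    # Illustrative only - LTO-4 does this in the tape drive hardware.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the encryption key
    cipher = Fernet(key)

    backup_data = b"customer records destined for off-site tape"  # stand-in payload
    ciphertext = cipher.encrypt(backup_data)

    # Write the ciphertext to the "tape" and the key to a separate store;
    # both file names here are hypothetical stand-ins.
    with open("tape_image.bin", "wb") as tape:
        tape.write(ciphertext)
    with open("keystore_key.bin", "wb") as keystore:  # in practice, physically separate
        keystore.write(key)

    # Recovery needs both pieces - the tape alone is unreadable.
    assert Fernet(key).decrypt(ciphertext) == backup_data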
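And on the second point, the arithmetic is simple enough to sketch too. The four-site consolidation mirrors the example above; the daily change rate and deduplication ratio are assumptions you'd replace with your own measurements.

    # Rough numbers for the electronic vaulting example above.
    SITES = 4                  # from the four-site example
    DAILY_CHANGE_GB = 500.0    # new/changed data per site per day (assumed)
    DEDUP_RATIO = 10.0         # deduplication ratio, e.g. 10:1 (assumed)

    # Three sites replicate to the fourth, so only one site still ships tapes.
    risk_reduction = 1 - 1 / SITES
    print(f"Tape-shipment exposure cut by {risk_reduction:.0%}")  # 75%

    # WAN bandwidth each replicating site needs, spread over 24 hours:
    gb_on_wire = DAILY_CHANGE_GB / DEDUP_RATIO
    mbits_per_sec = gb_on_wire * 8 * 1024 / 86400
    print(f"~{gb_on_wire:.0f} GB/day per site on the wire (~{mbits_per_sec:.1f} Mb/s sustained)")

With a 10:1 dedup ratio, each site only puts about 50 GB a day on the wire - under 5 Mb/s sustained - which is what makes electronic vaulting practical over ordinary WAN links.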
If you're worried about how a similar incident could impact your company and what risks are involved, HP is here to help. We can work with you to significantly reduce your data security exposure from the desktop to your data center. On the storage side, we offer a FREE storage security risk assessment. For details on HP's other data security options beyond storage, please check HP's Security web page.
When there are no more natural disasters!
When electricity is free!
When hardware never fails!
When software is bug-free!
When people no longer make mistakes!
When there are no computer viruses or other malicious code!
When all people are honest!
When government regulations (SarbOx, SEC, HIPAA, etc.) have gone away!
And even though disk-based backup is set to revolutionize the retention of short-term backups and recovery, don't forget that in that last item some records will need to be stored for many, many years. In the case of HIPAA, think about keeping a patient's records over a lifetime, which could be 70 to 80 years or beyond. So for some businesses, disk, tape and other longer-term archival media are going to be a mandatory part of the data lifecycle architecture.
I'm going on vacation for a few weeks so no posting for a while. I hope you've been having a great summer.
Talk to you in September,