By Matt Morrissey, StorageWorks Product Manager
You'll recognize this common server deployment strategy in your data center: the LAN and SAN switches that connect to host servers are smaller "pizza box" access/edge switches physically located at the top of the rack of servers to provide connectivity. These top-of-rack switches typically have a series of uplinks to larger chassis-based or director-class aggregation or core switches that provide communication across all of the racks of servers in one or more data centers.
As you look around your data center today, where is the complexity in your communications infrastructure? It's not at the storage edge. It's actually at the server-to-network connection point: all of the cables, NICs, HBAs and switches required to gain connectivity to the plethora of servers in the data center. Look at the quantity of switch ports and cables, all right there at the server-to-network edge.
The goal of converged networks is to reduce the complexity and size of this infrastructure, lowering both the acquisition cost and the operational expense of running it.
You may have a rack of servers, each with 2, 4, 6 or 8 NIC adapters and 2, 3 or 4 Fibre Channel adapters. That's a large set of both copper and optical cables. In addition, if you're doing a ToR switch deployment, you probably have a large number of switch ports and switches required to support these racks. Multiply this by hundreds if not thousands of servers in your data center, and managing this infrastructure becomes quite a daunting task.
As an alternative, a converged network built on FCoE technology like the HP StorageWorks Converged Network Switches allows you to replace your HBAs and NICs with Converged Network Adapters that support higher-speed 10Gb technology. The number of slots used in your servers is reduced, and the number of cables, switches and switch ports required drops dramatically as well. This means you can get equal or more work out of fewer components in your infrastructure. And guess what? Your operational costs from power, cooling and management overhead are going to be reduced, too.
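To make the reduction concrete, here's a back-of-envelope sketch of per-rack cabling before and after convergence. The per-server adapter counts and the 32-server rack are illustrative assumptions drawn from the ranges mentioned above, not measurements of any specific deployment.

```python
# Back-of-envelope cable/adapter count per rack. All input counts are
# illustrative assumptions, not figures for any particular data center.

SERVERS_PER_RACK = 32          # assumption: a rack of 1U servers

def cables_per_rack(nics, hbas, servers=SERVERS_PER_RACK):
    """Each NIC needs one copper cable, each FC HBA one optical cable."""
    return servers * (nics + hbas)

traditional = cables_per_rack(nics=6, hbas=2)   # 6 GbE NICs + 2 FC HBAs
converged   = cables_per_rack(nics=2, hbas=0)   # 2 CNAs carry LAN + SAN

print(f"traditional: {traditional} cables/rack")        # 256
print(f"converged:   {converged} cables/rack")          # 64
print(f"reduction:   {1 - converged/traditional:.0%}")  # 75%
```

Even with conservative adapter counts, collapsing LAN and SAN traffic onto a pair of CNAs per server cuts rack cabling by roughly three quarters, and the switch-port count shrinks with it.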
I hate to overuse the term TCO, but it's the right one here, because total cost of ownership is exactly what is going to make FCoE a successful technology.
We'll be blogging more on this topic. So stay tuned. In the meantime let us know what your company is planning around FCoE. If you're looking for more information on HP's converged network switch offerings, this data sheet provides a good product overview.
By David Lawrence, Product Marketing Manager, FC Directors and Software
One of the ways a SAN can add value is by hosting your boot disks externally from your servers. To do this you need a remote boot technology for host operating systems known as Boot from SAN (BFS). There are three necessary ingredients to a BFS configuration:
- A boot disk image located on shared storage accessible via a SAN
- A server connected to a SAN through a host bus adapter (HBA)
- An HBA whose BIOS contains instructions that enable the server to boot from SAN
There are many benefits to doing this:
- Servers can be replaced, re-purposed, and added more quickly with boot disks hosted on the SAN
- For BladeSystems, taking the boot disks out of the enclosure reduces power and cooling requirements
- Boot disks can be consolidated on a storage array for easier management, improved security, and higher availability
- BFS minimizes server maintenance and reduces backup time
Even with all these benefits, BFS is rarely deployed today due to the complexity of setting up the arrays, servers, and HBAs involved. Since HP tests each of its infrastructure products thoroughly, there is a wealth of knowledge built up on how to simplify the set-up of BFS. Click here to get a look at some of the available information to help you. There is a link at the bottom of the page where HP engineers will take your questions on BFS via email.
Infrastructure vendors, too, are doing their best to take the complexity out of BFS deployment. Brocade talks about their BFS capabilities in this webinar.
By Calvin Zito, aka @HPStorageGuy
Today we announced a couple of updates to our StorageWorks array family. Here's a quick summary:
- The HP StorageWorks P4000 G2 SAN Solution - as you'll hear in tomorrow's podcast, there are a ton of enhancements here. A couple of the very timely ones are support for parity protected Network RAID, Unified (block and file-based) NAS Gateway, the new Best Practices Analyzer, and lower cost mid-line SAS drive support. To get those details, check out tomorrow's podcast.
- The HP StorageWorks P2000 G3 MSA - this product was formerly called the MSA2000. You may have noticed some slight tweaking of our product names - this is an effort to have a simpler, more consistent product naming across our portfolio. But that's not the big news here. In today's podcast, you'll hear about the new iSCSI and 8Gb Fibre Channel combo controller, local replication (up to 64 snapshots) that is included with the P2000, and new remote replication functionality.
So with that high-level overview, today's podcast will focus on the P2000. I'm talking to Charles Vallhonrat. Charles has been on the MSA team for as long as I can remember and is managing the marketing team. Today, we talk about what's new with the MSA, including its new name, the P2000.
by Sean Fitzpatrick, StorageWorks Storage Platforms Business Development
Over the past few months, we've all witnessed positive milestones for FCoE...but...is it ready?
This past June, the Fibre Channel T11.3 BB-5 (backbone) working group finalized the spec and voted to forward it to INCITS for public review and eventual publication next year. Is the BB-5 spec good enough to develop product against? Absolutely!
Also in June, HP announced the availability of two ToR (top of rack) FCoE switches from Brocade and Cisco. Other companies have also announced future availability of FCoE products. This is a positive move in the right direction for the industry and for customers looking to lower TCO over time.
More resources and tools are available today from Brocade, Cisco, Emulex and QLogic to assist with evaluation and planning. HP's own SAN Design Guide provides great design ideas on how to build or modify existing SANs, including a dedicated application note on FCoE implementation guidelines.
As I've stated before, this stage is a very important part of the early-adopter Phase I, allowing customers to evaluate the technology. The real adoption (Phase II) has yet to develop, and it will, once additional mature products become available. In the meantime, IT managers should investigate adopting FCoE for small pilot projects and focus on how it's going to improve their overall TCO.
From a timeline perspective, 2009 is the year for proof of concept and planning. In 2010 I'm anticipating a wider breadth of product availability from all the major suppliers.
HP's position is that FCoE won't overtake traditional Fibre Channel next week, or even next year. But now that FCoE is starting to move, it's getting exciting.
Last month we announced a world record SPC-2 number by the XP24000. At the same time we extended yet another challenge to EMC to join the rest of the world in publishing benchmarks. They continue to decline the offer, arguing “representativeness”. I thought I’d clear up the “representativeness” question.
EMC’s argument that this XP is too costly starts from the assumption that SPC-2 only represents a video streaming workload. To quote, “128 146GB drive pairs in your 1152 drive box? A pure video streaming workload?” We actually see a widely diverse set of workloads used in the XP. The power of having both SPC-1 and SPC-2 benchmark results is that they provide audited data that applies to almost any workload mix a customer might have. But if one had to pick a most common workload it would probably be database transaction processing by day, then back up and data mining workloads joining the transaction processing by night. SPC-2 models the back up and data mining aspects, with SPC-1 representing the transaction processing. SPC-2 is about a lot more than video streaming.
When people need bullet proof availability and high performance for transaction processing they turn to high end arrays like the XP24000. It’s probably the most common use for a high end array. Our data indicates that on average the number of disks in an initial XP purchase is right around the 265 in our SPC-2 configuration. Some of those won’t have the levels of controllers in the SPC-2 configuration. But an increasing number use thin provisioning. In those cases they will often get all the controllers they’ll need up front, delaying the disk purchases as you’d expect with thin provisioning. So the configuration and workloads look pretty representative.
Then consider a real use of the benchmark. A maximum number is key in assessing an array’s performance. Below that you can adjust disks, controllers, and cache to get fairly linear performance changes. But when you reach an array’s limit, all you can do is add another array. So once you know an array’s maximum number you know its whole range of performance. By maxing controllers we provide that top end number, giving the most useful result. For sequential workloads like back-up and data mining maxing disk count isn’t necessary, whereas it generally is for random workloads like transaction processing.
Now let’s discuss how one might use XP’s SPC-2 results. Let’s say you need a high end array for transaction processing. The most common case we see requires backup and data mining operations at night in a limited time window. Since the XP’s SPC-2 result is twice that of the next closest high end array, you can expect it to get the backup and data mining done with half the resources of the next fastest array. But with SPC-2 you can go further. You can look up the specific results for backup and data mining workloads which are around 10GB/s for the XP24000. Knowing how much data you need to backup and mine you can estimate how much of the system’s resources you’ll need to get those things done in your time window and therefore what’s still left for transaction processing during that window. You can scale that for the size array you need for transaction processing. And you can compare to other arrays that have posted results. All using audited data before you get sales reps involved.
SPC benchmarks are all about empowering the storage customer. XP24000’s SPC-2 result is important to the most common uses for high end arrays, as well as for less common uses like video editing. The configuration we used looks pretty typical, with choices made to make the result most useful to customers. The cost is pretty typical for this kind of need. At HP we expect to continue providing this kind of useful data for customers. And our challenge to EMC to publish a benchmark result still stands, though they’ll probably continue inventing reasons not to.
EMC, we're once again throwing down the gauntlet. Today the XP24000 put up the highest SPC-2 benchmark result in the world. The top spot for such demanding workloads as video streaming goes to the XP. Once again, your DMX is a no-show. And once again we challenge you, this time to put up an SPC-2 number. Every other major storage vendor is now posting SPC results. Every other major storage vendor is now starting to give customers standard, open, audited performance results to show what they've got. You remain the only vendor keeping your product performance behind a smoke screen of mysterious numbers and internal testing. We challenge you to join us in the world of openness and let customers quit guessing at how low the DMX's performance really is!
Yes, while the world's top athletes were in Beijing striving for world records in their various sports, HP's XP24000 was setting a new performance standard of its own. That standard is XP's new record SPC-2 benchmark result of 8.7GB/s! That surpasses the previous record by more than 20%, and trounces the next closest single-array competitor by more than 2X. The swimmers in Beijing didn't set their new records by that kind of margin!
XP didn't buy this record either. At a price performance of $187/MB/s XP24000 produced its record at 2.5x better price performance than the previous record holder. That previous record was set by an IBM system of 16 separate storage arrays with 1536 disks and a virtualization appliance. A single XP24000 set the new record with just 265 disks. That's a single, bullet proof XP24000 taking on a whole team of IBM products, and winning! Record performance, superior price performance, and bullet proof availability. What more evidence could you want that storage consolidation with the XP24000 is a good thing?
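For the curious, the price-performance figures above imply a rough system price. This is only back-of-envelope arithmetic from the published $/MB/s and throughput numbers; neither vendor's actual list price is quoted here.

```python
# Sanity-checking the price-performance claims. Prices below are
# implied by published $/MB/s figures, not quoted list prices.

XP_THROUGHPUT_MBPS = 8700    # the 8.7 GB/s SPC-2 record (decimal units)
XP_PRICE_PERF = 187.0        # $ per MB/s, as published

xp_total = XP_THROUGHPUT_MBPS * XP_PRICE_PERF
print(f"implied XP24000 benchmark-config price: ${xp_total:,.0f}")

# Previous record holder: roughly 2.5x worse $/MB/s, at a result the
# XP surpassed by more than 20% (so ~8700/1.2 MB/s) -- both estimates.
prev_total = (XP_PRICE_PERF * 2.5) * (XP_THROUGHPUT_MBPS / 1.2)
print(f"implied previous-record config price: ${prev_total:,.0f}")
```

The implied gap of roughly $1.6M versus over $3M for the 16-array IBM configuration is what "2.5x better price performance" translates to in absolute dollars.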
I'd also like to point out that the XP didn't have to be tuned like crazy to get this result. We did have to play around with the network and servers: 8.7GB/s is new territory for the SPC-2 benchmark, and it took some effort to keep the network and servers out of the way at this level. But with the array, we pretty much just set it up and it worked. That means you can expect these kinds of results on your real workloads without a lot of esoteric tuning. Pretty nice!
Now a few words about the biggest no-show in the high-end performance race: EMC. EMC is the only major storage vendor to never publish an SPC number, despite our open challenge for them to do so. They make plenty of performance leadership claims, but show no visible proof! Instead all they provide is vague arguments about SPC being unrepresentative. I think everybody realizes that if EMC could publish a leading number they'd do it in about a millisecond. Instead they won't even get in the pool with the XP! I think we can all figure out why.
Why is XP24000's new result important to a storage professional? The SPC-2 benchmark is the storage industry's standard benchmark for sequential workloads like video on demand, disk-to-disk backup, and large database queries. XP24000's results show the ability to serve >10,000 high quality video streams simultaneously. They also show the capacity to back up 36TB/hour which would keep 42 of the industry's fastest LTO-4 tape drives busy simultaneously. XP put up last year's top SPC-1 number for a single, HDD based array. Now XP has the SPC-2 record. Both results have been verified by an independent auditor and reviewed by our peers on the Storage Performance Council. I think this makes it pretty clear that XP is the gold medalist in storage performance!
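The tape-drive comparison above checks out with simple arithmetic. LTO-4's native transfer rate is 120 MB/s, so the 240 MB/s used below assumes the commonly cited 2:1 compression ratio; decimal units (1 TB = 1,000,000 MB) are used throughout, as is conventional for throughput figures.

```python
# Verifying the "42 LTO-4 drives" comparison from the 36TB/hour figure.
# LTO-4 native rate is 120 MB/s; 240 MB/s assumes 2:1 compression.

BACKUP_TB_PER_HOUR = 36   # the 36TB/hour backup rate cited above
LTO4_MBPS = 240           # assumption: 2:1 compressible data

array_mbps = BACKUP_TB_PER_HOUR * 1_000_000 / 3600
drives = array_mbps / LTO4_MBPS

print(f"array sequential rate: {array_mbps / 1000:.1f} GB/s")   # 10.0 GB/s
print(f"LTO-4 drives kept busy: {drives:.0f}")                  # 42
```

So 36TB/hour works out to 10GB/s sustained, enough to feed about 42 compressing LTO-4 drives at full streaming speed, which matches the claim.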