When the sun set over Waikiki, HP BladeSystem stood as the victor of InfoWorld's 2010 Hawaii Blade Shoot-Out. Editor Paul Venezia blogged about HP's gear sliding off a truck, but other behind-the-scenes pitfalls meant the world's #1 blade architecture nearly missed the Shoot-Out entirely.
Misunderstandings about the test led us to initially decline the event, but by mid-January we'd signed on. Paul's rules were broad: bring a config geared toward "virtualization readiness" that included at least 4 blades and either fibre or iSCSI shared storage. Paul also gave us a copy of the tests he would run, which let vendors select and tune their configurations. Each vendor would get a 2- or 3-day timeslot on-site in Hawaii for Paul to run the tests himself and explore the system's management features. HP was scheduled for the first week of March.
In late January we got the OK to bring pre-release equipment. Luckily for Paul, Dell, IBM, and HP all brought similar 2-socket server blades with then-unannounced Intel Xeon X5670 processors ("Westmere"). We scrambled to come up with the CPUs themselves; at the time, HP's limited stocks were all in use to support Intel's March announcement.
HP's final config: one c7000 enclosure; four ProLiant BL460c G6 server blades running VMware ESX, with 6-core Xeon processors and 8GB LVDIMMs; two additional BL460c G6s with StorageWorks SB40c storage blades for shared storage; a Virtual Connect Flex-10 module; and a 4Gb fibre switch. (We also had a 1U KVM console and an external MSA2000 storage array just in case, but ended up not using them.)
To show off some power-reducing technology, we used solid-state drives in the storage blades and low-voltage memory in the server nodes. HP recently added these Samsung-made "Green" DDR3 DIMMs, which use 2Gb-based DRAMs built with 40nm technology. LV DIMMs can run at 1.35 volts (versus the normal 1.5 volts), so they "ditch the unnecessary energy drain" (as Samsung's Sylvie Kadivar put it recently).
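The savings are easy to ballpark: dynamic power scales roughly with the square of the supply voltage, so the drop from 1.5V to 1.35V trims that component by almost a fifth. A quick back-of-the-envelope sketch in Python (a rough model for illustration, not Samsung's published figures):

    # Dynamic power scales roughly with V^2, so the 1.5V -> 1.35V drop
    # cuts the voltage-dependent component by about 19%.
    # (Rough model for illustration, not Samsung's published numbers.)
    savings = 1 - (1.35 / 1.5) ** 2
    print(f"Voltage-dependent power reduction: {savings:.1%}")  # ~19.0%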
Our pre-built system left Houston three days before I did, but it still wasn't there when I landed in Honolulu Sunday afternoon. We had inadvertently put the enclosure into an extra-large Keal case (a hard-walled shipping container), which was too tall to fit in some aircraft. It apparently didn't fit on the first cargo flight. Or the second one. Or the third one...
Sunday evening, already stressed about our missing equipment, the four of us from HP met at the home of our Hawaiian host, Brian Chee of the University of Hawaii's Advanced Network Computing Laboratory. Our dinnertime conversation generated additional stress: we realized that I'd misread the lab's specs, and we'd built our c7000 enclosure with 3-phase power inputs that didn't match the lab's PDUs. Crud.
We nevertheless headed to the lab on Monday, where we spotted the rat's nest of cables intended to connect power meters to the equipment. Since our servers still hadn't arrived, two of the HP guys fetched parts from a nearby Home Depot, then built new junction boxes that would both handle the "plug conversion" to the power whips and provide permanent (and much safer) test points for power measurements.
Meanwhile, we let Paul get a true remote-management experience on BladeSystem. I VPN'd into HP's corporate network and pointed a browser at the Onboard Administrator of an enclosure back in a Houston lab. Even in Firefox (Paul's browser of choice), controlling an enclosure 3,000 miles away is simple.
Mid-morning on day #2, Paul got a cell call from the lost delivery truck driver. After chasing him down on foot, we hauled the shipping case onto the truck's hydraulic lift...which suddenly lurched under the heavy weight, spilling the wheels off the side and nearly sending the whole thing crashing onto the ground. It still took a nasty jolt.
Some pushing and shoving got the gear to the Geophysics building's piston-driven hydraulic elevator, then up to the 5th floor. (I suppose I wouldn't want to be on that elevator when the "Low Oil" light turns on!)
We unpacked and powered up the chassis, but immediately noticed a health warning light on one blade. We quickly spotted the problem: a DIMM had popped partway out. Perhaps not coincidentally, it was the blade that took the greatest shock when the shipping container slipped from the lift.
With everything running (whew), Paul left the lab for his "control station", an Ubuntu-powered notebook in an adjoining room. Just as he sat down to start deploying CentOS images to some of the blades...wham, internet access for the whole campus blinked out. It didn't affect the testing itself, but it caused other network problems in the lab.
An hour later, those problems were solved, and performance tests were underway. They went quickly. Next came some network bandwidth tests. Paul even found time to evaluate Intel's new AES-NI instructions, running timed tests with some OpenSSL tools.
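Paul's exact scripts weren't published, but the comparison is easy to approximate with OpenSSL's built-in benchmark: "openssl speed -evp aes-128-cbc" exercises the EVP code path, which can use AES-NI on builds that support it, while plain "openssl speed aes-128-cbc" stays in software. A minimal Python sketch of that kind of timed test:

    # Compare OpenSSL's EVP code path (AES-NI capable on supporting builds)
    # against the software-only AES implementation. Illustrative only;
    # not Paul's actual test scripts.
    import subprocess

    def openssl_speed(args):
        result = subprocess.run(["openssl", "speed"] + args,
                                capture_output=True, text=True)
        return result.stdout

    evp_out = openssl_speed(["-evp", "aes-128-cbc"])  # hardware-assisted path
    sw_out = openssl_speed(["aes-128-cbc"])           # software path
    print("EVP (AES-NI capable):\n", evp_out)
    print("Software only:\n", sw_out)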
Day #3 brought us a new problem. HP's Onboard Administrator records actual power use, but Paul wanted independent confirmation of the numbers. (Hence the power meters and junction-box test points.) But the lab's meters couldn't handle redundant three-phase connections. An hour of reconfiguration and recalculation later, we found a way to corroborate the measurements. (In the end, I don't think Paul published power numbers, though he may have factored them into his ratings.)
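The recalculation itself is simple once you have per-phase readings: for a balanced three-phase load, real power is P = sqrt(3) x V(line-to-line) x I x power factor. A quick sketch with illustrative numbers (not the lab's actual readings):

    import math

    def three_phase_power_watts(v_line_to_line, amps, power_factor):
        # Real power of a balanced three-phase load:
        # P = sqrt(3) * V_LL * I * PF
        return math.sqrt(3) * v_line_to_line * amps * power_factor

    # Illustrative values only: a 208V feed drawing 10A at 0.95 PF.
    print(round(three_phase_power_watts(208, 10, 0.95)), "W")  # ~3422 W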
We rapidly re-packed the equipment at midday on day #3 so that IBM could move into the lab. Paul was already drafting his article as we said "Aloha" and headed for the beach -- err, I mean, back to the office.
Every time a competitor introduces a new product, we can't help but notice that they suddenly get very interested in what HP is blogging during the weeks prior to their announcement. Then, when the competitor announces, the story is a self-congratulatory "we've figured out the problem with existing server and blade architectures." The implication is that volume adoption of blades is somehow being constrained by the very thing they now have, and that everyone else is really stupid.
HP BladeSystem growth has hardly been constrained, with quarterly growth rates of 60% to 80% and over a million BladeSystem servers sold. So I have to wonder if maybe we've already figured out what many customers want: to save time, power, and money with an integrated infrastructure that is easy to use, simple to change, and able to run nearly any workload.
Someone asked me today, "Will your strategy change?" Given the success we've had, we'll keep focusing on customers' big problems: time, cost, change, and energy. It sounds boring and doesn't generate a lot of buzz or Twitter traffic, but it's why customers are moving to blade architectures.
Our platform was built and proven in a step-by-step approach: BladeSystem c-Class, Thermal Logic, Virtual Connect, Insight Dynamics, and so on. Rather than proclaim at each step that we've solved all the industry's problems or sparked a social movement in computing, we'll continue to focus on doing our job: providing solutions that simply work for customers and tackle their biggest business and data center issues.
1. Virtual I/O 'Take One' for Dell was a partnership with Egenera in March 2008.
Headline: eGenera Inks OEM Deal with Dell
“Dell is listening to customers and providing solutions that make the virtual data center easier to deploy and manage, regardless of platform,” said Rick Becker, vice president, Dell Software & Solutions. “Dell and Egenera will help customers focus on company growth by delivering excellence in virtualized infrastructure from server performance, storage interoperability to dynamic data center management.”
2. Dell asked for a mulligan in July 2008, when they tossed Egenera overboard and tried to create something sort of like Virtual Connect: meet FlexAddress, over a year and a half later to market than version 1.0 of Virtual Connect.
Headline: Dell joins ranks of I/O virtualization providers with FlexAddress
"We have taken a very different approach than HP. Theirs is a proprietary switch that plugs into their backplane, and after adding all of the switching, it can cost about $20,000 for one chassis," said Rick Becker, vice president of software and solutions, Dell Product Group. "We have done this using open standards, so FlexAddress works with other switches like Cisco and Brocade and users don't have to switch their switches."
3. Dell then went back to the partner route with a Cisco bear hug. Headline: Dell and Cisco team up on next-gen datacentres
Sorry, no quote from Rick Becker on this one, but here's another that will give you an idea. "We're hearing much more interest from customers on FCoE, even though it is not an officially ratified standard, but at the moment, iSCSI is available and affordable," said Robin Kuepers, head of storage for EMEA at Dell. No one asked whether the Cisco Nexus switches that Dell has simply agreed to resell are compatible with Cisco's large installed base of Catalyst switches, or with all the other industry-compatible switches such as ProCurve. Hint: they're not. (UPDATE: The comments below called me out that this was overkill, and they are right; I should explain. There are differences in features and functions between Nexus and Catalyst, especially related to capabilities with VMs and the future implementation of FCoE. Catalyst switches will miss out on some of these capabilities under the current direction. The right direction is to have open standards for these types of capabilities, regardless of the plumbing in the datacenter, and a Nexus-only approach doesn't get us there either.)
4. That brings us to Tuesday, February 3rd, and a 180-degree sprint to yet another partner.
Headline: Dell Pairs with Xsigo on Virtual I/O
"What we are excited about with the Xsigo appliance is the openness," explains Rick Becker, vice president of software and solutions at Dell's Product Group. "This works across form factors and vendors, and in a heterogeneous data center, this can mange it all no matter what logo is on the box." No shortage of handshaking going on over there. After all, HP's Solution Builder Program has over 300 members.
However, there's one important thing you should know about the Xsigo-Dell announcement that Dell didn't mention: it does its magic with InfiniBand. That means you need an InfiniBand network in place to use this product. I'm all for open standards in this space too, as Anthony Dina at Dell blogged about last week. But aside from companies that have already decided to base their server interconnect on InfiniBand, there is a more straightforward path for customers looking for a datacenter-wide virtual I/O strategy and an end-to-end virtual infrastructure.
We do agree with Dell that not all businesses are the same, so a 'one-size-fits-all' solution isn't a good approach. To complete our offerings, HP partners with Xsigo, among other virtual I/O vendors, and Xsigo is supported on HP blades and rack/tower servers too. However, our cornerstone is advancing innovation like Virtual Connect Flex-10 on plain old-fashioned Ethernet and Fibre Channel, which most businesses will find is a 'flex-size-fits-most' solution that fits them just right.
We have to pause and bow to our NonStop brethren, who took home the GOLD in SearchDataCenter.com's Products of the Year. Especially sweet is that they knocked off the IBM z10 mainframe.
I know we've tended to be a little controversial over the years with our "Blade Everything" strategy (it really got under the rack-server skin of our Dell buddies), but the NonStop BladeSystem should really bring our strategy home for you.
The fact that the new NonStop takes advantage of a BladeSystem architecture is not what makes the product rock. Guts are guts. It's the brains and nervous system the NonStop team was able to build on top of it that make this solution stand out as the tops in the industry. The fact we bladed the NonStop just adds a killer value proposition for you:
2x the performance. 1/2 the footprint. 100% NonStop!
You gotta love it.
The latest Top 500 supercomputer list was just released, and BladeSystem c-Class was well represented once again. With 201 entries (40.2% of the Top 500), it has the most entries of any product line. The ever-popular ProLiant BL460c made up the most entries, but we also had strong showings from the BL465c and the two-in-one BL2x220c, and the 4-socket BL685c made a showing as well. BladeSystem supercomputers are used for university and government research, weather modeling, semiconductor development, automotive, telecom, IT services, web infrastructure, financial services, rendering, and many other applications. I'm sure Dell was excited about their new blade line as well. Since they like to compare their blades with HP BladeSystem, I thought I would share how the two compared on the Top 500 list:
- HP BladeSystem had 199 more entries in the top 500 list than Dell's new blades
- HP had 39.8 percentage points more share than Dell's new blades (40.2% vs. 0.4%)
- HP had 100.5 times as many entries as Dell's new blades
Dell's new blades accounted for two entries. Congratulations Dell!
Okay, enough of these comparisons. We're excited to see so many customers, from Audi to Zeta and lots in between, using BladeSystem. If you would like to see a listing of companies building supercomputers with BladeSystem, check out the Top 500 website listing and sort by vendor.