The IT industry is continually changing, so as you read this blog, you may be asking yourself, “What’s new this time around?” or more likely, “What new acronym do I need to learn this week?”
Guest blog by Thomas Brooks, HP ISS Options
Worldwide Product Marketing Manager, Solid State Products
If you have business managers breathing down your neck over slow applications or performance limitations, the answer may not be more memory, drives or storage. It could be just one PCIe card inside your server that unlocks a whole host of compute cycles, allowing you to focus your precious resources elsewhere.
When the sun set over Waikiki, HP BladeSystem stood as the victor of the InfoWorld 2010 Hawaii Blade Shoot-Out. Editor Paul Venezia blogged about HP's gear sliding off a truck, but other behind-the-scenes pitfalls meant the world's #1 blade architecture nearly missed the Shoot-Out entirely.
Misunderstandings about the test led us to initially decline the event, but by mid-January we'd signed on. Paul's rules were broad: bring a config geared toward "virtualization readiness" that included at least four blades and either Fibre Channel or iSCSI shared storage. Paul also gave us a copy of the tests he would run, which let vendors select and tune their configurations. Each vendor would get a two- or three-day timeslot on-site in Hawaii for Paul to run the tests himself, plus play around with the system's management features. HP was scheduled for the first week of March.
In late January we got the OK to bring pre-release equipment. Luckily for Paul, Dell, IBM, and HP all brought similar two-socket server blades with then-unannounced Intel Xeon 5670 ("Westmere") processors. We scrambled to come up with the CPUs themselves; at the time, HP's limited stock was all in use to support Intel's March announcement.
HP's final config: one c7000 enclosure; four ProLiant BL460c G6 server blades running VMware ESX, with six-core Xeon processors and 8GB LVDIMMs; two additional BL460c G6s with StorageWorks SB40c storage blades for shared storage; a Virtual Connect Flex-10 module; and a 4Gb Fibre Channel switch. (We also had a 1U KVM console and an external MSA2000 storage array just in case, but ended up not using them.)
To show off some power-reducing technology, we used solid state drives in the storage blades and low-voltage memory in the server nodes. HP recently added these Samsung-made "Green" DDR3 DIMMs, which use 2Gb-based DRAMs built on 40nm technology. LV DIMMs can run at 1.35 volts (versus the normal 1.5 volts), so they "ditch the unnecessary energy drain" (as Samsung's Sylvie Kadivar put it recently).
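For a rough sense of what that voltage drop buys you, here's a back-of-envelope sketch. It assumes (simplistically) that the voltage-dependent portion of DRAM power scales with the square of the supply voltage; real savings vary with workload, refresh, and termination.

```python
# Back-of-envelope estimate of LVDIMM savings, assuming the voltage-dependent
# part of DRAM power scales with the square of the supply voltage.
V_STANDARD = 1.50   # volts, standard DDR3
V_LOW = 1.35        # volts, low-voltage DDR3 (LVDIMM)

scaling = (V_LOW / V_STANDARD) ** 2
print(f"Voltage-dependent power drops to ~{scaling:.0%} of the 1.5 V level")
print(f"(roughly a {1 - scaling:.0%} reduction, before fixed overheads)")
```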
Our pre-built system left Houston three days before I did, but it still wasn't there when I landed in Honolulu Sunday afternoon. We had inadvertently put the enclosure into an extra-large Keal case (a hard-walled shipping container) that was too tall to fit in some aircraft. It apparently didn't fit on the first cargo flight. Or the second one. Or the third one...
Sunday evening, already stressed about our missing equipment, the four of us from HP met at the home of our Hawaiian host, Brian Chee of the University of Hawaii's Advanced Network Computing Laboratory. Our dinnertime conversation generated additional stress: we realized that I'd misread the lab's specs, and we'd built our c7000 enclosure with 3-phase power inputs that didn't match the lab's PDUs. Crud.
We nevertheless headed to the lab on Monday, where we spotted the rat's nest of cables intended to connect power meters to the equipment. Since our servers still hadn't arrived, two of the HP guys fetched parts from a nearby Home Depot, then built new junction boxes that would both handle the "plug conversion" to the power whips and provide permanent (and much safer) test points for power measurements.
Meanwhile, we let Paul get a true remote-management experience on BladeSystem. I VPN'd into HP's corporate network and pointed a browser at the Onboard Administrator of an enclosure back in a Houston lab. Even with Firefox (Paul's choice of browser), controlling an enclosure 3,000 miles away is simple.
Mid-morning on day #2, Paul got a cell call from the lost delivery truck driver. After chasing him down on foot, we hauled the shipping case onto the truck's hydraulic lift...which suddenly lurched under the heavy weight, spilling the wheels off the side and nearly sending the whole thing crashing onto the ground. It still took a nasty jolt.
Some pushing and shoving got the gear to the Geophysics building's piston-driven hydraulic elevator, then up to the 5th floor. (I suppose I wouldn't want to be on that elevator when the "Low Oil" light turns on!)
We unpacked and powered up the chassis, but immediately noticed a health warning light on one of the blades. We quickly spotted the problem: a DIMM had popped partway out. Perhaps not coincidentally, it was the blade that took the greatest shock when the shipping container slipped from the lift.
With everything running (whew), Paul left the lab for his "control station", an Ubuntu-powered notebook in an adjoining room. Just as he sat down to start deploying CentOS images to some of the blades...wham, internet access for the whole campus blinked out. It didn't affect the testing itself, but it caused other network problems in the lab.
An hour later, those problems were solved and the performance tests were underway. They went quickly. Next came some network bandwidth tests. Paul even found time to run timed tests with some OpenSSL tools to evaluate Intel's new AES-NI instructions.
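For anyone who wants to run a similar comparison, here's a minimal sketch using OpenSSL's built-in speed test. It assumes a reasonably recent OpenSSL build where AES-NI can be masked off via the OPENSSL_ia32cap environment variable; it's not necessarily the exact tool or flags Paul used.

```python
# Compare AES throughput with and without AES-NI using "openssl speed".
import os
import subprocess

def openssl_speed(disable_aesni):
    env = os.environ.copy()
    if disable_aesni:
        # Clear the AES-NI CPUID bit so OpenSSL falls back to software AES.
        env["OPENSSL_ia32cap"] = "~0x200000000000000"
    result = subprocess.run(
        ["openssl", "speed", "-elapsed", "-evp", "aes-128-cbc"],
        capture_output=True, text=True, env=env,
    )
    # The last line of the output table holds the throughput figures.
    return result.stdout.strip().splitlines()[-1]

print("With AES-NI:   ", openssl_speed(disable_aesni=False))
print("Without AES-NI:", openssl_speed(disable_aesni=True))
```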
Day #3 brought a new problem. HP's Onboard Administrator records actual power use, but Paul wanted independent confirmation of the numbers (hence the power meters and junction-box test points). The lab's meters, however, couldn't handle redundant three-phase connections. An hour of reconfiguration and recalculation later, we found a way to corroborate the measurements. (In the end, I don't think Paul published power numbers, though he may have factored them into his ratings.)
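For the curious, the recalculation amounts to the standard three-phase power arithmetic, applied to both redundant feeds and summed. The sketch below uses made-up line voltage, currents, and power factor purely for illustration; they are not the lab's actual measurements.

```python
# Estimate total draw from per-phase current readings on two redundant
# three-phase feeds, for comparison against the Onboard Administrator's figure.
import math

V_LINE_TO_LINE = 208.0   # volts, assumed lab supply
POWER_FACTOR = 0.95      # assumed for switch-mode power supplies

def feed_watts(amps_per_phase):
    """Approximate real power of one roughly balanced three-phase feed."""
    avg_amps = sum(amps_per_phase) / len(amps_per_phase)
    return math.sqrt(3) * V_LINE_TO_LINE * avg_amps * POWER_FACTOR

feed_a = feed_watts([8.1, 8.3, 8.0])   # hypothetical clamp-meter readings
feed_b = feed_watts([7.9, 8.2, 8.1])   # load is shared across redundant feeds
total = feed_a + feed_b
print(f"Estimated total draw: {total:.0f} W (compare against the OA's figure)")
```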
We rapidly re-packed the equipment at midday on day #3 so that IBM could move into the lab. Paul was already drafting his article as we said "Aloha", and headed for the beach -- err, I mean, back to the office.
By Chuck Klein
Now it was time for the bloggers to head to the Insight Software lab to see what HP does for managing data center power and cooling. John Schmitz, Ute Albert, and Tom Turicchi walked us from Systems Insight Manager (SIM) all the way up the management stack to Insight Dynamics. This is the software stack that lets system administrators install, configure, monitor, and plan for BladeSystem chassis in the data center.
Tom then demonstrated how the Data Center Power Control part of Insight Control lets data center managers plan, monitor, cap, and control the amount of energy and cooling used by their infrastructure. Tom set up policies and rules to handle events that can happen in a data center, from utility brown-outs to the loss of cooling units. He also showed how you can monitor energy usage for the data center all the way down to each blade, which helps you plan for capacity and decide where to install new blades.
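To make the policy idea concrete, here's a toy sketch of mapping data-center events to per-blade power caps. This is not HP Insight Control's actual interface; the event names and wattage figures are illustrative assumptions.

```python
# Toy policy table: data-center events mapped to per-blade power caps.
POLICIES = {
    "normal":            None,   # no cap
    "utility_brownout":  250,    # watts per blade
    "cooling_unit_loss": 180,    # watts per blade
}

def apply_policy(event, blades):
    cap = POLICIES[event]
    # A real tool would push the cap to each blade's management processor;
    # here we just return the intended per-blade settings.
    return {blade: cap for blade in blades}

blades = ["bay{}".format(n) for n in range(1, 5)]
print(apply_policy("utility_brownout", blades))
```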
The attendees wanted to know what couldn't be managed, figuring that list would be much shorter than a review of everything the software could do. Tom explained that it presently manages only HP servers, and that scripts can be used to manage or shut down multi-tiered applications, network devices, and storage. Those devices don't have the iLO2 ASIC in them, and that chip is a foundational element the software needs.
Tom also demoed how the event manager can be set up to respond to utility policies and help save companies money, using an example from PG&E in California. That's all for now.
By Chuck Klein
The afternoon was the Summit attendees' time to visit the HP blades lab. Jim Singer was our tour guide and went through much of the technology around the power, cooling, blades, interconnects, and storage associated with blades. Two of the attendees were already pretty familiar with HP BladeSystem: Martin MacLoeod of Blade Watch (http://www.bladewatch.com/) and Kevin Houston of Blades Made Simple (http://bladesmadesimple.com/) had both worked with HP blades and knew the details. For others this was a whole new world. Jim Singer and Gary Thome reviewed the design considerations behind the power supplies, the 21 patents on our cooling fans, and how the chassis actively manages the blade infrastructure. They reviewed how, what, and why it was designed in from the start, why it was built the way it was, and what advantages that provides data center managers when using blades for their infrastructure.
Leslie Gillette's favorite blade is the BL2x220c, which combines two servers into a single half-height slot. Kevin indicated that none of the other blade vendors offers that kind of density. It's used in a lot of compute-intensive applications such as supercomputing and digital rendering -- Weta used it for the movie Avatar, for instance. Stephen Foskett, Pack Rat (http://blog.fosketts.net/), wanted to know what considerations are taken into account when designing a new blade for the chassis. Gary and Jim went over the design needs for power and memory, which applications the blade would address, and space considerations for features such as disk drives and mezzanine cards. All of these go into the product requirements, along with a timeframe. Then engineering, manufacturing, procurement, and so on respond with what is possible. Then the negotiations begin.
I did ask Kevin Houston if he was seeing a lot of demand for Solid State Drives. He felt they were still too expensive for general applications and that they were only being used for very specific needs. That's all for now.