Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Behind the Scenes at the InfoWorld Blade Shoot-Out

When the sun set over Waikiki, HP BladeSystem stood as the victor of the InfoWorld 2010 Hawaii Blade Shoot-Out.  Editor Paul Venezia blogged about HP's gear sliding off a truck, but other behind-the-scenes pitfalls meant the world's #1 blade architecture nearly missed the Shoot-Out entirely.
 
Misunderstandings about the test led us to decline the event at first, but by mid-January we'd signed on. Paul's rules were broad: bring a config geared toward "virtualization readiness" that included at least 4 blades and either Fibre Channel or iSCSI shared storage.  Paul also gave us a copy of the tests he would run, which let vendors select and tune their configurations.  Each vendor would get a 2- or 3-day timeslot on-site in Hawaii for Paul to run the tests himself, plus play around with the system's management features.  HP was scheduled for the first week of March.


In late January we got the OK to bring pre-release equipment. Luckily for Paul, Dell, IBM, and HP all brought similar 2-socket server blades with then-unannounced Intel Xeon 5670 ("Westmere") processors.  We scrambled to come up with the CPUs themselves; at the time, HP's limited stocks were all in use to support Intel's March announcement.


HP BladeSystem at the InfoWorld Blade Shoot-Out
HP's final config: one c7000 enclosure; four ProLiant BL460c G6 server blades running VMware ESX, with 6-core Xeon processors and 8GB LVDIMMs; two additional BL460c G6s with StorageWorks SB40c storage blades for shared storage; a Virtual Connect Flex-10 module; and a 4Gb fibre switch. (We also had a 1U KVM console and an external MSA2000 storage array just in case, but ended up not using them.)


To show off some power-reducing technology, we used solid-state drives in the storage blades and low-voltage memory in the server nodes.  HP recently added these Samsung-made "Green" DDR3 DIMMs, which use 2Gb-based DRAMs built with 40nm technology. LV DIMMs can run at 1.35 volts (versus the normal 1.5 volts), so they "ditch the unnecessary energy drain" (as Samsung's Sylvie Kadivar put it recently).


Our pre-built system left Houston three days before I did, but it still wasn't there when I landed in Honolulu Sunday afternoon. We had inadvertently put the enclosure into an extra-large Keal case (a hard-walled shipping container) which was too tall to fit in some aircraft. It apparently didn't fit the first cargo flight.  Or the second one.  Or the third one...


Sunday evening, already stressed about our missing equipment, the four of us from HP met at the home of our Hawaiian host, Brian Chee of the University of Hawaii's Advanced Network Computing Laboratory.  Our dinnertime conversation generated additional stress: we realized that I'd misread the lab's specs, and we'd built our c7000 enclosure with 3-phase power inputs that didn't match the lab's PDUs.  Crud.


We nevertheless headed to the lab on Monday, where we spotted the rat's nest of cables intended to connect power meters to the equipment.  Since our servers still hadn't arrived, two of the HP guys fetched parts from a nearby Home Depot, then built new junction boxes that would both handle the "plug conversion" to the power whips and provide permanent (and much safer) test points for power measurements.


Meanwhile, we let Paul get a true remote management experience on BladeSystem.   I VPN'd into HP's corporate network, and pointed a browser to the Onboard Administrator of an enclosure back in a Houston lab.   Even with Firefox (Paul's choice of browser), controlling an enclosure that's 3000 miles distant is still simple.


Moments Before Disaster...
Mid-morning on day #2, Paul got a cell call from the lost delivery truck driver.  After chasing him down on foot, we hauled the shipping case onto the truck's hydraulic lift...which suddenly lurched under the heavy weight, spilling the wheels off the side and nearly sending the whole thing crashing onto the ground.  It still took a nasty jolt.


 


Some pushing and shoving got the gear to the Geophysics building's piston-driven, hydraulic elevator, then up to the 5th floor.  (I suppose I wouldn't want to be on that elevator when the "Low Oil" light turns on!)


 


We unpacked and powered up the chassis, but immediately noticed a health warning light on one of the blades.  We quickly spotted the problem: a DIMM had popped partway out.  Perhaps not coincidentally, it was the blade that took the greatest shock when the shipping container had slipped from the lift.


With everything running (whew), Paul left the lab for his "control station", an Ubuntu-powered notebook in an adjoining room.  Just as he sat down to start deploying CentOS images to some of the blades...wham, internet access for the whole campus blinked out.  It didn't affect the testing itself, but it caused other network problems in the lab. 


An hour later, those problems were solved and performance tests were underway.  They went quickly.  Next came some network bandwidth tests.  Paul even found the time to run some timed tests with OpenSSL tools to evaluate Intel's new AES-NI instructions.
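If you want to try something similar yourself, OpenSSL's built-in benchmark makes a rough comparison easy. This is just a sketch of the general approach, not the exact commands Paul ran:

   openssl speed aes-128-cbc        # software AES path; does not use AES-NI
   openssl speed -evp aes-128-cbc   # EVP path; uses AES-NI when the CPU and OpenSSL build support it

Comparing the throughput from the two runs gives a feel for how much the new instructions help.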


Day #3 brought us a new problem.  HP's Onboard Administrator records actual power use, but Paul wanted independent confirmation of the numbers.  (Hence, the power meters and junction box test-points.) But the lab's meters couldn't handle redundant three-phase connections.  An hour of reconfiguration and recalculation later, we found a way to corroborate measurements.  (In the end, I don't think Paul published power numbers, though he may have factored them into his ratings.)


We rapidly re-packed the equipment at midday on day #3 so that IBM could move into the lab. Paul was already drafting his article as we said "Aloha" and headed for the beach -- err, I mean, back to the office.


 Paul Venezia (left) and Brian Chee with the HP BladeSystem c7000.


 View toward the mountains from the rooftop "balcony" behind Brian's lab



 

Pity the "Server Guy"

My brother-in-law David manages a mid-sized construction business and owns seven or eight servers to handle its data.   But don't bother asking him how much data they hold or what processors they use. In fact, it's pointless to ask anyone in his office; they'll all give the same answer:  "I don't know. Ask the Server Guy."
 
Who exactly is the Server Guy?  To an SMB company like David's, Server Guy is the mysterious geek who crawls into a back-office closet clutching two cables and a USB thumbdrive, and emerges fifteen minutes later to declare that email is working again. Server Guy brings IT to the small- and mid-sized businesses that either have a one-man IT department or depend on part-time or contractor help.


Tam Harbert notes that more and more of these Server Guys are approaching Ingram Micro and asking whether blade servers might be right for the 20-to-100 employee, server-closet crowd.  And, Tam says, increasingly the answer is "Yes."


Why?  Partly, Tam notes, it's the potential for saving money from their smaller footprint  and higher power efficiency. But Arlin Sorensen, president of Heartland Technology Solutions and a Server Guy himself, nails an even bigger reason:


"A lot of our customers aren't equipped to handle the number of servers that they end up having...When you're dealing with 15 different stand-alone servers that were bought at 15 different times, then you have to deal with 15 different experiences in how those things are going to act. The beauty of blades is that the servers all respond and react the same way."


Blades make Server Guy's job EASIER. When you have a jumble of servers, switches, and storage wired together with a rat's nest of connections, the only cross-platform, intuitive management tool you have is the main circuit breaker's on/off switch.  Blades change all that -- they give Server Guy a way to maintain servers in a quick, consistent, predictable manner.


Consider all the things Server Guy might be called upon to know.  (Martin at BladeWatch did just that recently -- and to me his list is both accurate and daunting.)  


But with tools like BladeSystem Onboard Administrator, Server Guy now has graphical, point-and-click tools that let him manage the IT hardware without two hundred hours of classroom training and three expensive industry certifications.  Intuitive tools mean Server Guy is more productive.


How?  Well, let's say my brother-in-law calls Server Guy and says "it sure seems hot in the server closet."  Since most servers have temperature sensors in them, Server Guy could download a bundle of User's Guides, drive down to the office, figure out what settings he needs on a serial cable, plug it into each system, and -- if he remembers all the login passwords -- fetch the temperature readings on each piece of equipment.  He could compare those to the tech specs on the hardware makers' web sites, then finally report to my brother-in-law that everything's OK.


Or...he could simply pull up a browser and remotely look at the BladeSystem Onboard Administrator status screen:


 


No manual needed.  The green bar  obviously means things are OK. There are little graphical orange and red hash marks -- nicely labeled with temperatures, and "Caution" and "Critical" indicators -- showing how much hotter it would need to be before there's a problem. 
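If Server Guy prefers a command line, the same readings are available from the OA's CLI over SSH. A quick sketch with a made-up address -- check the CLI guide for the exact commands supported by your firmware:

   ssh Administrator@oa-closet.example.com "SHOW ENCLOSURE TEMP"

One command, and the enclosure and blade temperature sensors report back along with their Caution and Critical thresholds.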


The BladeSystem team spends lots of their time developing tools like this, so Server Guy only has to spend a tiny amount of time using them.


Server Guy, if you're out there, let me -- or some of our colleagues -- know what other help you need.   Also, call my brother-in-law.  He says the Internet is broken again, and the "any" key is missing from his keyboard.


 

Ever had the EEPROM fail on a blade enclosure?

I had a Blade Specialist ask what happens when the EEPROM fails on the BladeSystem Enclosure.


First of all, just what does the EEPROM do for the blade enclosure?

The c-Class architecture document says the following about the midplane:

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00810839/c00810839.pdf
"The only active device is an Electrically Erasable Programmable Read-Only Memory (EEPROM), which the Onboard Administrator uses to acquire information such as the midplane serial number. If this device were to fail, it would not affect the signaling functionality of the NonStop signal midplane."

Question:
Does this mean the blade servers and interconnect devices will have no problem and keep running even if the EEPROM fails and the information on this device is completely lost?

Answer:
When and if the EEPROM fails, it fails "safe" and results in no loss of functionality. It is not in the signal path of anything. It is 'active' in the sense that it is a powered device, but its activities are completely passive: basically, it holds information that the OA can query, such as the midplane serial number mentioned above and some Field Replaceable Unit (FRU) information.

Does the OA keep working fine?  YES
Does the blade server boot normally?  YES
Does iLO access work fine?  YES
Is the Virtual Connect profile still applicable?  YES

Blade Hacks: Capture all your configuration settings at once

Learn how to capture all of a c7000 enclosure's and its server blades' configuration settings -- things like firmware revisions and switch settings, similar to what the HP-UX "sysinfo" script would collect.


Get the BladeSystem CLI guide and check out the "show all" command and others on pages 41-48.
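As a minimal sketch, assuming SSH access to the OA is enabled (the address and account below are placeholders), you can capture that output to a file in one pass:

   ssh Administrator@oa-enclosure1.example.com "SHOW ALL" > enclosure1-config.txt

The resulting text file records the enclosure, blade, and interconnect settings, which makes a handy before-and-after snapshot around firmware updates.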


 

Blade Hack: All power on and off

I saw this question come across my desk last night and thought it might be a helpful tip to some of you out there.  Not only is it useful, it's really cool that we can script it, and even script delays between servers to balance the power during startup.

QUESTION:

A customer needs to shut down two-thirds of their blade servers around 9pm and power them back on around 6am, every single day. Do we have a way to schedule the Onboard Administrator (OA)/iLO to power blade servers on and off as needed?



ANSWER:

You can easily shut down all the blade servers by using the OA CLI command poweroff server all. This performs a graceful shutdown on all the servers, provided the Windows security setting that allows a remote shutdown is enabled. If needed, this can be followed 15 minutes later by poweroff server all force, which takes down all the blades in a few seconds (like pressing the power on/off button for more than 5 seconds). This is fully scriptable if needed.


Power-on can easily be done with the poweron server all command. With OA firmware 2.20 or later, you can also schedule a delay between servers to balance the power during startup.
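One way to put this on a schedule is a cron job on any always-on Linux box that can SSH into the OA. A rough sketch, assuming key-based SSH to the OA is set up and using placeholder names; to stop only two-thirds of the blades, substitute the specific bay numbers from the CLI guide in place of "all":

   # /etc/cron.d/blade-power -- graceful shutdown at 9pm, power-on at 6am
   0 21 * * *  admin  ssh Administrator@oa1.example.com "POWEROFF SERVER ALL"
   0 6  * * *  admin  ssh Administrator@oa1.example.com "POWERON SERVER ALL"

The OA commands are the same ones described above; cron just supplies the timing.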
