Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Displaying articles for: July 2009

Primer on iSCSI and the permutations on BladeSystem

As iSCSI becomes more popular as an interconnect technology for storage and other devices, we have received a lot of questions about how, why, and what works with iSCSI and the components of HP BladeSystem.

So, Dan Bowers and Ed McGee put together a primer to help answer the majority of those questions and cover the configurations you may have or want to implement.

I have posted the primer to this community, so download it and take a look. Let us know if it helps, hinders, or confuses, and what else we can help with in this iSCSI area.



A Blade Kill


In April, SPEC updated their server power benchmark to allow results for blade servers, and now HP has released its first blade result. 
 
HP just published a SPECpower_ssj2008 result for the HP BL280c G6 server blade.  It's the top score for a blade.   I talked to Kari Kelley, the BL280c product manager, and she called it a "solid kill."

The kill comment made more sense after Kari explained she played division-1 volleyball while at Texas A&M. "A kill is a good thing," she assured me.



SPEC didn't open up that benchmark to blades until their SPECpower_ssj2008 V1.10 came out this spring.  For a multi-node system (like a blade infrastructure), the benchmark has to be run with a full set of servers -- so a c7000 enclosure filled with 16 BL280c servers in HP's case. Power is measured at the AC line voltage source, while the test itself monitors throughput and power usage at various performance levels, including periods when the servers are 100% idle.  HP has written this whitepaper that talks more about the benchmark methodology.
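
For the curious, here's a minimal sketch of how the overall metric is put together: the benchmark sums throughput (ssj_ops) across the target load levels and divides by the summed average power across those levels plus active idle. The numbers below are illustrative placeholders, not figures from any published result.

    # Sketch of the SPECpower_ssj2008 overall metric calculation.
    def overall_ssj_ops_per_watt(load_levels, active_idle_watts):
        """load_levels: list of (ssj_ops, avg_watts) pairs for the 100%..10% targets."""
        total_ops = sum(ops for ops, _ in load_levels)
        total_watts = sum(watts for _, watts in load_levels) + active_idle_watts
        return total_ops / total_watts

    # Hypothetical, illustrative inputs only -- not measured data.
    levels = [(1000000, 300.0), (900000, 280.0), (800000, 260.0)]
    print(round(overall_ssj_ops_per_watt(levels, 150.0)))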

For the BL280c G6 server, the overall result was 1877 ssj_ops/watt.   Full details here.  For full details about kills, along with sets and spikes, you'll have to talk to Kari.

SPEC and the benchmark name SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation. The latest SPECpower_ssj2008 benchmark results are available on the SPEC website at http://www.spec.org/power_ssj2008.

A lesson in economics – Price Transparency

It doesn’t happen often, but here is one of those situations in life where that dismal science, economics, can be useful and fun.


First a little definition ….


Price transparency is defined as a situation in which both buyer and seller know what products, services or capital assets are available and at what price.


Now, price transparency is a way of life in the business of standards-based x86 servers. This even includes blade servers.


Go to any of the major vendors’ web sites such as Dell.com, IBM.com, HP.com or even resellers such as CDW.com and you can freely look up at least each vendor’s list price. You can bet that we vendors do this all the time to ensure we are ‘competitive’ with each other. And so do all of our customers. In fact, our customers know our list price and our competitors’ before any of us set foot in a customer’s place. That helps keep us vendors on our toes to deliver better products while keeping prices lower for customers. The x86 market is a living example of price transparency.


Until you get to the newest member of the x86 server community …


Cisco claims on their web site that their recently announced Unified Computing System is “Reducing total cost of ownership at the platform, site, and organizational levels”.  The glaring omission is the cost itself. One would presume this includes a competitively priced set of compute, storage, enclosure, interconnect, management tool, software licenses and support components. But aside from Cisco’s word for it, this cannot be verified.


This is because Cisco, unlike the other server vendors, does not publish their UCS list price on their web site. Nor, it seems, do their resellers.  This makes it difficult for customers (or competitors) to independently match features to prices in a standards-based industry.


Given Cisco’s traditionally high margins on network plumbing gear (65% vs. the 20% margins of x86 servers), vendors, analysts and customers could be forgiven for suspecting high prices for UCS. In fact, some of Cisco’s UCS prices might need to be three times as high as industry averages to sustain that business model.


So come on, how about a call for the free market and industry standards?  Are we all about the same price? Is Cisco really cheaper?


Michael P. Kendall

Managing HP servers through firewalls with Insight Software


I came across an interesting white paper that identifies possible ways of managing HP servers with HP Systems Insight Manager and Insight Software deployed in an area of the network that is considered more secure than the standard production network. This is not a best-practices document; it provides information that can enable system administrators to create management solutions appropriate for specific computing environments.


Here is the link: ftp://ftp.hp.com/pub/c-products/servers/management/hpsim/hpsim-53-managing-firewalls.pdf


Installing and configuring HP-related components in Essential Business Server on an HP BladeSystem c3000 chassis

To fully use the features in the c3000 hardware, you must be able to add and configure software in EBS. This updated customer-viewable document covers installation of SNMP, the Support Pack, and Management Packs, as well as how to add HP updates into Microsoft® System Center Essentials. See the white paper below.



http://dccappshares01.austin.hp.com/SALES_LIBRARY-PRO/CONCENTRA/Autofed%20Content/UCM/UCM-Concentra/Pub/ucm4AA2-6438ENW/4AA2-6438ENW.pdf



Pity the "Server Guy"

My brother-in-law David manages a mid-sized construction business, and owns seven or eight servers to handle the data.   But don't bother asking him how much data they hold, or what  processors they use. In fact, it's pointless to ask anyone in his office; they'll all give the same answer:  "I don't know. Ask the Server Guy."
 
Who exactly is the Server Guy?  To an SMB company like David's, Server Guy is the mysterious geek who crawls into a back-office closet clutching two cables and a USB thumbdrive, and emerges fifteen minutes later to declare that email is working again. Server Guy brings IT to the small- and mid-size businesses who either have a 1-man IT department, or depend on part-time or contractor help.


Tam Harbert notes that more and more of these Server Guys are approaching Ingram Micro and asking whether blade servers might be right for the 20-to-100 employee, server-closet crowd.  And, Tam says, increasingly the answer is "Yes."


Why?  Partly, Tam notes, it's the potential for saving money from their smaller footprint  and higher power efficiency. But Arlin Sorensen, president of Heartland Technology Solutions and a Server Guy himself, nails an even bigger reason:


"A lot of our customers aren't equipped to handle the number of servers that they end up having...When you're dealing with 15 different stand-alone servers that were bought at 15 different times, then you have to deal with 15 different experiences in how those things are going to act. The beauty of blades is that the servers all respond and react the same way."


Blades make Server Guy's job EASIER. When you have a jumble of servers, switches, and storage wired together with a rat's nest of connections, the only cross-platform, intuitive management tool you have is the main circuit breaker's on/off switch.  Blades change all that -- they give Server Guy a way to maintain servers in a quick, consistent, predictable manner.


Consider all the things Server Guy might be called upon to know.  (Martin at BladeWatch did just that recently -- and to me his list is both accurate and daunting.)  


But with tools like BladeSystem Onboard Administrator, Server Guy now has graphical, point-and-click tools that let him manage the IT hardware without two hundred hours of classroom training and three expensive industry certifications.  Intuitive tools mean Server Guy is more productive.


How?  Well, let's say my brother-in-law calls Server Guy and says "it sure seems hot in the server closet."  Since most servers have temperature sensors in them, Server Guy could download a bundle of User's Guides, drive down to the office, figure out what settings he needs on a serial cable, plug it into each system, and -- if he remembers all the login passwords -- fetch the temperature readings on each piece of equipment.  He could compare those to the tech specs on the hardware makers' web sites, then finally report to my brother-in-law that everything's OK.


Or...he could simply pull up a browser and remotely look at the BladeSystem Onboard Administrator status screen:


 


No manual needed.  The green bar  obviously means things are OK. There are little graphical orange and red hash marks -- nicely labeled with temperatures, and "Caution" and "Critical" indicators -- showing how much hotter it would need to be before there's a problem. 


The BladeSystem team spends lots of their time developing tools like this, so Server Guy only has to spend a tiny amount of time using them.


Server Guy, if you're out there, let me -- or some of our colleagues -- know what other help you need.   Also, call my brother-in-law.  He says the Internet is broken again, and the "any" key is missing from his keyboard.


 

Nehalem and Windows 2003: Why 6 x 1GB = 4GB

On Ed Groden's post in the Intel Server Room, Mario Valetti asks about memory configurations for an Intel Xeon 5500-based server running a 32-bit OS.  Ed gives a couple of good rules of thumb for what memory to use, including his rule #4, the ultra-important suggestion to populate DIMMs in groups of 3 for each processor.  However, the rules break down if you're stuck running Windows Server 2003 Standard edition.


Mario is confronting this issue on a new Nehalem-based server (a ProLiant DL380 G6 running Intel Xeon 5500 series processors, to be precise), but actually the problem isn't new to the x86 space.  Multiprocessor AMD Opteron-based servers have had a NUMA architecture for a few years now, resulting in similar problems on Windows 2003.  Nehalem makes it notably worse, though...and it's because of that pesky rule about "groups of 3".


I'll explain the exact problem, then give a couple of solutions.


Since Win2003 Standard only addresses 4GB of memory, and the smallest DIMM available on a DL380 G6 is 1GB, your first instinct on a 2-processor server is to use four 1GB DIMMs, like this:


By default, the BIOS on an HP DL380 G6 server (and any of the 2-processor Nehalem-based server blades) builds a memory map by placing all the memory attached to processor #1 first, followed by all the memory on processor #2.  So in our 4-DIMM scenario, the memory would look like this:
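
As a rough sketch of that map (assuming three of the four DIMMs sit on processor #1's three channels and one on processor #2, which is the population implied by the bandwidth gap described below):

    # Rough model of the default (NUMA-style) memory map with four 1GB DIMMs
    # split 3 + 1 across the two sockets -- an illustrative assumption, not
    # the only possible population.
    GB = 1024 ** 3

    memory_map = [
        # (start, end, owning processor, channels interleaved)
        (0 * GB, 3 * GB, "processor #1", 3),  # "green" region: 3-way channel interleave
        (3 * GB, 4 * GB, "processor #2", 1),  # "blue" region: single channel
    ]

    for start, end, cpu, channels in memory_map:
        print("%dGB-%dGB  local to %s, %d channel(s)"
              % (start // GB, end // GB, cpu, channels))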



Because of the "groups of 3" rule -- which is based on Nehalem processors' ability to interleave memory across their three memory channels -- memory bandwidth in the green region will be about 3x the bandwidth in the blue region.  I'll call this the "bandwidth" problem.


Also, memory accesses from processor #2 to the green region will have about twice the latency of accesses to that same region from processor #1.  This is the "latency" problem.


These problems stem from two limitations of Windows 2003 Standard: it maxes out at 4GB, and it doesn't grok NUMA -- it doesn't take memory location into account when assigning threads to CPUs.  What you'll see in our 4-DIMM scenario above is that threads will sometimes have dramatically different memory bandwidth and latency, resulting in very uneven performance.


Two ideas might jump to your mind on how to fix this:


1. You could tell the BIOS to build a memory map by assigning alternate slices of memory to different CPUs.  HP BIOS actually lets you do this; it's called “Node Interleaving”, and can be selected in RBSU as an alternative to "NUMA" style assignment.  (This "interleaving" shouldn't be confused with the memory channel interleaving done by each processor).  With Node Interleaving enabled, the 4GB of memory will be evenly split across both processors:


 


 Why won't this fix things?  Well, you've broken the "groups of 3" rule, so your bandwidth problem becomes...well, broader.   100% of the time, your memory bandwidth will be only 2/3 of what it could be.  Plus, you've still got the latency problem, and it's even more unpredictable: about half the time, a thread will get assigned to the "wrong" processor, slowing that thread down.
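
To put rough numbers on that (back-of-the-envelope only, assuming the even two-DIMMs-per-processor split that the interleaved map implies):

    # Back-of-the-envelope arithmetic for the Node Interleaving case.
    # Assumption: two 1GB DIMMs per processor, so only 2 of 3 channels are used.
    channels_used, channels_total = 2, 3
    bandwidth_fraction = channels_used / float(channels_total)   # ~67% of peak

    # With addresses interleaved across both nodes, roughly half of all memory
    # accesses land on the remote processor's DIMMs, whichever CPU runs the thread.
    remote_fraction = 0.5

    print("bandwidth: about %.0f%% of the 3-channel peak" % (bandwidth_fraction * 100))
    print("remote (higher-latency) accesses: about %.0f%%" % (remote_fraction * 100))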


2.  You could install six 1GB DIMMs in the server, like this:



The OS will only see 4GB of memory (actually a little less, because of the so-called "Memory Hole" issue).


Why won't this fix things?  Well, it'll fix the bandwidth issue, since both the blue and green regions now have nice, fast bandwidth.  However, the latency problem isn't solved: Windows still won't assign threads to the "right" processor.  You'll end up with a thread on processor #1 wanting memory that's attached to processor #2, and vice versa.  The result?  At any given time, about 1/3 of your threads will suffer from poor memory latency.
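
This is also where the "6 x 1GB = 4GB" in the title comes from: the OS's 4GB ceiling simply cuts the top off a 6GB physical map. A quick sketch (assuming three 1GB DIMMs per processor, mapped processor #1 first):

    # Six 1GB DIMMs installed (3 per processor), but 32-bit Windows 2003
    # Standard only addresses 4GB, so the map is clipped at the 4GB line.
    GB = 1024 ** 3
    installed = [("processor #1", 3 * GB), ("processor #2", 3 * GB)]  # 6GB physical
    os_limit = 4 * GB                                                 # Win2003 Standard ceiling

    offset = 0
    for cpu, size in installed:
        usable = max(0, min(size, os_limit - offset))  # portion below the 4GB line
        offset += size
        print("%s: %dGB visible to the OS" % (cpu, usable // GB))
    # -> processor #1: 3GB, processor #2: 1GB (less the "memory hole" noted above)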


So how can you fix both the bandwidth and latency problems?


You can disable NUMA the old fashioned way: Go with a 1-processor configuration.    Now, all threads will get the same high bandwidth and low latency.  You can use three 2GB DIMMs and still get your "full" 4GB.   Obviously this only works if your applications aren't CPU bound.  (I talked with Mario, and luckily he believes he'll be able to go this route.)


Another way to fix it is to use a NUMA-aware OS.  How does that help?  When the BIOS builds the memory map, it passes data structures called Static Resource Affinity Tables (SRAT) to the OS, which describe which processor is attached to which memory region.  A NUMA-aware OS can then use that info to decide where to assign threads, helping get rid of the latency problem.  Windows 2008 can handle that, as can the Enterprise editions of 2003.
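
If you're curious what a NUMA-aware Windows actually exposes once it has digested those SRAT tables, here's a small Windows-only sketch using the documented kernel32 NUMA calls -- illustrative only, not a supported tool:

    # Ask a NUMA-aware Windows for the topology it built from the SRAT.
    import ctypes
    import os
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32

    highest_node = wintypes.ULONG(0)
    kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest_node))
    print("NUMA nodes reported by the OS: %d" % (highest_node.value + 1))

    # Map each logical processor to its NUMA node (i.e. to its local memory).
    for proc in range(os.cpu_count()):
        node = ctypes.c_ubyte(0)
        if kernel32.GetNumaProcessorNode(ctypes.c_ubyte(proc), ctypes.byref(node)):
            print("logical CPU %d -> NUMA node %d" % (proc, node.value))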


You'll still want to follow the "groups of 3" rule, though!


 
