Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Configuration Matters - What Affects Server Power Consumption: Part 1



Following on from my first post, I'll take a look at the effect hardware configuration has on the power consumption of the enclosure.


To do this I went into the Blade Power Sizer and configured two equivalent systems. To keep the example simple, I held the enclosure configuration constant with just 2 x Virtual Connect 1/10 Ethernet modules and 2 x Virtual Connect Fibre Channel modules.


The blade configuration was a BL460c G1 with 2 x 2.66GHz CPUs, 16GB RAM, 2 x 1Gbit Ethernet, 2 x 4Gbit Fibre Channel, and 2 x 72GB 10K SAS drives.


Why did I pick the BL460c G1? To be honest, the actual server doesn't matter; what I'm trying to show is that hardware configuration can have a very significant effect on enclosure power consumption. The nice thing about the BL460c G1 for this purpose is that it shows this really clearly.

                         Configuration A                        Configuration B
System                   BL460cG1 x 16                          BL460cG1 x 16
CPU                      E5430 2.66GHz                          L5430 2.66GHz
Memory                   8 x 2GB FB-DIMMs                       4 x 4GB LP FB-DIMMs
Base Ethernet            1Gbit Dual-Port Multi-Function         1Gbit Dual-Port Multi-Function
Additional Ethernets     None                                   None
Fibre Channel Mezzanine  QLogic 4Gbit                           QLogic 4Gbit
Drives                   2 x 72GB 10K SAS                       2 x 72GB 10K SAS
Enclosure                c7000                                  c7000
Ethernet Switches        Virtual Connect 1/10                   Virtual Connect 1/10
Fibre Channel Switches   Virtual Connect 4Gbit Fibre Channel    Virtual Connect 4Gbit Fibre Channel
Fans                     10                                     10
Power Supply             HP 2250W x 6                           HP 2400W HE x 4

Results
Idle Power               3698W                                  2900W
100% Load                5855W                                  4238W


The difference between the two configurations is 798W at idle and 1,617W at high load, which is a huge difference.


Where is most of the power difference coming from? There are three differences between the configurations:



  1. CPU - E5430 (80W TDP) versus L5430 (50W TDP)

  2. Memory - 8 x 2GB FB-DIMMs versus 4 x 4GB Low Power FB-DIMMs

  3. Power Supply - HP 2250W versus HP 2400W High Efficiency


The power supply is worth about 200W (25%) of the difference at idle and 300W (18%) at full load for this configuration, so it's a significant proportion of the difference at idle. At high loads, though, the majority of the difference between the two configurations comes from the CPU and memory. A standard FB-DIMM draws approximately 10W per DIMM, so the difference between 4 and 8 physical DIMMs is roughly 40W per server; in addition, a Low Power DIMM uses 2W-3W less than a standard DIMM.
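
If you want to sanity-check where that gap comes from, here's a quick back-of-envelope calculation using only the rough figures above. Keep in mind that TDP is a ceiling rather than typical draw, so this deliberately overestimates the CPU portion; it's a sketch, not a measurement.

```python
# Back-of-envelope estimate of the CPU + memory share of the full-load gap,
# using the approximate figures quoted in the post. TDP is an upper bound
# on CPU draw, so this deliberately overestimates.

SERVERS = 16                 # BL460c G1 blades per c7000 enclosure
CPUS_PER_SERVER = 2

cpu_delta_w  = (80 - 50) * CPUS_PER_SERVER   # E5430 (80W TDP) vs L5430 (50W TDP)
dimm_count_w = (8 - 4) * 10                  # ~10W per standard FB-DIMM
lp_dimm_w    = 4 * 2.5                       # LP DIMMs save roughly 2-3W each

per_server_w = cpu_delta_w + dimm_count_w + lp_dimm_w
print(f"~{per_server_w:.0f}W per server, ~{per_server_w * SERVERS:.0f}W per enclosure")
# Prints ~110W per server and ~1760W per enclosure, versus the ~1300W of the
# measured 1617W gap left after the ~300W power-supply difference; the
# overshoot is a reminder that real CPU draw sits below TDP.
```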


So what can I take from this example? 



  • System configuration matters. A lot.

  • At high loads, server power consumption is the main factor.

  • At low server loads the enclosure becomes a larger proportion of power consumption.


So what practical steps can I take to reduce power consumption?



  • Use the lowest-power processor that is cost effective.

  • Use the smallest number of the largest physical DIMMs that is practical and cost effective.

  • Use the highest-efficiency power supply available.


Comments, as always, are welcome. Let me know where you'd like me to go as I continue this series.


Part 2 of this series is Applications Matter


 

Least Favourite Question

I'm sitting here thinking about writing my first blog post and trying to come up with something to say. So I figured I'd start by trying to answer one of my least favourite questions (and before you all start correcting my spelling, I'm not originally from the USA) and explain why it's so hard to answer.


The question: "So how much power does a blade enclosure use?"


The answer: "It depends."


What does it depend on? Everything.


How many blades, and which blades, do you have in the enclosure? Which CPUs, and are they the 50W, 60W, 80W, 95W or 120W versions? How many DIMMs, and what size and rank are they? Which mezzanines are installed? Which interconnects? How many fans? What's the ambient temperature? What applications are running, and how heavily loaded are they?


Even if you gave me all this information, I still couldn't answer with any degree of accuracy; those final two items, applications and application load, have such a huge impact that it's almost impossible to give a single right answer. The best I could do would be to give the maximum and minimum power usage for that configuration and say you'll be somewhere in between those two values.
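
To make the "somewhere in between those two values" answer a little more concrete, here's a minimal sketch of the idea. This is not the Blade Power Sizer, and every per-component figure in it is a made-up placeholder rather than an HP specification; a real sizing tool also models the enclosure, fans and power supply efficiency.

```python
# Illustrative min/max power envelope for one enclosure configuration.
# Every per-component figure here is a hypothetical placeholder, not an HP
# specification; a real sizing tool also models fans, PSU efficiency, etc.

components = {
    # name: (approx. idle W per unit, approx. max W per unit, quantity)
    "blade baseboard": (30, 60, 16),
    "CPU":             (15, 80, 32),
    "FB-DIMM":         (8, 10, 64),
    "10K SAS drive":   (8, 12, 32),
}

idle_w = sum(idle * qty for idle, _, qty in components.values())
max_w  = sum(peak * qty for _, peak, qty in components.values())

print(f"Expect somewhere between {idle_w}W and {max_w}W; where you land "
      "depends on the applications and how heavily they are loaded.")
```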


In the next few posts I'll go into some detail about this, starting with the effect hardware configuration has on power consumption.

Why we created Thermal Logic technology

I know a lot of you think technology marketing is full of crap <<or insert your own colorful descriptor>>.  I know we can sound that way. It's one of my pet peeves too.


I also know that some of you may hear a term like "Thermal Logic" and your "marketing crap" sirens start to go off. So today I wanted to take a moment to explain, in plain English, the concept of Thermal Logic technology and to show you that it's not a make-believe idea, but a practical approach that HP is taking to address your bigger power and cooling issues in the data center.


It's a very simple idea really: make the data center more energy efficient, simply by making it more intelligent.


That's it.  No green-ovation, grandiose claims or highbrow vision, just a statement of how the power and cooling problem must, and will, be addressed by HP.


Here's where that came from.  Back in 2003-2006 (even earlier in the mind of Chandrakant Patel in HP Labs), when a lot of our current power and cooling technology was being created in the lab, intelligence was a common theme.  Whether it was smarter fans, smarter power supplies, smarter drives, smarter CRACs, smarter reporting and metering, or smarter whatever, putting intelligence behind the problem of power usage came up again and again.


We described the problem as "you can't manage what you can't measure".  If every component, system and data center understood its own need for energy as well as the total supply of energy, it could take action to conserve every watt of power and every gram of cold air.  What we find is that in most cases, every component, system and data center allocates more energy than it really needs and often wastes energy that isn't doing effective work.


Now, back to today.


Every, I repeat EVERY, technology vendor in the world today is building systems with more efficient parts.  Big deal.  This is basic 'bread and water' today and quite honestly, if your vendor isn't doing everything they can to squeeze every watt out of the basic components, you need to look elsewhere.  HP, IBM and Dell all have access to the latest chips, drives, DIMMs, etc. I imagine Cisco is even figuring out who to call these days. 


Every vendor is also able to show power savings with virtual machines.  Big deal.  Taking applications off a bunch of outdated, power-hungry servers and putting them on fewer, more efficient ones saves a bunch of energy.  Again, is there a vendor that can't do that too?


99% of the claims vendors make to differentiate themselves and assert "power efficiency" superiority are based on these two concepts: the latest systems with the latest, most efficient components versus last year's model, and comparisons based on using virtual machines. Even worse, it's done with a straight face and backed up with stacked benchmarks comparing today's lab-queen design against the last generation, which just isn't helpful to anyone.


The data center power problem is so much bigger than a benchmark at any one point in time.  Power consumption happens every second of every day, over years.  I know that's a lot of variables to consider: temperature, humidity, workload, usage and growth.  That's why intelligence is so darn important.  It's a big, complex problem.  I only wish I had a magic benchmark with a magic number that could prove my claim definitively in every circumstance.  I don't.  Nor does anyone else.


Only HP, I repeat ONLY HP, is inventing energy-aware components, systems and data centers.  And yes, we call it Thermal Logic technology.  Last week, the ProLiant team announced their next-generation servers and talked a lot about the concept of a 'sea of sensors'.  Those sensors are the starting point for collecting the data needed.


Here's another example to make this real for you: Dynamic Power Capping.  I shared a demonstration with you in the past.  Now that you're getting our unique point of view, I'd like to share the technology details behind it in this new whitepaper, "HP Power Capping and HP Dynamic Power Capping for ProLiant servers".


Read this and you'll quickly see that Thermal Logic isn't IT marketing crap.  It's a real answer to the real challenges every data center in the world is facing: the rising demand for, and cost of, power and cooling.

Games vendors play . . . with power efficiency claims

These days, server power efficiency is top of mind for everyone.  Five years ago, most customers had no idea how much power a server used.  Now everyone knows: it's a lot.  In some cases, customers are making power consumption the primary criterion for vendor selection.


So why is it so dang hard to get a straight story on exactly what options are the most power efficient?


In our experience with power measurements, we've found lots of ways the results can get skewed, whether intentionally or accidentally. We honestly try to avoid them, but here are some of the dirty little secrets a lot of vendors don't want you to know about their power-testing results.  Consider these red flags the next time someone spends lots of money in the Wall Street Journal to post a big claim, with lots of fine print, about power savings.


Lab Queens


One easy way for vendors to skew power results is to cherry-pick low power components.  Processors (even those in the same power grade) and memory DIMMs tend to consume significant power and can have wide variances in power consumed from one part to another. 


We call units built by cherry-picking components "Lab Queens".


Generally, these do not represent what a customer might actually be able to purchase.  Here's an example where we tried to avoid this scenario: the systems compared in this competitive power report on our website use the exact same processors and DIMMs on all units tested, thereby eliminating differences due to component variances.  It would have been easy to create a Lab Queen and artificially bump the results higher.


Configuration Errors


Peak power efficiency for a given system always requires the "best" configuration.  Memory power is mostly a function of DIMM count, not capacity.  Therefore, to achieve minimum power, use the minimum number of DIMMs needed to reach the desired memory capacity.
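
As a small illustration of the DIMM-count point, here's a rough comparison of two ways to reach 16GB, using the approximate 10W-per-FB-DIMM figure from the earlier post (an approximation, not a specification):

```python
# Memory power tracks DIMM count more than capacity, so fewer, larger DIMMs
# reaching the same total capacity draw less. ~10W per FB-DIMM is the rough
# figure quoted earlier in this blog; treat it as an approximation.

WATTS_PER_DIMM = 10
TARGET_GB = 16

for dimm_size_gb in (2, 4):                      # 8 x 2GB versus 4 x 4GB
    count = TARGET_GB // dimm_size_gb
    print(f"{count} x {dimm_size_gb}GB -> ~{count * WATTS_PER_DIMM}W for memory")
# 8 x 2GB -> ~80W, 4 x 4GB -> ~40W: same capacity, roughly half the power.
```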


If you see a comparison where the DIMM count is different or not mentioned, you might notice the strong odor of dead fish. 


On the infrastructure side of the equation, a blade system can be configured with varying numbers of fans and power supplies, and obviously some configurations will produce better power efficiency than others.  For instance, 4 power supplies run more efficiently than 6, assuming that 2+2 power redundancy delivers adequate power for a given configuration.  The most power-efficient configuration, though, won't always be the one a competitor uses when publishing a comparison.  Honestly, there are such big differences here that an apples-to-apples comparison is tough.  Only HP BladeSystem dynamically throttles fan speeds based on demand across the different cooling zones in an enclosure.
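
To see why fewer supplies can come out ahead, here's a hedged sketch. The efficiency curve below is a generic, made-up shape (real curves are published per power supply model), but it captures the idea: spreading the same enclosure draw across 4 supplies instead of 6 pushes each supply closer to its efficient operating region.

```python
# Sketch of why 4 supplies sharing a load can beat 6. The efficiency curve
# is a generic, made-up shape (real curves come from the PSU data sheet);
# the point is each supply runs closer to its efficient load region.

PSU_RATING_W = 2400          # per-supply output rating (example value)
ENCLOSURE_DRAW_W = 4000      # DC output the enclosure actually needs (example)

def efficiency(load_fraction: float) -> float:
    """Made-up curve: poor at light load, ~93% from about half load up."""
    return min(0.93, 0.80 + 0.26 * load_fraction)

for n_supplies in (6, 4):
    share = ENCLOSURE_DRAW_W / n_supplies        # output carried per supply
    eff = efficiency(share / PSU_RATING_W)
    wall_w = ENCLOSURE_DRAW_W / eff              # AC input drawn from the wall
    print(f"{n_supplies} supplies: {share:.0f}W each, "
          f"{eff:.0%} efficient, ~{wall_w:.0f}W at the wall")
```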


Environmental Differences


Voltage level and room temperature can cause variances in power supply efficiency and in the speed the system fans must run.  Consequently, when comparing servers, it is important to verify that these parameters are held constant.  I once heard a story of an engineer setting up a box fan to blow air on his servers to try to reduce the fan power of the blades.


Obviously he wasn't planning on counting the power from the box fan! 


Remarkably, there are published benchmarks that allow for external cooling to be deployed and not counted as a part of the system power.


Gaming the Benchmark


Any benchmark a vendor publishes is necessarily narrow in what it measures.  When looking at a published benchmark, consider how closely it mimics your applications.  If the benchmark is similar to your applications, the results might be relevant to you.  If not, you might do best to just ignore them.  Also, some benchmarks are very loose on configuration requirements, making it very easy to "game" them and making the results all but useless.  One benchmark that is broadly published in the industry falls into this category.


Where to go from here?


At this point, you are probably thinking, "Wow! This is much more complicated than I thought!"  And you are right.  Benchmarking is very tricky business, even when you are trying to be fair.  The best results you can get are the ones you measure yourself.  In my next blog post, I'll comment on some of the techniques and challenges of measuring power yourself.


