Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Displaying articles for: June 2009

New review of HP blades for small sites


 


Recently we shipped a BladeSystem c3000 (or 'Shorty' to those in the know) to Dave Mitchell at IT Pro for a hands-on review.  Inside we added the latest virtualization blades, virtual storage with HP LeftHand, and some other new gadgets to show off a snazzy virtual infrastructure that's drop-dead simple.  Among the things Dave liked:



  • vs. IBM, the c3000 was built like a tank, with better quality

  • HP blade management sets the standard

  • Surprisingly quiet (IBM marketing must have called him)

  • LeftHand integration

  • He called Shorty "friendly during testing".  Ahhh!


>> Read what else Dave had to say.

TOP500 Supercomputer list for June 2009

Since last November, systems based on HP server blades have picked up another five spots on this twice-yearly list.  On the latest list, I count 206 of the 500 fastest computer systems in the world -- over 41% -- as being powered by HP BladeSystem.


TOP500 List - June 2009


 


  Some of the highlights really show how far high-performance systems have come -- and how quickly they change:



  • Quad-Core processors are used in about 75% of the systems.  Only 4 systems still use single-core processors.

  • The total combined performance of all 500 systems has grown 33% from just 6 months ago.

  • The slowest system on this June 2009 list would have placed right in the middle -- number 274 -- of last November's list.

From #HPTF: The future is coming into focus

And BladeSystem Matrix is your lens.  If you want to understand how the convergence of the racked, stacked and wired world changes the rules of the game, start with Matrix.


We've talked a lot about convergence over the last year and how it's more than just consolidating stuff. It's the fusion of infrastructure, processes, physical with virtual, facilities . . . everything . . . to create something completely new. That's why in a converged world, things start to look different.


Take for instance this picture.  What is the "face of Matrix"?



Did you say the rack on the left (and the stuff inside) or did you say the screen on the right (Matrix Orchestration Environment)?


As I spoke with Esther and Brad at the Matrix booth, I was curious what customers saw.  No one wanted to see inside the server, the enclosure, the switch.  It didn't matter.  That's just the pool of resources.  The "WOW" came from how easy it was to design the architecture, do capacity planning, combine virtual and physical, set up disaster recovery and automate provisioning of complex infrastructure.  Configuring and provisioning the network, storage, and compute "just happen" within the Matrix.  Right on!  One infrastructure, any workload, on the fly.  That's the future the private cloud delivers to your data center.


I was thrilled to hear that so many folks get it.  Connect the dots a little further . . . in a POD, from the cloud, in the rain, on a train, in a box, with a fox - you can have your infrastructure any way you want it.


In that future, you won't care about the stuff inside; only the services delivered, what they cost and how fast you can get them.  The future I see coming into focus is the converged infrastructure (melding the best of HP: NonStop, SuperDome, XP, EVA, LeftHand, ProLiant, ProCurve, and BladeSystem) all controlled and interconnected as one, inside that "matrix cloud".  You decide how to carve up the best resources for your workloads and data, and Matrix does the rest.

From #HPTF: Pondering the HP POD

I found the HP POD to be one of the most compelling demo areas at this week's HP Technology Forum. Being able to immerse yourself in the POD and touch and feel what it takes to deploy and support 3,000 or more systems is amazing.  Every detail has to be thought through: the airflow, the power and backup, redundancy, security of the container, the cooling, networking, serviceability and on and on.  All integrated on an impressive scale.


Here is a presentation we put together about the HP POD from the show floor.  We call it the "plain English" version.


It was interesting to see and hear the reactions of different customers to the prospect of a POD-enabled future.  My first thought was "cool!".  But more than once I overheard the word "scary".  It turns out that it wasn't a negative comment, but more the feeling of nostalgic dread that comes with technological change.  Kind of like when parents think of their children growing up in a Twitter-connected world: it may or may not be bad, but the prospect is, well, "scary", and the perspectives of the parent and the child are radically different.  The role of the POD in the data center of the future has some big implications.  Good or bad is a matter of perspective.


Those that called the POD scary said the ultra-efficient, lights-out capability of the POD raises the bar for everyone.  When you see PUEs of less than 1.25, and power densities in the 20+ kW range, you realize where we are all headed.  The days of racking and stacking taking up a big portion of your week are coming to an end.  One guy was even worried about spending his whole day inside a POD.  I think he was missing the point of lights-out: you shouldn't need to go inside the POD any more than you need to crawl around inside a server all day long.  Some day, will anyone ever go inside the data center again?  Will you be at the same site as your systems?  Will it be designed for humans at all?
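For anyone who hasn't bumped into PUE before, here's a quick back-of-the-envelope illustration of what a number like 1.25 means.  The figures below are made up for the example, not measurements from a POD.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Illustrative numbers only -- not measured POD figures.
it_load_kw = 400.0      # hypothetical IT equipment draw
facility_kw = 500.0     # hypothetical total draw (IT + cooling, power distribution, etc.)

pue = facility_kw / it_load_kw            # 1.25
overhead_kw = facility_kw - it_load_kw    # 100 kW of non-IT overhead

print(f"PUE = {pue:.2f}")
print(f"Overhead per IT kW = {overhead_kw / it_load_kw:.2f} kW")   # 0.25 kW
```

In other words, at a PUE of 1.25 only a quarter of a kilowatt of cooling and distribution overhead rides along with every kilowatt of IT load -- which is exactly why the lights-out, ultra-efficient model raises the bar.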


I didn't have a chance to speak with any CIO types, but I wonder if they'd have a different point of view, one more oriented to the macroeconomics of the data center.  The CIO's perspective on the POD would likely be about CapEx and OpEx, not wondering how you get all the cables in the rack.  At the end of the day, IT is simply a combination of services needed to support and enable the business.  The IT infrastructure is simply the capacity needed to deliver those services.  When you think that way, why would you spend resources doing it yourself?  If you need the services of transportation, you buy a car.  Building your own from scratch just doesn't make good economic sense.


Any CIOs out there today?  What are your thoughts?  We'd love to hear them.

Thoughts on #HPTF keynote

After thinking over Paul Miller's presentation last night, here's what stuck with me.


Convergence is the mega-trend



  • It's a bigger idea with bigger implications than consolidation and unification - it's transformational

  • Convergence collapses the stack. That means less stuff, lower cost and less complexity in a converged infrastructure.

  • HP is looking at processes, applications, facilities and more as part of convergence - not just server, network, storage


Everything as a Service is about economics



  • Virtual storage delivered as a service is a powerful idea for virtual infrastructures.  Think a SAN at the cost of a bunch of disks over Ethernet.  Big economic shift there.

  • Private clouds are about time to market. The example of how one HP customer could go from 33 days to 108 minutes to design, deploy and go live with their infrastructure solution is not only a huge savings, but a huge competitive advantage.

  • Extreme scale brings extreme challenges.  The economics in that reality are about pennies and seconds.  You really have to think differently.


Here are Paul's presentation slides, or you can catch him on video - taken as a live stream from the front row.  We'll have the fancy version on video later today.


Circuit Board Rainbow

Recently I demonstrated a prototype blade to an IT admin from a large bank.  He'd worked with lots of different HP server models over the years, but a bright red color peeking out from the server blade chassis caught his eye.  I popped off the cover and showed him the motherboard.  It was coated with a glossy red solder mask that made it shine like a candy apple.


"Why is it red?" he asked.  "Doesn't red color mean hot plug?"


"Well," I answered, "Not in this case.  We do use dark red handles on stuff that's hot pluggable, like power supplies."  (By the way, our official description for the hot-pluggable color is 'port wine', not red.)


The bright red color, I explained, signified that this particular motherboard was a Proto-2, aka a second-generation prototype.  When the product went to production, the circuit boards would all be green.  Until then, the various stages of test hardware would be built in different colors.


For many years the ProLiant team has used other colors of solder mask and silkscreen during development.  (The solder mask, also called solder resist, is a paste-like substance that coats most of the outer layers of a circuit board, giving it its color.  By 'silkscreen' I mean the letters that are printed on the circuit board to reference components.)


 


There is no set rule, but we tend to use brighter red, orange, and yellow colors for the earliest prototypes, with darker purples, blues, and greens for later ones.


Not all silkscreen colors work well with solder mask colors.  Allen Shorter, another ProLiant engineer here at HP,  told me he learned (through experience, unfortunately) that white silkscreen on yellow solder mask makes the lettering really, really hard to read. 


There are apparently lots of theories for why green became the normal circuit board color.  (I don't buy into the theory that it's because green is the easiest color for the human eye to see -- PCBs aren't supposed to be on display.)  It's simple enough for PCB fabs to make other colors, but it's slightly more time- and cost-effective for them to stick with a single color, so for large-scale production most everyone uses green.


 

Technology Forum 2009

I'll be speaking at Technology Forum along with my partner in crime Richard Brooke.


We'll be doing session number 1780 - Powering HP BladeSystem - covering all the exciting technologies like rack PDUs, UPSs, power cables, plugs and the other kinds of things you need to know to actually power up a c7000 Enclosure.


So if you're attending next week (June 15th – 18th) at the Mandalay Bay in Las Vegas, or just happen to be in Las Vegas, we'll either be presenting or at the booth (he's the tall one), so stop by and have a chat.  You might also find us after hours somewhere on the Strip, but we promise not to talk about power in the evening.


If you're interested in Power and Cooling in general, there are a couple of other sessions you may be interested in as well:


 


3156 Thermal Logic Power Management with Insight Control Suite - with Mark Lackey and Tom Turicchi


4620 Integrity Server Power and Cooling Technologies - with Thomas Vaden


 

Applications Matter - What Affects Server Power Consumption: Part 2

How does the application you are using, and what it is doing, affect the power consumption of a system?


The first thing that everyone looks at when talking about power consumption is CPU utilization.  Unfortunately, CPU utilization is not a good proxy for power consumption, and the reason why goes right down to the instruction level.  Modern CPUs like the Intel Nehalem and AMD Istanbul processors have hundreds of millions of transistors on the die.  What really drives power consumption is how many of those transistors are actually active.  At the most basic level, an instruction activates some number of transistors on the CPU, and different instructions activate different numbers of them.  A simple register add, for example, might add the integer values in two registers and place the result in a third register; a relatively small number of transistors will be active during that sequence.  The opposite would be a complex instruction that streams data from memory to the cache and feeds it to the floating point unit, activating millions of transistors simultaneously.
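To make that concrete, here's a toy experiment you could run yourself -- a hypothetical sketch, not something from our lab.  Both loops below peg a core at 100% CPU utilization, but one does scalar integer adds while the other streams large arrays through floating-point math (using NumPy); a power meter on the server would typically show them drawing quite different amounts of power.

```python
# Two ways to peg a core at "100% CPU" that exercise very different amounts of silicon.
# Watch the server's power readings while each runs; utilization alone won't tell them apart.
import time
import numpy as np

def integer_adds(seconds=30):
    """Scalar integer adds: a comparatively small slice of the core doing work."""
    end = time.time() + seconds
    total = 0
    while time.time() < end:
        total += 1
    return total

def fp_streaming(seconds=30, n=4_000_000):
    """Stream large arrays through floating-point multiply-adds: far more circuitry active."""
    x = np.random.rand(n)
    y = np.random.rand(n)
    end = time.time() + seconds
    acc = 0.0
    while time.time() < end:
        acc += float(np.dot(x, y))   # data pulled from memory and fed to the FPU
    return acc

if __name__ == "__main__":
    integer_adds()   # ~100% CPU, relatively low power
    fp_streaming()   # ~100% CPU, noticeably higher power on typical hardware
```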


Further to this, modern CPU architectures allow some instruction-level parallelization, so you can, if the code sequence supports it, run multiple operations simultaneously.  Then on top of that we have multiple threads and multiple cores.  So depending on how the code is written, you can have a single linear sequence of instructions running, or multiple parallel streams running on multiple ALUs and FPUs in the processor simultaneously.


Add to that the fact that in modern CPUs the power load drops dramatically when the CPU is not actively working: idle circuitry in the CPU is placed in sleep modes, put in standby or switched off to reduce power consumption.  So if you're not running any floating point code, for example, huge numbers of transistors are not active and not consuming much power.


This means that application power utilization varies depending on what the application is actually doing and how it is written.  Therefore, depending on the application you run, you will see massively different power consumption even if all the applications report 100% CPU utilization.  You can even see differences running the same benchmark, depending on which compiler is used, whether the benchmark was optimized for a specific platform or not, and the exact instruction sequence that is run.


The data in the graph below shows the relative power consumption of an HP BladeSystem c7000 Enclosure with 32 BL2x220c Servers.  We ran a bunch of applications and also had a couple of customers with the same configuration who were able to give us power measurements off their enclosures.  One key thing to note is that the CPU was pegged at 100% for all of these tests (except the idle measurement, obviously).



As you can see, there is a significant difference between idle and the highest-power application, Linpack running across 8 cores in each blade.  Another point to look at is that the two customer applications, Rendering and Monte Carlo, don't get anywhere close to the Prime95 and Linpack benchmarks in terms of power consumption.


It is therefore impossible to state the power consumption of server X and compare it to server Y unless they are both running the same application under the same conditions.  This is why both SPEC and the TPC have been developing power consumption benchmarks that look at both the workload and the power consumption to give a comparable value between different systems.


SPEC in fact just added power consumption metrics to the new SPECweb2009, and interestingly enough the two results that are up there have the same performance per watt number, but they have wildly different configurations, absolute performance numbers and absolute wattage numbers. So there's more to performance per watt than meets the eye.
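Performance per watt itself is simple arithmetic -- measured throughput divided by average power during the run -- but as that SPECweb2009 observation shows, the ratio hides the absolutes.  A quick sketch with made-up numbers (not the published results):

```python
# Performance per watt = measured throughput / average power during the run.
# Two hypothetical systems with identical efficiency but very different absolutes,
# loosely mirroring the point about the first SPECweb2009 power results.
def perf_per_watt(throughput, avg_power_watts):
    return throughput / avg_power_watts

small_box = perf_per_watt(throughput=1_000, avg_power_watts=250)    # 4.0
big_box   = perf_per_watt(throughput=8_000, avg_power_watts=2_000)  # 4.0

print(small_box, big_box)   # same ratio, wildly different capacity and wattage
```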


The first part of this series was Configuration Matters.

Is Power Capping Ready for Prime Time?

 


Mike Manos responded to my post about power capping being ready for prime time with a very well thought out and argued post that really looks at this from a datacenter manager's perspective, rather than just my technology-focused perspective.


I'm going to try and summarize some of the key issues that he brings up and try to respond as best I can.


Critical Mass


This one spans a number of points that Mike brings up, but I think the key thing here is that you must have a critical mass of devices in the datacenter that support power capping, otherwise there is no compelling value.  I don't believe it is necessary, however, to have 100% of the devices in the datacenter support power capping.  There are two reasons why:


1. In most Enterprise datacenters the vast majority of the power for the IT load is going to the servers.  I've seen numbers around 66% servers, 22% storage and 12% networking.  This is a limited sample, so if you have other numbers let me know; I would be interested.


2. Most of the power variation comes from the server load.  A server at full load can use 2x - 3x the power of a server at idle.  Network switch load variation is minimal based on some quick Web research (see the Extreme Networks power consumption test or Miercom power consumption testing).  Storage power consumption variation also seems to be fairly light, at no more than 30% above idle (see Power Provisioning for a Warehouse-sized Computer by Google).


So if our Datacenter manager, Howard, can power cap the servers, then he's got control of the largest and most variable chunk of IT power.  Would he like to have control of everything?  Absolutely, yes, but being able to control the servers is more than half of the problem.
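Here's a rough back-of-the-envelope to show why, using the illustrative split above (66% servers, 22% storage, 12% network) and the swing figures mentioned.  Treat the idle fractions as assumptions chosen for the sake of the arithmetic, not measured data.

```python
# Rough estimate of how much of the IT power swing the servers account for.
# Peak-power split from above: 66% servers, 22% storage, 12% network.
# Idle fractions are assumptions matching the swings mentioned:
# servers ~2.5x idle-to-peak, storage <=30% above idle, network roughly flat.
peak_share = {"servers": 0.66, "storage": 0.22, "network": 0.12}
idle_fraction = {"servers": 1 / 2.5, "storage": 1 / 1.3, "network": 1.0}

swing = {k: peak_share[k] * (1 - idle_fraction[k]) for k in peak_share}
total_swing = sum(swing.values())

for k, v in swing.items():
    print(f"{k}: {v / total_swing:.0%} of the total idle-to-peak variation")
# Servers come out at roughly 90% of the variation -- the biggest lever by far.
```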


Been there, done that, got the T-shirt


The other thing that we get told by the many Howards that are out there is that they're stuck.  They've been round and round the loop Mike describes and they've hit the wall.  They don't dare decrease the budgeted power per server any more, as they have to allow for the fact that the servers could spike up in load, and if that blows a breaker and takes down a rack then all hell is going to break loose.  With a server power cap in place, Howard can safely drop the budgeted power per server and fit more into his existing datacenter.  Will this cost him?  Sure, both time to install and configure and money for the licenses to enable the feature.  But I guarantee you that when you compare this to the cost of new datacenter facilities or leasing space in another DC, it will be trivial.
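As a purely hypothetical illustration of that trade-off (the wattages below are illustrative, not HP specifications): once the cap makes the per-server worst case predictable, the same circuit supports noticeably more servers.

```python
# Hypothetical rack budgeting: how many servers fit under a fixed circuit,
# with and without a power cap. Wattages are illustrative, not HP specifications.
circuit_watts = 8_600               # usable power budget for the rack's circuit
uncapped_budget_per_server = 500    # must allow for the worst-case spike without a cap
capped_budget_per_server = 350      # enforced ceiling with power capping enabled

servers_without_cap = circuit_watts // uncapped_budget_per_server   # 17
servers_with_cap = circuit_watts // capped_budget_per_server        # 24

print(servers_without_cap, servers_with_cap)
# Same rack, same breaker: roughly 40% more servers once the worst case is predictable.
```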


The heterogeneous datacenter


I agree most datacenters are in fact heterogeneous at the server level; they will have a mix of server generations and manufacturers.  This again comes down to critical mass, so what we did was enable this feature on two of the best-selling servers of the previous generation, the DL360 G5 and DL380 G5, and pretty much all of the BladeSystem blades, to help create that critical mass of servers that are already out there, then add on with the new G6 servers.  We would of course love for everyone with other manufacturers' products to upgrade immediately to HP G6 ProLiant Servers and Blades, but it's probably not going to happen.  This will delay the point at which power capping can be enabled, and customers that use other vendors' systems may not be able to enable power capping until those vendors support it.


Power Cap Management


There's a bunch of issues around power cap management that definitely do need to get sorted out.  The HP products do come from an IT perspective, and they are not the same tools that facilities managers typically use.  Clearly there needs to be some kind of convergence between these two toolsets, even if it's just the ability to transfer data between them.  Wouldn't it be great if something like the Systems Insight Manager/Insight Power Manager combination that collects power and server data could feed into something like, say, Aperture (http://www.aperture.com/)?  Then you'd have the same information in both sets of tools.


The other question that we have had from customers is who owns, and therefore can change, the power cap on the server: the facility/datacenter team or the IT server admin team?  This is more of a political question than anything else, and I don't have a simple answer, but if you are really using power caps to their full potential, changing the power cap on a server is something that both teams will need to be involved in.


I would like to know what other barriers you see to implementing power capping - let me know in the comments and be assured that your feedback is going to the development teams.


SNMP Access


Just to make Mike happy, I thought I'd let you know that we do have SNMP access to the enclosure power consumption.


If you collect all six SNMP MIB power supply current output power values and add them together, you will have calculated the Enclosure Present Power.


In the CPQRACK.MIB file, which you can get from here: http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?swItem=MTX-a7f532d82b3847188d6a7fc60b&lang=en&cc=us&mode=3&, there are some values of interest.


cpqRackPowerSupplyCurPwrOutput, which is MIB item enterprises.232.22.2.5.1.1.1.10.1 through enterprises.232.22.2.5.1.1.1.10.6, gives you the input power of each power supply.  I know the MIB name says output, but it's actually input.  Sum these together and you have the Enclosure Input Power.


Power supplies placed in standby for Dynamic Power Savings will be reporting 0 watts.
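Here's a minimal sketch of that summation done from Python using the net-snmp command-line tools.  The Onboard Administrator address and community string are placeholders, and the output parsing assumes snmpget's usual 'INTEGER: <value>' formatting -- adjust for your environment.

```python
# Sum the six cpqRackPowerSupplyCurPwrOutput values (enterprises.232.22.2.5.1.1.1.10.1-6)
# to get the enclosure input power, as described above. Assumes the net-snmp tools are
# installed and the enclosure's SNMP agent allows reads with the given community string.
import re
import subprocess

OA_ADDRESS = "oa.example.com"   # placeholder: your Onboard Administrator hostname or IP
COMMUNITY = "public"            # placeholder: your read community string
BASE_OID = "1.3.6.1.4.1.232.22.2.5.1.1.1.10"   # 'enterprises' = 1.3.6.1.4.1

def power_supply_watts(index):
    """Read one power supply's reported power value, in watts."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, OA_ADDRESS, f"{BASE_OID}.{index}"],
        text=True,
    )
    match = re.search(r"INTEGER:\s*(\d+)", out)   # typical snmpget output formatting
    return int(match.group(1)) if match else 0    # standby supplies report 0 watts

enclosure_input_power = sum(power_supply_watts(i) for i in range(1, 7))
print(f"Enclosure input power: {enclosure_input_power} W")
```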


And for enclosure ambient temp - read:


CPQRACKCOMMONENCLOSURETEMPCURRENT


Tony
