Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Displaying articles for: November 2008

Why Blades? Fewer wires.

There are a lot of reasons to move to blades, and we love to include most, if not all, of them in our typical sales pitch. Power, cooling, space, money, time, flexibility, yada, yada, yada. But today, I just want to talk about one aspect of Why Blades: fewer wires (or "dramatic cable consolidation" in IT marketing speak).


You probably don't think about wires very often.  They're pretty boring.  They're pretty common.  They're also a pretty big pain in the butt. Watch these videos and you'll see what I mean.






Sometimes you have to take a step back and see something like this to recognize that even small improvements can make a really big difference in your day-to-day life. 


HP blades cut cables by up to 94%.  That's comparing all the power, network, SAN and management cables you need for 42 typical rack servers versus blade servers.  Add something like Virtual Connect to that equation, and you can save several administrators a ton of time dealing with moving servers or their LAN and SAN connections. Plus you can add another 4-to-1 consolidation on switches, NICs and core switch ports with Virtual Connect Flex-10. 
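If you want a rough feel for where a number like that comes from, here's a back-of-envelope sketch. The per-server and per-enclosure cable counts below are illustrative assumptions, not measured values, so your exact percentage will vary with your own configuration:

```python
# Back-of-envelope cable count: 42 rack servers vs. the same servers in blade
# enclosures. All cable counts here are illustrative assumptions.

RACK_SERVERS = 42
CABLES_PER_RACK_SERVER = 2 + 4 + 2 + 1        # power + NICs + SAN + management (assumed)

BLADES_PER_ENCLOSURE = 16
ENCLOSURES = -(-RACK_SERVERS // BLADES_PER_ENCLOSURE)   # ceiling division -> 3 enclosures
CABLES_PER_ENCLOSURE = 6 + 4 + 2 + 1          # power + LAN uplinks + SAN uplinks + management (assumed)

rack_cables = RACK_SERVERS * CABLES_PER_RACK_SERVER
blade_cables = ENCLOSURES * CABLES_PER_ENCLOSURE
print(f"Rack server cables: {rack_cables}")
print(f"Blade cables:       {blade_cables}")
print(f"Reduction:          {1 - blade_cables / rack_cables:.0%}")
```

With those assumed counts the reduction comes out around 90%; tighter assumptions about rack-server cabling push it toward the 94% figure.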


Do you realize that once your BladeSystem is wired, you may never have to touch a cable again until you decide to take it offline someday?  We tell customers all the time: if you're using blades and go with patch panels instead of Virtual Connect or blade switches, you're missing out on one of the biggest advantages of going with blades.


Here's my quick list of why we hate wires:




  • They cost a lot of money.  Long Ethernet cables are pretty expensive, especially multiplied by hundreds or thousands.  Don't even get me started on Fibre Channel cables. 


  • Wires connect to other things that cost even more money.  Have you checked Cisco's 10Gb core switch prices on a per port basis?  Ouch.


  • Every wire is a potential point of failure.  This was really brought home for us the other day when a customer talked about how they eliminated something like 20,000 cables by going to blades.  He said, "I don't even know how to start calculating the improvement in 'mean time between failures' (MTBF) of that!!!"


  • Moving one server means moving a bunch of wires, a bunch of reconfiguring, and a bunch of your colleagues' time on the LAN and SAN side. Sometimes it even means moving the wires of other servers you don't want to mess with.


  • How many hours a year do you figure you spend installing, troubleshooting and untangling wires?  Don't you have something better to do?  Like checking out the latest on the HP blade blog?

Do you have a cable nightmare story or picture to share?  How about a major cable improvement since you moved to blades?  Share!


 

10Gb Ethernet: Divide and Conquer with Virtual Connect Flex-10


Moore's law is usually used to predict general trends in semiconductors.  While not exactly a perfect analogy, we have seen similar trends in interconnect bandwidth.  Ethernet bandwidth has increased ten-fold every few years, with the latest transition being to 10Gb.  These transitions usually take a while because the cost to transition is high, and the move to 10Gb has followed that trajectory - until now.


Yesterday we announced an exciting new technology: Virtual Connect Flex-10.  We've figured out a way to deliver 10Gb Ethernet technology at a price lower than what many people are spending on 1Gb technology today.  As a result, we can help customers get onto 10Gb technology sooner than they otherwise could.


We've noticed that many customers are buying four or more NICs for their servers, sometimes due to bandwidth constraints, other times due to network segmentation or security constraints, and usually for redundancy.  As a result, customers spend an awful lot on a bunch of 1Gb networks.  We figured out that we could help by providing a 10Gb network connection that can be divided into up to four connections, replacing the need for up to four NICs.  By doing this we conquer the high price of 10Gb Ethernet connectivity, delivering up to 8 network connections for less than what many customers pay for four 1Gb connections. At the same time, customers can allocate more bandwidth to one or more links, maintain the multiple connections they want for security reasons, or do a combination of both.  And to top it all off, Virtual Connect takes less space and power too.  We think this is very cool.
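Conceptually, each 10Gb Flex-10 port is carved into up to four connections whose speeds you can tune, as long as the total stays within the 10Gb pipe. Here's a minimal sketch of that allocation rule; the class, method and connection names are purely illustrative, not the Virtual Connect interface:

```python
# Toy model of one Flex-10 port: up to 4 connections sharing a single 10Gb pipe.
# This only illustrates the allocation rule; it is not the Virtual Connect API.

PORT_CAPACITY_GB = 10.0
MAX_CONNECTIONS = 4

class Flex10Port:
    def __init__(self):
        self.connections = {}          # name -> allocated bandwidth in Gb/s

    def allocate(self, name, gbps):
        if len(self.connections) >= MAX_CONNECTIONS and name not in self.connections:
            raise ValueError("a Flex-10 port divides into at most 4 connections")
        used = sum(bw for n, bw in self.connections.items() if n != name)
        if used + gbps > PORT_CAPACITY_GB:
            raise ValueError("allocations cannot exceed the 10Gb physical pipe")
        self.connections[name] = gbps

port = Flex10Port()
port.allocate("vm_production", 6.0)   # most bandwidth to the VM traffic
port.allocate("vmotion",       2.0)
port.allocate("management",    0.5)
port.allocate("backup",        1.5)
print(port.connections)
```

Two such ports per server is how you get to eight connections in place of eight physical 1Gb NICs.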


Bottom line: more bandwidth, more flexibility, lower cost and less power.  More of what you want and less of what you don't want.  We think this is a good combination.  This is why we believe the quickest, most affordable way to move to 10Gb is Virtual Connect Flex-10, and the time to do it is now.

Rethinking networks for virtual servers

The earth is flat.  Men can't fly.  Chewing gum takes 7 years to travel the digestive tract. 


The point of these, and many other bad examples of "conventional wisdom", is that far too often we accept what seems to be reasonable advice or thinking. Usually because:


1. We've heard it our whole lives
2. Everyone else thinks the same thing or does it the same way
3. It kind of makes sense

If there's one thing our blade engineers are great at, it's challenging conventional wisdom.  Their latest challenge: why do server-to-network connections have to be one-to-one, and why is the speed of each connection fixed?  They said, "Today's network model costs too much, burns too much power and is completely inflexible."


For example, if you want to add a network port on a blade server, you need a NIC, a switch and a cable.  Want 8 network ports per server?  Multiply by 8.  If the network is 1 or 10Gb/s, guess what?  All your connections are 1 or 10Gb/s too!  Nothing in between.  For a variety of reasons, Cisco and others have perpetuated this process for years. 


Enter the age of the virtual server. 


The requirements of virtual servers have already had a dramatic impact on the fundamentals of server and storage design.  More cores, more memory, more capacity.  But one area untouched until now has been the network layer.  Sure, VMware has "virtual NICs" and "virtual switches", but those are done in software and don't address the underlying issues: you still need a lot of physical NICs, a lot of switches, and more bandwidth. 


Here's what our team came up with: divide the total capacity of one 10Gb pipe into 4 server ports, then add the ability to fine-tune the bandwidth of each port, so you can give more or less performance to different virtual machines.  The end result is 66% lower network equipment cost from a 4-to-1 consolidation of switches, 65% less power used, and great performance from built-in 10Gb speeds.
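To show the shape of the math behind those percentages, here's a hedged back-of-envelope based on the 4-to-1 module consolidation; the per-module cost and wattage figures are assumptions for illustration, not list prices or measured power draws:

```python
# Back-of-envelope for 4:1 interconnect consolidation with Flex-10.
# The per-module cost and power numbers are illustrative assumptions.

MODULES_BEFORE = 8                                  # e.g. 8 x 1Gb interconnect modules
MODULES_AFTER = 2                                   # e.g. 2 x Flex-10 modules (4:1 consolidation)
COST_1GB_MODULE, COST_FLEX10_MODULE = 4000, 5500    # assumed $ per module
WATTS_1GB_MODULE, WATTS_FLEX10_MODULE = 60, 85      # assumed W per module

cost_before = MODULES_BEFORE * COST_1GB_MODULE
cost_after = MODULES_AFTER * COST_FLEX10_MODULE
power_before = MODULES_BEFORE * WATTS_1GB_MODULE
power_after = MODULES_AFTER * WATTS_FLEX10_MODULE

print(f"Equipment cost reduction: {1 - cost_after / cost_before:.0%}")
print(f"Power reduction:          {1 - power_after / power_before:.0%}")
```

With those assumed inputs the reduction works out to roughly 66% on cost and 65% on power; plug in your own module counts and prices to see where you land.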


Take this guided tour and see how Virtual Connect Flex-10 is the next big shake up of conventional wisdom in the network world. 




Today, the virtualization discussion is about virtual servers and storage.  In 2009, virtual I/O and virtual networks are the new frontiers for huge innovation and, as a result, another round of consolidation and cost savings in the data center.

Unboxing Virtual Connect

Chuck took a minute to record this video unboxing the HP Virtual Connect Ethernet Module.  Check out all the yummy goodies inside!  (okay not so yummy, but we love Chuck!)

Datacenter power is officially a business problem

I found this article last week and thought I'd share it with you.  In it, Michelle Bailey from IDC makes some good points about just how big an issue power and cooling has become.


Here's an excerpt:


“Efficient power and cooling is the number one problem in the data center today, even ahead of availability and disaster recovery,” says Michelle Bailey, research vice president for IDC’s Enterprise Platforms and Datacenter Trends. “That’s a big change. Just two years ago, it was just starting to become important.”


Moreover, Bailey says that many organizations are bumping up against power limits on expanding and adapting to key opportunities. The result? “Power and cooling used to be an IT problem, now it’s a business problem.”


You can read the rest of the article here.

Integrity blade server smokes IBM HS21

Today we published a new SPECjAppServer2004 benchmark result (19,613.3 JOPS@Standard) on the Integrity BL870c server blade running HP-UX 11i v3.  Our latest attempt blew away the world record set by the BL870c in September 2008.  This latest benchmark:

  • Outperforms the IBM JS22 (POWER6) server blade by 40% - with half the number of server blades!
  • Beats IBM's BladeCenter HS21 (x86) by 20% on a per-core comparison!
  • Utilizes the HP Integrity BL870c server blade running HP-UX 11i v3 and Oracle WebLogic Server

The latest BL870c SPECjAppServer2004 benchmark results are published at:
http://www.spec.org/jAppServer2004/results/res2008q4/jAppServer2004-20081022-00120.html

The latest performance slides can be found at:
http://bcswwpit.corp.hp.com/Document_Storage/ESSCompetitiveInformation/Integrity_Performance.ppt

The September 2008 BL870c SPECjAppServer2004 benchmark performance brief:
http://h20341.www2.hp.com/integrity/downloads/Integrity_BL870_SPECjApp_092208.pdf

 

Dynamic Power Capping explained

Yesterday, Gary told you about our new Thermal Logic technologies, but one capability really struck a chord with me: Dynamic Power Capping.  It's not only a killer concept, it's real and here today.


We asked one of our lead engineers on the project, Alan Goodrum, to explain all the details; click here to watch his overview.  It's one of the best technical presentations I've seen in a while.  If you're looking to explain this concept to a colleague or your boss, this is the presentation for you.

Thermal Logic – The coolest ideas for your hottest problems


Today we are announcing some significant Thermal Logic innovations.  If you're reading this, I'm sure you're thinking, "Wait, here comes another vendor claiming they're the most energy efficient!"  I'd hate to disappoint you, but we believe that addressing power and cooling challenges takes much more than just a server that draws less power.


Okay, we do have a new energy efficient blade server.  In fact, we have a portfolio of energy efficient blade servers.  But wait, there's more!  Really!


We've been musing over this power efficiency thing.  We've noticed how important it seems to be for vendors to tout that they are "NUMBER 1" when it comes to power efficiency.  We thought, "Isn't it more important to make our customers #1 when it comes to power efficiency?"  That means more than one energy efficient server model.  It means helping customers reduce the power footprint of all their servers, all their workloads, all the time.


This conviction has led us to some big new innovations.  Over-provisioning power is one of the single biggest problems facing data center design.  The problem is that nobody knows exactly how much power a server might use, and nobody wants to risk tripping a circuit breaker, so the safe solution is to over-provision.  But that safety comes with a cost: a 1000W server requires about $25,000 worth of data center power and cooling infrastructure, and servers are frequently provisioned for two or even three times their actual power consumption.


We've spent the last several years in our labs dreaming up and implementing Dynamic Power Capping, which allows you to safely provision a server's power to what it actually uses, rather than what it might use.  By setting a power cap for an entire BladeSystem enclosure, power can be managed and even shared across the servers inside the enclosure, which can allow even greater savings.  This is a unique capability that addresses an enormous data center challenge that has existed for many, many years.  Now customers can double or even triple the number of servers they can place on a data center circuit, and cut the data center infrastructure costs per server in half or more.
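Here's a hedged worked example of what that means on a single data center circuit. The server wattages and circuit capacity are illustrative assumptions; the $25-per-watt infrastructure figure comes from the $25,000-per-1000W estimate above:

```python
# Worked example: what power capping does to provisioning on one data center
# circuit. Server wattages and circuit capacity are illustrative assumptions.

INFRA_COST_PER_WATT = 25          # ~$25,000 per 1000W, per the estimate above
NAMEPLATE_WATTS = 1000            # what the label says a server *might* draw
CAPPED_WATTS = 400                # assumed real peak, enforced safely by the cap
CIRCUIT_WATTS = 24_000            # assumed usable capacity of one circuit

servers_without_cap = CIRCUIT_WATTS // NAMEPLATE_WATTS   # must provision for nameplate
servers_with_cap = CIRCUIT_WATTS // CAPPED_WATTS          # can provision for the cap

infra_per_server_before = NAMEPLATE_WATTS * INFRA_COST_PER_WATT
infra_per_server_after = CAPPED_WATTS * INFRA_COST_PER_WATT

print(f"Servers per circuit:   {servers_without_cap} -> {servers_with_cap}")
print(f"Infrastructure/server: ${infra_per_server_before:,} -> ${infra_per_server_after:,}")
```

With those assumed numbers you go from 24 to 60 servers on the same circuit, and the infrastructure cost per server drops from $25,000 to $10,000, which is the "double or triple the servers, halve the cost" effect described above.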


We've also spent years finding ways to throttle back unused resources to save power dynamically.  Our new BladeSystem power supply has the unique ability to shut off half of the supply at a time, which drives efficiency levels of 90% or higher from 8% utilization all the way up to 100%.  Other guys will tout high-efficiency power supplies, but only at high loads; with BladeSystem you always get high efficiency, no matter what your server configuration or workload.
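The intuition is that a power supply's efficiency depends on how heavily each active supply is loaded, so switching half of them off at light load pushes the remaining supplies back up their efficiency curve. A toy model of that effect, with a completely made-up efficiency curve and wattages chosen only for illustration:

```python
# Toy model of why shutting off half the power supplies helps at light load.
# The efficiency curve and wattages below are made up purely for illustration.

def supply_efficiency(load_fraction):
    """Assumed curve: poor efficiency at light load, ~94% near full load."""
    return 0.94 - 0.30 * (1 - load_fraction) ** 3

def enclosure_efficiency(demand_watts, supplies_active, watts_per_supply=1200):
    load = demand_watts / (supplies_active * watts_per_supply)
    return supply_efficiency(load)

demand = 1500   # assumed light enclosure load in watts
print(f"6 supplies active: {enclosure_efficiency(demand, 6):.0%}")
print(f"3 supplies active: {enclosure_efficiency(demand, 3):.0%}")
```

Same demand, fewer active supplies, each one running closer to its sweet spot.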


So while the other guys will tout one server model running one benchmark as what defines energy efficiency, you can count on Thermal Logic to give you more servers per rack and per data center, more insight into power usage, and less power consumed.  More of what you want, and less of what you don't want!
