Hyperscale Computing Blog
Learn more about relevant scale-out computing topics, including high performance computing solutions from the data center to the cloud.

Interconnect technologies: InfiniBand, 1G Ethernet, 10G Ethernet

Interconnect is an integral part of any scale-out solution. The question often asked is something like "which interconnect should I use for my next scale-out project?" A simple answer is "it depends." Many applications in the high performance computing (HPC) area depend heavily on the latency and bandwidth of the interconnect fabric for their performance; the majority of MPI-based parallel applications fall into this category. InfiniBand has proven to be the best choice for these applications. In the latest Top500 list (June 2009), I counted 151 entries with InfiniBand as the interconnect, up from 124 six months ago. Another interesting observation is which server platforms were used to build these clusters: the HP BladeSystem c-Class is the most popular, with 37 entries, followed by IBM's pSeries with 17 entries. In the last couple of years, we have also seen more and more commercial applications use InfiniBand as a transport interconnect to meet low latency and scalability requirements; examples include the HP Exadata storage server, market data systems deployed in the financial services industry, and rendering applications.
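
To make the latency point concrete, here is a minimal MPI ping-pong sketch of the kind commonly used to compare fabrics such as InfiniBand and Gigabit Ethernet. It is an illustrative example, not something from this post: it assumes a working MPI installation, is compiled with mpicc, and is run with two ranks placed on different nodes so that the messages actually cross the interconnect.

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 sends a 1-byte message to rank 1 and waits for the echo;
 * half the average round-trip time approximates one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char msg = 0;                      /* 1-byte payload */
    MPI_Status status;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

Running it with something like "mpirun -np 2 --host node1,node2 ./pingpong" exercises the real fabric rather than shared memory; the gap between single-digit-microsecond InfiniBand results and tens of microseconds on Gigabit Ethernet is exactly what latency-sensitive MPI applications feel at scale.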


Not every application needs InfiniBand. There are workloads, including some HPC workloads, whose performance is not sensitive to the latency of the underlying interconnect. For these applications, Gigabit Ethernet provides a very cost-effective solution. However, we see increased bandwidth requirements in many of these cases, partly driven by the adoption of server virtualization and partly to match the higher I/O demand that comes with the significant performance increases of multi-core servers. 10 Gigabit Ethernet is becoming a natural choice to meet this higher bandwidth requirement while allowing people to stay within a technology they are already familiar with.
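
To show what that bandwidth difference looks like in practice, below is a rough large-message sketch in the spirit of common bandwidth microbenchmarks. Again, this is an illustrative example rather than anything from the original post, and it assumes two MPI ranks on separate nodes; on a Gigabit link it should report on the order of 100 MB/s, and roughly ten times that on 10 Gigabit Ethernet.

/* Rough large-message bandwidth sketch (illustrative only).
 * Rank 0 streams 4 MB messages to rank 1 and reports effective MB/s. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int msg_size = 4 * 1024 * 1024;   /* 4 MB per message */
    const int iters = 100;
    char *buf = malloc(msg_size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0)
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    /* short acknowledgement so rank 0 times the full transfer */
    if (rank == 1)
        MPI_Send(buf, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
    else if (rank == 0)
        MPI_Recv(buf, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("effective bandwidth: %.1f MB/s\n",
               (double)msg_size * iters / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}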


So, to summarize, the choice of interconnect should be based on the performance requirements of the applications. For the majority of MPI parallel applications, as well as other workloads that require low latency and high bandwidth, InfiniBand is becoming the interconnect of choice, and it has also become a technology that is easy to use and manage. For workloads that are not latency sensitive, Gigabit Ethernet remains a very cost-effective option, with 10 Gigabit Ethernet as the natural upgrade path when bandwidth becomes the bottleneck.


 
