Hyperscale Computing Blog
Learn more about relevant scale-out computing topics, including high-performance computing solutions from the data center to the cloud.

HP Apollo 8000: One of the Most Energy-Efficient Systems, from Concept to Production

Guest blog by: Nicolas Dube, Distinguished Technologist, HP

 

Last week HP unveiled the HP Apollo 8000, a high-performance computing solution that uses innovative warm-liquid cooling technology to fuel the future of science and technology. With more than 250 teraflops per rack, the HP Apollo 8000 provides up to 4x the teraflops per square foot and up to 40% more FLOPS per watt than comparable air-cooled servers. So how did we go from concept to production on one of the world’s most energy-efficient systems?

 

In 2011, a group of HP engineers gathered to pursue the concept of a radically more energy-efficient platform, rethinking server packaging, power delivery and cooling technologies. In the process, this group evaluated most, if not all, of the high-voltage power and liquid cooling technologies either available or about to emerge in the coming years. The goal was simple: design a highly energy-efficient, mass-producible platform targeting not only high-end High Performance Computing (HPC) laboratories, but also commercial computing customers and hyperscale service providers. As liquid cooling had been around since the 1970s, the challenge lay not so much in liquid cooling itself, but in how to take it to the mass market after 40 years of failed attempts.

 

After thoroughly evaluating more than a dozen cooling technologies and concepts, the design team opted for a concept called the dry-disconnect. This HP-patented technology, now part of every HP Apollo 8000 liquid-cooled system, permits server trays to be serviced without breaking a liquid connection (an event that could cause water leakage or introduce air and contaminants). Thanks to the dry-disconnect, the HP Apollo 8000 System provides a servicing experience comparable to an air-cooled server, as the thermal load is first transferred through a tight mechanical connection. Inside the tray, heat pipes carry heat from the processors, accelerators and memory to a heat transfer plate on the side of the tray. As the tray is loaded into the rack, a patented clamping system mates the heat transfer plate, with more than a thousand pounds of force, to the thermal bus bar, another unique HP innovation, where water flowing through a set of pin-fin arrays extracts the heat.
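The heat path described above (heat pipes, transfer plate, clamped interface, thermal bus bar, water) can be pictured as a chain of thermal resistances in series. The sketch below is purely illustrative: the resistance and power values are hypothetical, not HP specifications.

```python
# Illustrative model of the heat path as series thermal resistances (K/W).
# All values below are hypothetical, chosen only to show the principle that
# the processor temperature is the water temperature plus the sum of the
# temperature drops across each stage of the chain.
resistances_k_per_w = {
    "heat pipe (processor to transfer plate)": 0.05,
    "dry-disconnect clamped interface": 0.02,
    "thermal bus bar / pin-fin array to water": 0.03,
}

power_w = 150.0       # assumed per-processor heat load
water_temp_c = 30.0   # assumed warm-water inlet temperature

# Total temperature rise = heat load x total series resistance.
delta_t = power_w * sum(resistances_k_per_w.values())
processor_temp_c = water_temp_c + delta_t  # 30 + 150 * 0.10 = 45 C
```

The point of the tight clamp is visible in this model: the better the mechanical interface, the smaller its resistance term and the warmer the inlet water can be while still keeping the silicon within limits.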

 

For power delivery to a rack integrating up to 320 processors, adding up to 80 kW of density, the standard 200-240 V range was deemed far too impractical: it would have required high amperage and large conductors. The engineering team instead opted for a 380-480 VAC supply to the rack, with an internal high-voltage DC distribution within the rack. By removing many components from the power distribution hierarchy between the electrical grid and the server’s processing components, the Apollo platform improves energy efficiency by 7% to 15% on power distribution alone.
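The amperage argument above is easy to check with the standard three-phase power formula. A quick sketch, assuming a balanced three-phase feed and unity power factor for simplicity (the specific voltages are illustrative):

```python
# Why higher supply voltage matters at 80 kW: line current for a balanced
# three-phase load is I = P / (sqrt(3) * V) at unity power factor.
import math

def line_current_amps(power_w: float, line_voltage_v: float) -> float:
    """Line current (A) for a balanced three-phase load at unity power factor."""
    return power_w / (math.sqrt(3) * line_voltage_v)

rack_power_w = 80_000.0
amps_at_208v = line_current_amps(rack_power_w, 208.0)  # roughly 222 A
amps_at_480v = line_current_amps(rack_power_w, 480.0)  # roughly 96 A
```

Dropping from roughly 222 A to roughly 96 A per rack is what makes the conductors, connectors and distribution gear tractable at this density.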

 

On the facility side, deployments of liquid-cooled systems have historically been long and complex, mainly because of their requirement for custom piping and special chemistry. In addition, maintenance tasks on such systems had to be performed by highly trained personnel. For the HP Apollo 8000 System to have broad market appeal, in addition to being cost-optimized and mass-produced, the deployment model had to be redefined. HP therefore standardized components such as the Intelligent Cooling Distribution Unit (iCDU) as part of the overall HP Apollo 8000 platform: iCDUs are now ordered with IT racks much as power supplies are configured in a standard server.

In addition to having what is probably the most leak-proof architecture on the market today, the Apollo 8000 team decided to supply water to the IT racks at sub-atmospheric pressure, or, in other words, under vacuum. With this design, if any connection were ever to come loose, instead of leaking water the Apollo 8000 iCDU would suck air into the system, a condition that can be easily monitored and corrected without leaking a drop of liquid.

After building the four-rack prototype system at NREL in February 2013 (which involved designing and implementing custom piping on-site, a long and tedious process of sweating pipes and testing), the engineering team came up with a concept of pre-manufactured modular plumbing that connects the racks with quick connects and flexible hoses. When deploying Peregrine's plumbing infrastructure in August 2013, a task that could have taken a couple of months the conventional way, the team was able to plumb all 18 racks in less than a week, with no hot work and no leaks.
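The monitoring logic implied by the vacuum design can be sketched in a few lines. This is a hypothetical illustration, not the actual iCDU firmware: the sensor interface, thresholds and pressure values are all assumptions, chosen only to show how running below atmospheric pressure turns a loose fitting into a rising-pressure signal rather than a puddle.

```python
# Sketch of leak detection in a sub-atmospheric cooling loop.
# All pressures and thresholds below are hypothetical illustration values.
ATMOSPHERIC_KPA = 101.3
NOMINAL_LOOP_KPA = 70.0    # assumed sub-atmospheric operating point
ALERT_MARGIN_KPA = 10.0    # assumed margin before raising an alert

def loop_status(measured_kpa: float) -> str:
    """Classify a loop-pressure reading from a hypothetical iCDU sensor."""
    if measured_kpa >= ATMOSPHERIC_KPA:
        # Vacuum lost entirely: loop is at or above atmospheric pressure.
        return "fault"
    if measured_kpa > NOMINAL_LOOP_KPA + ALERT_MARGIN_KPA:
        # Pressure creeping up toward atmospheric: likely air ingress
        # through a loose fitting; service before vacuum is lost.
        return "alert"
    return "ok"
```

Because air leaks *in* rather than water leaking *out*, the failure mode is observable in telemetry and correctable on a maintenance schedule instead of being an emergency.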

 

The HP Apollo 8000 System is a true HPC platform with high-speed networking, integrated switches, centralized management and unmatched density, thanks to its unique cooling system and power distribution. It was designed from the ground up to make chiller-free data centers possible, optimize component thermals and enable heat re-use, a feature that helped NREL save one million dollars a year on electricity by using the computer as a furnace throughout the Colorado winter. With the HP Apollo 8000 System now going into production with worldwide availability, we look forward to broad adoption that will contribute to making the world's IT more environmentally friendly.

 

To learn more, read the HP Apollo 8000 System Datasheet and take the HP Apollo 8000 Product Tour.
