Are you confused about what an HPC accelerator is and whether and when to use one? If so, you are not alone. Accelerators fit only small segments of the overall high performance computing (HPC) application market today, but for a growing list of applications they can be lean, green machines. When they work, they can dramatically improve time to solution while cutting energy costs and saving floor space. Order-of-magnitude improvements in run times are common.
This is reminiscent of the glory days of vector computing, when clever programming was necessary for super-fast speed. (I do miss the fun of optimizing supercomputers in the ‘80s.) The big difference is that the vector CPUs of yore were hand-built and cost a million dollars, while today the market is driven by teenagers with gaming budgets closer to $1,000.
If you are in the oil and gas, government defense, university research, financial services, or genomics-proteomics computational fields, chances are you are already using HPC accelerators or know someone who is. Hundreds (maybe even thousands) of application types have been accelerated, and those driving the largest volumes are in the fields listed above. The HPC accelerator industry is poised for tipping points, with greater penetration into multicore environments across a rapidly expanding range of applications.
HPC accelerators are inexpensive massively parallel computing chips, programmed differently than general purpose x86 processors. Common types are graphics processing units (GPUs, designed for gaming and visualization) and field programmable gate arrays (FPGAs, used for flexible circuit design). They tend to deliver hundreds of cores (or functional units) for a thousand or a few thousand dollars. Cheap-fast HPC computing indeed! Applications with a good fit for this heterogeneous programming can run 10 to 100 times faster than on standard multicore servers, which is important since a run-time speedup of at least 10 times is often needed to justify programming heterogeneous architectures.
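A back-of-the-envelope way to judge whether your application is "a good fit" is Amdahl's law: the overall speedup is capped by the fraction of run time you can actually offload to the accelerator. A minimal sketch (illustrative only; the function name and sample numbers are my own, not from any vendor tool):

```python
def amdahl_speedup(accel_fraction, accel_factor):
    """Overall speedup when a fraction of run time is accelerated.

    accel_fraction: portion of the original run time that the
                    accelerator can take over (0.0 to 1.0).
    accel_factor:   how many times faster that portion runs on
                    the accelerator.
    """
    serial_part = 1.0 - accel_fraction
    accelerated_part = accel_fraction / accel_factor
    return 1.0 / (serial_part + accelerated_part)

# Hypothetical example: 95% of the run time is accelerable and the
# accelerator runs that portion 50x faster.
overall = amdahl_speedup(0.95, 50)
print(f"Overall speedup: {overall:.1f}x")  # about 14.5x
```

Note how the un-accelerated 5% dominates: even an infinitely fast accelerator could not push this workload past 20x, which is why only applications whose hot loops cover most of the run time clear the 10x bar mentioned above.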
Accelerators have come a long way and are advancing rapidly. A few years ago they were nearly impossible for mere mortals to program, but advances in heterogeneous programming languages and language standards are lowering that hurdle. In this rapidly changing field, computing speeds are advancing faster than Moore’s law (sometimes 5x in one year), with robust roadmaps for important features yet to come. All the major accelerator vendors are projecting significant improvements in floating point rates, error protection, and bandwidths. In addition, HPC servers (including those from HP) are improving to better meet the challenging cooling and bandwidth requirements that accelerators tend to drive, further improving density, heat efficiency, and cost.
In this blog I plan to highlight where accelerators are working today, what changes and tipping points are on the horizon, feature improvements likely to expand the application markets, examples of improved green computing and solution times, and how accelerators might be deployed for effective petascale research clusters. (Petascale accelerated clusters exist today, but not yet with significantly improved Linpack price/performance compared to multicore.)
I hope to help you judge if and when to get into this exciting and expanding field of heterogeneous computing so you can time your entry for leading edge, but not bleeding edge, improvements. I welcome comments and questions on this topic. For more information see www.hp.com/go/accelerators.