Use of GPUs for HPC seems to be reaching a new stage of adoption, based on activity we’re seeing. NVIDIA announced its eagerly anticipated next-generation architecture, Fermi, last month. By adding capabilities such as ECC memory error correction, Fermi provides a level of data integrity that had not been an inherent feature of GPUs, given their prime use in games and graphics. Not a showstopper, as developers could work around it, but it has been a limiting issue for some.
Just this week, the Georgia Institute of Technology announced it had received $12M in NSF funding for a GPU-integrated HPC system. The participants in the project, called Keeneland, include Georgia Tech, Oak Ridge National Lab, the University of Tennessee, NVIDIA, and HP. An auspicious name, as the funding is part of NSF’s Track 2 awards. The announced plan is to build the system with the Fermi GPUs.
We’ll be showing some GPU demos at our SC09 booth next month, using the currently shipping NVIDIA Tesla. Each 1U Tesla S1070 has four GPUs (960 cores in all). Our HP ProLiant DL160se has an added PCIe slot relative to the standard DL160, which enables us to support three Teslas with two 1U servers. That’s over 12 TFLOPS peak from just the Teslas, in 5U. These GPUs are not just for gaming anymore. That baby’s got game.
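The back-of-the-envelope math for that 5U demo setup can be sketched as follows. This assumes the commonly quoted single-precision peak of roughly 4.14 TFLOPS per Tesla S1070 (four GPUs at about 1.04 TFLOPS each); sustained application performance will of course be lower.

```python
# Rough peak-throughput arithmetic for the SC09 demo configuration.
# TFLOPS_PER_TESLA is an assumed single-precision peak figure for the
# S1070, not a measured result.

TESLAS = 3               # Tesla S1070 units, 1U each
SERVERS = 2              # HP ProLiant DL160se hosts, 1U each
GPUS_PER_TESLA = 4
CORES_PER_TESLA = 960
TFLOPS_PER_TESLA = 4.14  # assumed single-precision peak

total_gpus = TESLAS * GPUS_PER_TESLA      # 12 GPUs
total_cores = TESLAS * CORES_PER_TESLA    # 2880 CUDA cores
peak_tflops = TESLAS * TFLOPS_PER_TESLA   # ~12.4 TFLOPS, i.e. "over 12"
rack_units = TESLAS + SERVERS             # 5U total

print(f"{total_gpus} GPUs, {total_cores} cores, "
      f"{peak_tflops:.1f} TFLOPS peak in {rack_units}U")
```

The "over 12 TFLOPS in 5U" claim falls straight out of these numbers: three 1U Teslas plus two 1U hosts.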