Hyperscale Computing Blog
Learn more about relevant scale-out computing topics, including high performance computing solutions from the data center to cloud.

SLURM: a Simple Resource Manager no more!

SLURM, the "Simple Linux Utility for Resource Management" from Lawrence Livermore National Laboratory, has come a long way. When we started using it for some of the products in our Unified Cluster Portfolio, SLURM had only a simple FIFO scheduler, no job accounting, and the ability to support perhaps a thousand nodes.

SLURM also provided a very clean architecture, which allowed HP to contribute the first versions of job accounting for Linux clusters, support for multi-threaded and hyperthreaded architectures, advanced scheduling policies such as gang scheduling, and fine-grained allocation of system resources ("consumable resources").

SLURM version 2.0 has just been released, and what a powerhouse it is! Heterogeneous clusters, up to 65,000 nodes, resource limits, and job prioritization. Moe Jette and Danny Auble, the primary authors of SLURM, discuss it on this podcast.

My compliments to the entire team!

Anonymous | 09-21-2009 06:05 PM

SLURM is an open-source resource manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
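Those three functions map directly onto SLURM's user commands. As a rough sketch (the job name, resource counts, and the `./my_mpi_app` binary are hypothetical placeholders, not anything from this post), a minimal batch script might look like this:

```shell
#!/bin/bash
#SBATCH --job-name=hello_mpi     # name shown in the queue
#SBATCH --nodes=4                # function 1: request an allocation of 4 nodes
#SBATCH --ntasks-per-node=2     # 8 tasks total across the allocation
#SBATCH --time=00:10:00          # duration of the allocation

# Function 2: srun launches and monitors the parallel job
# across the allocated nodes.
srun ./my_mpi_app
```

Submitting it with `sbatch hello.sh` places the job in the pending-work queue, and `squeue -u $USER` shows it waiting there until resources free up, which is function 3 (arbitrating contention) in action.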
