The Department of Statistics maintains its own high performance computing (HPC) cluster to give students, faculty, and staff access to a highly tailored computing environment that suits current and growing needs for teaching and learning as well as research.

Mission

Our mission is to provide high performance computing methods and related infrastructure that meet the ever-changing needs of the teaching and learning community in Statistics. The requirements for achieving modern research goals using big data and conveying data science skills to learners are our highest priorities. The tools, resources, and consulting we provide to support this mission will continue to evolve as the field of Statistics grows.

The Case for Cluster Computing and Batch Submission Scheduling Methods

There are many practical reasons for using a batch job scheduler to run most types of computation in Statistics. The most basic concerns the logistics of interactive sessions. Suppose you would like to run a simulation that takes longer than a full working day. Without a batch scheduler, you would start your application interactively, either by opening a graphical user interface or by launching it from the command line, and your computation would begin running in that session.

At this point you need to leave the office, the coffee shop, or your kitchen table as you move on for the day. You must make sure that your computer stays on, does not fall asleep, and remains connected to the session for your job to complete. This is not reliable, because many things can interrupt your job:

  • Your laptop battery dies
  • Your operating system freezes
  • Your Wi-Fi connection drops
  • The power goes out at home
  • A remote server reboots

For these reasons alone, it is not advisable to start long-running jobs from an interactive session on any computer. Instead, batch job submission lets you "hand off" your job to a service that holds your request until the resources become available. Even if that is 12 hours later, the scheduler will then start your job automatically for you. It also handles the case where a compute node is restarted: the scheduler can requeue your job so that it runs again once the machine returns (and, if your application supports checkpointing, resume where it left off). For these reasons it is always advisable to submit long-running jobs to the HPC scheduler to be sure they complete successfully. If a job does fail, the scheduler also provides proper logging and error output so that you can troubleshoot and submit again.
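To make this concrete, here is a minimal sketch of a batch submission script, assuming a Slurm-style scheduler (many university clusters use Slurm, though this document does not name the scheduler). The job name, resource requests, and the script `simulation.R` are all hypothetical placeholders.

```shell
#!/bin/bash
#SBATCH --job-name=sim-study        # name shown in the queue (hypothetical)
#SBATCH --time=24:00:00             # wall-clock limit; request more than you expect to need
#SBATCH --cpus-per-task=4           # cores for the job
#SBATCH --mem=8G                    # memory for the job
#SBATCH --output=sim_%j.log         # stdout/stderr log; %j expands to the job ID

# The scheduler runs this command whenever the requested resources
# become available; no interactive session needs to stay connected.
Rscript simulation.R
```

You would submit this with `sbatch` and can check its status with `squeue`. Once submitted, you can close your laptop or disconnect entirely: the job waits in the queue and runs on the cluster, and its output is captured in the log file for later review.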