Cardinal Research Cluster Documentation

HPC Documentation Center

Using the Cardinal Research Cluster

Welcome to the HPC Documentation Center, where users can find information about the HPC environment and the Cardinal Research Cluster (CRC). Answers to common questions are addressed on the frequently asked questions page. The sections below provide overviews of logging in, submitting jobs, customizing the user environment, and the modules command. If you need additional assistance, please contact the CRC admin team with any questions or for additional information.

Getting Started / Logging In

After requesting an account and receiving a "Welcome to the CRC" email, users can log in to the cluster. The CRC is accessible by secure shell (SSH) from within the UofL campus network. The CRC has three login nodes for submitting jobs and compiling code, and users reach one of them by SSH. Accounts on the CRC use the same userid and password as a user's ULink account, so from another UNIX computer the CRC may be reached with "ssh <ulink userid>@<login node address>". The CRC sits behind the campus firewall, so off-campus users must either access the CRC from another computer on the campus network or use the virtual private network (VPN); users requesting a CRC account for research purposes will receive a VPN account. Upon connecting, users are prompted for their userid and then their password. After successful authentication, a command prompt appears. Typing "exit" at the command prompt disconnects you from the CRC.
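A typical login session from a campus UNIX machine might look like the following. The hostname shown is a placeholder, not the actual address of a CRC login node; use the address provided in your "Welcome to the CRC" email.

```shell
# Connect to a CRC login node (hostname below is a placeholder --
# substitute the address from your welcome email and your ULink userid).
ssh jdoe01@crc-login.example.edu

# ...enter your ULink password when prompted; a command prompt appears.

# When finished, end the session and return to your local machine.
exit
```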

Submitting and Managing Jobs

A user may run jobs on the CRC either in batch mode or interactively. In batch mode, the job is submitted to the queue and runs unattended when scheduled by the system. An interactive job presents the user with a login shell on a compute node, from which the user can enter commands or run programs interactively. Interactive jobs are scheduled by the scheduling software just like batch jobs, so depending on the system load, compute nodes may not be immediately available for interactive processing. The queuing software for submitting, modifying, or deleting jobs is TORQUE, and the scheduling software is Moab; full documentation for both is available through the hyperlinks. Briefly, batch jobs are submitted using the "qsub" command, and interactive jobs are submitted using "qsub -I".
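A batch job is usually described by a small script containing TORQUE "#PBS" directives followed by the commands to run. The script below is a sketch; the job name, program, and resource amounts are illustrative, and queue names follow those mentioned later in this document.

```shell
#!/bin/bash
# Sample TORQUE batch script (illustrative values -- adjust to your job).
#PBS -N my_job                          # job name shown in the queue
#PBS -q short                           # queue: short, long, or dev
#PBS -l nodes=1:ppn=4,walltime=01:00:00 # 1 node, 4 cores, 1 hour
#PBS -j oe                              # merge stdout and stderr

# TORQUE starts the job in your home directory; move to the
# directory the job was submitted from.
cd $PBS_O_WORKDIR

# Run the application (placeholder name).
./my_program
```

Submit the script with "qsub my_script.sh"; qsub prints a job ID that can be used with "qstat" to check status or "qdel" to cancel the job.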

Please use "qsub -I -q dev -l nodes=1:ppn=1,walltime=24:00:00" to test and debug applications and build scripts. The general form for submitting a batch job is "qsub -q {short|long|dev} -l nodes=<number of nodes>:ppn=<cores per node, 1-8>,walltime=24:00:00 <my_script>".
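Filling in the template above gives concrete submissions like the following; the script name is a placeholder, and the node and core counts are examples within the 1-8 cores-per-node range stated above.

```shell
# Interactive development session: one core on one node for 24 hours.
qsub -I -q dev -l nodes=1:ppn=1,walltime=24:00:00

# Batch job on the long queue: 2 nodes with 8 cores each for 24 hours
# (my_script.sh is a placeholder for your own job script).
qsub -q long -l nodes=2:ppn=8,walltime=24:00:00 my_script.sh
```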

Configuring Your User Environment

By default, CRC users get the Bourne-again shell (bash) when they first log in. To change to a different shell, type "chsh" at the command prompt and enter the shell you wish to use as your default. Popular options are /bin/bash, /bin/tcsh, /bin/csh, and /bin/sh. Different shells require different methods of customization, although bash and the Bourne shell (/bin/sh) are somewhat similar, as are the TENEX C shell (/bin/tcsh) and the C shell (/bin/csh). Bash and tcsh are modern, extended versions of the Bourne shell and C shell, respectively.
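Changing the default shell might look like this; chsh prompts for your password and the new shell, and the change takes effect at your next login.

```shell
# Show which shell you are currently using.
echo $SHELL

# Change the default login shell; when prompted, enter the full path
# of the new shell, e.g. /bin/tcsh.
chsh

# Log out and back in for the change to take effect.
```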


The variety of compilers and parallel environments on the CRC means that users have some choice about their default environment, and may well need to switch between environments when running different programs. The "module" command provides an easy way to customize your configuration for each environment. Documentation for the module system can be found on the module page.
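A typical module session looks like the following. The module name used here is illustrative; run "module avail" to see what is actually installed on the CRC.

```shell
# List all software environments available on the cluster.
module avail

# Load a compiler/MPI environment (name is illustrative).
module load openmpi

# Show what is currently loaded.
module list

# Remove an environment, or reset to a clean state.
module unload openmpi
module purge
```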

Contact us with any questions or for additional information.