The School of Meteorology runs its very own high-performance computing cluster. The hostname of the MCS cluster is mcs.som.nor.ou.edu; it is reachable under that hostname both on the NWC network and through the VPN/bastion host (starbuck).

The arctic cluster is available at arctic.som.nor.ou.edu.

The MCS cluster currently has around 100 cores, and we will add more in the future, especially as use increases. Arctic has 450+ cores.

To request an account, please email metit [at] ou.edu.

We use the SGE scheduler on both clusters, which works much like the batch schedulers on other clusters you may be familiar with, such as yellowstone at NCAR. To submit a job, you will first need to create a wrf.qsub file in the directory from which you want to run the job, containing the following:

 #!/bin/bash
 #
 #$ -cwd                # run the job in the directory it was submitted from
 #$ -j y                # merge stderr into stdout
 #$ -S /bin/bash        # interpret the job script with bash
 #$ -m abe              # email when the job aborts, begins, and ends
 #$ -M metit@ou.edu     # where notifications are sent; use your own address
 #

 mpirun ./wrf.exe       # launch WRF across the cores requested at submission

To submit the job, use the command qsub -pe orte XX wrf.qsub, where XX is the number of cores you wish to use.
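
For example, to submit the script above on 16 cores (16 here is just an illustration; request whatever core count your job needs):

 qsub -pe orte 16 wrf.qsub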

On the arctic cluster, make sure you run jobs from /share/raidX rather than /raidX. On the MCS cluster, jobs can simply be run from your home directory. MCS also provides /home/scratch and /home/data; the only difference is that /home/data is replicated to Sarkeys Energy Center for backup, while /home/scratch is not.
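
As a sketch of a run on arctic (raid1 and the directory names below are placeholders; use whichever /share/raidX volume you have access to):

 # hypothetical run directory on one of arctic's /share/raidX volumes
 mkdir -p /share/raid1/$USER/wrf_run
 cd /share/raid1/$USER/wrf_run
 qsub -pe orte 16 wrf.qsub    # the -cwd directive makes the job run here

Because the job script sets -cwd, submitting from /share/raidX ensures the job itself runs there.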

To check the status of the queue, use qstat -f.
To check the status of a particular job, use qstat -j X where X is the job number.

To delete a job, use qdel X, where X is the job number.
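
Putting those together, a typical monitoring session might look like this (the job number 12345 is made up for illustration):

 qstat -f           # full status of the queue
 qstat -j 12345     # detailed status for job 12345
 qdel 12345         # remove job 12345 from the queue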

Cluster status pages are at:
http://arctic.som.nor.ou.edu/ganglia/
http://mcs.som.nor.ou.edu/ganglia/