- **Partitions or Queues**
  Compute jobs are run on functional groups of nodes called partitions or queues. Each partition has different capabilities (e.g., regular memory versus high-memory nodes) and resource restrictions (e.g., time limits on jobs). Nodes may appear in several partitions.
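
  As a quick sketch (assuming you are logged in to a Ceres login node), the standard SLURM `sinfo` command lists the partitions available to you along with their time limits and node states:

  ```bash
  # List all partitions visible to you, with their time limits and node states
  sinfo

  # One line per partition with a custom format:
  # partition name, max walltime, number of nodes, node state
  sinfo --format="%P %l %D %t"
  ```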
- **Open OnDemand Interface**
  Open OnDemand is an intuitive, innovative, and interactive interface to remote computing resources. The key benefit for SCINet users is that they can use any web browser, including browsers on a mobile phone, to access Ceres.

  Several interactive apps can be run in Open OnDemand, including Jupyter, RStudio Server, Geneious, CLC Genomics Workbench, and more. The Desktop app allows a user to run any GUI software.

  If you are using Atlas Open OnDemand, visit the Atlas Open OnDemand Guide for more information. To access Open OnDemand on the Ceres cluster, go to Ceres Open OnDemand.
- **Linux Command-Line Interface**
  List of basic CLI commands.
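
  For orientation, a minimal sketch of the kind of basic commands covered there (standard Linux commands, not specific to Ceres; the directory path below is a hypothetical example):

  ```bash
  pwd                      # print the current working directory
  ls -lh                   # list files with sizes in human-readable form
  cd /project/my_project   # change directory (hypothetical example path)
  cp input.txt backup.txt  # copy a file
  man ls                   # read the manual page for a command
  ```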
- **SLURM Resource Manager**
  Ceres uses the Simple Linux Utility for Resource Management (SLURM) to submit interactive and batch jobs to the compute nodes. Requested resources can be specified either within the job script or using options with the `salloc`, `srun`, or `sbatch` commands.
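
  A minimal sketch of the three submission commands (the resource values and the script name `my_job.sh` are illustrative assumptions, not Ceres recommendations):

  ```bash
  # Start an interactive session on a compute node (1 task for 1 hour)
  salloc --ntasks=1 --time=01:00:00

  # Run a single command on a compute node without a job script
  srun --ntasks=1 --time=00:10:00 hostname

  # Submit a batch job script; resources can be given as options here
  # or as #SBATCH directives inside the script
  sbatch --ntasks=4 --time=02:00:00 my_job.sh
  ```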
- **Resource Allocation**
  Allocation of cores, memory, and time on Ceres.
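
  As a hedged example of requesting cores, memory, and time through `#SBATCH` directives in a job script (the partition name and the workload script `my_analysis.sh` are assumptions for illustration; check `sinfo` and the Ceres documentation for actual partition names and limits):

  ```bash
  #!/bin/bash
  #SBATCH --job-name=example     # job name shown in squeue output
  #SBATCH --partition=short      # partition/queue name (assumed; check sinfo)
  #SBATCH --ntasks=1             # number of tasks (processes)
  #SBATCH --cpus-per-task=4      # cores per task
  #SBATCH --mem=16G              # memory for the whole job
  #SBATCH --time=04:00:00        # walltime limit (HH:MM:SS)

  srun my_analysis.sh            # my_analysis.sh is a hypothetical workload
  ```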
- **Scratch Space**
  All compute nodes have 1.5 TB of fast, SSD-backed local storage for temporary data files. This local scratch space is significantly faster and supports more input/output operations per second (IOPS) than the mounted filesystems on which the home and project directories reside.
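
  A sketch of the usual pattern for using local scratch inside a job, assuming the job-specific temporary directory is exposed through the `$TMPDIR` environment variable (check the Ceres storage documentation for the exact path; the project path and `my_tool` program are hypothetical):

  ```bash
  # Copy input data from slower project storage to fast local scratch
  cp /project/my_project/input.fastq "$TMPDIR"/

  # Run the I/O-heavy work against the local copy
  cd "$TMPDIR"
  my_tool input.fastq > results.txt    # my_tool is a hypothetical program

  # Copy results back before the job ends; local scratch is typically
  # cleaned up once the job finishes
  cp results.txt /project/my_project/
  ```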