
Partitions

The CEOAS partitions are summarized as follows:

| partition | description |
| --- | --- |
| ceoas | General-use partition for CEOAS researchers containing college-owned x86 resources. |
| ceoas-arm | General-use partition for CEOAS researchers containing college-owned ARM servers. |
| ceoas-gpu | GPU partition for CEOAS researchers containing college-owned x86 GPU servers. |
| ceoas-interact | Partition dedicated to interactive use. Individual users can have a total of 8 cores and 128GB of RAM allocated on the partition across all jobs. |
| ceoas-*-lowprio | Low-priority partitions containing PI-owned machines for general use when not in use by the owning PI. Jobs launched by the owner take precedence and will cancel currently running jobs on the node(s) if needed. |
| PI partitions | Partitions owned by individual PIs. Access to one of these partitions is granted with the permission of the owner(s). |
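
Assuming the cluster's scheduler is Slurm (the partition and preemption terminology on this page suggests it), the current state and availability of these partitions can be checked with `sinfo`:

```bash
# List all partitions visible to you, with node counts and states
sinfo

# Limit the output to the general-use CEOAS partitions named above
sinfo --partition=ceoas,ceoas-arm,ceoas-gpu,ceoas-interact
```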

All partitions (with caveats for the lowprio and ceoas-interact partitions, described below) are allocated on a first-come, first-served basis, though we ask that users request resources based on actual need. For example, if a job requires four cores and 16GB of RAM, please don't exclusively reserve a 32-core/256GB node, and likewise please don't use a GPU node for CPU-only jobs.
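
As a concrete illustration of requesting only what a job needs, here is a minimal batch-script sketch for the four-core/16GB example, again assuming the scheduler is Slurm; the job name, walltime, and program are placeholders.

```bash
#!/bin/bash
#SBATCH --partition=ceoas        # general-use CEOAS partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # request only the four cores the job needs
#SBATCH --mem=16G                # ...and only 16GB of RAM, not a whole node
#SBATCH --time=04:00:00          # placeholder walltime
#SBATCH --job-name=my_analysis   # placeholder job name

# ./my_program is a placeholder for your actual executable
srun ./my_program
```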

The lowprio partitions are designed to give college researchers access to PI-owned nodes, which may have higher performance than the college nodes or features not otherwise available. Each PI node is reachable through both the most relevant lowprio partition and a higher-priority machine-specific partition. Jobs submitted through the same partition are still allocated on a first-come, first-served basis; however, jobs submitted via the machine-specific partition will preempt and cancel jobs submitted via the lowprio partition if there aren't enough resources otherwise available. It is highly recommended that jobs submitted to lowprio partitions either use checkpoint/resume functionality or are parallel batch jobs where each job is relatively short, as in the sketch below: a hundred jobs that each take ten minutes are much better than a single job that takes a thousand minutes.
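
A minimal sketch of that pattern, again assuming Slurm: a job array of a hundred short, independent tasks submitted to a lowprio partition. The partition name ceoas-example-lowprio and the process_chunk.sh script are hypothetical placeholders, and whether a preempted task is automatically re-run depends on the cluster's preemption policy.

```bash
#!/bin/bash
#SBATCH --partition=ceoas-example-lowprio   # hypothetical lowprio partition name
#SBATCH --array=1-100                       # 100 short, independent tasks
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=00:15:00                     # each task is short, so little work is lost to preemption
#SBATCH --requeue                           # ask to re-run preempted tasks, if the cluster requeues rather than only cancels

# process_chunk.sh is a placeholder; SLURM_ARRAY_TASK_ID selects this task's chunk of work
srun ./process_chunk.sh "${SLURM_ARRAY_TASK_ID}"
```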

The ceoas-interact partition is a special partition providing dedicated resources for small runs and interactive jobs. This gives researchers a pool of resources for things like debugging and small-scale test runs even when there is heavy contention for the other nodes. Any individual user is restricted to a total of 8 cores and 128GB of memory across all of their jobs on the partition. This allocation could go to a single job or be split across multiple jobs (such as two jobs, one using 6 cores/112GB of memory and the other using 2 cores/16GB of memory). Jobs executed through the JupyterHub server are also allocated on this partition and count towards a user's total usage.
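
For interactive work, and again assuming Slurm, a session on ceoas-interact can be started with srun; the 2-core/16GB request below is just an example that stays within the 8-core/128GB per-user cap.

```bash
# Start an interactive shell on the ceoas-interact partition
# (2 cores and 16GB here; all of your jobs on this partition combined
#  must stay within the 8-core / 128GB per-user limit)
srun --partition=ceoas-interact --cpus-per-task=2 --mem=16G --pty bash
```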