These partitions and resources are available to all Comet users. Beyond our Acceptable Use Policy, there are no restrictions on the use of these resources.
Partition | Node Types | GPU | Max Resources | Default Runtime | Maximum Runtime | Default Memory |
---|---|---|---|---|---|---|
short_free | Standard.b | No | | 10 minutes | 30 minutes | 1GB per core |
default_free | Standard.b | No | | 24 hours | 48 hours | 1GB per core |
long_free | Standard.b | No | | 4 days | 14 days | 1GB per core |
highmem_free | Large.b | No | | 24 hours | 5 days | 4GB per core |
gpu-s_free | GPU-S | Yes | | 24 hours | 14 days | 2GB per core |
interactive_free | Standard.b | No | | 2 hours | 8 hours | 1GB per core |
interactive-gpu_free | GPU-S | Yes | | 2 hours | 8 hours | 2GB per core |
These partitions are available to all projects that have allocated funds to their Comet HPC Project accounts. If you have not allocated funds to your HPC Project, or your balance is negative, you will not be able to submit jobs to these partitions.
For further details on paid resource types, see our Billing & Project Funds policy page.
Partition | Node Types | GPU | Max Resources | Default Runtime | Maximum Runtime | Default Memory |
---|---|---|---|---|---|---|
short_paid | Standard.b | No | | 10 minutes | 30 minutes | 1GB per core |
default_paid | Standard.b | No | | 24 hours | 48 hours | 1GB per core |
long_paid | Standard.b | No | | 4 days | 14 days | 1GB per core |
highmem_paid | Large.b | No | | 24 hours | 5 days | 4GB per core |
gpu-s_paid | GPU-S | Yes | | 24 hours | 14 days | 2GB per core |
gpu-l_paid | GPU-L | Yes | | 24 hours | 14 days | 2GB per core |
interactive_paid | Standard.b | No | | 2 hours | 8 hours | 1GB per core |
interactive-gpu_paid | GPU-S | Yes | | 2 hours | 8 hours | 2GB per core |
low-latency_paid | Standard.Lowlatency | No | 1024 cores | 24 hours | 4 days | 1GB per core |
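To confirm the limits currently configured for any of these partitions, you can query Slurm directly from a login node; a minimal sketch, using default_paid as an example partition name taken from the table above:

```bash
# Show the time limit, node count and state for one partition
sinfo --partition=default_paid --format="%P %l %D %t"

# Show the full partition configuration held by Slurm,
# including default/maximum runtimes and default memory per CPU
scontrol show partition default_paid
```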
The short partition is intended for quick tests, proof-of-concept runs, debugging and other tasks that can be completed quickly. It is not intended for running entire compute jobs.
The default partition has the largest number of general CPU resources in the Comet HPC facility and is intended to run the bulk of compute workloads that do not require multi-node MPI / low-latency networking or GPUs.
Default runtime is set to 24 hours and default memory allocation is set to 1GB per allocated CPU core. There is no defined maximum memory allocation - this is limited by the size of the Standard.b nodes it is built on.
It is your responsibility to determine the most appropriate runtime (in the range of 0-48 hours), the required number of CPU cores, and the memory allocation for your specific application.
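As a rough sketch of what such a request might look like in a batch script (the job name, core count, memory and program name below are placeholders; swap the partition suffix between _free and _paid to match your access):

```bash
#!/bin/bash
#SBATCH --job-name=my_analysis      # placeholder job name
#SBATCH --partition=default_paid    # or default_free
#SBATCH --time=12:00:00             # must be within the 0-48 hour range
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8           # number of CPU cores for your application
#SBATCH --mem-per-cpu=2G            # override the 1GB-per-core default if required

./my_program                        # placeholder for your application
```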
The long partition has the same hardware resources as the default partition, since it is built on the same number and type of nodes (Standard.b); however, runtimes are extended: the default runtime is 4 days and the maximum is 14 days.
The highmem partition allows jobs that need a larger amount of memory to run. Note that unlike the Standard.a compute nodes of Rocket (128GB), the Comet Standard.b compute nodes are substantially larger (1.1TB), so you may not need the Large.b compute nodes (1.5TB) for many large jobs.
Note that the Large.b compute nodes are also connected by a faster network; if you need to run large processes across multiple nodes simultaneously via MPI (and they do not fit on the low-latency node types), then highmem may be an option for you.
Jobs submitted to the highmem partition can run for longer (up to 5 days) than on the default partition (2 days), though not as long as on the long partition (14 days).
Consider this partition if your workload needs more than 1TB of memory on a single node; otherwise the default or long partitions may be more suitable for you.
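A minimal sketch of a highmem submission for a single-node job needing more than 1TB of memory (the memory figure, core count and program name are illustrative assumptions only):

```bash
#!/bin/bash
#SBATCH --partition=highmem_paid    # or highmem_free
#SBATCH --time=3-00:00:00           # within the 5-day maximum
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32          # illustrative core count
#SBATCH --mem=1200G                 # explicit memory request above 1TB

./my_large_memory_job               # placeholder for your application
```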
The gpu-s partition uses the GPU-S node type on Comet. These nodes are suitable for most types of GPU-accelerated compute, though please check whether any of your CUDA/OpenCL code paths require double-precision (FP64) capability; consult the Nvidia L40S datasheet, as these cards are restricted in that mode. The nodes hosting the L40S cards are also connected via faster networking, just like highmem and low-latency, so you can take advantage of faster inter-node communication if your job spans more than one node, as well as faster IO speeds to and from the main NOBACKUP storage.
Any jobs run on the gpu-s partition are costed by the number of GPU cards you request: a job requesting two cards costs twice as much as one using a single card for the same amount of time. If you use our paid partitions, take care when requesting resources via Slurm that you request and use only what you actually need. Use of the unpaid gpu-s partition carries no cost for GPU cards, but the number available is strictly limited.
The default runtime is 24 hours, but you may request up to a maximum of 14 days. The use of GPU resources is closely monitored.
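For example, a single-card gpu-s job could be requested along the following lines; this sketch assumes the cards are exposed to Slurm as a generic resource named gpu, so check the site configuration if the request is rejected:

```bash
#!/bin/bash
#SBATCH --partition=gpu-s_paid      # or gpu-s_free
#SBATCH --time=1-00:00:00           # default is 24 hours, maximum 14 days
#SBATCH --gres=gpu:1                # one L40S card; two cards cost twice as much
#SBATCH --cpus-per-task=8           # illustrative core count
#SBATCH --mem-per-cpu=4G            # override the 2GB-per-core default if required

./my_gpu_program                    # placeholder for your application
```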
The gpu-l partition uses a single node type, GPU-L, which contains a very small number of Nvidia H100 cards. These cards represent some of the most powerful GPU compute options currently available; see the Nvidia H100 datasheet for further information. This node type is also connected to NOBACKUP via faster networking, as with gpu-s, highmem and low-latency, to take advantage of faster IO read/write speeds.
As with gpu-s, a job requesting two cards costs twice as much as one using a single card for the same amount of time. Take care when requesting resources via Slurm that you request and use only what you actually need; this partition represents the most costly use of your HPC Project balance. Be certain of your job parameters before you launch a multi-day compute run.
There is no unpaid access to the gpu-l partition. All users must be members of at least one HPC Project with a positive balance.
The default runtime, as with gpu-s, is 24 hours, but a maximum of 14 days may be requested.
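A comparable sketch for gpu-l, under the same assumption about the GRES name as the gpu-s example above; double-check your parameters before committing to a long run on this partition, as it is the most expensive:

```bash
#!/bin/bash
#SBATCH --partition=gpu-l_paid      # paid access only
#SBATCH --time=2-00:00:00           # default is 24 hours, maximum 14 days
#SBATCH --gres=gpu:1                # one H100 card; cost scales with the number requested
#SBATCH --cpus-per-task=16          # illustrative core count
#SBATCH --mem-per-cpu=4G            # override the 2GB-per-core default if required

./my_gpu_program                    # placeholder for your application
```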