This is an example of a very large Slurm job that still runs on a single node. A single-node Slurm job can use up to 256 cores and 1.5 TB of RAM.
--partition=highmem_free
--cpus-per-task=256
--mem=512G
--time=10:00:00
The core-hours used by a job can be estimated as cpus_per_task * time_in_hours.
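As a worked example (assuming the formula is applied to the resources requested above):

256 cores * 10 hours = 2,560 core-hours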
#!/bin/bash

#SBATCH --account=myhpcproject
#SBATCH --partition=highmem_free
#SBATCH --cpus-per-task=256
#SBATCH --mem=512G
#SBATCH --time=10:00:00

# Log when we started
echo "Job started at: `date`"

# Show which node(s) we are running on
HOSTNAMES=`scontrol show hostnames $SLURM_JOB_NODELIST`
echo "Job is running on: $HOSTNAMES"

# Add any 'module load' commands here

# Add your custom commands you want to run here

# Log when we finished
echo "Job finished at: `date`"
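Assuming the script above is saved as large_job.sh (a placeholder filename), it can be submitted and then monitored as a minimal sketch like this:

# Submit the job script to the Slurm scheduler
sbatch large_job.sh

# Check the state of your jobs in the queue
squeue -u $USER

By default, Slurm writes the job's output to a file named slurm-<jobid>.out in the directory you submitted from.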
Jobs submitted to the free highmem_free partition may wait longer until sufficient resources become free. If your HPC Project has funds available, consider applying for Funded access and submitting to the highmem_paid partition instead, where your job can be scheduled on a much larger set of compute nodes than is available to free users.