====== Large Slurm Job ======
This is an example of a very large Slurm job that still runs on //a single node//. A single-node Slurm job can use up to **256** cores and **1.5TB** of RAM.
* Uses the HPC Project group **myhpcproject**; change this to your //real// HPC Project name
* Submitted to the //free// **highmem_free** partition (''--partition=highmem_free'')
* Requests **256 CPU** cores (''--cpus-per-task=256''); this is the largest single-node job possible
* Requests **512GB** of RAM (''--mem=512G'')
* Requests up to **10 hours** of runtime (''--time=10:00:00''), for a maximum //possible// total of **2560 Compute Hours** (''cpus_per_task * time_in_hours'', i.e. 256 × 10)
* Prints the time the job started to the log file
* Prints the name of the compute node(s) the job is running on
* Prints the time the job finished to the log file
<code bash>
#!/bin/bash
#SBATCH --account=myhpcproject
#SBATCH --partition=highmem_free
#SBATCH --cpus-per-task=256
#SBATCH --mem=512G
#SBATCH --time=10:00:00
# Log when we started
echo "Job started at: `date`"
# Show which node(s) we are running on
HOSTNAMES=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
echo "Job is running on: $HOSTNAMES"
# Add any 'module load' commands here
# Add your custom commands you want to run here
# Log when we finished
echo "Job finished at: `date`"
Jobs submitted to the //free// **highmem_free** partition may need to wait longer until sufficient resources become free. If your HPC Project has funds available, consider applying for [[started:paying|Funded]] access and submitting to the **highmem_paid** partition instead, where your job can be scheduled on a much larger set of [[https://hpc.researchcomputing.ncl.ac.uk/calc/show/2/7|available compute nodes]] than is available to free users.
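If you want to gauge how busy these partitions are before submitting, ''sinfo'' can show the node states per partition. The partition names below match those used in this example.

<code bash>
# Show node availability for the free and paid high-memory partitions
sinfo --partition=highmem_free,highmem_paid
</code>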
----
[[started:index|Back to Getting Started]]