This is the basis for most common sequential Slurm jobs and can easily be extended by adjusting the CPU, RAM, and runtime requests. Items to note are:
--partition=default_free (the partition to submit the job to)
--cpus-per-task=1 (1 CPU core)
--mem=1G (1 GB of RAM)
--time=10:00:00 (10 hours of walltime), for a maximum possible total of 10 Compute Hours (cpus_per_task * time_in_hours)
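These resource requests can also be overridden on the sbatch command line at submission time, which is a quick way to rescale the job without editing the script below (the filename serial_job.sh is only an illustrative example):

# Request 4 cores for 2.5 hours instead: 4 * 2.5 = the same 10 Compute Hours
sbatch --cpus-per-task=4 --time=02:30:00 serial_job.sh

The full batch script follows.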
#!/bin/bash
#SBATCH --account=myhpcproject
#SBATCH --partition=default_free
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --time=10:00:00
# Log when we started
echo "Job started at: `date`"
# Show which node(s) we are running on
HOSTNAMES=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
echo "Job is running on: $HOSTNAMES"
# Add any 'module load' commands here
# Add the commands you want to run here
# Log when we finished
echo "Job finished at: `date`"