====== Simple Sequential Slurm Job ======

This is the basis of most common, sequential Slurm jobs and can be extended easily by adjusting the CPU, RAM and runtime resources.

Items to note are:

  * Uses the HPC Project group **myhpcproject**; change this to your //real// HPC Project name
  * Submitted to the //free// **default_free** partition (''--partition=default_free'')
  * Requests **1 CPU** (''--cpus-per-task=1'')
  * Requests **1GB** of RAM (''--mem=1G'')
  * Requests up to **10 hours** of runtime (''--time=10:00:00''), for a maximum //possible// total of **10 Compute Hours** (''cpus_per_task * time_in_hours'', i.e. 1 x 10)
  * Prints the time the job started to the log file
  * Prints the name of the compute node(s) it is running on
  * Prints the time the job finished to the log file

<code bash>
#!/bin/bash
#SBATCH --account=myhpcproject
#SBATCH --partition=default_free
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --time=10:00:00

# Log when we started
echo "Job started at: $(date)"

# Show which node(s) we are running on
HOSTNAMES=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
echo "Job is running on: $HOSTNAMES"

# Add any 'module load' commands here

# Add the custom commands you want to run here

# Log when we finished
echo "Job finished at: $(date)"
</code>
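To run the job, save the script to a file and submit it with ''sbatch''. The commands below are a minimal sketch of submitting and monitoring the job; the filename ''simple_job.sh'' is only an example, and the job ID printed by ''sbatch'' will differ on your system.

<code bash>
# Submit the job script (the filename here is just an example)
sbatch simple_job.sh

# List your own pending and running jobs
squeue -u $USER

# Show accounting information for a job once it has finished
# (replace <jobid> with the ID printed by sbatch)
sacct -j <jobid>
</code>

By default, the job's output (including the start and finish messages printed by the script above) is written to a file named ''slurm-<jobid>.out'' in the directory the job was submitted from.

----

[[started:index|Back to Getting Started]]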