
Large Slurm Job

This is an example of a very large Slurm job that still runs on a single node. A single-node Slurm job can use up to 256 cores and 1.5TB of RAM.
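
To see what the nodes in a partition actually offer, you can query Slurm from the login node. A minimal check, assuming sinfo is available and using the highmem_free partition from this example:

# Show node names, CPU count and memory (in MB) for the highmem_free partition
sinfo --partition=highmem_free -o "%N %c %m"

The example job script below: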

  • Uses the HPC Project group myhpcproject; change this to your real HPC Project name
  • Submitted to the free highmem_free partition (--partition=highmem_free)
  • Requests 256 CPU cores (--cpus-per-task=256); this is the largest single-node job possible
  • Requests 512GB of RAM (--mem=512G)
  • Requests up to 10 hours of runtime (--time=10:00:00), for a maximum possible total of 2560 Compute Hours (cpus_per_task * time_in_hours)
  • Prints the time the job started to the log file
  • Prints the name of the compute node(s) it will run on
  • Prints the time the job finished to the log file

#!/bin/bash

#SBATCH --account=myhpcproject
#SBATCH --partition=highmem_free
#SBATCH --cpus-per-task=256
#SBATCH --mem=512G
#SBATCH --time=10:00:00

# Log when we started
echo "Job started at: $(date)"

# Show which node(s) we are running on
HOSTNAMES=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
echo "Job is running on: $HOSTNAMES"

# Add any 'module load' commands here

# Add your custom commands you want to run here

# Log when we finished
echo "Job finished at: $(date)"
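
Once saved to a file, the script can be submitted and monitored with standard Slurm commands; the file name large_job.sh and the <jobid> placeholder below are just illustrative:

# Submit the job script; sbatch prints the job ID on success
sbatch large_job.sh

# Check whether the job is pending or running
squeue -u $USER

# After the job has finished, review the resources it actually used
sacct -j <jobid> --format=JobID,Elapsed,AllocCPUS,MaxRSS

The 2560 Compute Hours figure above is simply 256 * 10, the maximum possible if the job runs for its full 10-hour time limit.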

Jobs submitted to the free highmem_free partition may wait longer for sufficient resources to become free. If your HPC Project has funds available, consider applying for Funded access and then submitting to the highmem_paid partition instead, where your job can be scheduled on a much larger set of compute nodes than is available to free users.
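
If you do move to Funded access, the only change needed in the script above is the partition directive; a minimal sketch, keeping the same resource requests:

# Use the paid high-memory partition instead of the free one
#SBATCH --partition=highmem_paid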


Back to Getting Started
