HPC Support

How can I check what resources my finished jobs used?

Knowing what resources your jobs actually used is key to requesting the optimal resources for future jobs. A quick and easy option is seff:

$ seff 1234567
which outputs:
Job ID: 1234567
Cluster: comet
User/Group: user/cometloginaccess
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 4
CPU Utilized: 00:00:40
CPU Efficiency: 55.56% of 00:01:12 core-walltime
Job Wall-clock time: 00:00:18
Memory Utilized: 185.09 MB
Memory Efficiency: 1.13% of 16.00 GB

You can check individual job steps with the syntax seff job-id.job-step, e.g. seff 1234567.2.
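The efficiency figures in the seff output above follow directly from the other fields: CPU efficiency is CPU time used divided by core-walltime (cores × wall-clock time), and memory efficiency is memory used over memory requested. A minimal sketch reproducing the sample job's numbers (the formulas are an assumption about how seff computes them, based on the sample output):

```python
def hms_to_seconds(hms: str) -> int:
    """Convert an HH:MM:SS string to seconds."""
    h, m, s = (int(p) for p in hms.split(":"))
    return h * 3600 + m * 60 + s

cores = 4
wall = hms_to_seconds("00:00:18")      # Job Wall-clock time
cpu_used = hms_to_seconds("00:00:40")  # CPU Utilized
core_walltime = cores * wall           # 72 s, i.e. 00:01:12

cpu_eff = 100 * cpu_used / core_walltime
mem_eff = 100 * 185.09 / (16.00 * 1024)  # MB used over MB requested

print(f"CPU Efficiency: {cpu_eff:.2f}%")     # 55.56%
print(f"Memory Efficiency: {mem_eff:.2f}%")  # 1.13%
```

If CPU efficiency is consistently low, request fewer cores; if memory efficiency is very low (as here), request less memory so your jobs schedule faster.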

You can dig deeper with the much more flexible sacct command.

Find the memory used by one job with ID <job ID>:

sacct -j <job ID> --format JobID,start,elapsed,state,alloccpus,ReqMem,MaxRSS,AveRSS --units=g
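If you want to post-process sacct output in a script, adding --parsable2 makes the fields pipe-delimited and easy to split. A sketch that picks out the peak MaxRSS across all job steps; the sample table below is illustrative, not from a real job, and assumes --units=g so memory values carry a G suffix:

```python
# Illustrative sacct --parsable2 output (MaxRSS is reported per step,
# so the parent job row has an empty MaxRSS field).
SAMPLE = """JobID|Elapsed|AllocCPUS|ReqMem|MaxRSS
1234567|00:00:18|4|16G|
1234567.batch|00:00:18|4||0.18G
1234567.2|00:00:10|4||0.15G"""

def peak_rss_gb(table: str) -> float:
    """Return the largest MaxRSS (in GB) across all job steps."""
    lines = table.strip().splitlines()
    idx = lines[0].split("|").index("MaxRSS")
    peak = 0.0
    for line in lines[1:]:
        field = line.split("|")[idx]
        if field.endswith("G") and field[:-1]:
            peak = max(peak, float(field[:-1]))
    return peak

print(peak_rss_gb(SAMPLE))  # 0.18
```

The peak step's MaxRSS, plus some headroom, is a sensible memory request for the next run of the same workload.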

Find out which nodes your jobs ran on:

sacct --format=User,JobID,partition,state,time,elapsed,nodelist
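The nodelist column is printed in Slurm's compressed form (e.g. comet[01-03,07]). Slurm's own scontrol show hostnames command expands it for you; as a sketch of what that expansion does, here is a minimal parser for a single bracketed group (the node names are hypothetical):

```python
import re

def expand_nodelist(nodelist: str) -> list[str]:
    """Expand a compressed Slurm nodelist such as 'comet[01-03,07]'.
    Minimal sketch for one bracketed group; use 'scontrol show
    hostnames' for the general case."""
    m = re.fullmatch(r"([^\[]+)\[([^\]]+)\]", nodelist)
    if not m:
        return [nodelist]  # already a plain hostname
    prefix, body = m.groups()
    hosts = []
    for part in body.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            width = len(lo)  # preserve zero-padding, e.g. '01'
            hosts.extend(f"{prefix}{i:0{width}d}"
                         for i in range(int(lo), int(hi) + 1))
        else:
            hosts.append(prefix + part)
    return hosts

print(expand_nodelist("comet[01-03,07]"))
# ['comet01', 'comet02', 'comet03', 'comet07']
```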


Back to FAQ


Developed and operated by
Research Software Engineering
Copyright © Newcastle University