Knowing what resources your jobs actually used is key to requesting the right resources for future jobs. A quick and easy option is seff:
$ seff 1234567

which outputs:

Job ID: 1234567
Cluster: comet
User/Group: user/cometloginaccess
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 4
CPU Utilized: 00:00:40
CPU Efficiency: 55.56% of 00:01:12 core-walltime
Job Wall-clock time: 00:00:18
Memory Utilized: 185.09 MB
Memory Efficiency: 1.13% of 16.00 GB
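The figures in this report are related: core-walltime is allocated cores times wall-clock time, and CPU Efficiency is CPU Utilized divided by core-walltime. A minimal sketch checking the numbers above (times converted to seconds):

```shell
cpu_used=40   # CPU Utilized (00:00:40, in seconds)
cores=4       # Cores per node x Nodes
wall=18       # Job Wall-clock time (00:00:18, in seconds)
# core-walltime = cores * wall = 72 s (00:01:12); efficiency = 40 / 72
awk -v u="$cpu_used" -v c="$cores" -v w="$wall" \
    'BEGIN { printf "CPU Efficiency: %.2f%% of %d s core-walltime\n", 100 * u / (c * w), c * w }'
```

This reproduces the 55.56% figure reported by seff.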
You can check individual job steps with the syntax seff <job ID>.<job step>, e.g. seff 1234567.2.
You can dig deeper with the much more flexible sacct command.
Find the memory used by one job with ID <job ID>:
sacct -j <job ID> --format=JobID,Start,Elapsed,State,AllocCPUS,ReqMem,MaxRSS,AveRSS --units=g
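The MaxRSS column is a good basis for the next job's memory request. A minimal sketch, assuming a hypothetical observed MaxRSS of 0.18 GB (from --units=g output) and adding roughly 20% headroom:

```shell
# Hypothetical MaxRSS in GB, as read from the sacct MaxRSS column.
maxrss_gb=0.18
# Add ~20% headroom, round up to a whole GB, and apply a 1 GB floor.
awk -v m="$maxrss_gb" 'BEGIN {
    r = m * 1.2
    if (r < 1) r = 1
    printf "--mem=%dG\n", (r == int(r)) ? r : int(r) + 1
}'
```

The printed value can be passed to sbatch as the --mem option for the next run.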
Find out which nodes your recent jobs ran on (without -j, sacct reports your jobs from the current day):
sacct --format=User,JobID,partition,state,time,elapsed,nodelist
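For post-processing, sacct's --parsable2 and --noheader flags print bare pipe-delimited rows that are easy to feed to awk. A sketch, using a stand-in line in place of real sacct output:

```shell
# The echoed line stands in for `sacct --parsable2 --noheader --format=JobID,NodeList`;
# the job ID and node name are made-up examples.
echo '1234567|comet-01-23' |
    awk -F'|' '{ print "job " $1 " ran on " $2 }'
```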