Comet, for existing Rocket users

Key Changes

For existing Rocket users, there are some major differences in how Comet operates and the features it offers. Almost every aspect has been improved in some way, but there are a few key points you need to know before you log on and start using the facility.

About the Facility

  • You must pass the HPC Driving Test before you can apply for a Comet account.
  • Comet has both free-to-access and premium (priority, paid) resources.
  • You must keep the information about your HPC project(s) up to date and accurate in order to retain access to the facility; abandoned projects, or those without regular updates (e.g. to titles and descriptions), will be disabled and/or archived.
  • All aspects of your HPC project are managed here on this website, including applying for a new project, maintaining titles and descriptions, managing project members, billing, and job resource reports.

Slurm, Jobs & Partitions

  • The Slurm partition names have changed from Rocket. Please consult the new Comet partitions and their intended uses.
  • You must now include the account code of the project you are working in when submitting an sbatch or srun job (see the sketch after this list).
  • Unlike Rocket, Comet supports job checkpointing and resuming.
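
Below is a minimal sketch of a batch script including the now-required account code. The account code proj-example and partition name standard are placeholders; substitute your own project's code and one of the Comet partitions.

    #!/bin/bash
    # Minimal Comet batch script sketch.
    # "proj-example" and "standard" are placeholders - use your own
    # project's account code and a real Comet partition name.
    #SBATCH --job-name=example
    #SBATCH --account=proj-example
    #SBATCH --partition=standard
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00

    srun hostname

The account code must also be supplied for interactive work, e.g. srun --account=proj-example --partition=standard --pty bash.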

Hardware

  • Low-latency InfiniBand connectivity is not present on all node/partition types. If you need InfiniBand, consult the Comet partitions and resources page, or our detailed hardware specifications page.
  • Comet has a substantial increase in GPU resources. Choose the hardware most relevant to your use case: GPU-S or GPU-L.
  • All Comet nodes use the same CPU model, so dips and spikes in general CPU performance due to CPU differences should no longer affect your jobs.
  • All Comet nodes are significantly larger than Rocket's: 256 CPU cores per node compared to just 44 on Rocket's standard nodes. You may no longer need to scale your jobs over multiple nodes.
  • The standard nodes of Comet have significantly more memory than Rocket's: 1.1TB compared to just 128GB on Rocket's standard nodes. Consider this when scheduling your jobs (see the sketch after this list).
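
To illustrate the last two points, here is a hedged sketch of resource requests that keep a formerly multi-node job on a single Comet node (the account and partition names are placeholders):

    #SBATCH --account=proj-example   # placeholder account code
    #SBATCH --partition=standard     # placeholder partition name
    #SBATCH --nodes=1                # one Comet node has up to 256 cores
    #SBATCH --ntasks=128             # fits comfortably on a single node
    #SBATCH --mem=500G               # well within the ~1.1TB of a standard node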

Filesystems

  • There is no /nobackup/USERNAME directory; all data on Lustre should be kept in a group or project folder under /nobackup/proj/GROUPNAME (see the sketch after this list).
  • NFS $HOME filesystem quotas are doubled compared to Rocket.
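
A hedged sketch of working with the shared project area, assuming GROUPNAME is your project group and /nobackup is the Lustre mount point (the exact quota arrangement on Comet may differ):

    # Keep working data in the shared project area, not a personal directory
    cd /nobackup/proj/GROUPNAME

    # Check group usage on the Lustre filesystem
    lfs quota -g GROUPNAME /nobackup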

Software

  • We no longer use the Intel compilers or runtime libraries. Use either GCC or AOCC instead.
  • We no longer use the Intel MPI runtime. Use OpenMPI instead (see the sketch after this list).
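
A minimal sketch of rebuilding an MPI application with GCC and OpenMPI; the module names below are placeholders, so check module avail on Comet for the exact names and versions:

    # Load a GCC toolchain and OpenMPI (placeholder module names)
    module load gcc openmpi

    # Recompile with the GNU/OpenMPI wrappers instead of the Intel ones
    mpicc -O2 -o my_app my_app.c     # previously: mpiicc ...
    mpirun -np 4 ./my_app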

Containers

  • Singularity is no longer supported. Use Apptainer instead (see the sketch after this list).
  • Apptainer containers can be created and run on login nodes without any special privileges.
  • Apptainer containers can be run on all compute node types.
  • Podman containers can be created and run on login nodes without any special privileges.
  • Podman containers can be run on all compute node types.
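
A hedged sketch of building and running an Apptainer container; the image and commands are illustrative only, and the account and partition names are placeholders:

    # Build an image from a Docker Hub base, directly on a login node
    # (no special privileges required)
    apptainer build mytools.sif docker://ubuntu:22.04

    # Run a command inside the container on the login node
    apptainer exec mytools.sif cat /etc/os-release

    # Run the same container inside a job on a compute node
    srun --account=proj-example --partition=standard \
         apptainer exec mytools.sif ./my_analysis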

Interactive Applications / X11

  • It is no longer recommended to use X11 over SSH. Use Open OnDemand to get a Linux desktop instead.
  • RStudio can be run as a graphical application in your browser via Open OnDemand
  • Jupyter Lab can be run as a graphical application in your browser via Open OnDemand
  • Matlab can be run as a graphical application in your browser via Open OnDemand
