Table of Contents

HPC News & Changes

This page is intended to act as a timeline of events for the Comet HPC project, as well as major changes in functionality or policies relating to the system.

Newsletters

Starting March 2026, we also publish a monthly summary newsletter to the HPC-Users email distribution list (to which all users of Comet are subscribed):


(26th) March 2026 - SAIGE now available on Comet

SAIGE is an R package, developed with Rcpp, for genome-wide association tests in large-scale data sets and biobanks. It is installed on Comet as a ready-to-run container environment; no external dependencies or prior R installation or libraries are required.


(26th) March 2026 - ldsc.py installed

ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. The software originally required Python 2.x; a more recent fork of the source code updated it to Python 3.9. It still has issues with newer Python runtimes, however, and depends on a curious set of older versions of numpy, matplotlib and similar libraries at runtime.

We have built a tiny ldsc container environment to encapsulate ldsc and all of its dependencies so that each user does not need to build their own virtual environment to run it.

The sample data files used by ldsc are also installed centrally at /nobackup/shared/data/ldsc. If you have similar data files used by ldsc which would be useful to others, please let us know and we can move them to this area, thus excluding them from the Comet data retention policies.


(25th) March 2026 - CCP-EM, Doppio, Relion and more installed

CCP-EM v2 (also known as Doppio), CCP-EM v1, Relion, Sali Lab Modeller, Topaz, UCSF Motioncor 3, CCP4 and more have been installed on Comet. The full list of software includes (take a deep breath):

This extensive set of inter-related tools is now available in a new CCP-EM suite container image, ready for use. We have written a guide which shows all of the tools that are installed and how to use them, both at the command line and from our Linux X11 Desktop using the Open OnDemand service. Many of the tools can also make use of optional GPU acceleration.

The full set comprises more than 40GB of software, some of which we did not have the capability to offer on our previous HPC facility.


(24th) March 2026 - Ice Sheet & Sea Level System Model (ISSM) available

ISSM has been installed on Comet and a guide is available to walk users through making use of both the issm tool, as well as calling ISSM functions from within Python.


(24th) March 2026 - Temporary loss of Lustre

We were informed late this afternoon by our HPC vendor that the Lustre service for Comet (i.e. /nobackup) had been lost on all compute/login nodes.

This outage lasted around 2 hours, and whilst it has now been restored, many jobs which were running during this time will likely have experienced errors and may have terminated unexpectedly.


(24th) March 2026 - PGAP installed

We have installed PGAP, with its image data downloaded to a shared data directory at /nobackup/shared/data/pgap (so that it does not affect your quota and is not subject to the data retention policies).

A simple wrapper for the PGAP Python script is provided which we recommend as the way to run PGAP - simply module load PGAP and then use the pgap script to run your pipelines.

We recommend that all users read the PGAP guide and use the Quickstart Example as a basis for jobs submitted via Slurm.
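As a rough sketch of how such a job might look (the resource requests and the input filename my_input.yaml are hypothetical - take the real values and arguments from the Quickstart Example in the PGAP guide):

```shell
#!/bin/bash
#SBATCH --job-name=pgap
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=04:00:00

# Load the site-provided PGAP wrapper module, as described above.
module load PGAP

# 'my_input.yaml' is a hypothetical PGAP input definition; build your own
# following the Quickstart Example in the PGAP guide.
pgap my_input.yaml
```

Submit with sbatch in the usual way; the wrapper runs the containerised pipeline against the centrally installed image data.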


(23rd) March 2026 - OpenGeoSys available

We have made OpenGeoSys available for use on Comet. This includes a simple guide to get you started both at the command line and interactively, using OGS from a JupyterLab notebook in a Linux Desktop session on our Open OnDemand service.


(23rd) March 2026 - Comet login service restored

The vendor has restored the Comet login service (e.g. ssh comet.hpc.ncl.ac.uk). Unfortunately it appears that the SSH host key fingerprints for one of the login servers have been lost.

If you attempt to log in to Comet now you will see a warning from your SSH client looking like this:

$ ssh comet.hpc.ncl.ac.uk
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:ABCDEFGHIJKLMNOPQRTU12345678.
Please contact your system administrator.
$

This is expected, since the original fingerprints of that server have now changed. To resolve this, run the following command on your Linux or macOS device:

$ ssh-keygen -f $HOME/.ssh/known_hosts -R comet.hpc.ncl.ac.uk

If you are using an alternative SSH client on a Windows platform, the error message from your software (e.g. PuTTY, MobaXterm or similar) should indicate the equivalent command to run.
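If you would like to see what the ssh-keygen command does before touching your real ~/.ssh/known_hosts, the following self-contained sketch demonstrates it on a throwaway file (the paths here are temporary and the host key is a freshly generated dummy):

```shell
# Illustrative only: demonstrate the effect of `ssh-keygen -R` on a
# throwaway known_hosts file rather than your real ~/.ssh/known_hosts.
tmp=$(mktemp -d)

# Generate a dummy host key and record it against the Comet hostname.
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"
printf 'comet.hpc.ncl.ac.uk %s\n' "$(cut -d' ' -f1,2 "$tmp/key.pub")" > "$tmp/known_hosts"

# Remove all entries for that hostname (this is what the command above does
# to your real known_hosts file).
ssh-keygen -f "$tmp/known_hosts" -R comet.hpc.ncl.ac.uk

grep 'comet.hpc.ncl.ac.uk' "$tmp/known_hosts" | wc -l   # 0 - stale entry removed
```

On your next connection, SSH will prompt you to accept the server's new fingerprint.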


(23rd) March 2026 - Planned maintenance overrunning on login nodes

Maintenance on the login nodes was planned today between 9:00am - 11:00am (announced last week). Unfortunately, some issues remain and the login nodes are not reliably available at this time (13:00). We expect this to be remedied by our supplier in the next hour or two; please check back for further announcements. Running jobs are NOT affected.

Maintenance was finally completed at 1:54pm.

Unfortunately, unexpected issues were encountered whilst restoring the two login nodes (cometlogin01.comet.hpc.ncl.ac.uk and cometlogin02.comet.hpc.ncl.ac.uk), which means we are still operating on a single node - for now this is cometlogin02. The planned changes to the Lustre (/nobackup) configuration, intended to mitigate the copy-to-Lustre performance issues, also did not address the underlying issues we have documented extensively.

The vendor will now need to analyse the outcomes of the work undertaken today and develop a plan to address them. We can only apologise on their behalf for the lack of progress and the extended length of the maintenance.


(20th) March 2026 - new text for citations

We've updated our statements page with a standard format for citing use of Comet.

Please reference Newcastle University's High Performance Computing Service in any research report, journal, or publication that requires citation of authors' work. Recognition of the HPC resources you used to perform research is important for acquiring funding for the next generation of hardware, support services, and our research and development activities in HPC, visualization, data storage, and other related infrastructure.

Our suggested acknowledgement:

The authors acknowledge the Comet HPC facility at Newcastle University for providing computational resources that have contributed to the research results reported within this paper. URL: https://hpc.researchcomputing.ncl.ac.uk/


(19th) March 2026 - netcdf-fortran module replaced

The faulty netcdf-fortran module has been replaced and Fortran code which includes netcdf has been tested to compile and link successfully.


(18th) March 2026 - netcdf-fortran module errors

The netcdf-fortran module sets the $LD_LIBRARY_PATH environment variable to a lib folder which does not exist:

$ module show netcdf-fortran/4.6.2
------------------------------------------------------------------------------------------------------------------------------
   /opt/software/manual/modules/netcdf-fortran/4.6.2.lua:
------------------------------------------------------------------------------------------------------------------------------
whatis("netcdf-fortran with MPI (parallel I/O) support")
help([[Netcdf-fortran compiled with parallel I/O support using MPI.

Provides tools and libraries for managing large scientific datasets.
]])
prepend_path("PATH","/opt/software/manual/apps/netcdf-c/4.6.2/bin")
prepend_path("LD_LIBRARY_PATH","/opt/software/manual/apps/netcdf-c/4.6.2/lib")
prepend_path("CPATH","/opt/software/manual/apps/netcdf-c/4.6.2/include")
prepend_path("LIBRARY_PATH","/opt/software/manual/apps/netcdf-c/4.6.2/lib")
setenv("NETCDF_F_DIR","/opt/software/manual/apps/netcdf-c/4.6.2")

$ ls /opt/software/manual/apps/netcdf-c/4.6.2/lib
ls: cannot access '/opt/software/manual/apps/netcdf-c/4.6.2/lib': No such file or directory

If you have had linker errors when compiling Fortran code which included netcdf, this is likely the cause of the error. We have submitted a request to have the module rebuilt with the missing shared library folder added back in.


(14th) March 2026 - Amber, Ambertools Installed

The above tools have been installed on Comet in a new Amber MD container image. Due to the complexity of the software and supporting components these will not be installed as modules.

This includes the following tools/libraries/commands:

Where applicable, tools have been configured with MPI, OpenMP and/or Nvidia CUDA support.

For more information on how to use this container and access the software installed within, please consult our Amber MD guide.


(12th) March 2026 - Stata/MP Installed

Stata/MP is now installed on Comet and is available as a module. Use module load Stata to load it.

Please note that restrictions in the license for Stata/MP limit each Stata job to two concurrent CPU cores.


(12th) March 2026 - Gaussian and GaussView Added

The computational chemistry applications Gaussian and GaussView, for structure modelling and visualisation, are now available via module on Comet (module load Gaussian and module load GaussianView). These are versions 16 and 6, respectively. Whilst funded by individual academic staff within SAgE, the software is available to all research users on Comet - commercial use is not permitted.

We strongly recommend that anyone needing GaussView use it via our Linux X11 Desktop interface on Open OnDemand. Using SSH+X11 is now discouraged and unsupported.


(12th) March 2026 - New Sample Containers Page

We have started to collect sample container definition files which you can use with Apptainer on Comet to build your own container-based software environments.

For now there are simple examples of Ubuntu, Ubuntu with dev tools and Nvidia CUDA/CUDNN using the officially provided Docker images. If you find this useful, or would like to provide any further examples, please contact us.
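For illustration of the format, a minimal Ubuntu-based definition file might look like the following (this is a generic sketch built on the official Docker base image, not one of the published samples):

```shell
Bootstrap: docker
From: ubuntu:24.04

%post
    # Commands run inside the image at build time (example only).
    apt-get update
    apt-get install -y --no-install-recommends build-essential
    rm -rf /var/lib/apt/lists/*

%runscript
    exec "$@"
```

A file like this is built into an image with apptainer build mytools.sif mytools.def (filenames here are hypothetical); see the samples page for tested definitions.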


(12th) March 2026 - Several new system packages added

Our support vendor has installed the following system utilities across all compute nodes:

Some of these were already available on login nodes, but have been pushed to compute nodes for consistency. These are part of the base OS image and hence do not need to be loaded via module first.

In addition, the following tool:

Has been installed on the login nodes only. RClone is a tool for transferring data to/from various network services and cloud storage providers (AWS, Google, Azure, OneDrive etc). You can use it as another method to get data on or off Comet - especially if the source is Google Drive, OneDrive or an Amazon bucket. Please note that we cannot offer any support for the use of RClone, so you will need to be able to configure it for your own purposes yourself.

As always, you can check our Software section for general software information, and our Software List page for a full list of all software modules, containers and software requests which have been made for Comet.


(11th) March 2026 - Upcoming Maintenance Work - Confirmed date/time

A maintenance window has now been confirmed for 23rd March between 9:00am - 11:00am.

This is to address the following:

The work will likely not take the full two hours, but the Slurm reservation window will be put in place for the full time to prevent any unpredictable behaviour by jobs that could be running during that time.

Email notification will be sent to HPC-Users distribution list as normal.

During the maintenance window our HPC vendor will also be using the opportunity to perform some low level diagnostics on the Lustre (aka /nobackup) service. This is linked to the Lustre performance issues identified in February. Since this will be during the scheduled maintenance window no further downtime will be required for these diagnostics.

If you are logged in to Comet during the maintenance window consider access to /nobackup to be at-risk due to the diagnostics. This should give the vendor the information they require to implement a permanent fix to the cp/rsync/dd Lustre file transfer characteristics we have observed.

Our HPC vendor has suggested a possible fault in the Infiniband connectivity of the Lustre storage system; fortunately, Lustre is presented over both Infiniband and Ethernet (for our non-Infiniband hosts). Re-running our February transfer tests from a purely Ethernet-connected compute node, the results are startling:

The data shows that cp performance to /nobackup over the Ethernet infrastructure is consistently faster than over Infiniband, and in line with our performance expectations. It is likely that a fault (software configuration or physical cabling) in the Infiniband connectivity of one or more of the Lustre servers is bringing down the speeds, which is why the problem does not manifest on nodes connected to Lustre using only Ethernet.
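For reference, the style of sequential-write test behind these measurements can be sketched as follows (a small local temporary file is used here for illustration; on Comet the output would target a path under /nobackup, and a much larger size, to exercise Lustre):

```shell
# Sequential write test: dd reports the elapsed time and throughput on
# stderr once the transfer completes. OUT is a local temporary file here;
# point it at /nobackup/... on Comet to measure Lustre performance.
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=1M count=64
ls -l "$OUT"   # 67108864 bytes (64 MiB) written
```

Comparing the reported throughput from an Infiniband-connected node against an Ethernet-only node is how the discrepancy above was isolated.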

This should make the resolution of the issue quicker to implement.


(11th) March 2026 - FAQ for R-Studio LD_LIBRARY_PATH Changes

We have added a FAQ entry for changing the LD_LIBRARY_PATH variable prior to starting R-Studio from Open OnDemand.

This is in response to a query about adding custom library locations to R-Studio prior to launching it. Since R-Studio is launched from Open OnDemand you are not able to load extra modules in the normal way - customising LD_LIBRARY_PATH is an alternative method to add custom library locations and user-compiled software locations before it loads.
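As a sketch of the approach (the path $HOME/local/lib is a hypothetical example - substitute the location of your own libraries or user-compiled software), the variable is exported before R-Studio is launched:

```shell
# Prepend a custom library location to LD_LIBRARY_PATH, preserving any
# existing value. $HOME/local/lib is a hypothetical example path.
export LD_LIBRARY_PATH="$HOME/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

See the FAQ entry for where to place this so that it takes effect for sessions started from Open OnDemand.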


(10th) March 2026 - Install guide for MuSpAn

We have published an install guide for MuSpAn - a multiscale spatial analysis toolbox for analysis of imaging data. The guide illustrates the options you have for installation of this Python module in your own home area on Comet.


(9th) March 2026 - Upcoming Comet Maintenance

A maintenance window is currently being planned for Comet - this will involve a short (1-2 hour) service outage while our vendor carries out repair work on a number of systems which have been unavailable due to the recent problems with the second login node. This should bring general performance of the login nodes back to original levels.

Additionally, our colleagues in NUIT have identified a possible cabling fault with the uplink from Comet to the Campus network. It is possible that this fault has been causing the intermittent connection issues with /rdw. We need to be clear that this is not the cause of data transfer slowness - that particular issue has been traced to the Lustre filesystem (as per our previously published February news articles) and is not linked to /rdw.

Once dates and times for this work have been agreed we will notify all users via the usual HPC-Users email distribution list.


(9th) March 2026 - R and DEXSeq added to Bioapps

The Bioapps container image has been updated to 2026.03 and now includes R and DEXSeq.


(3rd) March 2026 - CNVKit Installed

The software environment CNVKit has now been installed on Comet. It is available as an Apptainer image, having been converted from the officially published Docker image from the developers. Please ensure that you read and understand our CNVKit guide.


(3rd) March 2026 - HPC Website Updates

A small update to the HPC portal was deployed today. This adds a few improvements and a new reporting feature:


(2nd) March 2026 - FSL

FSL is now installed on Comet. This latest version is available as a container, and we recommend that you read our FSL software guide to understand how to access it and make use of the included tools.

The Software list page has been updated to list FSL as a container application, and the FSL help page is now available. FSL will take advantage of Nvidia GPU hardware if available, both for compute, as well as for 3D visualisation, since you can also run FSL from our Linux X11 Desktop via the Open OnDemand service. FSL can, of course, still be used via normal Slurm/sbatch jobs.


Previous Updates


Back to HPC Documentation Home