====== HPC Service Updates - 2026 - March ======
----
===== (30th) March 2026 - New Bioapps container - added Tophat 1 + 2 =====
A new version of the [[advanced:software:bioapps|Bioapps container]] has been released. This features [[https://github.com/DaehwanKimLab/tophat|Tophat]] 1.4.1 and 2.1.1.
Full details are included in the Bioapps user guide wiki page.
* See [[advanced:software:bioapps|Bioapps container user guide]]
----
===== (30th) March 2026 - Lustre disruption 27-30th =====
The Lustre filesystem on Comet was partially unavailable from the evening of Friday 27th until noon Monday 30th.
This appears to have been another instance of one Lustre server rebooting and a failure of the failover processes. Our HPC vendor brought services up on the second server manually and they are now available again.
Some running jobs may be in a partially frozen state. If you notice this, or any log files not updating, then you may need to use ''scancel'' to stop them.
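If you are unsure whether a job was affected, a quick check of your running jobs followed by ''scancel'' is usually enough. A minimal sketch (the job ID below is a placeholder - substitute your own from the ''squeue'' output):

<code bash>
# List your own running jobs and check elapsed time against expected progress
squeue -u $USER --states=RUNNING

# Cancel a job that appears frozen (1234567 is a placeholder job ID)
scancel 1234567
</code>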
----
===== (26th) March 2026 - SAIGE now available on Comet =====
SAIGE is an R package developed with Rcpp for genome-wide association tests in large-scale data sets and biobanks. It is installed on Comet as a ready-to-run container environment. No external dependencies or prior R installation/libraries are required.
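As a rough sketch only (the container image path below is an assumption - the SAIGE user guide documents the actual location and invocation), a batch job running a SAIGE step inside the container might look like:

<code bash>
#!/bin/bash
#SBATCH --job-name=saige-step1
#SBATCH --cpus-per-task=4
#SBATCH --time=04:00:00

# Image path is hypothetical - see the SAIGE user guide for the real location
apptainer exec /path/to/saige.sif \
    step1_fitNULLGLMM.R --help
</code>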
* See: [[advanced:software:saige|SAIGE user guide for Comet]]
----
===== (26th) March 2026 - ldsc.py installed =====
ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. This software used to require Python 2.x to run, and was updated to Python 3.9 via [[https://github.com/CBIIT/ldsc|a more recent fork of the source code]]; however, it still has issues with newer Python runtimes, and requires a curious set of older versions of numpy, matplotlib and similar libraries at runtime.
We have built a tiny ldsc container environment to encapsulate ldsc and all of its dependencies so that each user does not need to build their own virtual environment to run it.
The sample data files used by ldsc are also installed centrally at ''/nobackup/shared/data/ldsc''. If you have similar data files used by ldsc which would be useful to others then please let us know and we can move them to this area, thus excluding them from the Comet [[:policies:data|Data retention]] policies.
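As an illustrative sketch (flags follow the upstream ''ldsc.py'' conventions, and the directory names under the shared data area are examples - the ldsc guide has the definitive invocation), a heritability estimate against the central data might look like:

<code bash>
#!/bin/bash
#SBATCH --job-name=ldsc-h2
#SBATCH --time=01:00:00

# Module name is an assumption - check the ldsc guide for Comet
module load ldsc

# Estimate heritability from summary statistics using the shared reference data
ldsc.py --h2 my_sumstats.sumstats.gz \
        --ref-ld-chr /nobackup/shared/data/ldsc/eur_w_ld_chr/ \
        --w-ld-chr /nobackup/shared/data/ldsc/eur_w_ld_chr/ \
        --out my_h2
</code>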
* See the [[advanced:software:ldsc|ldsc guide for Comet]]
----
===== (25th) March 2026 - CCP-EM, Doppio, Relion and more installed =====
CCP-EM v2 (also known as Doppio), CCP-EM v1, Relion, Salilabs Modeller, Topaz, UCSF MotionCor 3, CCP4 and more have been installed on Comet. The full list of software includes (//take a deep breath//):
* CCP-EM v1, CCP-EM v2 (Doppio), CCP4, CheckMySequence, DoubleHelix, FindMySequence, CryoDRGN, CTFFind 4, EMDA, EMDB VA, Locscale, MetalCoord, ModelAngelo, Salilabs Modeller, RIBFIND, Relion, Resmap, TEMPy, TEMPy-REFF, Topaz, UCSF MotionCor 3... and Python, of course!
This extensive set of inter-related tools is now available in a new [[:advanced:software:ccpemsuite|CCP-EM suite container]] image that is ready for use. We have written a guide which shows all of the tools that are installed and how to use them, both at the command line and from our [[advanced:software:x11|Linux X11 Desktop]] using the [[advanced:interactive|Open OnDemand service]]. Many of the tools also make use of optional GPU acceleration.
The full set comprises more than 40GB of software; some of which we did not have the capability to offer before on our previous HPC facility.
* See: [[:advanced:software:ccpemsuite|CCP-EM suite and tools guide for Comet]]
----
===== (24th) March 2026 - Ice Sheet & Sea Level System Model (ISSM) available =====
ISSM has been installed on Comet and a [[advanced:software:issm|guide]] is available to walk users through making use of both the ''issm'' tool, as well as calling ISSM functions from within Python.
* See: [[advanced:software:issm|ISSM guide for Comet]]
----
===== (24th) March 2026 - Temporary loss of Lustre =====
We were informed late this afternoon by our HPC vendor that the Lustre service for Comet (i.e. ''/nobackup'') had been lost on all compute/login nodes.
This outage lasted around 2 hours, and whilst it has now been restored, many jobs which were running during this time will likely have experienced errors and may have terminated unexpectedly.
----
===== (24th) March 2026 - PGAP installed =====
We have installed [[advanced:software:pgap|PGAP]], including the image data downloaded to a shared data directory at ''/nobackup/shared/data/pgap'' (so that it does not affect your quota and is not subject to the [[policies:data|data retention]] policies).
A simple wrapper for the PGAP Python script is provided which we recommend as the way to run PGAP - simply ''module load PGAP'' and then use the ''pgap'' script to run your pipelines.
We recommend that all users read the [[advanced:software:pgap|PGAP]] guide and use the //Quickstart Example// as a basis for jobs submitted via Slurm.
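A minimal Slurm sketch using the wrapper (the input YAML and output directory names are placeholders, and the flags are illustrative - verify them against the guide):

<code bash>
#!/bin/bash
#SBATCH --job-name=pgap
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00

module load PGAP

# Run the annotation pipeline; my_input.yaml and my_results are placeholders
pgap -o my_results my_input.yaml
</code>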
* More information in our [[advanced:software:pgap|PGAP]] guide
----
===== (23rd) March 2026 - OpenGeoSys available =====
We have made [[advanced:software:ogs|OpenGeoSys]] available for use on Comet. This includes a simple guide to get you started, either at the command line or using OGS interactively via a JupyterLab //notebook// in a [[advanced:software:x11|Linux Desktop]] session on our [[advanced:interactive|Open OnDemand]] service.
* More information on [[advanced:software:ogs|OpenGeoSys]]
----
===== (23rd) March 2026 - Comet login service restored =====
The vendor has restored the Comet login service (e.g. ''ssh comet.hpc.ncl.ac.uk''). Unfortunately it appears that the SSH host key fingerprints for one of the login servers have been lost.
If you attempt to log in to Comet now you will see a __warning__ from your SSH client looking like this:
<code>
$ ssh comet.hpc.ncl.ac.uk
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:ABCDEFGHIJKLMNOPQRTU12345678.
Please contact your system administrator.
$
</code>
This is expected, since the original fingerprints of that server have now changed. To resolve this, run the following command on your **Linux** or **Mac OS** device:
<code>
$ ssh-keygen -f $HOME/.ssh/known_hosts -R comet.hpc.ncl.ac.uk
</code>
If you are using an alternative SSH client on a **Windows** platform, the error message from your software (e.g. PuTTY, Mobaxterm or similar) //should// indicate the //equivalent// command to run.
----
===== (23rd) March 2026 - Planned maintenance overrunning on login nodes =====
Maintenance on the login nodes was planned today between 9:00am - 11:00am (announced last week). Unfortunately, some issues remain and login nodes are not reliably available at this time (13:00).
We expect this to be remedied by our supplier in the next hour or two; please check back for further announcements.
**Running jobs are NOT affected**
**Maintenance was finally completed at 1:54pm**.
//Unfortunately//, unexpected issues were encountered whilst restoring the two login nodes (**cometlogin01.comet.hpc.ncl.ac.uk** and **cometlogin02.comet.hpc.ncl.ac.uk**), which means that we are //still// only operating on a single node - for now this is **cometlogin02**. The planned changes to the Lustre (''/nobackup'') configuration to mitigate the copy-to-Lustre performance issues also did not address the underlying issues we have documented extensively.
The vendor will now need to analyse the outcomes from the work undertaken today and develop a plan of work to address them. We can only apologise on their behalf for the lack of progress and extended length of the maintenance.
----
===== (20th) March 2026 - new text for citations =====
We've updated [[policies:statements|our statements page]] with a standard format for citing use of Comet.
Please reference Newcastle University's High Performance Computing Service in any research report, journal, or publication that requires citation of authors' work. Recognition of the HPC resources you used to perform research is important for acquiring funding for the next generation of hardware, support services, and our research and development activities in HPC, visualization, data storage, and other related infrastructure.
=== Our suggested acknowledgement: ===
The authors acknowledge the Comet HPC facility at Newcastle University for providing computational resources that have contributed to the research results reported within this paper. URL: https://hpc.researchcomputing.ncl.ac.uk/
----
===== (19th) March 2026 - netcdf-fortran module replaced =====
The faulty **netcdf-fortran** module has been replaced and Fortran code which includes netcdf has been tested to compile and link successfully.
----
===== (18th) March 2026 - netcdf-fortran module errors =====
The ''netcdf-fortran'' module sets ''$LD_LIBRARY_PATH'' to a ''lib'' folder which does not exist:
<code>
$ module show netcdf-fortran/4.6.2
------------------------------------------------------------------------------------------------------------------------------
/opt/software/manual/modules/netcdf-fortran/4.6.2.lua:
------------------------------------------------------------------------------------------------------------------------------
whatis("netcdf-fortran with MPI (parallel I/O) support")
help([[Netcdf-fortran compiled with parallel I/O support using MPI.
Provides tools and libraries for managing large scientific datasets.
]])
prepend_path("PATH","/opt/software/manual/apps/netcdf-c/4.6.2/bin")
prepend_path("LD_LIBRARY_PATH","/opt/software/manual/apps/netcdf-c/4.6.2/lib")
prepend_path("CPATH","/opt/software/manual/apps/netcdf-c/4.6.2/include")
prepend_path("LIBRARY_PATH","/opt/software/manual/apps/netcdf-c/4.6.2/lib")
setenv("NETCDF_F_DIR","/opt/software/manual/apps/netcdf-c/4.6.2")
$ ls /opt/software/manual/apps/netcdf-c/4.6.2/lib
ls: cannot access '/opt/software/manual/apps/netcdf-c/4.6.2/lib': No such file or directory
</code>
If you have had linker errors when compiling Fortran code which included netcdf, this is likely the cause of the error. We have submitted a request to have the module rebuilt with the missing shared library folder added back in.
----
===== (14th) March 2026 - Amber, Ambertools Installed =====
The above tools have been installed on Comet in a new [[advanced:software:ambermd|Amber MD]] container image. Due to the complexity of the software and supporting components these will //not// be installed as modules.
This includes the following tools/libraries/commands:
* Amber 24
* Ambertools 25
* APBS
* MBX
* PLUMED
* Suitesparse
* Torchani
Where available, all tools have been configured to use MPI, OpenMP and/or Nvidia CUDA support if possible.
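As a hedged sketch (the container image path is an assumption, and binary availability depends on the image contents - the Amber MD guide documents the real paths), a GPU job running Amber's ''pmemd.cuda'' from the container might look like:

<code bash>
#!/bin/bash
#SBATCH --job-name=amber-md
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

# Image path is hypothetical - see the Amber MD guide for the actual location
apptainer exec --nv /path/to/ambermd.sif \
    pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out
</code>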
For more information on how to use this container and access the software installed within, please consult our [[advanced:software:ambermd|Amber MD guide]].
----
===== (12th) March 2026 - Stata/MP Installed =====
Stata/MP is now installed on Comet and is available as a module. Use ''module load Stata'' to load it.
* Further information can be found in our [[advanced:statamp|Stata/MP]] guide.
Please note that restrictions in the license for Stata/MP limit each concurrent Stata job to //two// CPU cores.
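Given the two-core licence limit, a batch job sketch might look like this (''stata-mp -b do'' is standard Stata batch-mode usage; the do-file name is a placeholder):

<code bash>
#!/bin/bash
#SBATCH --job-name=stata
#SBATCH --cpus-per-task=2
#SBATCH --time=02:00:00

module load Stata

# Batch-mode run; Stata writes a log file alongside the do-file
stata-mp -b do my_analysis.do
</code>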
----
===== (12th) March 2026 - Gaussian and GaussView Added =====
The computational chemistry applications Gaussian and GaussView for structure modelling and visualisation are now available via module on Comet (''module load Gaussian'' and ''module load GaussianView''). These are versions 16 and 6, respectively. Whilst funded by individual academic staff within SAgE, the software is available for //all// research users on Comet - commercial use is not permitted.
We strongly recommend that anyone needing GaussView uses it via our [[advanced:software:x11|Linux X11 Desktop]] interface on Open OnDemand. //Using SSH+X11 is now discouraged and unsupported//.
* For more information, read our [[advanced:software:gaussian|Gaussian / GaussView]] guide.
----
===== (12th) March 2026 - New Sample Containers Page =====
We have started to collect [[advanced:samplecontainers|sample container definition files]] which you can use with [[advanced:apptainer|Apptainer]] on Comet to build your own container-based software environments.
For now there are simple examples of **Ubuntu**, **Ubuntu with dev tools** and **Nvidia CUDA/CUDNN** using the officially provided Docker images. If you find this useful, or would like to provide any further examples, please [[:contact:index|contact us]].
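For reference, a minimal definition file in the same spirit as the Ubuntu example might look like this (the package choices are purely illustrative):

<code>
Bootstrap: docker
From: ubuntu:24.04

%post
    apt-get update && apt-get install -y --no-install-recommends \
        build-essential ca-certificates
    rm -rf /var/lib/apt/lists/*

%runscript
    exec "$@"
</code>

Build it on Comet with ''apptainer build mycontainer.sif mycontainer.def''.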
----
===== (12th) March 2026 - Several new system packages added =====
Our support vendor has installed the following system utilities across all compute nodes:
* ''screen''
* ''tmux''
* ''bash-completion''
* ''csh''
Some of these were already available on login nodes, but have been pushed to compute nodes for consistency. These are part of the base OS image and hence do not need to be loaded via module first.
In addition, ''rclone'' has been installed on **the login nodes only**. [[advanced:rclone|RClone]] is a tool for transferring data to/from various network services and cloud storage providers (AWS, Google, Azure, OneDrive etc.). You can use it as another method to get data on/off Comet - especially if the source is Google Drive, OneDrive or an Amazon bucket. Please note that we cannot offer any support for the //use// of RClone - you will need to configure it for your own purposes.
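As a sketch of typical usage (the remote name ''mygdrive'' and the destination path are placeholders - the remote name is whatever you chose during ''rclone config''):

<code bash>
# One-off interactive setup of a remote (run on a login node)
rclone config

# List configured remotes, then copy a folder from the remote to /nobackup
rclone listremotes
rclone copy mygdrive:project-data /nobackup/proj/$USER/project-data --progress
</code>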
As always, you can check our [[advanced:software|Software]] section for general software information, and our [[:advanced:software_list|Software List]] page for a full list of all software modules, containers and software requests which have been made for Comet.
----
===== (11th) March 2026 - Upcoming Maintenance Work - Confirmed date/time =====
A maintenance window has now been confirmed for **23rd March between 9:00am - 11:00am**.
This is to address the following:
* Essential security fix to one of the Slurm components
* Reboot of the NFS server (''/mnt/nfs/home'') to clear stale, half-open client connections
* Re-integration of the secondary login server **cometlogin02** to provide the resilient login service again, and to reduce the performance penalty of all data transfers/housekeeping tasks running on the single login node
The work will likely //not// take the full two hours, but the Slurm reservation window will be put in place for the full time to prevent any unpredictable behaviour by jobs that could be running during that time.
Email notification will be sent to HPC-Users distribution list as normal.
During the maintenance window our HPC vendor will also be using the opportunity to perform some low level diagnostics on the Lustre (aka ''/nobackup'') service. This is linked to the [[status:index_2026_02|Lustre performance issues identified in February]]. Since this will be during the scheduled maintenance window no further downtime will be required for these diagnostics.
If you are logged in to Comet during the maintenance window, consider access to ''/nobackup'' to be //at-risk// due to the diagnostics. This should give the vendor the information they require to implement a permanent fix for the slow ''cp''/''rsync''/''dd'' Lustre file transfer characteristics we have observed.
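If you want to reproduce the kind of measurement we use, a simple (and crude) throughput probe with ''dd'' looks like this - point the target at ''/nobackup'' to exercise Lustre; the path below defaults to local temp space:

<code bash>
# Write a 64 MiB file and let dd report the achieved throughput
TARGET="${TMPDIR:-/tmp}/dd_throughput_test.$$"
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
SIZE=$(stat -c %s "$TARGET")   # confirm the full 64 MiB was written
rm -f "$TARGET"
</code>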
Our HPC vendor has suggested a possible fault in the Infiniband connectivity of the Lustre storage system; fortunately, Lustre is presented over both Infiniband //and// Ethernet (for our non-Infiniband hosts). Re-running our February transfer tests from a //purely Ethernet-connected compute node//, the results are startling:
{{:status:scratch_to_lustre_ethernet.svg|}}
The data shows that ''cp'' performance to ''/nobackup'' over the Ethernet infrastructure nets consistently higher speeds than over Infiniband, in line with our performance expectations. It is likely that a fault (software configuration or physical cabling) in the Infiniband connectivity of one or more of the Lustre servers is bringing down the speeds, which is why the problem does //not// manifest on nodes connected to Lustre using only Ethernet.
This should hopefully make the resolution to this issue quicker to implement.
----
===== (11th) March 2026 - FAQ for R-Studio LD_LIBRARY_PATH Changes =====
We have added a [[faq:038|FAQ entry for changing the LD_LIBRARY_PATH]] variable prior to starting R-Studio from Open OnDemand.
This is in response to a query about adding custom library locations to R-Studio prior to launching it. Since R-Studio is launched from Open OnDemand you are not able to load //extra// modules in the normal way - customising ''LD_LIBRARY_PATH'' is an alternative method to add custom library locations and user-compiled software locations before it loads.
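The general pattern (the directory below is an example - use wherever your own libraries live) is to prepend your path before R-Studio loads:

<code bash>
# Prepend a custom library directory, preserving any existing value
export LD_LIBRARY_PATH="$HOME/custom/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
</code>

The ''${VAR:+:$VAR}'' form avoids leaving a trailing '':'' when the variable was previously unset.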
----
===== (10th) March 2026 - Install guide for MuSpAn =====
We have published an install guide for [[advanced:software:muspan|MuSpAn]] - //a multiscale spatial analysis toolbox for analysis of imaging data//. The guide illustrates the options you have for installation of this Python module in your own home area on Comet.
* For more information about MuSpAn: https://www.muspan.co.uk/
----
===== (9th) March 2026 - Upcoming Comet Maintenance =====
A maintenance window is currently being planned for Comet - this will have a short, 1-2 hour, service outage while our vendor carries out repair work on a number of systems which have been unavailable due to the recent problems with the second login node. This should bring general performance of the login nodes back to original levels.
Additionally, our colleagues in NUIT have identified a possible cabling fault with the uplink from Comet to the Campus network. It is //possible// that this fault has been causing the **intermittent connection issues** with ''/rdw''. We need to be clear that this is __not__ the cause of data transfer **slowness** - //that// particular issue has been traced to the Lustre filesystem (as per our previously published [[status:index_2026_02|February news articles]]) and is //not// linked to ''/rdw''.
Once dates and times for this work have been agreed we will notify all users via the usual HPC-Users email distribution list.
----
===== (9th) March 2026 - R and DEXSeq added to Bioapps =====
The [[advanced:software:bioapps|Bioapps]] container image has been updated to **2026.03** and now includes **R** and **DEXSeq**.
* See the [[advanced:software:bioapps|Bioapps container help guide]] for more information on all of the included software.
----
===== (3rd) March 2026 - CNVKit Installed =====
The software environment [[advanced:software:cnvkit|CNVKit]] has now been installed on Comet. It is available as an [[advanced:apptainer|Apptainer]] image, having been converted from the officially published [[https://hub.docker.com/r/etal/cnvkit/|Docker]] image from the developers. Please ensure that you read and understand our [[advanced:software:cnvkit|CNVKit guide]].
* For more on CNVKit, see the official documentation: https://cnvkit.readthedocs.io/en/stable/
----
===== (3rd) March 2026 - HPC Website Updates =====
A small update to the HPC portal was deployed today. This adds a few improvements and a new reporting feature:
* The quiz and HPC Driving test section will now jump back to the most recently answered question upon submission of an answer.
* A clarification has been added to the 'Membership Management' page to indicate that removing someone from a project //does not// remove their files from Comet.
* A new [[https://hpc.researchcomputing.ncl.ac.uk/reports/|public reports section]] is now available, showing an overview of utilisation trends for various compute resources on Comet (and historically for our Rocket facility).
----
===== (2nd) March 2026 - FSL =====
FSL is now installed on Comet. This latest version is available as a container, and we recommend that you read our [[:advanced:software:fsl|FSL software guide]] to understand how to access it and make use of the included tools.
The [[advanced:software_list|Software list]] page has been updated to list FSL as a container application, and the [[:advanced:software:fsl|FSL help page]] is now available. FSL will take advantage of Nvidia GPU hardware if available, both for compute, as well as for 3D visualisation, since you can also run FSL from our [[advanced:software:x11|Linux X11 Desktop]] via the [[advanced:interactive|Open OnDemand]] service. FSL can, of course, still be used via normal Slurm/sbatch jobs.
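As a hedged sketch only (the image path is an assumption - the FSL software guide documents the real invocation), a GPU-enabled batch job calling one of the bundled FSL tools might look like:

<code bash>
#!/bin/bash
#SBATCH --job-name=fsl-bet
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# Image path is hypothetical - see the FSL guide for the actual location
apptainer exec --nv /path/to/fsl.sif \
    bet input_T1.nii.gz brain_extracted.nii.gz
</code>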
* For further information on FSL: https://fsl.fmrib.ox.ac.uk/fsl/docs/index.html
----
[[:status:index|Back to HPC News & Changes]]