Whilst most use of the HPC facilities is traditionally undertaken by logging in to a login node with SSH and then submitting Slurm SBATCH job files, it is becoming increasingly common to need graphical output - either to visualise data generated on one or more compute nodes, or to run interactive, resource-intensive applications directly on the compute nodes themselves.
There are two main methods for achieving this:
The traditional method of displaying Unix/Linux graphics remotely is through the use of X11 forwarding; this runs the application on a remote system (e.g. Comet or Rocket), but forwards all of the graphics commands to a local display (e.g. your laptop).
The SSH protocol includes support for embedding these graphics commands through a process known as X11 Tunnelling.
This works reasonably well for most simple applications and has historically been the only option available to users of Rocket; it is still supported on Comet.
A small number of applications simply won't work via this route because they are more complex, require specific X11 API/library support (which may differ between Comet and your local device), or need direct access to GPU hardware to run. It is not always possible to know in advance which applications these are - if you encounter one, please let us know and we will add a note to the Advanced Software Topics section.
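When X11 forwarding is active, the SSH server sets the DISPLAY environment variable in your remote session so that graphics commands are tunnelled back to your local display. As a rough check (the exact value will vary between sessions), you can inspect it once logged in:

login01 $ echo $DISPLAY
localhost:10.0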
Whilst it is possible to configure PuTTY to display X11 applications launched over SSH, it is no longer the recommended option for Windows users as it relies on additional software (Xming) which is no longer kept up to date.
Instead, please consider using https://mobaxterm.mobatek.net/ which includes all of the required support within a single application.
All modern Linux clients (e.g. Ubuntu on your local desktop or laptop) have support for displaying the output of applications launched on the HPC system. Normally you would connect to the HPC facility and add the -X option to your ssh command:
ubuntu $ ssh -X comet.hpc.ncl.ac.uk
login01 $
Any applications/tools then launched from Comet would display their windows/dialogue boxes/graphical output on your local Linux desktop. In most cases this is transparently activated after adding the -X option.
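For example, assuming a simple X11 test client such as xclock is installed on Comet, launching it from the login node should open a small clock window on your local desktop:

login01 $ xclock &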
This approach works for most simple tools and applications, but it is no longer the recommended option for larger applications, or those requiring accelerated GPU hardware. Please jump to the Open OnDemand section for further details.
mymac $ ssh -X comet.hpc.ncl.ac.uk
login01 $
Normally, to support remote display of Linux applications launched over SSH connections, you will need to have installed and configured XQuartz on your macOS system. Support for this is outside the scope of what we can offer, but once installed the behaviour should be largely identical to that of native Linux users, as detailed above.
Open OnDemand is a suite of tools and services which makes it easy to start up and access graphical applications on compute servers, like our HPC facility.
We use Open OnDemand to launch more demanding, graphical applications (such as Jupyter, MATLAB, ANSYS, RStudio etc.) on your behalf, and you then access them directly in your web browser, without the need for any additional software.
The applications run directly on the HPC, taking advantage of the massive CPU compute power and large RAM capacity of the compute nodes, allowing you to run demanding interactive code on the same hardware that would typically only be accessible by writing a Slurm job script.
All resources that are allocated during the setup of your Open OnDemand session are counted towards your Slurm job resource utilisation as per normal Slurm resource allocation and costing. Remember that the larger the amount of resources you request, the longer it may take for the Slurm scheduler to find and allocate a free compute node to run your application.
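To illustrate, an Open OnDemand session is costed in the same way as an equivalent interactive Slurm job; the request below is only indicative (the core count, memory and time limit are example values, and any partition options will depend on the Comet configuration):

login01 $ srun --cpus-per-task=8 --mem=32G --time=02:00:00 --pty bash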
Specifically, on CPU based compute nodes, the following costing algorithm applies:
CPU Based Slurm Jobs
In the case of a Slurm job which only uses CPU resources, this becomes:
Total number of CPU cores * Hours = Total Hours of CPU Compute Resource
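For example, an Open OnDemand session allocated 8 CPU cores which runs for 4 hours would be costed as 8 * 4 = 32 hours of CPU compute resource.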
If your Open OnDemand session uses GPU compute resources, then the alternative costing algorithm applies:
GPU Based Slurm Jobs
In the case of a Slurm job using GPU resources, the calculation is:
Total number of GPU cards * Hours = Total Hours of GPU Compute Resource
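For example, a session allocated 2 GPU cards which runs for 6 hours would be costed as 2 * 6 = 12 hours of GPU compute resource.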
This section is incomplete
This documentation section on Open OnDemand is still being written and will not be complete until the Comet HPC facility is fully commissioned.