HPC Support

Container Technology

Building more complex processing and analysis pipelines within a container can offer several advantages:

  • The container fully encapsulates the job and all software dependencies
  • Software dependencies do not need to be installed on the HPC facility
  • Your workflow can easily be replicated or moved between other systems
  • You can run pre-built job containers which application developers publish
  • You can build your own containers to move between systems, or share with your group

The Comet HPC facility supports the following container tools:

  • Docker
  • Apptainer
  • Singularity - Now deprecated and replaced by Apptainer
  • Podman - Red Hat's container toolset, largely compatible with Docker

This section is incomplete

This documentation section on Container Technology is still being written and will not be complete until the Comet HPC facility is fully commissioned.

Docker


Apptainer

If you have used Singularity previously, then Apptainer is a like-for-like, drop-in replacement.

  • Apptainer quick start guide
  • Building containers
  • Apptainer GitHub repository & issues

Download and run a pre-built container

Following the Apptainer Quick Start guide, we can test that everything is working using the lolcow container image - a tiny Linux container providing the cowsay tool, which simply prints a string of text to the screen from an ASCII-art cow.

Load the Apptainer module:

$ module load apptainer
$ apptainer version
1.4.1
$

Download an existing (Docker) container image from the GitHub container registry, and convert it to Apptainer/Singularity .sif format:

$ module load apptainer
$ apptainer pull docker://ghcr.io/apptainer/lolcow
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
INFO:    Fetching OCI image...
45.8MiB / 45.8MiB [============================================================] 100 % 20.3 MiB/s 0s
27.2MiB / 27.2MiB [============================================================] 100 % 20.3 MiB/s 0s
INFO:    Extracting OCI image...
INFO:    Inserting Apptainer configuration...
INFO:    Creating SIF file...
INFO:    To see mksquashfs output with progress bar enable verbose logging
$

Run an Apptainer (or Singularity) container image:

$ apptainer exec lolcow_latest.sif cowsay "HPC is cool!"
 ______________
< HPC is cool! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$
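By default, Apptainer only makes a few host directories (such as your home directory, /tmp and the current working directory) visible inside the container. If your data lives elsewhere, the --bind option maps a host directory into the container. A sketch, using a hypothetical /nobackup/proj123 data directory as an example:

```
# Map the host directory /nobackup/proj123 (a hypothetical example path)
# to /data inside the container, then list its contents from within:
$ apptainer exec --bind /nobackup/proj123:/data lolcow_latest.sif ls /data
```

Multiple bind pairs can be supplied as a comma-separated list to a single --bind option.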

Build a new container

TBD
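Until this section is written, the sketch below shows the general shape of an Apptainer definition file and build command. The file contents are illustrative assumptions only, not a facility-endorsed recipe:

```
# lolcow.def - a minimal, illustrative definition file
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update -y
    apt-get install -y cowsay

%runscript
    /usr/games/cowsay "$@"
```

Build on a login node, setting APPTAINER_TMPDIR as described under “Errors Building Apptainer Containers” below:

```
$ export APPTAINER_TMPDIR=/tmp/
$ apptainer build mycow.sif lolcow.def
```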

Run Apptainer container under Slurm

Using the container image pulled down previously, we can easily run the same commands under Slurm on a compute node:

#!/bin/bash

#SBATCH --partition=default_free
#SBATCH -c 1
#SBATCH -t 00:05:00

module load apptainer

###########################################
echo "Starting Apptainer image on $HOSTNAME..."
apptainer exec lolcow_latest.sif cowsay "Help, I'm trapped in a container running under Slurm on $HOSTNAME!"
echo "Job completed!"
###########################################

Submit as:

$ sbatch cowsay.sh
Submitted batch job 1022834
$

Check the output:

$ cat slurm-1022834.out
Starting Apptainer image on compute030...
 ______________________________________
/ Help, I'm trapped in a container     \
\ running under Slurm on compute030!   /
 --------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Job completed!
$

Errors Building Apptainer Containers

Apptainer normally requires root (sudo) privileges to build images, but the Linux user namespaces feature provides a workaround, allowing unprivileged users to build images without elevated permissions.

User namespaces are enabled on the login nodes of Comet, so building almost any type of container should work correctly.

One limitation is that user namespaces only work fully on local filesystems - if you try to build a container on $HOME or /nobackup, you may see odd permission errors during the container installation process (references to “unable to change group”, “cannot install setuid binary” or similar).

If you are building a container from a definition file that includes post-install commands (e.g. apt, yum or similar), ensure that you set the following environment variable before running the build:

export APPTAINER_TMPDIR=/tmp/

To be clear: running an existing container from $HOME or /nobackup is fully supported; only the initial build process needs to be performed in /tmp, via the APPTAINER_TMPDIR variable above.

Please build your containers on the login nodes - building is not supported on the compute nodes.


Singularity

Singularity is now deprecated and the community has largely moved to Apptainer. Apptainer is a like-for-like, drop-in replacement for Singularity; if you have previously used Singularity, you can start using Apptainer by simply changing any call to singularity to apptainer.

Historically we did have Singularity on our previous Rocket HPC facility, but it was not advertised and was very much a “use at your own risk” capability.

Podman


Developed and operated by
Research Software Engineering
Copyright © Newcastle University
Contact us @rseteam