Apptainer Help

This section is incomplete

This documentation section on Container Technologies is still being written and will not be complete until the Comet HPC facility is fully commissioned.

If you have used Singularity previously, then Apptainer is a 100% like-for-like replacement. To quote the Apptainer website:

Apptainer is a secure, portable, and easy-to-use container system that provides absolute trust and security. It is widely used across industry and academia.

Of the available container technologies, Apptainer is often seen as one of the easiest to use, and it does not require any special administrative permissions.

Links to the official Apptainer documentation material:

  • Apptainer quick start guide
  • Building containers
  • Apptainer GitHub repository & issues

Note that this help guide is not intended to be a complete introduction to Apptainer - it is just a starting point for how Apptainer can be used on Comet. Please consider signing up for a future RSE container technology workshop.

Links TBC

Before You Start

The configuration needed for you to run Apptainer on Comet is minimal - we assign you the necessary user namespace range when you log in to the facility, so you can use Apptainer without any administrative permissions.

The only change you need to make is to add the following environment variable to your login shell or any script you use to launch Apptainer:

export APPTAINER_TMPDIR=/tmp/

This is mandatory for building any new Apptainer container image. If you are only running existing container images then you do not need to set the variable, but doing so causes no harm.
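
If you want this variable to be set automatically in every session, one option (a sketch, assuming the standard ~/.bashrc startup file for your login shell) is to append the export line there:

$ echo 'export APPTAINER_TMPDIR=/tmp/' >> ~/.bashrc
$ source ~/.bashrc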

Once you have made this one line change, you can continue with building and running Apptainer containers on Comet. Please read on for a quick guide.


Apptainer - A Quick Start On Comet

Following the Apptainer Quick Start guide, we can test that everything is working by using the lolcow container image - a tiny Linux container that includes the cowsay tool, which simply prints a string of text to the screen from an ASCII-art cow.

Run A Pre-Built Container

Load the Apptainer module:

$ module load apptainer
$ apptainer version
1.4.1
$

Download an existing (Docker) container image from the GitHub container registry and convert it to Apptainer/Singularity .sif format:

$ module load apptainer
$ apptainer pull docker://ghcr.io/apptainer/lolcow
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
INFO:    Fetching OCI image...
45.8MiB / 45.8MiB [============================================================] 100 % 20.3 MiB/s 0s
27.2MiB / 27.2MiB [============================================================] 100 % 20.3 MiB/s 0s
INFO:    Extracting OCI image...
INFO:    Inserting Apptainer configuration...
INFO:    Creating SIF file...
INFO:    To see mksquashfs output with progress bar enable verbose logging
$

Run an Apptainer (or Singularity) container image:

$ apptainer exec lolcow_latest.sif cowsay "HPC is cool!"
 ______________
< HPC is cool! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$
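
If you would like to explore an image interactively instead of running a single command, Apptainer also provides a shell subcommand. A minimal sketch (the tools available at the prompt depend on what is installed in the image):

$ apptainer shell lolcow_latest.sif
Apptainer> cowsay "Interactive session"
Apptainer> exit
$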


Build A New Container

This is a more complex example; here we use an Ubuntu image published by Nvidia on their Docker Hub account:

  • https://hub.docker.com/r/nvidia/cuda

The example uses their base image, then installs several additional packages, including Python 3, OpenMPI and NumPy. Here we use the .def file format to describe what we want Apptainer to do:

Bootstrap: docker
From: nvidia/cuda:12.9.1-devel-ubuntu24.04

%post
    # Prevent interactive prompts
    export DEBIAN_FRONTEND=noninteractive

    # Update & install only necessary packages
    apt-get update
    apt-get install -y openssh-client
    apt-get install -y --no-install-recommends python3 python3-pip libopenmpi-dev openmpi-bin

    # Clean up APT cache to save space
    apt-get clean

    # Install only required Python packages
    pip3 install --break-system-packages --no-cache-dir numpy

%environment
    export PATH=/usr/bin:$PATH
    export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

To build the new container, load the Apptainer module, set the APPTAINER_TMPDIR variable and run apptainer build with the .def file you have just written:

$ module load apptainer
$ export APPTAINER_TMPDIR=/tmp
$ apptainer build my_container.sif nvidia_container.def

This can take a while, as it is a large, complex image - an almost complete Ubuntu installation, with the Nvidia CUDA SDK, plus additional applications. You should see it start work as follows:

INFO:    Starting build...
INFO:    Fetching OCI image...
2.1GiB / 2.1GiB [=================================================================] 100 % 0.0 b/s 0s
4.3MiB / 4.3MiB [=================================================================] 100 % 0.0 b/s 0s
28.3MiB / 28.3MiB [===============================================================] 100 % 0.0 b/s 0s
87.5KiB / 87.5KiB [===============================================================] 100 % 0.0 b/s 0s
3.0GiB / 3.0GiB [=================================================================] 100 % 0.0 b/s 0s
98.7MiB / 98.7MiB [===============================================================] 100 % 0.0 b/s 0s
INFO:    Extracting OCI image...
INFO:    Inserting Apptainer configuration...
INFO:    Running post scriptlet
+ export DEBIAN_FRONTEND=noninteractive
+ apt-get update
Get:1 http://archive.ubuntu.com/ubuntu noble InRelease [256 kB]
...
...
Get:20 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64  Packages [814 kB]
Fetched 32.8 MB in 3s (11.3 MB/s)
Reading package lists... Done
...
...
Setting up python3.12 (3.12.3-1ubuntu0.7) ...
Setting up libevent-openssl-2.1-7t64:amd64 (2.1.12-stable-9ubuntu2) ...
Setting up libhwloc-plugins:amd64 (2.10.0-1build1) ...
Setting up gfortran-13-x86-64-linux-gnu (13.3.0-6ubuntu2~24.04) ...
...
...
+ apt-get clean
+ pip3 install --break-system-packages --no-cache-dir numpy
Collecting numpy
  Downloading numpy-2.3.2-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.1/62.1 kB 2.6 MB/s eta 0:00:00
Downloading numpy-2.3.2-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (16.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.6/16.6 MB 36.5 MB/s eta 0:00:00
Installing collected packages: numpy
Successfully installed numpy-2.3.2
INFO:    Adding environment to container
INFO:    Creating SIF file...
INFO:    To see mksquashfs output with progress bar enable verbose logging
INFO:    Build complete: my_container.sif
$

At this point you should have a rather large container file (.sif) in your directory:

$ ls -l
total 5696984
-rwxr-xr-x 1 n1234 cometloginaccess        110 Aug  5 08:02 build.sh
-rw-r--r-- 1 n1234 cometloginaccess        604 Aug  8 09:12 nvidia_container.def
-rwxr-x--- 1 n1234 cometloginaccess 5833699328 Aug  8 09:15 my_container.sif
$

You can then run commands in that container as you did with the quick start example. Here we test which version of Python 3 is available in the container:

$ module load apptainer
$ apptainer exec my_container.sif python3 -V
Python 3.12.3
$

Or check the version of Ubuntu the container is using:

$ module load apptainer
$ apptainer exec my_container.sif cat /etc/os-release
INFO:    gocryptfs not found, will not be able to use gocryptfs
PRETTY_NAME="Ubuntu 24.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.2 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
$
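
You can also view the metadata stored in an image: apptainer inspect prints the image labels by default, and the --deffile option shows the definition file the image was built from:

$ module load apptainer
$ apptainer inspect --deffile my_container.sif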


Run A Container Under Slurm

Using The Quick-Start Example

Running the container image pulled down previously, we can easily run the same commands under Slurm on any compute node:

#!/bin/bash

#SBATCH --partition=default_free
#SBATCH -c 1
#SBATCH -t 00:05:00

module load apptainer

###########################################
echo "Starting Apptainer image on $HOSTNAME..."
apptainer exec lolcow_latest.sif cowsay "Help, I'm trapped in a container running under Slurm on $HOSTNAME!"
echo "Job completed!"
###########################################

Submit as:

$ sbatch cowsay.sh
Submitted batch job 1022834
$

Check the output:

$ cat slurm-1022834.out
Starting Apptainer image on compute030...
 ____________________________________ 
/ Help, I'm trapped in a container   \
\ running under Slurm on compute030! /
 ------------------------------------ 
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Job completed!
$

Using the Nvidia CUDA Example

The more complex Nvidia + CUDA + Ubuntu container can run on any compute node, but it is especially useful on a GPU node, as it already contains all of the CUDA tools needed, making it much easier to replicate tests between Comet and other systems.

First, let's write an sbatch script:

#!/bin/bash

#SBATCH -p gpu-s_free
#SBATCH -c 1
#SBATCH --gres=gpu:L40:1
#SBATCH -t 00:05:00

module load apptainer

echo ""
echo "====== Checking what GPU is available =================="
# This runs OUTSIDE the container
nvidia-smi

echo ""
echo "======= Checking container CUDA SDK version ============"
# This runs INSIDE the container
apptainer exec my_container.sif /usr/local/cuda/bin/nvcc -V

Now submit the job:

$ sbatch nvidia.sh

Check the output:

$ cat slurm-123454667.out
====== Checking what GPU is available ==================
Fri Aug  8 10:13:44 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40S                    Off |   00000000:83:00.0 Off |                    0 |
| N/A   25C    P8             31W /  350W |       1MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

======= Checking container CUDA SDK version ============
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Tue_May_27_02:21:03_PDT_2025
Cuda compilation tools, release 12.9, V12.9.86
Build cuda_12.9.r12.9/compiler.36037853_0

You can call the CUDA SDK compiler tools and run code from within the container, as well as move it from machine to machine if needed and have a self-contained Nvidia + Ubuntu CUDA SDK runtime environment.
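
When you need the GPU itself to be visible inside the container (rather than just the CUDA compiler), add the --nv flag, which binds the host's Nvidia driver and device files into the container. A minimal sketch, run inside a GPU job allocation with the same my_container.sif image:

$ module load apptainer
$ apptainer exec --nv my_container.sif nvidia-smi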

Remember that you don't need to compile on a GPU node - you can compile anywhere, including the login nodes. You only need the GPU node to run your CUDA code.
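
For example, you could compile a (hypothetical) hello.cu source file inside the container on a login node, and only move to a GPU node to run the resulting binary:

$ module load apptainer
$ apptainer exec my_container.sif nvcc hello.cu -o hello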


Accessing $HOME, /nobackup & Other Directories

$HOME

Unlike some other container technologies, Apptainer automatically makes your own $HOME directory available inside the container by default (see the Apptainer documentation on bind paths and mounts). You do not need to do anything special to access it.

$ cd $HOME
$ apptainer exec path/to/my_container.sif pwd
/mnt/nfs/home/n1234
$

/nobackup

You can access /nobackup in your container by passing the --bind option. It takes an argument of the form source:destination, where the source is the directory you want to make available and the destination is where you want it mounted inside the container. To keep things simple, mount /nobackup at /nobackup inside the container:

$ apptainer exec --bind /nobackup:/nobackup my_container.sif df -h
Filesystem                                                                       Size  Used Avail Use% Mounted on
overlay                                                                           64M   16K   64M   1% /
tmpfs                                                                             95G  168K   95G   1% /dev/shm
/dev/mapper/system-root                                                           24G   17G  6.2G  73% /etc/hosts
storserv05:/mnt/nfs/home/n1234                                                    77T 1003G   72T   2% /mnt/nfs/home/n1234
/dev/mapper/system-tmp                                                            71G  2.0G   66G   3% /tmp
/dev/mapper/system-var                                                           7.8G  2.4G  5.1G  32% /var/tmp
tmpfs                                                                             64M   16K   64M   1% /etc/group
172.31.47.51@o2ib,172.31.31.51@tcp:172.31.47.52@o2ib,172.31.31.52@tcp:/lustre02  1.8P  502G  1.8P   1% /nobackup

While the container is running, you can access any directories that you own or that your group membership gives you access to:

$ apptainer exec --bind /nobackup:/nobackup my_container.sif touch /nobackup/proj/comettestgroup1/container_was_here
$
$ ls -l /nobackup/proj/comettestgroup1/container_was_here 
-rw-r----- 1 n1234 comettestgroup1 0 Aug  8 14:56 /nobackup/proj/comettestgroup1/container_was_here
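
The same --bind option works for other directories too. For example, reusing the project directory above, you could make it available read-only at a different mount point inside the container (the /data mount point here is an arbitrary choice):

$ apptainer exec --bind /nobackup/proj/comettestgroup1:/data:ro my_container.sif ls /data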


Errors And Limitations

Errors Building Apptainer Containers

Apptainer normally requires root (sudo) privileges to build images, but it supports Linux user namespaces, which allow unprivileged users to build images without elevated permissions.

User namespaces are enabled on the login nodes of Comet, and simply logging in assigns you a unique user namespace range - you do not have to take any action. If you can log in to Comet, then you can create Apptainer (and therefore Singularity) container images, and building almost any type of container should work correctly.
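
If you want to confirm that a subordinate ID range has been assigned to your account (an assumption about how the namespace mapping is provisioned on Comet), you can check the standard Linux mapping files:

$ grep $USER /etc/subuid /etc/subgid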

One limitation is that user namespaces only work fully on local filesystems - if you try to build a container in $HOME or on /nobackup you may see odd permission errors during the container installation process (messages such as Unable to change group or Cannot install setuid binary at the point of building your container image).

If you are building a container from a definition file that includes post-install commands (e.g. apt, yum or similar), ensure that you set the following environment variable before running the build:

$ export APPTAINER_TMPDIR=/tmp/

To be clear: running an existing container from $HOME or /nobackup is fully supported; only the initial build process needs to be performed in /tmp, via the APPTAINER_TMPDIR variable above.

Please build your containers on the login nodes; building containers is not supported on the compute nodes.

Limitations

We do not support the use of containers to run network services such as web servers, databases, or similar. There is no means to expose these services externally or to other nodes, so there is no reason to run them - Comet is not for running services. If we detect attempts to use Apptainer to run such services on Comet, those containers will be stopped and/or removed.


Back to Advanced Software Index
