Building more complex processing and analysis pipelines within a container can offer several advantages, such as a reproducible software environment and portability between systems.
The Comet HPC facility supports the following container tool: Apptainer (a drop-in replacement for Singularity).
This section is incomplete
This documentation section on Container Technology is still being written and will not be complete until the Comet HPC facility is fully commissioned.
If you have used Singularity previously, then Apptainer is a 100% like-for-like replacement.
Following the Apptainer Quick Start guide, we can test that everything is working by using the lolcow container image - a tiny Linux container that contains the cowsay tool, which simply prints a string of text to the screen spoken by an ASCII-art cow.
Load the Apptainer module:
$ module load apptainer
$ apptainer version
1.4.1
$
Download an existing (Docker) container image from the GitHub container registry and convert it to the Apptainer/Singularity .sif format:
$ module load apptainer
$ apptainer pull docker://ghcr.io/apptainer/lolcow
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
INFO:    Fetching OCI image...
45.8MiB / 45.8MiB [============================================================] 100 % 20.3 MiB/s 0s
27.2MiB / 27.2MiB [============================================================] 100 % 20.3 MiB/s 0s
INFO:    Extracting OCI image...
INFO:    Inserting Apptainer configuration...
INFO:    Creating SIF file...
INFO:    To see mksquashfs output with progress bar enable verbose logging
$
Run an Apptainer (or Singularity) container image:
$ apptainer exec lolcow_latest.sif cowsay "HPC is cool!"
 ______________
< HPC is cool! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$
TBD
Using the container image pulled down previously, we can easily run the same commands under Slurm on a compute node:
#!/bin/bash

#SBATCH --partition=default_free
#SBATCH -c 1
#SBATCH -t 00:05:00

module load apptainer

###########################################

echo "Starting Apptainer image on $HOSTNAME..."

apptainer exec lolcow_latest.sif cowsay "Help, I'm trapped in a container running under Slurm on $HOSTNAME!"

echo "Job completed!"

###########################################
Submit as:
$ sbatch cowsay.sh
Submitted batch job 1022834
$
Check the output:
$ cat slurm-1022834.out
Starting Apptainer image on compute030...
 ____________________________________
/ Help, I'm trapped in a container   \
\ running under Slurm on compute030! /
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Job completed!
$
Apptainer normally assumes the user is a member of sudoers in order to build container images, but it supports a mechanism called user namespaces which allows unprivileged users to build images without needing elevated permissions.
User namespaces are enabled on the login nodes of Comet, so building almost any type of container there should work correctly.
One limitation to this is that user namespaces only work fully on local filesystems - if you try to build a container on $HOME or /nobackup, you may see odd permission errors during the container installation process (references to “unable to change group”, “cannot install setuid binary” or similar).
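If you want to verify that user namespaces are available in your session, one quick sanity check (a sketch, assuming the standard unshare tool from util-linux is installed on the login node) is to create a user namespace and inspect the mapped identity - if user namespaces are working, you appear as root inside the namespace:

$ unshare --user --map-root-user whoami
root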
If you are building a container from a definition file which includes post-install commands (e.g. apt, yum or similar), ensure that you set the following environment variable prior to running the build:
export APPTAINER_TMPDIR=/tmp/
To be clear: running an existing container from $HOME or /nobackup is fully supported; only the initial build process needs to be performed in /tmp, via the APPTAINER_TMPDIR variable described above.
Please build your containers on the login nodes - building containers is not supported on the compute nodes.
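As an end-to-end sketch of the build workflow, the hypothetical definition file below (lolcow.def - the file name and package choices are illustrative, not a Comet-provided example) runs apt post-install commands in its %post section:

Bootstrap: docker
From: ubuntu:22.04

%post
    # Post-install commands run inside the build environment;
    # these are the steps that need APPTAINER_TMPDIR on a local filesystem
    apt-get update -y
    apt-get install -y cowsay

%runscript
    # Default command executed by 'apptainer run'
    # (cowsay installs to /usr/games on Ubuntu, which is not on the default PATH)
    exec /usr/games/cowsay "$@"

It would then be built on a login node as:

$ module load apptainer
$ export APPTAINER_TMPDIR=/tmp/
$ apptainer build lolcow_custom.sif lolcow.def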
Singularity is now deprecated and the community has largely moved to Apptainer instead. Apptainer is a like-for-like, drop-in replacement for Singularity; if you have previously used Singularity, you can start using Apptainer by simply changing any call to singularity to apptainer instead.
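For example, using the lolcow image from earlier, the only change needed is the name of the binary:

# Old Singularity command:
$ singularity exec lolcow_latest.sif cowsay "Moo!"

# Equivalent Apptainer command:
$ apptainer exec lolcow_latest.sif cowsay "Moo!"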
Historically we did have Singularity on our previous Rocket HPC facility, but it was not advertised and was very much a “use at your own risk” capability.