ComfyUI

ComfyUI is a node-based interface and inference engine for generative AI. Users can combine various AI models and operations through nodes to achieve highly customizable and controllable content generation.

ComfyUI - The most powerful open source node-based application for generative AI

from https://www.comfy.org/

You can run ComfyUI on Comet using the Open OnDemand desktop facility. This has the option of running on one of our GPU-accelerated job partitions, which allows ComfyUI to run CUDA-based code on one or more Nvidia L40S 48GB cards.


Installation

ComfyUI has no pre-built Linux packages available, so it must be installed from source. It builds on various existing Nvidia CUDA / cuDNN frameworks, which must be available and must match the versions the application expects.

This can be difficult to achieve on a shared, multi-user system such as Comet, so our recommendation is to build ComfyUI into a container, where its dependencies are bundled entirely with the application itself rather than depending on external software libraries.

Container Definition

Save this build file as comfyui.def:

Bootstrap: docker
From: nvidia/cuda:12.9.1-cudnn-devel-ubuntu24.04

%post
    # Prevent interactive prompts
    export DEBIAN_FRONTEND=noninteractive

    # Update and install only the necessary packages
    apt-get update
    apt-get install -y git python3-pip

    # Clean up the APT cache to save space
    apt-get clean

    # Install the needed Python packages
    pip install --break-system-packages --no-cache-dir torch torchvision torchaudio

    # Install ComfyUI
    mkdir -p /opt && \
    cd /opt && \
    git clone https://github.com/comfyanonymous/ComfyUI.git && \
    cd /opt/ComfyUI && \
    pip install -r requirements.txt --break-system-packages

    # These directories will be mapped to the outside filesystem
    mkdir -p /opt/ComfyUI/user
    mkdir -p /opt/ComfyUI/temp

    # Clean up the pip cache
    rm -rf /root/.cache

Container Build

Follow the general guide for creating Apptainer images on Comet; in particular, you must be on a login node to build the new container. Then run the following:

$ module load apptainer
$ export APPTAINER_TMPDIR=/scratch
$ apptainer build comfyui.sif comfyui.def

We recommend storing the comfyui.sif container in one of your project /nobackup areas: it is a large file and would consume a significant portion of your $HOME quota if left there.
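Once built, it is worth sanity-checking the image before going any further. The command below is a sketch (adjust the path to wherever you stored comfyui.sif): it asks the PyTorch bundled inside the container whether it can see a CUDA device. On a node without a GPU, such as a login node, expect False; within a GPU session, True. It is guarded so it degrades gracefully where Apptainer is not on the PATH.

```shell
# Quick sanity check of the new image (the path to comfyui.sif is an example).
if command -v apptainer >/dev/null 2>&1; then
    apptainer exec --nv comfyui.sif \
        python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())"
else
    echo "apptainer not on PATH - run 'module load apptainer' on Comet first"
fi
```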


Running

Save the script below as ComfyUI.sh. We will use this file to automate the setup of the container and the starting of the ComfyUI application, so that we don't have to type lots of commands each time we want to run it.

#!/bin/bash

# Set this to where you want your ComfyUI user-writeable files to be held
# We recommend using /nobackup, as the model files can be very large!
DIR_ROOT=/nobackup/proj/MY_PROJECT/comfyui

# This should point to where your container image is
# We recommend using /nobackup, as this is a large file!
CONTAINER_FILE=$DIR_ROOT/comfyui.sif

echo "Loading modules..."
module load apptainer
module load CUDA
echo "OK"

echo ""
echo "Creating runtime directories..."
mkdir -p "$DIR_ROOT/models" "$DIR_ROOT/user" "$DIR_ROOT/temp" "$DIR_ROOT/input" "$DIR_ROOT/output"
echo "OK"

echo ""
echo "Running container..."
apptainer exec --nv \
    --bind "$DIR_ROOT/models":/opt/ComfyUI/models \
    --bind "$DIR_ROOT/user":/opt/ComfyUI/user \
    --bind "$DIR_ROOT/temp":/opt/ComfyUI/temp \
    --bind "$DIR_ROOT/input":/opt/ComfyUI/input \
    --bind "$DIR_ROOT/output":/opt/ComfyUI/output \
    "$CONTAINER_FILE" \
    python3 /opt/ComfyUI/main.py

Make ComfyUI.sh executable:

$ chmod +x ComfyUI.sh
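As written, the script always starts ComfyUI with its default options. ComfyUI's main.py accepts command-line flags (for example --port to choose the listening port); if you append "$@" to the script's final line, so it reads python3 /opt/ComfyUI/main.py "$@", any flags given to ComfyUI.sh are forwarded on. The stand-alone sketch below demonstrates the "$@" pass-through with a stand-in function rather than the real apptainer invocation:

```shell
# Stand-in for the final apptainer line of ComfyUI.sh, showing how "$@"
# forwards the wrapper script's arguments unchanged.
launch() {
    # In ComfyUI.sh this body would be the apptainer exec line ending in:
    #   python3 /opt/ComfyUI/main.py "$@"
    printf 'would run: python3 /opt/ComfyUI/main.py %s\n' "$*"
}
launch --port 8200
```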

Once you have progressed this far you will now need to start a desktop session on one of the Comet GPU nodes.

1. Browse to Open OnDemand

In any web browser (on the University campus network or from the wider internet; it doesn't matter which), browse to https://ood01.comet.hpc.ncl.ac.uk. You may be redirected to the Microsoft authentication portal; enter your normal University IT account username and password (the same as used to log in to Comet).

You will then see the main Open OnDemand portal interface:

2. Choose VNC Desktop Session (GPU)

Open the Interactive Apps option from the top menu bar and select the VNC Desktop Session (GPU) option:

3. Change Session Settings

In the VNC Desktop Session (GPU) session parameters form, ensure that you set the following options:


4. Submit / Wait For Session

Submit the form and wait for your session to be scheduled. If you ticked the “Email me when session starts” option, then you can navigate away from the website and continue other work as needed.

Once your session is scheduled you will get the option to launch it:

5. Launch Desktop Session

After launching the session you should see a basic Linux desktop environment in your browser:

6. Start ComfyUI

Open a Linux terminal using the icon from the application launcher bar:

In the Linux terminal, cd to where you saved the comfyui.sif container and the ComfyUI.sh script:

Run the ComfyUI.sh bash script. This will load the Apptainer software module, the Nvidia CUDA libraries and then run the ComfyUI application stored within the container.

If successful the Linux terminal should eventually output:

To see the GUI go to: http://127.0.0.1:ABCD

Here ABCD is a port number, which may differ from session to session.

If it gets to this point, you can click on the URL in the Linux terminal and Firefox will open on the Linux desktop. The ComfyUI interface will show within the browser:

7. Download Model Files

Model files should be downloaded to $DIR_ROOT/models as you set in the ComfyUI.sh script you saved earlier. The models will then be available to the application whilst running within the container.

Model files are frequently very large, so $DIR_ROOT in the script should point to a project folder under /nobackup.

Once downloaded the models can be used by the various ComfyUI templates as normal:
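ComfyUI looks for each kind of model in its own subdirectory of the models folder (checkpoints, vae, loras and so on; these are the stock names ComfyUI itself creates, but check the template you are using for the exact folder it expects). The sketch below pre-creates them under the same DIR_ROOT used in ComfyUI.sh; the default path here is only a placeholder for illustration:

```shell
# Pre-create the usual ComfyUI model subdirectories under $DIR_ROOT/models.
# DIR_ROOT below is a placeholder; set it to match the value in ComfyUI.sh.
DIR_ROOT="${DIR_ROOT:-/tmp/comfyui-demo}"
for sub in checkpoints vae loras controlnet upscale_models embeddings; do
    mkdir -p "$DIR_ROOT/models/$sub"
done
ls "$DIR_ROOT/models"
```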


Errors / Issues

Sometimes it can take a very long time for the first prompt to be processed after loading the model files. In the example below, the first prompt took several hundred seconds to process:

Note the elapsed time:

But the second prompt, without changing models, generated a new result in under 10 seconds:

The elapsed time for the second run:
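Much of the first-run delay is the one-off cost of reading multi-gigabyte model weights from disk and loading them onto the GPU; subsequent prompts reuse the already-loaded model. If you want to watch this happening, a second terminal on the GPU node can poll nvidia-smi (which ships with the Nvidia driver) while the prompt runs. A sketch, guarded so it degrades gracefully away from a GPU node:

```shell
# Show current GPU memory use and utilisation; repeat with e.g. watch -n 5.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=memory.used,utilization.gpu --format=csv
else
    echo "nvidia-smi not found - run this on a GPU node"
fi
```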

