Amber MD

From Ambermd.org:

Amber is a suite of biomolecular simulation programs. It began in the late 1970's, and is maintained by an active development community; see our history page and our contributors page for more information.

The term “Amber” refers to two things. First, it is a set of molecular mechanical force fields for the simulation of biomolecules; these force fields are in the public domain, and are used in a variety of simulation programs. Second, it is a package of molecular simulation programs which includes source code and demos.

Amber is distributed in two parts: AmberTools and Amber.
  • For more information: https://ambermd.org/index.php
  • A local copy of the Amber24 reference manual is available here: amber24.pdf

Running Amber MD on Comet

Amber (and Ambertools) is large and has many essential and optional dependencies, so it has been installed on Comet as a container image.

The Amber container is stored in the /nobackup/shared/containers directory and is accessible to all users of Comet. You do not need to take a copy of the container file; it should be left in its original location.

You can find the container files here:

  • /nobackup/shared/containers/ambermd.24.25.sif

We normally recommend using the latest version of the container. In the case of Amber, the version numbers represent the versions of Amber (24) and Ambertools (25) installed inside.

Container Image Versions

We may reference a specific container file, such as ambermd.24.25.sif, but you should always check whether this is the most recent version of the container available. Simply run ls on the /nobackup/shared/containers directory to see whether any newer versions are listed.
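
For example (the files shown are those documented on this page; you may see newer versions alongside them):

$ ls /nobackup/shared/containers/ambermd.*
/nobackup/shared/containers/ambermd.24.25.sh  /nobackup/shared/containers/ambermd.24.25.sif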

We have provided a convenience script that automates all of the steps needed to run applications inside the container: it passes through any assigned Nvidia GPU and binds your $HOME, /scratch and /nobackup directories, reducing everything to just two simple commands.

  • /nobackup/shared/containers/ambermd.24.25.sh

There is a corresponding .sh script for each version of the container image we make available.

Just source this file and it will take care of loading apptainer, setting up your bind directories and calling the exec command for you. It gives you a single command called container.run (instead of the really long apptainer exec command) to run anything you want inside the container. For example, to run the sander tool from Ambertools:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run sander
     usage: sander  [-O|A] -i mdin -o mdout -p prmtop -c inpcrd -r restrt
                    [-ref refc -x mdcrd -v mdvel -e mden -frc mdfrc -idip inpdip -rdip rstdip -mdip mddip
                     -inf mdinfo -radii radii -y inptraj -amd amd.log -scaledMD scaledMD.log]
                    -cph-data <file> -ce-data <file> -host ipi_hostname -port ipi_port
Consult the manual for additional options.
$


Amber - Serial

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run pmemd
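
To run this in a batch job, a minimal Slurm script might look like the following sketch. The resource requests and the Amber input/output file names (md.in, prmtop, inpcrd and so on) are placeholders for your own:

#!/bin/bash
# Illustrative resource requests - adjust for your project and inputs
#SBATCH --job-name=amber-serial
#SBATCH --ntasks=1
#SBATCH --time=24:00:00

source /nobackup/shared/containers/ambermd.24.25.sh
# md.in, prmtop, inpcrd etc. are placeholder file names
container.run pmemd -O -i md.in -o md.out -p prmtop -c inpcrd -r restrt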


Amber - Parallel / MPI

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run pmemd.MPI
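
For single-node MPI runs, one option is to launch mpirun from inside the container (OpenMPI is installed in the image). A sketch, with a placeholder process count and input files:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run mpirun -np 4 pmemd.MPI -O -i md.in -o md.out -p prmtop -c inpcrd -r restrt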


Amber - CUDA

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run pmemd.cuda

Remember to allocate an Nvidia GPU card in your Slurm arguments if you want to make use of CUDA acceleration.
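
The exact Slurm options for requesting a GPU are site-specific; a sketch of a batch job requesting a single GPU might look like:

#!/bin/bash
#SBATCH --job-name=amber-cuda
#SBATCH --ntasks=1
# GPU request syntax varies by site - check the Comet Slurm documentation
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

source /nobackup/shared/containers/ambermd.24.25.sh
# md.in, prmtop, inpcrd etc. are placeholder file names
container.run pmemd.cuda -O -i md.in -o md.out -p prmtop -c inpcrd -r restrt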


Amber - Interactive Desktop GUI
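
The GUI tools in the container (for example xleap, listed under Ambertools below) are X11 applications. From an interactive desktop session they can be launched like any other container tool; a sketch, assuming your session provides an X11 display that is passed through to the container:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run xleap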


Ambertools

Ambertools is installed to /opt/ambertools25 and includes almost all of the optional utilities.

Contents of /opt/ambertools25/bin:

AddToBox     cpptraj.cuda         match                  nfe-umbrella-slice   rism1d                      simplepbsa.MPI
ChBox        draw_membrane2       match_atomname         nmode                rism3d.snglpnt              sqm
FEW.pl       edgembar             mdgx                   packmol              rism3d.snglpnt.MPI          sqm.MPI
PropPDB      edgembar.OMP         mdgx.MPI               paramfit             rism3d.snglpnt.cuda         sviol
UnitCell     elsize               mdgx.OMP               paramfit.OMP         rism3d.snglpnt.cuda.double  sviol2
XrayPrep     espgen               mdgx.cuda              parmcal              sander                      teLeap
add_pdb      gbnsr6               mdout2pymbar.pl        parmchk2             sander.LES                  test-api
add_xray     gem.pmemd            memembed               pbsa                 sander.LES.MPI              test-api.MPI
addles       gem.pmemd.MPI        metatwist              pbsa.cuda            sander.MPI                  test-api.cuda
am1bcc       gwh                  mm_pbsa.pl             prepgen              sander.OMP                  test-api.cuda.MPI
ambmask      hcp_getpdb           mm_pbsa_nabnmode       process_mdout.perl   sander.quick.cuda           tinker_to_amber
ambpdb       immers               mm_pbsa_statistics.pl  process_minout.perl  sander.quick.cuda.MPI       tleap
antechamber  makeANG_RST          mmpbsa_py_energy       quick                saxs_md                     ucpp
atomtype     makeCHIR_RST         mmpbsa_py_nabnmode     quick.MPI            saxs_md.OMP                 wrapped_progs
bondtype     makeCSA_RST.na       modxna.sh              quick.cuda           saxs_rism                   xaLeap
cestats      makeDIP_RST.dna      ndfes                  quick.cuda.MPI       saxs_rism.OMP               xleap
cphstats     makeDIP_RST.protein  ndfes-path             reduce               senergy
cpptraj      makeDIST_RST         ndfes-path.OMP         residuegen           sgldinfo.sh
cpptraj.MPI  makeRIGID_RST        ndfes.OMP              resp                 sgldwt.sh
cpptraj.OMP  make_crd_hg          nef_to_RST             respgen              simplepbsa

As an example, to run the AddToBox tool:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run AddToBox
AddToBox >> A program for adding solvent molecules to a crystal cell.

Options:
  -c    : the molecule cell (PDB format)
  -a    : the molecule to add
  -na   : the number of copies to add
  -P    : the upper limit of protein atoms
  -o    : output file (PDB format)
  -RW   : Clipping radius for solvent atoms
  -RP   : Clipping radius for protein atoms
  -IG   : Random number seed
  -NO   : flag for no PDB output (stops after determining the protein fraction of the box)
  -G    : Grid spacing for search (default 0.2)
  -V    : Recursively call AddToBox until all residues have been added. (Default 0 ; any other setting activates recursion)
  -path : Path for AddToBox program on subsequent calls (default ${AMBERHOME}/bin/AddToBox)
$


Amber & Ambertools Configuration

If you require configuration details, Amber was built with the following options:

-- ************************************************************************** 
--                               Build Report 
-- 
--                           3rd Party Libraries 
-- ---building bundled: ----------------------------------------------------- 
-- kmmd - Machine-learning molecular dynamics 
-- ---using installed: ------------------------------------------------------ 
-- blas - for fundamental linear algebra calculations 
-- lapack - for fundamental linear algebra calculations 
-- netcdf - for creating trajectory data files 
-- netcdf-fortran - for creating trajectory data files from Fortran 
-- zlib - for various compression and decompression tasks 
-- libbz2 - for various compression and decompression tasks 
-- libm - for fundamental math routines if they are not contained in the C library 
-- ---disabled: ------------------------------------------------ 
-- libtorch - for fundamental math routines if they are not contained in the C library 

--                                Features: 
-- MPI:                               ON 
-- MVAPICH2-GDR for GPU-GPU comm.:    OFF 
-- OpenMP:                            ON 
-- CUDA:                              ON 
-- NCCL:                              OFF 
-- Build Shared Libraries:            ON 
-- Build GUI Interfaces:              ON 
-- Build Python Programs:             ON 
--  -Python Interpreter:              /usr/bin/python3 (version 3.12) 
-- Build Perl Programs:               ON 
-- Build configuration:               Release 
-- Target Processor:                  x86_64 
-- Build Documentation:               ON 
-- Sander Variants:                   normal LES API LES-API MPI LES-MPI QUICK-MPI QUICK-CUDA 
-- Install location:                  /opt/pmemd24/ 
-- Installation of Tests:             ON 

--                               Compilers: 
--         C: GNU 14.2.0 (/usr/bin/gcc) 
--       CXX: GNU 14.2.0 (/usr/bin/g++) 
--   Fortran: GNU 14.2.0 (/usr/bin/gfortran) 

--                              Building Tools: 
-- emil etc gpu_utils kmmd lib pmemd 

--                            NOT Building Tools: 
-- **************************************************************************

While Ambertools was built with the following:

************************************************************************** 
--                               Build Report 
-- 
--                           3rd Party Libraries 
-- ---building bundled: ----------------------------------------------------- 
-- ucpp - used as a preprocessor for the NAB compiler 
-- boost - C++ support library 
-- kmmd - Machine-learning molecular dynamics 
-- ---using installed: ------------------------------------------------------ 
-- blas - for fundamental linear algebra calculations 
-- lapack - for fundamental linear algebra calculations 
-- arpack - for fundamental linear algebra calculations 
-- netcdf - for creating trajectory data files 
-- netcdf-fortran - for creating trajectory data files from Fortran 
-- fftw - used to do Fourier transforms very quickly 
-- readline - enables an interactive terminal in cpptraj 
-- xblas - used for high-precision linear algebra calculations 
-- zlib - for various compression and decompression tasks 
-- libbz2 - for bzip2 compression in cpptraj 
-- plumed - used as an alternate MD backend for Sander 
-- libm - for fundamental math routines if they are not contained in the C library 
-- tng_io - enables GROMACS tng trajectory input in cpptraj 
-- nlopt - used to perform nonlinear optimizations 
-- mpi4py - MPI support library for MMPBSA.py 
-- pnetcdf - used by cpptraj for parallel trajectory output 
-- perlmol - chemistry library used by FEW 
-- ---disabled: ------------------------------------------------ 
-- c9x-complex - used as a support library on systems that do not have C99 complex.h support 
-- protobuf - protocol buffers library, used for communication with external software in QM/MM 
-- lio - used by Sander to run certain QM routines on the GPU 
-- apbs - used by Sander as an alternate Poisson-Boltzmann equation solver 
-- pupil - used by Sander as an alternate user interface 
-- mkl - alternate implementation of lapack and blas that is tuned for speed 
-- mbx - computes energies and forces for pmemd with the MB-pol model 
-- torchani - enables computation of energies and forces with Torchani 
-- libtorch - enables libtorch C++ library for tensor computation and dynamic neural networks 

--                                Features: 
-- MPI:                               ON 
-- MVAPICH2-GDR for GPU-GPU comm.:    OFF 
-- OpenMP:                            ON 
-- CUDA:                              ON 
-- NCCL:                              OFF 
-- Build Shared Libraries:            ON 
-- Build GUI Interfaces:              ON 
-- Build Python Programs:             ON 
--  -Python Interpreter:              /usr/bin/python3 (version 3.12) 
-- Build Perl Programs:               ON 
-- Build configuration:               RELEASE 
-- Target Processor:                  x86_64 
-- Build Documentation:               ON 
-- Sander Variants:                   normal LES API LES-API MPI LES-MPI QUICK-MPI QUICK-CUDA 
-- Install location:                  /opt/ambertools25/ 
-- Installation of Tests:             ON

--                               Compilers: 
--         C: GNU 14.2.0 (/usr/bin/gcc) 
--       CXX: GNU 14.2.0 (/usr/bin/g++) 
--   Fortran: GNU 14.2.0 (/usr/bin/gfortran) 

--                              Building Tools: 
-- addles ambpdb antechamber cew cifparse cphstats cpptraj emil etc fe-toolkit few gbnsr6 gem.pmemd kmmd leap lib libdlfind mdgx mm_pbsa 
mmpbsa_py modxna moft nabc ndiff-2.00 nfe-umbrella-slice nmode nmr_aux packmol_memgen paramfit parmed pbsa pdb4amber pymsmt 
pype_resp pysander pytraj quick reaxff_puremd reduce rism sander saxs sebomd sff sqm xray xtalutil 

--                            NOT Building Tools: 
-- tcpb-cpp - BUILD_TCPB is not enabled 
-- tcpb-cpp/pytcpb - BUILD_TCPB is not enabled 
-- gpu_utils - Not included in AmberTools 
-- pmemd - Not included in AmberTools 
-- **************************************************************************

In addition, all CFLAGS variables are set per the amber.def container definition below. Per our standards for images that include GCC 14, these are: CFLAGS=-O3 -march=znver5 -pipe.


Accessing Data

As long as you use the container.run method to launch the applications, you will automatically be able to read and write files in your $HOME, /scratch and /nobackup directories.

If you run any of the applications inside the container manually, without using the container.run helper, you will need to use the --bind argument to apptainer to ensure that all relevant directories are exposed within the container.

Do remember that the container filesystem itself cannot be changed, so you won't be able to write to /usr/local, /opt, /etc or any other internal folders. Keep output directories restricted to the three areas listed above.
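
For reference, the following is equivalent to what container.run does for you (see the run script at the end of this page):

$ module load apptainer
$ apptainer exec --nv --bind /scratch:/scratch --bind /nobackup:/nobackup \
    /nobackup/shared/containers/ambermd.24.25.sif pmemd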


Additional Tools

Several further tools / scripts / utilities are installed alongside Amber & Ambertools. The main additions to the container environment are listed below.

APBS

APBS has been installed to /opt within the container, and can be run as follows:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run apbs -h
----------------------------------------------------------------------
    APBS -- Adaptive Poisson-Boltzmann Solver
    Version APBS 3.4.1
    
    Nathan A. Baker (nathan.baker@pnnl.gov)
    Pacific Northwest National Laboratory
    
    Additional contributing authors listed in the code documentation.
    
    Copyright (c) 2010-2020 Battelle Memorial Institute. Developed at the Pacific
    Northwest National Laboratory, operated by Battelle Memorial Institute, Pacific
    Northwest Division for the U.S. Department of Energy.
...
...
$

  • For more information: https://apbs.readthedocs.io/en/latest/

MBX

MBX has been installed to /opt inside the container and can be run with:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run single_point
$ container.run optimize
$ container.run mb_decomp
$ container.run order_frames
$ container.run normal_modes

  • For more information: https://github.com/paesanilab/MBX

PLUMED

PLUMED2 has been installed to /opt inside the container and can be run with:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run plumed help
Usage: plumed [options] [command] [command options]
  plumed [command] -h|--help: to print help for a specific command
Options:
  [help|-h|--help]          : to print this help
  [--is-installed]          : fails if plumed is not installed
  [--has-mpi]               : fails if plumed is running without MPI
  [--has-dlopen]            : fails if plumed is compiled without dlopen
  [--load LIB]              : loads a shared object (typically a plugin library)
  [--standalone-executable] : tells plumed not to look for commands implemented as scripts
Commands:
  plumed benchmark : run a calculation with a fixed trajectory to find bottlenecks in PLUMED
  plumed completion : dump a function usable for programmable completion
...
$

  • For more information: https://github.com/plumed/plumed2

SuiteSparse

SuiteSparse has been installed to /usr/local inside the container (specifically /usr/local/bin, /usr/local/lib and /usr/local/include/suitesparse). Please consult the SuiteSparse documentation for further information.

  • For more information: https://github.com/DrTimothyAldenDavis/SuiteSparse

Torchani

Torchani has been installed to /usr/local inside the container. To run ani:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run ani

To use the Torchani Python libraries from your own scripts (i.e. import torchani) you must use the version of Python 3 installed inside the container to run them:

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run python3 <path/to/your/script.py>

  • For more information: https://aiqm.github.io/torchani/

Building Amber MD on Comet

Important!

This section is intended for RSE HPC staff, or users who are interested in how the software is configured. If you only need to use the software, stop reading here.

Amber appears to have few dependencies at first, but as you configure the source it looks for many additional libraries. At least the following build tools are required:

  • GCC
  • Nvidia CUDA SDK
  • flex
  • bison
  • m4
  • python3
  • build-essential
  • autoconf
  • cmake
  • git

Amber and its dependencies then also need the following system libraries and tools, which makes for quite a complex install:

  • bc, libopenmpi-dev, python3, flex, bison, m4, jq, liblapack-dev, libblas-dev, libarpack2-dev, libucpp-dev, libnlopt-dev, libz-dev, libbz2-dev, libfftw3-mpi-dev, libfftw3-dev, libprotobuf-dev, xorg-dev, libxext-dev, libxt-dev, libx11-dev, libice-dev, libsm-dev, libgomp1, libgmp10-dev, libgsl-dev, libboost-dev, libboost-iostreams-dev, libboost-regex-dev, libboost-timer-dev, libboost-chrono-dev, libboost-filesystem-dev, libboost-graph-dev, libboost-program-options-dev, libpnetcdf-dev, libnetcdf-dev, libnetcdff-dev, libreadline-dev, libchemistry-mol-perl, bash-completion, libmpfr-dev, libeigen3-dev, swig, libumfpack6

Amber also uses the following third-party packages, which are not available as part of any OS install and so must be downloaded, built and installed through more manual means (untar, configure, make, make install):

| Package | Link | Wanted By | Type | Status | Notes |
|---------|------|-----------|------|--------|-------|
| MBX | https://github.com/paesanilab/MBX | Ambertools | Optional | Installed | Builds correctly, but Amber/Ambertools will not link against it, so it is available as a standalone tool only. We warn against parallel builds (i.e. make -j2 or higher): each build task can grow to 20 GB in resident size and quickly lead to out-of-memory errors on your build/compile host. Binaries are under /opt/bin, libraries under /opt/lib. |
| PLUMED | https://github.com/plumed/plumed2 | Ambertools | Optional | Installed | Binaries installed to /opt/bin, libraries to /opt/lib. |
| XBLAS | http://www.netlib.org/xblas | Ambertools | Optional | Installed | Library copied to /opt/lib. |
| Torchani | https://github.com/aiqm/torchani | Ambertools | Optional | Installed | Installed under $PYTHONPATH. CUDA extensions compiled with TORCH_CUDA_ARCH_LIST=8.9 ani build-extensions. |
| tng_io | https://gitlab.com/gromacs/tng/ | Ambertools | Optional | Installed | Binaries installed to /usr/local/bin, libraries to /usr/local/lib. |
| apbs | https://github.com/Electrostatics/apbs | Ambertools | Optional | Installed | Binaries installed to /opt/bin, libraries to /opt/lib. |
| umfpack | https://github.com/DrTimothyAldenDavis/SuiteSparse | Ambertools | Optional | Installed | Binaries installed to /usr/local/bin, libraries to /usr/local/lib. |
| LIO | https://github.com/MALBECC/lio | Ambertools | Optional | Not installed | Will not compile against CUDA SDK 12, so it was not included in the build of Amber. |
| Intel MKL | - | Ambertools | Optional | Not used | Not used on Comet due to the AMD Epyc CPU architecture. |
| PUPIL | https://pupil.sourceforge.net/ | Ambertools | Optional | Not installed | An optional user interface for Ambertools. |
| libtorch | https://docs.pytorch.org/cppdocs/installing.html | Ambertools | Optional | Disabled | Ambertools will not link correctly against the binary install of libtorch, throwing an error part way through the compile of the Ambertools source; it has therefore been disabled. |
| MVAPICH2-GDR | - | Amber, Ambertools | Optional | Not installed | Used for GPU-GPU communication. Not installed on Comet. |
| NCCL | https://github.com/NVIDIA/nccl | Amber, Ambertools | Optional | Not installed | Used for efficient GPU communication, such as IB-connected multiple GPU cards and/or NVLink. Not installed on Comet or its L40S cards. |

Build script:

Note that you must download the files ambertools25.tar.bz2 and pmemd24.tar.bz2 from https://ambermd.org/GetAmber.php (they are behind a download form) and place them in the same directory as the build script.
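
Before starting the build, the working directory should therefore contain the definition file, the build script and the two tarballs. The build script name build.sh is illustrative; ambermd.def and the tarball names are those used below:

$ ls
ambermd.def  ambertools25.tar.bz2  build.sh  pmemd24.tar.bz2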

#!/bin/bash

echo "Loading modules..."
module load apptainer

echo ""
echo "Building container..."
export APPTAINER_TMPDIR=/scratch

# You must supply a copy of AMBERMD tar files 
# in this SOURCE_DIR
SOURCE_DIR=$(pwd)

AM24="pmemd24.tar.bz2"
AM25="ambertools25.tar.bz2"

echo ""
echo "Checking source files..."
if [ -s "$SOURCE_DIR/$AM24" ]
then
	echo "- Found - $SOURCE_DIR/$AM24"
else
	echo "- WARNING - $SOURCE_DIR/$AM24 is MISSING"
	echo ""
	echo "Press return to continue or Control+C to exit and fix"
	read	
fi

if [ -s "$SOURCE_DIR/$AM25" ]
then
    echo "- Found - $SOURCE_DIR/$AM25"
else
    echo "- WARNING - $SOURCE_DIR/$AM25 is MISSING"
    echo ""
    echo "Press return to continue or Control+C to exit and fix"
    read
fi

apptainer build --bind $SOURCE_DIR:/mnt ambermd.24.25.sif ambermd.def 2>&1 | tee ambermd.log

Container Definition:

Bootstrap: docker
From: nvidia/cuda:12.8.1-cudnn-devel-ubuntu24.04

####################################################################
#
# Amber MD container
# ===================
# This is a runtime environment for the Amber MD tools.
# Please see: 
#	https://hpc.researchcomputing.ncl.ac.uk/dokuwiki/dokuwiki/doku.php?id=advanced:software:ambermd
#
# Local file Requirements
# ========================
# You MUST have the ambertools25.tar.bz2 and pmemd24.tar.bz2 tarballs
# in the same directory as running the build script.
#
# Non-Ubuntu Requirements
# ========================
# MBX			https://github.com/paesanilab/MBX	- Not working
# PLUMED		https://github.com/plumed/plumed2
# LIO			https://github.com/MALBECC/lio		- Not working
# XBLAS			http://www.netlib.org/xblas
# Torchani		https://github.com/aiqm/torchani
# tng_io		https://gitlab.com/gromacs/tng/
# apbs			https://github.com/Electrostatics/apbs
# umfpack		https://github.com/DrTimothyAldenDavis/SuiteSparse
# libtorch		https://pytorch.org/get-started/locally/	- Not working
#
####################################################################

%post
    # Prevent interactive prompts
    export DEBIAN_FRONTEND=noninteractive

####################################################################
#
# Basic system packages
#
####################################################################

    # Update & install only necessary packages
    apt-get update
    
	# Base stuff everything will need
	apt-get install -y \
		apt-utils \
		aptitude \
		autoconf \
		automake \
		build-essential \
		cmake \
		gcc-14 \
		g++-14 \
		gfortran-14 \
		git \
		less \
		unzip \
		vim \
		wget 

	# These are specifically needed by Amber MD, or its dependencies
	apt-get install -y \
		bc \
		libopenmpi-dev \
		python3 \
		flex \
		bison \
		m4 \
		jq \
		liblapack-dev \
		libblas-dev \
		libarpack2-dev \
		libucpp-dev \
		libnlopt-dev \
		libnlopt-cxx-dev \
		libz-dev \
		libbz2-dev \
		libfftw3-mpi-dev \
		libfftw3-dev \
		libprotobuf-dev \
		xorg-dev \
		libxext-dev \
		libxt-dev \
		libx11-dev \
		libice-dev \
		libsm-dev \
		libgomp1 \
		libgmp10-dev \
		libgsl-dev \
		libboost-dev \
		libboost-iostreams-dev \
		libboost-regex-dev \
		libboost-timer-dev \
		libboost-chrono-dev \
		libboost-filesystem-dev \
		libboost-graph-dev \
		libboost-program-options-dev \
		libboost-thread-dev \
		libpnetcdf-dev \
		libnetcdf-dev \
		libnetcdff-dev \
		libreadline-dev \
		libchemistry-mol-perl \
		bash-completion \
		libmpfr-dev \
		libeigen3-dev \
		swig \
		libumfpack6

	# Python 3 modules needed by Amber MD
	apt-get install -y \
		python3-pip \
		python3-numpy \
		python3-tk \
		python3-scipy \
		python3-matplotlib \
		python3-mpi4py
		
    # Clean up APT cache to save space
    apt-get clean

	# Clean out Python pip cache
	pip3 cache purge 

#################################################################################
#
# This is all the custom stuff needed to build the various bioinformatics tools
#
#################################################################################

	# This flag needs to be set to indicate which CPU architecture we
	# are optimising for.
	AMD_ARCH=1

	if [ "$AMD_ARCH" = "1" ]
	then
		# Compiling on AMD Epyc
		export BASE_CFLAGS="-O3 -march=znver5 -pipe"
		export BASE_CFLAGS_ALT="-O3 -march=znver5 -pipe"
		export MAKE_JOBS=1
	else
		# Compiling on generic system
		export BASE_CFLAGS=""
		export BASE_CFLAGS_ALT=""
		export MAKE_JOBS=1
	fi
	
	# WARNING!
	# ========
	# Do not try to increase the parallel make jobs above 1 or 2
	# We have observed that the memory used by each gcc/make process
	# that is launched during the compile of Amber 24 can be up to 20GB in
	# resident, in-memory size according to 'top'. Unlike most C/C++ builds
	# the use of parallel make can easily lead to out-of-memory conditions.
	
	# Ensure we are compiling with GCC 14
	update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 20
	update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-14 20
	update-alternatives --install /usr/bin/gfortran gfortran /usr/bin/gfortran-14 20
	
	export CC=gcc-14
	export CXX=g++-14
	export FC=gfortran-14
	export CFLAGS="$BASE_CFLAGS -I/usr/local/include -I/opt/include"
	export CPPFLAGS=""
	export CXXFLAGS="$CFLAGS"
	export PATH=/usr/local/bin:/opt/bin:$PATH
	
	#export MBX_DIR=/opt
	export LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:$LD_LIBRARY_PATH
	export PKG_CONFIG_PATH=/opt/lib/pkgconfig:/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
	export PLUMED_KERNEL=/opt/lib/libplumedKernel.so

	echo ""
	echo "Post-OS-install setup for Amber MD container"
	echo "============================================"

	# A download place for external libraries
	mkdir -p /src/zipped
	
	# Where installations go
	mkdir -p /opt/bin
	mkdir -p /opt/lib
	mkdir -p /opt/include
	mkdir -p /opt/share

	# libtorch
	echo ""
	echo "1. Install libtorch"
	echo "====================="
	echo "SKIPPED"
	#cd /src
	#wget https://download.pytorch.org/libtorch/cu128/libtorch-shared-with-deps-2.10.0%2Bcu128.zip -O zipped/libtorch-shared-with-deps-2.10.0-cu128.zip
	#cd /src
	#unzip zipped/libtorch-shared-with-deps-2.10.0-cu128.zip
	#mv libtorch/include/* /opt/include/
	#mv libtorch/lib/* /opt/lib/
	#mv libtorch/share/* /opt/share/

	# MBX is an optional library that Ambertools uses
	# WARNING - This is a very long, slow compile.
	echo ""
    echo "2a. Install MBX"
    echo "==============="
    echo "SKIPPED"
	#cd /src
	#git clone https://github.com/paesanilab/MBX.git
	#cd MBX/
	#autoreconf -fi
	#./configure --enable-shared --prefix=/opt
	#make
	#make install
	
	# PLUMED is an optional library
	echo ""
	echo "2b. Install PLUMED"
	echo "=================="
	cd /src
	wget https://github.com/plumed/plumed2/releases/download/v2.10.0/plumed-src-2.10.0.tgz -O zipped/plumed-src-2.10.0.tgz
	tar -zxf zipped/plumed-src-2.10.0.tgz
	cd plumed-2.10.0
	./configure --prefix=/opt
	make
	make install
	echo '_plumed() { eval "$(plumed --no-mpi completion 2>/dev/null)";}' >> /etc/bash.bashrc
	echo 'complete -F _plumed -o default plumed'  >> /etc/bash.bashrc
	
	# LIO is an optional library
	# ERROR - Does not compile on CUDA 12+
	echo ""
	echo "2c. Install LIO"
	echo "==============="
	echo "SKIPPED - FAULTY ON CUDA 12.8"
	#cd /src
	#git clone https://github.com/MALBECC/lio.git
	#cd lio
	#CXXFLAGS="$CFLAGS -I/usr/local/cuda-12.8/targets/x86_64-linux/include" make cuda=1 cpu=1
	
	# XBLAS is an optional library
	echo ""
	echo "2d. Install XBLAS"
	echo "=================="
	cd /src
	wget http://www.netlib.org/xblas/xblas.tar.gz -O zipped/xblas.tar.gz
	cd /src
	tar -zxf zipped/xblas.tar.gz
	cd xblas-1.0.248
	./configure --prefix=/opt
	make -j2
	cp libxblas.a /opt/lib
	
	# Torchani is an optional library
	echo ""
	echo "2e. Install Torchani"
	echo "===================="
	pip install torch==2.8 --index-url https://download.pytorch.org/whl/cu128 --break-system-packages
	pip install torchani --break-system-packages
	
	# L40S cards are "compute capability" 8.9 - see:
	# https://en.wikipedia.org/wiki/CUDA#GPUs_supported
	TORCH_CUDA_ARCH_LIST=8.9 ani build-extensions
	
	# tng_io is an optional library
	echo ""
	echo "2f. Install tng_io"
	echo "=================="
	cd /src
	git clone https://gitlab.com/gromacs/tng.git
	cd tng
	mkdir build
	cd build
	cmake ..
	make -j2
	make install
	
	# umfpack (inside SuiteSparse) is an optional library
	echo ""
	echo "2g. Install suiteparse/umfpack"
	echo "=============================="
	cd /src
	wget https://github.com/DrTimothyAldenDavis/SuiteSparse/archive/refs/tags/v7.12.2.tar.gz -O zipped/suiteparse-v7.12.2.tar.gz
	cd /src
	tar -zxf zipped/suiteparse-v7.12.2.tar.gz
	cd SuiteSparse-7.12.2
	cd build
	cmake ..
	make -j2
	make install
	
	# apbs is an optional library
	echo ""
	echo "2h. Install apbs"
	echo "=================="
	cd /src
	# These are precompiled binaries - apbs is a pain to build from source
	wget https://github.com/Electrostatics/apbs/releases/download/v3.4.1/APBS-3.4.1.Linux.zip -O zipped/APBS-3.4.1.Linux.zip
	cd /src
	unzip zipped/APBS-3.4.1.Linux.zip
	cd APBS-3.4.1.Linux
	cp -a -v bin/* /opt/bin/
	cp -a -v lib/* /opt/lib/
	cp -a include/* /opt/include/
	cp -a share/* /opt/share/
	
	# Install Amber 24
	echo ""
	echo "Amber (A). Install Amber 24"
	echo "==========================="
	cd /src
	if [ -s /mnt/pmemd24.tar.bz2 ]
	then
		tar -jxf /mnt/pmemd24.tar.bz2
		cd /src/pmemd24_src/build
		CC=gcc-14 FC=gfortran-14 CXX=g++-14 cmake .. \
			-DCMAKE_INSTALL_PREFIX=/opt/pmemd24 \
			-DCMAKE_BUILD_TYPE=Release \
			-DCOMPILER=GNU  \
			-DMPI=TRUE \
			-DCUDA=TRUE \
			-DCUDNN=TRUE \
			-DOPENMP=TRUE \
			-DINSTALL_TESTS=TRUE \
			-DDOWNLOAD_MINICONDA=FALSE \
			-DBUILD_PYTHON=TRUE \
			-DBUILD_PERL=TRUE \
			-DBUILD_GUI=TRUE \
			-DPMEMD_ONLY=TRUE \
			-DCHECK_UPDATES=FALSE
			
		make
		make install
		
		echo 'source /opt/pmemd24/amber.sh' >> /etc/bash.bashrc
	else
		echo "Amber MD 24 source file not found"
		exit 1
	fi
	
	# Install Ambertools 25
	echo ""
    echo "Amber (B). Install Ambertools 25"
    echo "================================"
    cd /src
    if [ -s /mnt/ambertools25.tar.bz2 ]
    then
        tar -jxf /mnt/ambertools25.tar.bz2
		cd /src/ambertools25_src/build
		AMBER_PREFIX=/src/ambertools25_src MBX_DIR=/opt CC=gcc-14 FC=gfortran-14 CXX=g++-14 cmake .. \
			-DCMAKE_INSTALL_PREFIX=/opt/ambertools25 \
			-DCOMPILER=GNU \
			-DMPI=TRUE \
			-DCUDA=TRUE \
			-DCUDNN=TRUE \
			-DCUDNN_INCLUDE_PATH=/usr/local/lib/python3.12/dist-packages/nvidia/cudnn/include \
			-DCUDNN_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/nvidia/cudnn/lib \
			-DOPENMP=TRUE \
			-DINSTALL_TESTS=TRUE \
			-DDOWNLOAD_MINICONDA=FALSE \
			-DBUILD_PYTHON=TRUE \
			-DBUILD_PERL=TRUE \
			-DCHECK_UPDATES=FALSE \
			-DLIBTORCH=OFF \
			-DTORCH_HOME=/opt \
			-DLIBTORCH_INCLUDE_DIRS=/opt/include \
			-DLIBTORCH_LIBRARIES=/opt/lib \
			-DXBLAS_LIBRARY=/opt/lib/libxblas.a \
			-DMBX_DIR=/opt \
			-DPLUMED_ROOT=/opt \
			-DCMAKE_PREFIX_PATH=/opt \
			-DBUILD_TCPB=FALSE \
			-DBUILD_REAXFF_PUREMD=TRUE
		make
		make install
		echo 'source /opt/ambertools25/amber.sh' >> /etc/bash.bashrc
	else
       echo "Amber MD 25 source file not found"
       exit 1
    fi

	# Remove all src packages
	echo ""
	echo "Cleaning up downloaded src tree"
	echo "=================================="
	cd
	rm -f /src/zipped/*
	rm -rf /src
	pip3 cache purge
	
	echo ""
	echo "All done"

%environment
	# Ambertools
	export AMBERHOME=/opt/ambertools25
	export PERL5LIB="$AMBERHOME/lib/perl:$PERL5LIB"
	export PYTHONPATH="$AMBERHOME/local/lib/python3.12/dist-packages:$PYTHONPATH"
	export LD_LIBRARY_PATH="$AMBERHOME/lib:$LD_LIBRARY_PATH"
	export QUICK_BASIS="$AMBERHOME/AmberTools/src/quick/basis"
	export PATH="$AMBERHOME/bin:$PATH"
	
	# Amber
	export PMEMDHOME=/opt/pmemd24
	export PATH="$PMEMDHOME/bin:$PATH"
	
	# General environment variables for everything else
	export LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:$LD_LIBRARY_PATH
	export PATH=/usr/local/bin:/opt/bin:$PATH
	export CC=gcc-14
	export CXX=g++-14
	export FC=gfortran-14
	export CFLAGS="-O"
	export CXXFLAGS="$CFLAGS"
	export MANPATH=/opt/man
	export PLUMED_VIMPATH=/opt/lib/plumed/vim
	export PKG_CONFIG_PATH=/opt/lib/pkgconfig:/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
	export PLUMED_KERNEL=/opt/lib/libplumedKernel.so

%runscript

Run Script:

#!/bin/bash

module load apptainer

IMAGE_NAME=/nobackup/shared/containers/ambermd.24.25.sif

container.run() {
    # Run a command inside the container...
    # automatically bind the /scratch and /nobackup dirs
    # pass through any additional parameters given on the command line
    apptainer exec --nv --bind /scratch:/scratch --bind /nobackup:/nobackup ${IMAGE_NAME} "$@"
}

