From Ambermd.org:
Amber is a suite of biomolecular simulation programs. It began in the late 1970's, and is maintained by an active development community; see our history page and our contributors page for more information. The term “Amber” refers to two things. First, it is a set of molecular mechanical force fields for the simulation of biomolecules; these force fields are in the public domain, and are used in a variety of simulation programs. Second, it is a package of molecular simulation programs which includes source code and demos. Amber is distributed in two parts: AmberTools and Amber.
Because Amber (and Ambertools) is so large, with so many essential and optional dependencies, it has been installed on Comet as a container image.
The Amber container is stored in the /nobackup/shared/containers directory and is accessible to all users of Comet. You do not need to take a copy of the container file; it should be left in its original location.
You can find the container files here:
/nobackup/shared/containers/ambermd.24.25.sif
We normally recommend using the latest version of the container. In the case of Amber, the version numbers represent the versions of Amber (24) and Ambertools (25) installed inside.
Container Image Versions
We may reference a specific container file, such as ambermd.24.25.sif, but you should always check whether this is the most recent version of the container available. Simply ls the /nobackup/shared/containers directory to see whether any newer versions are listed.
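For example (ambermd.24.25 was the current version at the time of writing; the listing below is illustrative):

$ ls /nobackup/shared/containers
ambermd.24.25.sh  ambermd.24.25.sif  ...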
We have provided a convenience script that automates all of the steps needed to run applications inside the container - passing through any assigned Nvidia GPU and exposing your $HOME, /scratch and /nobackup directories - reducing everything to just two simple commands.
/nobackup/shared/containers/ambermd.24.25.sh
There is a corresponding .sh script for each version of the container image we make available.
Just source this file and it will take care of loading apptainer, setting up your bind directories and calling the exec command for you. It gives you a single command called container.run (instead of the really long apptainer exec command) to run anything you want inside the container - for example, to run the sander tool from Ambertools:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run sander

usage: sander  [-O|A] -i mdin -o mdout -p prmtop -c inpcrd -r restrt
               [-ref refc -x mdcrd -v mdvel -e mden -frc mdfrc -idip inpdip
                -rdip rstdip -mdip mddip -inf mdinfo -radii radii -y inptraj
                -amd amd.log -scaledMD scaledMD.log]
               -cph-data <file> -ce-data <file>
               -host ipi_hostname -port ipi_port
Consult the manual for additional options.
$
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run pmemd

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run pmemd.MPI

$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run pmemd.cuda
Remember to allocate an Nvidia GPU card in your Slurm arguments if you want to make use of CUDA acceleration.
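As a sketch, a minimal Slurm batch script for a single-GPU pmemd.cuda run might look like the following (the GPU request syntax, time limit and input file names are illustrative; adjust them for your project and the cluster's partition layout):

#!/bin/bash
#SBATCH --job-name=amber-md
#SBATCH --gres=gpu:1        # request one Nvidia GPU so pmemd.cuda has a device to use
#SBATCH --time=24:00:00

# Set up the container.run helper, then run pmemd.cuda on example inputs
source /nobackup/shared/containers/ambermd.24.25.sh
container.run pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out -r restrt -x mdcrd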
Ambertools is installed to /opt/ambertools25 and includes almost all of the available optional utilities.
Contents of /opt/ambertools25/bin:
AddToBox      cpptraj.cuda          match                  nfe-umbrella-slice   rism1d                      simplepbsa.MPI
ChBox         draw_membrane2        match_atomname         nmode                rism3d.snglpnt              sqm
FEW.pl        edgembar              mdgx                   packmol              rism3d.snglpnt.MPI          sqm.MPI
PropPDB       edgembar.OMP          mdgx.MPI               paramfit             rism3d.snglpnt.cuda         sviol
UnitCell      elsize                mdgx.OMP               paramfit.OMP         rism3d.snglpnt.cuda.double  sviol2
XrayPrep      espgen                mdgx.cuda              parmcal              sander                      teLeap
add_pdb       gbnsr6                mdout2pymbar.pl        parmchk2             sander.LES                  test-api
add_xray      gem.pmemd             memembed               pbsa                 sander.LES.MPI              test-api.MPI
addles        gem.pmemd.MPI         metatwist              pbsa.cuda            sander.MPI                  test-api.cuda
am1bcc        gwh                   mm_pbsa.pl             prepgen              sander.OMP                  test-api.cuda.MPI
ambmask       hcp_getpdb            mm_pbsa_nabnmode       process_mdout.perl   sander.quick.cuda           tinker_to_amber
ambpdb        immers                mm_pbsa_statistics.pl  process_minout.perl  sander.quick.cuda.MPI       tleap
antechamber   makeANG_RST           mmpbsa_py_energy       quick                saxs_md                     ucpp
atomtype      makeCHIR_RST          mmpbsa_py_nabnmode     quick.MPI            saxs_md.OMP                 wrapped_progs
bondtype      makeCSA_RST.na        modxna.sh              quick.cuda           saxs_rism                   xaLeap
cestats       makeDIP_RST.dna       ndfes                  quick.cuda.MPI       saxs_rism.OMP               xleap
cphstats      makeDIP_RST.protein   ndfes-path             reduce               senergy
cpptraj       makeDIST_RST          ndfes-path.OMP         residuegen           sgldinfo.sh
cpptraj.MPI   makeRIGID_RST         ndfes.OMP              resp                 sgldwt.sh
cpptraj.OMP   make_crd_hg           nef_to_RST             respgen              simplepbsa
As an example, to run the AddToBox tool:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run AddToBox

AddToBox >> A program for adding solvent molecules to a crystal cell.

Options:
  -c     : the molecule cell (PDB format)
  -a     : the molecule to add
  -na    : the number of copies to add
  -P     : the upper limit of protein atoms
  -o     : output file (PDB format)
  -RW    : Clipping radius for solvent atoms
  -RP    : Clipping radius for protein atoms
  -IG    : Random number seed
  -NO    : flag for no PDB output (stops after determining the protein fraction of the box)
  -G     : Grid spacing for search (default 0.2)
  -V     : Recursively call AddToBox until all residues have been added.
           (Default 0; any other setting activates recursion)
  -path  : Path for AddToBox program on subsequent calls (default ${AMBERHOME}/bin/AddToBox)
$
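A typical real invocation, with hypothetical input files, would then look something like:

$ container.run AddToBox -c cell.pdb -a water.pdb -na 100 -o cell_solvated.pdb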
If you require configuration details, Amber was built with the following options:
-- **************************************************************************
-- Build Report
-- 3rd Party Libraries
-- ---building bundled: -----------------------------------------------------
--   kmmd - Machine-learning molecular dynamics
-- ---using installed: ------------------------------------------------------
--   blas - for fundamental linear algebra calculations
--   lapack - for fundamental linear algebra calculations
--   netcdf - for creating trajectory data files
--   netcdf-fortran - for creating trajectory data files from Fortran
--   zlib - for various compression and decompression tasks
--   libbz2 - for various compression and decompression tasks
--   libm - for fundamental math routines if they are not contained in the C library
-- ---disabled: ------------------------------------------------
--   libtorch - for fundamental math routines if they are not contained in the C library
-- Features:
--   MPI: ON
--   MVAPICH2-GDR for GPU-GPU comm.: OFF
--   OpenMP: ON
--   CUDA: ON
--   NCCL: OFF
--   Build Shared Libraries: ON
--   Build GUI Interfaces: ON
--   Build Python Programs: ON
--    -Python Interpreter: /usr/bin/python3 (version 3.12)
--   Build Perl Programs: ON
--   Build configuration: Release
--   Target Processor: x86_64
--   Build Documentation: ON
--   Sander Variants: normal LES API LES-API MPI LES-MPI QUICK-MPI QUICK-CUDA
--   Install location: /opt/pmemd24/
--   Installation of Tests: ON
-- Compilers:
--         C: GNU 14.2.0 (/usr/bin/gcc)
--       CXX: GNU 14.2.0 (/usr/bin/g++)
--   Fortran: GNU 14.2.0 (/usr/bin/gfortran)
-- Building Tools:
--   emil etc gpu_utils kmmd lib pmemd
-- NOT Building Tools:
-- **************************************************************************
While Ambertools was built with the following:
-- **************************************************************************
-- Build Report
-- 3rd Party Libraries
-- ---building bundled: -----------------------------------------------------
--   ucpp - used as a preprocessor for the NAB compiler
--   boost - C++ support library
--   kmmd - Machine-learning molecular dynamics
-- ---using installed: ------------------------------------------------------
--   blas - for fundamental linear algebra calculations
--   lapack - for fundamental linear algebra calculations
--   arpack - for fundamental linear algebra calculations
--   netcdf - for creating trajectory data files
--   netcdf-fortran - for creating trajectory data files from Fortran
--   fftw - used to do Fourier transforms very quickly
--   readline - enables an interactive terminal in cpptraj
--   xblas - used for high-precision linear algebra calculations
--   zlib - for various compression and decompression tasks
--   libbz2 - for bzip2 compression in cpptraj
--   plumed - used as an alternate MD backend for Sander
--   libm - for fundamental math routines if they are not contained in the C library
--   tng_io - enables GROMACS tng trajectory input in cpptraj
--   nlopt - used to perform nonlinear optimizations
--   mpi4py - MPI support library for MMPBSA.py
--   pnetcdf - used by cpptraj for parallel trajectory output
--   perlmol - chemistry library used by FEW
-- ---disabled: ------------------------------------------------
--   c9x-complex - used as a support library on systems that do not have C99 complex.h support
--   protobuf - protocol buffers library, used for communication with external software in QM/MM
--   lio - used by Sander to run certain QM routines on the GPU
--   apbs - used by Sander as an alternate Poisson-Boltzmann equation solver
--   pupil - used by Sander as an alternate user interface
--   mkl - alternate implementation of lapack and blas that is tuned for speed
--   mbx - computes energies and forces for pmemd with the MB-pol model
--   torchani - enables computation of energies and forces with Torchani
--   libtorch - enables libtorch C++ library for tensor computation and dynamic neural networks
-- Features:
--   MPI: ON
--   MVAPICH2-GDR for GPU-GPU comm.: OFF
--   OpenMP: ON
--   CUDA: ON
--   NCCL: OFF
--   Build Shared Libraries: ON
--   Build GUI Interfaces: ON
--   Build Python Programs: ON
--    -Python Interpreter: /usr/bin/python3 (version 3.12)
--   Build Perl Programs: ON
--   Build configuration: RELEASE
--   Target Processor: x86_64
--   Build Documentation: ON
--   Sander Variants: normal LES API LES-API MPI LES-MPI QUICK-MPI QUICK-CUDA
--   Install location: /opt/ambertools25/
--   Installation of Tests: ON
-- Compilers:
--         C: GNU 14.2.0 (/usr/bin/gcc)
--       CXX: GNU 14.2.0 (/usr/bin/g++)
--   Fortran: GNU 14.2.0 (/usr/bin/gfortran)
-- Building Tools:
--   addles ambpdb antechamber cew cifparse cphstats cpptraj emil etc fe-toolkit
--   few gbnsr6 gem.pmemd kmmd leap lib libdlfind mdgx mm_pbsa mmpbsa_py modxna
--   moft nabc ndiff-2.00 nfe-umbrella-slice nmode nmr_aux packmol_memgen paramfit
--   parmed pbsa pdb4amber pymsmt pype_resp pysander pytraj quick reaxff_puremd
--   reduce rism sander saxs sebomd sff sqm xray xtalutil
-- NOT Building Tools:
--   tcpb-cpp - BUILD_TCPB is not enabled
--   tcpb-cpp/pytcpb - BUILD_TCPB is not enabled
--   gpu_utils - Not included in AmberTools
--   pmemd - Not included in AmberTools
-- **************************************************************************
In addition, all CFLAGS variables are set per the ambermd.def container definition, below. These are, per our standards for images which have GCC 14 included: CFLAGS=-O3 -march=znver5 -pipe.
As long as you use the container.run method to launch the applications, you will automatically be able to read and write files in your $HOME, /scratch and /nobackup directories.
If you run any of the applications inside the container manually, without using the container.run helper, you will need to use the --bind argument to apptainer to ensure that all relevant directories are exposed within the container.
Do remember that the container filesystem itself cannot be changed - you won't be able to write to or update /usr/local, /opt, /etc or any other internal folders - so keep output directories restricted to the three areas listed above.
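For reference, a manual invocation equivalent to what the container.run helper does (see the run script at the end of this page) looks like the following; apptainer binds your $HOME automatically:

$ module load apptainer
$ apptainer exec --nv \
    --bind /scratch:/scratch \
    --bind /nobackup:/nobackup \
    /nobackup/shared/containers/ambermd.24.25.sif sander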
Several further tools / scripts / utilities are installed alongside Amber & Ambertools. The main additions to the container environment are listed below.
APBS has been installed to /opt within the container, and can be run as follows:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run apbs -h

----------------------------------------------------------------------
    APBS -- Adaptive Poisson-Boltzmann Solver
    Version APBS 3.4.1

    Nathan A. Baker (nathan.baker@pnnl.gov)
    Pacific Northwest National Laboratory

    Additional contributing authors listed in the code documentation.

    Copyright (c) 2010-2020 Battelle Memorial Institute. Developed at the
    Pacific Northwest National Laboratory, operated by Battelle Memorial
    Institute, Pacific Northwest Division for the U.S. Department of Energy.
...
...
$
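For a real calculation, APBS takes an input file as its argument; with a hypothetical input.in prepared per the APBS documentation:

$ container.run apbs input.in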
MBX has been installed to /opt inside the container and can be run with:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run single_point
$ container.run optimize
$ container.run mb_decomp
$ container.run order_frames
$ container.run normal_modes
PLUMED2 has been installed to /opt inside the container and can be run with:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run plumed help

Usage: plumed [options] [command] [command options]
  plumed [command] -h|--help: to print help for a specific command

Options:
  [help|-h|--help]          : to print this help
  [--is-installed]          : fails if plumed is not installed
  [--has-mpi]               : fails if plumed is running without MPI
  [--has-dlopen]            : fails if plumed is compiled without dlopen
  [--load LIB]              : loads a shared object (typically a plugin library)
  [--standalone-executable] : tells plumed not to look for commands implemented as scripts

Commands:
  plumed benchmark : run a calculation with a fixed trajectory to find bottlenecks in PLUMED
  plumed completion : dump a function usable for programmable completion
...
$
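For example, PLUMED's info subcommand can report the installed version:

$ container.run plumed info --version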
SuiteSparse has been installed to /usr/local inside the container (specifically /usr/local/bin, /usr/local/lib and /usr/local/include/suitesparse). Please consult the SuiteSparse documentation for further information.
Torchani has been installed to /usr/local inside the container. To run ani:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run ani
To use the Torchani Python libraries from your own scripts (i.e. import torchani), you must run them with the version of Python 3 installed inside the container:
$ source /nobackup/shared/containers/ambermd.24.25.sh
$ container.run python3 <path/to/your/script.py>
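A quick one-liner to confirm the module resolves inside the container (assuming torchani exposes the usual __version__ attribute, as most packages do):

$ container.run python3 -c "import torchani; print(torchani.__version__)"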
Important!
This section is intended for RSE HPC staff, or users who are interested in how the software is configured. If you only need to use the software, stop reading here.
Amber appears to have few dependencies at first, but as you configure the source it looks for a lot of additional libraries. At least the following build tools are required (taken from the apt-get lists in the container definition below): autoconf, automake, build-essential, cmake, gcc-14, g++-14, gfortran-14 and git.
It, or its dependencies, then also needs these further system libraries and tools, which makes it quite a complex install (again per the container definition): OpenMPI, flex, bison, m4, LAPACK, BLAS, ARPACK, ucpp, NLopt, zlib, bzip2, FFTW3 (serial and MPI), protobuf, the Xorg/X11 development headers, GMP, GSL, an extensive set of Boost libraries, PnetCDF, NetCDF (C and Fortran), readline, PerlMol, MPFR, Eigen3, SWIG, UMFPACK, and Python 3 with NumPy, SciPy, Matplotlib, Tk and mpi4py.
It also then uses these third party packages, which are not available as part of any OS install and so must be downloaded, built or installed through more manual means (untar, configure, make, make install):

MBX       https://github.com/paesanilab/MBX (not working)
PLUMED    https://github.com/plumed/plumed2
LIO       https://github.com/MALBECC/lio (not working)
XBLAS     http://www.netlib.org/xblas
Torchani  https://github.com/aiqm/torchani
tng_io    https://gitlab.com/gromacs/tng/
apbs      https://github.com/Electrostatics/apbs
umfpack   https://github.com/DrTimothyAldenDavis/SuiteSparse
libtorch  https://pytorch.org/get-started/locally/ (not working)

These are compiled with at most make -j2 (see the warning in the definition below about per-process memory use) and installed under /opt/bin and /opt/lib, or /usr/local. Torchani is the exception: it is installed with pip, so it is already on the container's $PYTHONPATH, and its CUDA extensions are built for the L40S cards (compute capability 8.9) with TORCH_CUDA_ARCH_LIST=8.9 ani build-extensions.
Build script:
Note that you must download the files ambertools25.tar.bz2 and pmemd24.tar.bz2 from https://ambermd.org/GetAmber.php - these are behind a download form - and they should be placed in the same directory as the build script.
#!/bin/bash

echo "Loading modules..."
module load apptainer

echo ""
echo "Building container..."

export APPTAINER_TMPDIR=/scratch

# You must supply a copy of AMBERMD tar files
# in this SOURCE_DIR
SOURCE_DIR=`pwd`
AM24="pmemd24.tar.bz2"
AM25="ambertools25.tar.bz2"

echo ""
echo "Checking source files..."

if [ -s "$SOURCE_DIR/$AM24" ]
then
    echo "- Found - $SOURCE_DIR/$AM24"
else
    echo "- WARNING - $SOURCE_DIR/$AM24 is MISSING"
    echo ""
    echo "Press return to continue or Control+C to exit and fix"
    read
fi

if [ -s "$SOURCE_DIR/$AM25" ]
then
    echo "- Found - $SOURCE_DIR/$AM25"
else
    echo "- WARNING - $SOURCE_DIR/$AM25 is MISSING"
    echo ""
    echo "Press return to continue or Control+C to exit and fix"
    read
fi

apptainer build --bind $SOURCE_DIR:/mnt ambermd.24.25.sif ambermd.def 2>&1 | tee ambermd.log
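Assuming the script above is saved as build.sh (the name is arbitrary) in a directory containing the definition file and the two downloaded tarballs, the build is started with:

$ ls
ambermd.def  ambertools25.tar.bz2  build.sh  pmemd24.tar.bz2
$ ./build.sh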
Container Definition:
Bootstrap: docker
From: nvidia/cuda:12.8.1-cudnn-devel-ubuntu24.04

####################################################################
#
# Amber MD container
# ===================
# This is a runtime environment for the Amber MD tools.
# Please see:
# https://hpc.researchcomputing.ncl.ac.uk/dokuwiki/dokuwiki/doku.php?id=advanced:software:ambermd
#
# Local file Requirements
# ========================
# You MUST have the ambertools25.tar.bz2 and pmemd24.tar.bz2 tarballs
# in the same directory as running the build script.
#
# Non-Ubuntu Requirements
# ========================
# MBX       https://github.com/paesanilab/MBX - Not working
# PLUMED    https://github.com/plumed/plumed2
# LIO       https://github.com/MALBECC/lio - Not working
# XBLAS     http://www.netlib.org/xblas
# Torchani  https://github.com/aiqm/torchani
# tng_io    https://gitlab.com/gromacs/tng/
# apbs      https://github.com/Electrostatics/apbs
# umfpack   https://github.com/DrTimothyAldenDavis/SuiteSparse
# libtorch  https://pytorch.org/get-started/locally/ - Not working
#
####################################################################

%post
    # Prevent interactive prompts
    export DEBIAN_FRONTEND=noninteractive

    ####################################################################
    #
    # Basic system packages
    #
    ####################################################################

    # Update & install only necessary packages
    apt-get update

    # Base stuff everything will need
    apt-get install -y \
        apt-utils \
        aptitude \
        autoconf \
        automake \
        build-essential \
        cmake \
        gcc-14 \
        g++-14 \
        gfortran-14 \
        git \
        less \
        unzip \
        vim \
        wget

    # These are specifically needed by Amber MD, or its dependencies
    apt-get install -y \
        bc \
        libopenmpi-dev \
        python3 \
        flex \
        bison \
        m4 \
        jq \
        liblapack-dev \
        libblas-dev \
        libarpack2-dev \
        libucpp-dev \
        libnlopt-dev \
        libnlopt-cxx-dev \
        libz-dev \
        libbz2-dev \
        libfftw3-mpi-dev \
        libfftw3-dev \
        libprotobuf-dev \
        xorg-dev \
        libxext-dev \
        libxt-dev \
        libx11-dev \
        libice-dev \
        libsm-dev \
        libgomp1 \
        libgmp10-dev \
        libgsl-dev \
        libboost-dev \
        libboost-iostreams-dev \
        libboost-regex-dev \
        libboost-timer-dev \
        libboost-chrono-dev \
        libboost-filesystem-dev \
        libboost-graph-dev \
        libboost-program-options-dev \
        libboost-thread-dev \
        libpnetcdf-dev \
        libnetcdf-dev \
        libnetcdff-dev \
        libreadline-dev \
        libchemistry-mol-perl \
        bash-completion \
        libmpfr-dev \
        libeigen3-dev \
        swig \
        libumfpack6

    # Python 3 modules needed by Amber MD
    apt-get install -y \
        python3-pip \
        python3-numpy \
        python3-tk \
        python3-scipy \
        python3-matplotlib \
        python3-mpi4py

    # Clean up APT cache to save space
    apt-get clean

    # Clean out Python pip cache
    pip3 cache purge

    #################################################################################
    #
    # This is all the custom stuff needed to build the various bioinformatics tools
    #
    #################################################################################

    # This flag needs to be set to indicate which CPU architecture we
    # are optimising for.
    AMD_ARCH=1

    if [ "$AMD_ARCH" = "1" ]
    then
        # Compiling on AMD Epyc
        export BASE_CFLAGS="-O3 -march=znver5 -pipe"
        export BASE_CFLAGS_ALT="-O3 -march=znver5 -pipe"
        export MAKE_JOBS=1
    else
        # Compiling on generic system
        export BASE_CFLAGS=""
        export BASE_CFLAGS_ALT=""
        export MAKE_JOBS=1
    fi

    # WARNING!
    # ========
    # Do not try to increase the parallel make jobs above 1 or 2.
    # We have observed that the memory used by each gcc/make process
    # that is launched during the compile of Amber 24 can be up to 20GB in
    # resident, in-memory size according to 'top'. Unlike most C/C++ builds
    # the use of parallel make can easily lead to out-of-memory conditions.

    # Ensure we are compiling with GCC 14
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 20
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-14 20
    update-alternatives --install /usr/bin/gfortran gfortran /usr/bin/gfortran-14 20

    export CC=gcc-14
    export CXX=g++-14
    export FC=gfortran-14
    export CFLAGS="$BASE_CFLAGS -I/usr/local/include -I/opt/include"
    export CPPFLAGS=""
    export CXXFLAGS="$CFLAGS"
    export PATH=/usr/local/bin:/opt/bin:$PATH
    #export MBX_DIR=/opt
    export LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:$LD_LIBRARY_PATH
    export PKG_CONFIG_PATH=/opt/lib/pkgconfig:/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
    export PLUMED_KERNEL=/opt/lib/libplumedKernel.so

    echo ""
    echo "Post-OS-install setup for Amber MD container"
    echo "============================================"

    # A download place for external libraries
    mkdir -p /src/zipped

    # Where installations go
    mkdir -p /opt/bin
    mkdir -p /opt/lib
    mkdir -p /opt/include
    mkdir -p /opt/share

    # libtorch
    echo ""
    echo "1. Install libtorch"
    echo "====================="
    echo "SKIPPED"
    #cd /src
    #wget https://download.pytorch.org/libtorch/cu128/libtorch-shared-with-deps-2.10.0%2Bcu128.zip -O zipped/libtorch-shared-with-deps-2.10.0-cu128.zip
    #cd /src
    #unzip zipped/libtorch-shared-with-deps-2.10.0-cu128.zip
    #mv libtorch/include/* /opt/include/
    #mv libtorch/lib/* /opt/lib/
    #mv libtorch/share/* /opt/share/

    # MBX is an optional library that Ambertools uses
    # WARNING - This is a very long, slow compile.
    echo ""
    echo "2a. Install MBX"
    echo "==============="
    echo "SKIPPED"
    #cd /src
    #git clone https://github.com/paesanilab/MBX.git
    #cd MBX/
    #autoreconf -fi
    #./configure --enable-shared --prefix=/opt
    #make
    #make install

    # PLUMED is an optional library
    echo ""
    echo "2b. Install PLUMED"
    echo "=================="
    cd /src
    wget https://github.com/plumed/plumed2/releases/download/v2.10.0/plumed-src-2.10.0.tgz -O zipped/plumed-src-2.10.0.tgz
    tar -zxf zipped/plumed-src-2.10.0.tgz
    cd plumed-2.10.0
    ./configure --prefix=/opt
    make
    make install
    echo '_plumed() { eval "$(plumed --no-mpi completion 2>/dev/null)";}' >> /etc/bash.bashrc
    echo 'complete -F _plumed -o default plumed' >> /etc/bash.bashrc

    # LIO is an optional library
    # ERROR - Does not compile on CUDA 12+
    echo ""
    echo "2c. Install LIO"
    echo "==============="
    echo "SKIPPED - FAULTY ON CUDA 12.8"
    #cd /src
    #git clone https://github.com/MALBECC/lio.git
    #cd lio
    #CXXFLAGS="$CFLAGS -I/usr/local/cuda-12.8/targets/x86_64-linux/include" make cuda=1 cpu=1

    # XBLAS is an optional library
    echo ""
    echo "2d. Install XBLAS"
    echo "=================="
    cd /src
    wget http://www.netlib.org/xblas/xblas.tar.gz -O zipped/xblas.tar.gz
    cd /src
    tar -zxf zipped/xblas.tar.gz
    cd xblas-1.0.248
    ./configure --prefix=/opt
    make -j2
    cp libxblas.a /opt/lib

    # Torchani is an optional library
    echo ""
    echo "2e. Install Torchani"
    echo "===================="
    pip install torch==2.8 --index-url https://download.pytorch.org/whl/cu128 --break-system-packages
    pip install torchani --break-system-packages
    # L40S cards are "compute capability" 8.9 - see:
    # https://en.wikipedia.org/wiki/CUDA#GPUs_supported
    TORCH_CUDA_ARCH_LIST=8.9 ani build-extensions

    # tng_io is an optional library
    echo ""
    echo "2f. Install tng_io"
    echo "=================="
    cd /src
    git clone https://gitlab.com/gromacs/tng.git
    cd tng
    mkdir build
    cd build
    cmake ..
    make -j2
    make install

    # umfpack (inside suiteparse) is an optional library
    echo ""
    echo "2g. Install suiteparse/umfpack"
    echo "=============================="
    cd /src
    wget https://github.com/DrTimothyAldenDavis/SuiteSparse/archive/refs/tags/v7.12.2.tar.gz -O zipped/suiteparse-v7.12.2.tar.gz
    cd /src
    tar -zxf zipped/suiteparse-v7.12.2.tar.gz
    cd SuiteSparse-7.12.2
    cd build
    cmake ..
    make -j2
    make install

    # apbs is an optional library
    echo ""
    echo "2h. Install apbs"
    echo "=================="
    cd /src
    # These are precompiled binaries - apbs is a pain to build from source
    wget https://github.com/Electrostatics/apbs/releases/download/v3.4.1/APBS-3.4.1.Linux.zip -O zipped/APBS-3.4.1.Linux.zip
    cd /src
    unzip zipped/APBS-3.4.1.Linux.zip
    cd APBS-3.4.1.Linux
    cp -a -v bin/* /opt/bin/
    cp -a -v lib/* /opt/lib/
    cp -a include/* /opt/include/
    cp -a share/* /opt/share/

    # Install Amber 24
    echo ""
    echo "Amber (A). Install Amber 24"
    echo "==========================="
    cd /src
    if [ -s /mnt/pmemd24.tar.bz2 ]
    then
        tar -jxf /mnt/pmemd24.tar.bz2
        cd /src/pmemd24_src/build
        CC=gcc-14 FC=gfortran-14 CXX=g++-14 cmake .. \
            -DCMAKE_INSTALL_PREFIX=/opt/pmemd24 \
            -DCMAKE_BUILD_TYPE=Release \
            -DCOMPILER=GNU \
            -DMPI=TRUE \
            -DCUDA=TRUE \
            -DCUDNN=TRUE \
            -DOPENMP=TRUE \
            -DINSTALL_TESTS=TRUE \
            -DDOWNLOAD_MINICONDA=FALSE \
            -DBUILD_PYTHON=TRUE \
            -DBUILD_PERL=TRUE \
            -DBUILD_GUI=TRUE \
            -DPMEMD_ONLY=TRUE \
            -DCHECK_UPDATES=FALSE
        make
        make install
        echo 'source /opt/pmemd24/amber.sh' >> /etc/bash.bashrc
    else
        echo "Amber MD 24 source file not found"
        exit 1
    fi

    # Install Ambertools 25
    echo ""
    echo "Amber (B). Install Ambertools 25"
    echo "================================"
    cd /src
    if [ -s /mnt/ambertools25.tar.bz2 ]
    then
        tar -jxf /mnt/ambertools25.tar.bz2
        cd /src/ambertools25_src/build
        AMBER_PREFIX=/src/ambertools25_src MBX_DIR=/opt CC=gcc-14 FC=gfortran-14 CXX=g++-14 cmake .. \
            -DCMAKE_INSTALL_PREFIX=/opt/ambertools25 \
            -DCOMPILER=GNU \
            -DMPI=TRUE \
            -DCUDA=TRUE \
            -DCUDNN=TRUE \
            -DCUDNN_INCLUDE_PATH=/usr/local/lib/python3.12/dist-packages/nvidia/cudnn/include \
            -DCUDNN_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/nvidia/cudnn/lib \
            -DOPENMP=TRUE \
            -DINSTALL_TESTS=TRUE \
            -DDOWNLOAD_MINICONDA=FALSE \
            -DBUILD_PYTHON=TRUE \
            -DBUILD_PERL=TRUE \
            -DCHECK_UPDATES=FALSE \
            -DLIBTORCH=OFF \
            -DTORCH_HOME=/opt \
            -DLIBTORCH_INCLUDE_DIRS=/opt/include \
            -DLIBTORCH_LIBRARIES=/opt/lib \
            -DXBLAS_LIBRARY=/opt/lib/libxblas.a \
            -DMBX_DIR=/opt \
            -DPLUMED_ROOT=/opt \
            -DCMAKE_PREFIX_PATH=/opt \
            -DBUILD_TCPB=FALSE \
            -DBUILD_REAXFF_PUREMD=TRUE
        make
        make install
        echo 'source /opt/ambertools25/amber.sh' >> /etc/bash.bashrc
    else
        echo "Amber MD 25 source file not found"
        exit 1
    fi

    # Remove all src packages
    echo ""
    echo "Cleaning up downloaded src tree"
    echo "=================================="
    cd
    rm -f /src/zipped/*
    rm -rf /src
    pip3 cache purge

    echo ""
    echo "All done"

%environment
    # Ambertools
    export AMBERHOME=/opt/ambertools25
    export PERL5LIB="$AMBERHOME/lib/perl:$PERL5LIB"
    export PYTHONPATH="$AMBERHOME/local/lib/python3.12/dist-packages:$PYTHONPATH"
    export LD_LIBRARY_PATH="$AMBERHOME/lib:$LD_LIBRARY_PATH"
    export QUICK_BASIS="$AMBERHOME/AmberTools/src/quick/basis"
    export PATH="$AMBERHOME/bin:$PATH"

    # Amber
    export PMEMDHOME=/opt/pmemd24
    export PATH="$PMEMDHOME/bin:$PATH"

    # General environment variables for everything else
    export LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:$LD_LIBRARY_PATH
    export PATH=/usr/local/bin:/opt/bin:$PATH
    export CC=gcc-14
    export CXX=g++-14
    export FC=gfortran-14
    export CFLAGS="-O"
    export CXXFLAGS="$CFLAGS"
    export MANPATH=/opt/man
    export PLUMED_VIMPATH=/opt/lib/plumed/vim
    export PKG_CONFIG_PATH=/opt/lib/pkgconfig:/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
    export PLUMED_KERNEL=/opt/lib/libplumedKernel.so

%runscript
Run Script:
#!/bin/bash

module load apptainer

IMAGE_NAME=/nobackup/shared/containers/ambermd.24.25.sif

container.run() {
    # Run a command inside the container...
    # automatically bind the /scratch and /nobackup dirs
    # pass through any additional parameters given on the command line
    # ("$@" keeps each argument intact even if it contains spaces)
    apptainer exec --nv --bind /scratch:/scratch --bind /nobackup:/nobackup ${IMAGE_NAME} "$@"
}
Back to Software