ANSYS Forte is included in the University ANSYS license package and is normally installed as part of the larger ANSYS software module, e.g.:
$ module avail ANSYS
------------------------------------------------------------ /mnt/storage/apps/eb/modules/all ------------------
ANSYS/17.0 ANSYS/18.1 ANSYS/19.4 ANSYS/2020-R2 ANSYS/2021 ANSYS/2022-R1 ANSYS/2024R1 (D)
Where:
D: Default Module
$
You should normally design your project on your desktop or workstation. The HPC facility is generally intended for running analysis on existing projects, though on Comet you may optionally run the interactive ANSYS Workbench on HPC via Open OnDemand in your browser.
Loading the ANSYS module does not set the path to every ANSYS component (doing so would add a great deal to the PATH, so it is not practical for every component); Forte is one of the products which is not brought into the path.
To work around this on Ansys 2024R1:
$ module load ANSYS/2024R1
$ export PATH=$PATH:/mnt/storage/apps/eb/software/ANSYS/2024R1/v241/reaction/forte.linuxx8664/bin
$ source /mnt/storage/apps/eb/software/ANSYS/2024R1/v241/reaction/forte.linuxx8664/bin/forte_setup.ksh
The ANSYS Forte directory is now in your path and the correct environment variables have been set by forte_setup.ksh. Use forte_setup.csh instead if you are using csh rather than bash as your HPC shell.
All ANSYS Forte jobs run via MPI (even when restricted to a single node); however, the simplest version, running on a single node, requires far fewer Slurm directives. For maximum performance consider the multi-node version, but note that you may need to run several iterations in order to establish the optimal resource settings for your Slurm job script.
This method of running ANSYS Forte is about the smallest allocation you can get away with. The recipe allocates the following:
--nodes=1 (optional for a single-node job)
--ntasks-per-node=2
--cpus-per-task=4
--mem=8G

Whilst Forte can run on a single node, you must still allocate at least two tasks in Slurm. Set your Slurm resource values (CPU, RAM) according to the complexity of your project.
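As a quick sanity check, the total core count this smallest recipe requests can be computed from the two Slurm values above:

```shell
# Smallest-allocation sketch: 2 tasks x 4 cores each on one node
NTASKS_PER_NODE=2
CPUS_PER_TASK=4
TOTAL_CORES=$((NTASKS_PER_NODE * CPUS_PER_TASK))
echo "Total cores requested: $TOTAL_CORES"
```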
#!/bin/bash
# Request 2 tasks with 4 CPU cores each
#SBATCH --cpus-per-task=4
#SBATCH --ntasks-per-node=2
# Set requested amount of memory
#SBATCH --mem=8G
# Request 8 hours of runtime
#SBATCH --time=08:00:00
# Set partition/queue
# defq allows up to 2 days of runtime
#SBATCH --partition=defq
# Load ANSYS and add on the Forte runtime directory
module load ANSYS/2024R1
export PATH=$PATH:/mnt/storage/apps/eb/software/ANSYS/2024R1/v241/reaction/forte.linuxx8664/bin
source /mnt/storage/apps/eb/software/ANSYS/2024R1/v241/reaction/forte.linuxx8664/bin/forte_setup.ksh
forte -i JOB_INPUT_FILE \
-rn Run_001 \
-o JOB_OUTPUT_FILE \
-$FORTE_ARGS </dev/null >MONITOR 2>&1
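Once saved, the script can be submitted and monitored in the usual Slurm way (the script filename forte_single.sh is an assumption for illustration; MONITOR is the file the forte command above redirects its output to):

```shell
# Submit the single-node Forte job script (filename is an assumption)
sbatch forte_single.sh
# Check the queue status for your user
squeue -u $USER
# Follow Forte's progress via the MONITOR file
tail -f MONITOR
```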
Note that the amount of RAM and the number of CPU cores allocated to each task will depend on the following:

--nodes=3
--ntasks-per-node=4
--cpus-per-task=8
--mem=8G (per node)

This configuration allows for an aggregate total of 96 CPU cores (3 nodes x 4 tasks x 8 cores per task) being used by ANSYS Forte. Increase or reduce your Slurm resource values according to the complexity of your Forte project:
#!/bin/bash
# Request 3 physical servers
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=8
#SBATCH --mem=8G
#SBATCH --time=24:00:00
#SBATCH --partition=defq
echo ""
echo "Started at: `date`"
echo ""
echo "Slurm: Using $SLURM_CPUS_PER_TASK CPU cores per task"
echo "Slurm: Using $SLURM_TASKS_PER_NODE tasks per node"
echo "Slurm: Using $SLURM_JOB_NUM_NODES nodes"
echo "Slurm: $SLURM_CPUS_PER_TASK * $SLURM_TASKS_PER_NODE * $SLURM_JOB_NUM_NODES = Total CPU"
# Load ANSYS and add on the Forte runtime directory
echo ""
echo "Loading ANSYS module"
module load ANSYS/2024R1
export PATH=$PATH:/mnt/storage/apps/eb/software/ANSYS/2024R1/v241/reaction/forte.linuxx8664/bin
source /mnt/storage/apps/eb/software/ANSYS/2024R1/v241/reaction/forte.linuxx8664/bin/forte_setup.ksh
# Load Intel MPI runtime
module load intel
# Get the list of nodes we have been allocated
echo ""
echo "Extracting Slurm hostnames..."
HOSTNAMES=`scontrol show hostnames $SLURM_JOB_NODELIST`
echo "Job will run on:
$HOSTNAMES"
# Set to your Forte input file
INPUT_FILE=Dual_Fuels_82percentGasoline_AMG_Tutorial.ftsim
OUTPUT_FILE=jobOutput.txt
echo ""
echo "Calling Ansys Forte"
# We generate this string so that we can record the actual command we are going to run
cmd="forte -i $INPUT_FILE -rn Run_001 -o $OUTPUT_FILE"
echo "Running: $cmd"
# Run the command script
srun $cmd
echo ""
echo "Finished at: `date`"
This method of running ANSYS Forte allows you to scale the compute resources to almost any level necessary, but the Slurm job script is somewhat more complex, as it needs to identify which hosts/nodes have been allocated to your job and dynamically build the command string which is launched via srun.
It appears that ANSYS Forte uses one of the launched tasks as a communication process for all the other compute processes; hence a launch with two tasks has only one actively processing. Your total task count (nodes * tasks_per_node) should be specified with this in mind.
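For the multi-node recipe above, this reasoning works out as follows:

```shell
# Multi-node example: 3 nodes x 4 tasks per node, 8 cores per task.
# One task is used by Forte as a communication process, so the number
# of actively computing tasks is one fewer than the total launched.
NODES=3
TASKS_PER_NODE=4
CPUS_PER_TASK=8
TOTAL_TASKS=$((NODES * TASKS_PER_NODE))
COMPUTE_TASKS=$((TOTAL_TASKS - 1))
TOTAL_CORES=$((TOTAL_TASKS * CPUS_PER_TASK))
echo "Tasks: $TOTAL_TASKS (computing: $COMPUTE_TASKS), cores: $TOTAL_CORES"
```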
You may need to do some performance profiling to understand whether it is better to have more tasks with fewer CPU cores per task, or fewer tasks with more CPU cores per task, e.g.:

--ntasks-per-node=4 and --cpus-per-task=8

or

--ntasks-per-node=32 and --cpus-per-task=1

This represents the same number of cores (32), but presented via a different mix of tasks.
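A quick shell check confirms that the two mixes request the same number of cores per node:

```shell
# tasks-per-node x cpus-per-task for each mix
MIX_A=$((4 * 8))
MIX_B=$((32 * 1))
echo "Mix A: $MIX_A cores, Mix B: $MIX_B cores"
```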