Ansys Fluent is industry-leading fluid simulation software, known for its advanced physics modeling capabilities and accuracy.
(from https://www.ansys.com/en-gb/products/fluids/ansys-fluent)
Fluent is available as part of our Newcastle University licensing agreement to all users of the HPC facilities.
The use of Fluent on the HPC facilities is primarily intended to be via the command line, for batch processing of models and their associated data sets. We do not currently support interactive/graphical use of Fluent.
To access the Fluent commands, please load the associated Fluent module:
module load ANSYS
There are often multiple versions of Ansys Fluent installed. To see which are available, run:
$ module avail ANSYS
--------------------------------------- /path/to/software/ ---------------------------------------
ANSYS/17.0 ANSYS/18.1 ANSYS/19.4 ANSYS/2020-R2 ANSYS/2021 ANSYS/2022-R1 ANSYS/2024R1 (D)
Where:
D: Default Module
The default version is marked with (D) (in the above example, 2024 R1). To use any other version (in this case, 2022 R1), load it explicitly:
module load ANSYS/2022-R1
Fluent jobs can run either on a single node, using one core per task, or across multiple nodes.
Running Fluent on a single node (i.e. a single server) is the simplest method, but you are limited to the resources available on that one node.
The example below allocates:
1 node x 32 tasks x 1 core per task (32 cores total)
1 node x 128GB of memory
Adjust the values as needed.
#!/bin/bash
# Request a single physical server
#SBATCH --nodes=1
# Request 32 parallel tasks per server
#SBATCH --ntasks-per-node=32
# Request 1 core per task (32 tasks x 1 core per task x 1 node == 32 total cores)
#SBATCH --cpus-per-task=1
# Set requested amount of memory
#SBATCH --mem=128G
# Request 8 hours of runtime
#SBATCH --time=08:00:00
# Set partition/queue
# defq allows up to 2 days of runtime
#SBATCH --partition=defq
# Load the ansys software environment
module load ANSYS/2024R1
# Now run ansys....
# -i : the name of your input file
# -gu and -driver null : to disable the graphical user interface
# -t : to spawn up to the ntasks you specified above
# myfluentjob.jou : the name of your Fluent journal file
# Set SIMVARIANT to your selected workload type:
# 2d, 2ddp, 3d or 3ddp
SIMVARIANT=2d
fluent $SIMVARIANT -i myfluentjob.jou -gu -t $SLURM_NTASKS -driver null
Submit the job with: sbatch fluent_single.sh
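The journal file passed via -i (myfluentjob.jou above) is a script of Fluent text-user-interface (TUI) commands. A minimal sketch is shown below; the case/data file names and iteration count are placeholders, and exact TUI command paths can vary between Fluent versions:

```
; Minimal Fluent journal sketch (file names are placeholders)
/file/read-case my_case.cas.h5
; Initialize and run 100 iterations of the solver
/solve/initialize/initialize-flow
/solve/iterate 100
; Save the case and data, then exit without prompting
/file/write-case-data my_result.cas.h5
/exit yes
```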
Multi-node Fluent jobs can take advantage of more resources simultaneously, but the script is a little more complex.
The example below allocates:
4 nodes x 4 tasks x 8 cores per task (128 cores total)
4 nodes x 64GB per node (256GB total)
Adjust the values as needed.
#!/bin/bash
#SBATCH --chdir=/path_to/your_ansys/working_directory/
# Request four physical servers
#SBATCH --nodes=4
# Request 4 parallel tasks per server
#SBATCH --ntasks-per-node=4
# Request 8 cores per task (4 nodes x 4 tasks per node x 8 cores per task == 128 total cores)
#SBATCH --cpus-per-task=8
# Set requested amount of memory (this is per node; so 64G x 4 nodes == 256GB total)
#SBATCH --mem=64G
# Request 8 hours of runtime
#SBATCH --time=08:00:00
# Set partition/queue
# defq allows up to 2 days of runtime
#SBATCH --partition=defq
echo ""
echo "Started at: `date`"
echo ""
echo "=== Slurm resource summary === "
echo "Slurm: Using $SLURM_CPUS_PER_TASK CPU cores per task"
echo "Slurm: Using $SLURM_TASKS_PER_NODE tasks per node"
echo "Slurm: Using $SLURM_JOB_NUM_NODES nodes"
echo "Slurm: Using $SLURM_NTASKS total tasks"
echo "Slurm: $((SLURM_CPUS_PER_TASK * SLURM_NTASKS)) CPU cores in total"
echo ""
echo "=== Extracting Slurm hostnames === "
HOSTNAMES=`scontrol show hostnames $SLURM_JOB_NODELIST`
echo "Job will run on:
$HOSTNAMES"
# Load the ansys software environment
echo ""
echo "=== Loading ANSYS ==="
module load ANSYS/2024R1
echo "Done"
# Now run ansys....
# -i : the name of your input file
# -gu and -driver null : to disable the graphical user interface
# -t : to spawn up to the ntasks you specified above
# myfluentjob.jou : the name of your Fluent journal file
# Set SIMVARIANT to your selected workload type:
# 2d, 2ddp, 3d or 3ddp
SIMVARIANT=3ddp
# Running on normal nodes (not low-latency)
cmd="srun fluent $SIMVARIANT -g -nm -np ${SLURM_NTASKS} -nt ${SLURM_CPUS_PER_TASK} -slurm -pdefault -mpi=intel -platform=intel -i MY_PROJECT.JOU"
# If running on low-latency nodes (uses infiniband inter-process communication)
#cmd="srun fluent $SIMVARIANT -g -nm -np ${SLURM_NTASKS} -nt ${SLURM_CPUS_PER_TASK} -slurm -pinfiniband -mpi=intel -platform=intel -i MY_PROJECT.JOU"
echo ""
echo "=== Running ANSYS ==="
echo "Running: $cmd"
$cmd
echo "Done"
echo ""
echo "Completed at: `date`"
Submit the job with: sbatch fluent_multi.sh
Note: Multi node example taken from https://innovationspace.ansys.com/forum/forums/topic/multiple-nodes-using-fluent-on-hpc-under-slurm/ Further examples may be found at https://docs.hpc.shef.ac.uk/en/latest/stanage/software/apps/ansys/fluent.html
You may need to do some performance profiling to understand whether it is better to have more tasks with fewer CPU cores per task, or fewer tasks with more CPU cores per task, e.g.:
--ntasks-per-node=4
and --cpus-per-task=8
or...
--ntasks-per-node=8
and --cpus-per-task=4
This represents the same number of cores, presented via a different mix of tasks. Some experimentation may be needed to find the optimal balance of task quantity and task size.
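As a rough illustration, the sketch below (assuming a hypothetical 32-core node; set CORES_PER_NODE to match your hardware) prints the task/core splits that all consume the same per-node core budget:

```shell
#!/bin/bash
# Enumerate --ntasks-per-node / --cpus-per-task splits that use the same
# per-node core budget. CORES_PER_NODE=32 is an assumed value; adjust it.
CORES_PER_NODE=32
for tasks in 2 4 8 16; do
  cores=$((CORES_PER_NODE / tasks))
  echo "--ntasks-per-node=${tasks} --cpus-per-task=${cores}  # ${tasks} tasks x ${cores} cores = $((tasks * cores))"
done
```

Each line uses the same 32 cores per node; benchmarking a short run of your model with each split will show which one Fluent scales best with.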