OpenFOAM¶
Versions available¶
Supported versions¶
Note
The versions of OpenFOAM installed in the software repository are built and supported by the Discoverer HPC team.
To check which OpenFOAM versions and build types are currently supported on Discoverer, execute on the login node:
module avail
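If the listing is long, it can be narrowed down to the OpenFOAM modules only (the exact module names may differ from the ones used in the examples below):
module avail openfoam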
User-supported versions¶
Users are welcome to bring (or compile locally) and use their own OpenFOAM builds, but they will not be supported by the Discoverer HPC team.
Users interested in adopting some of our build recipes can find them publicly available online at:
https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/openfoam
Our tests reveal that compiling the OpenFOAM 10 and 11 source code with the Intel oneAPI compilers (either icx and icpx, or icc and icpc) does not speed up the produced executable code compared to builds made with the GCC and LLVM compilers. It is therefore beneficial to employ the vanilla LLVM Compiler Infrastructure (version 16 or higher) for compiling OpenFOAM 10 and 11.
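As an illustration only, the following sketch shows one possible way to select the LLVM/Clang toolchain when building OpenFOAM 11 from its source tree (the source path is a placeholder, and a suitable LLVM module is assumed to be loaded beforehand):
cd /path/to/OpenFOAM-11
source etc/bashrc WM_COMPILER=Clang
./Allwmake -j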
Running OpenFOAM¶
Warning
You MUST NOT execute OpenFOAM calculations directly on the login node (login.discoverer.bg). Run your calculations only as Slurm jobs.
Warning
Write your results only inside your Personal scratch and storage folder (/discofs/username) and DO NOT use your Home folder (/home/username) for that purpose under any circumstances!
Shown below is a working example of running, as a Slurm batch job, one of the tutorials that come with the OpenFOAM source code archive.
First, load the corresponding OpenFOAM environment module and copy the tutorial files under your Personal scratch and storage folder (/discofs/username):
module load openfoam/11/11-llvm-openmpi-int32
mkdir /discofs/$USER/run_openfoam
cd /discofs/$USER/run_openfoam
cp -pr $WM_PROJECT_DIR/tutorials/multiphase/interFoam/laminar/wave .
After that, create a new file, for example /discofs/$USER/run_openfoam/wave/run_openfoam_mpi.batch, and store therein the following Slurm batch script:
#!/bin/bash
#SBATCH --partition=cn ### Partition
#SBATCH --job-name=openfoam_wave ### Job Name
#SBATCH --time=512:00:00 ### WallTime
#SBATCH --account=your_slurm_account_name
#SBATCH --qos=your_slurm_qos_name
#SBATCH --nodes 1 # Number of nodes
#SBATCH --ntasks-per-node 6 # Number of MPI processes per node
#SBATCH --ntasks-per-core 1 # Do not change this!
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
module purge
module load openfoam/11/11-llvm-openmpi-int32
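# Make the OpenFOAM helper functions (runApplication, runParallel, etc.) available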
. $WM_PROJECT_DIR/bin/tools/RunFunctions
cd $SLURM_SUBMIT_DIR
runApplication blockMesh
runApplication extrudeMesh
for i in 1 2
do
    runApplication -s $i topoSet -dict topoSetDict$i
    runApplication -s $i refineMesh -dict refineMeshDictX -overwrite
done
for i in 3 4 5 6
do
    runApplication -s $i topoSet -dict topoSetDict$i
    runApplication -s $i refineMesh -dict refineMeshDictY -overwrite
done
runApplication setWaves
runApplication decomposePar
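# Run the solver in parallel on the decomposed case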
mpirun interFoam -parallel
runApplication reconstructPar
Warning
Do not change the value of --ntasks-per-node to an arbitrary number for this particular example, since the underlying case configuration (stored under the system folder) requires exactly 6 parallel processes. For any other example, always set --ntasks-per-node to the number of parallel processes defined in the OpenFOAM case configuration.
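To verify how many parallel processes a given case expects (and therefore what --ntasks-per-node should be set to), you may inspect the decomposition dictionary of the case, provided it ships one, for example:
grep numberOfSubdomains /discofs/$USER/run_openfoam/wave/system/decomposeParDict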
Afterwards, submit the script as a job to the queue:
cd /discofs/$USER/run_openfoam/wave
sbatch run_openfoam_mpi.batch
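Once the job is accepted, its state in the queue can be followed with the standard Slurm tools, for example:
squeue -u $USER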
Upon successful submission, the standard output will be directed by Slurm to the file /discofs/$USER/run_openfoam/wave/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_openfoam/wave/slurm.%j.err. Tail the file slurm.%j.out to follow the progress of the computations.
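For example, if the job ID assigned by sbatch is 1234567 (an illustrative value), the output can be followed with:
tail -f /discofs/$USER/run_openfoam/wave/slurm.1234567.out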
Getting help¶
See Getting help