WRF

Versions available

Supported versions

Note

The versions of WRF installed in the software repository are built and supported by the Discoverer HPC team. Our primary objective is to provide EM_CORE builds to our valued customers. If you require a specific WRF build other than the EM_CORE one (NMM_CORE or DA_CORE for instance), please do not hesitate to contact us.

To check which WRF versions are currently supported on Discoverer, execute on the login node:

module avail

and grep the output for “wrf”. Note that the name of each “wrf” module encodes the version of the source code, the compiler package used to build it, and the MPI library it was linked against.
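For example, to list only the WRF modules (the module command may print its listing to standard error, hence the redirection):

module avail 2>&1 | grep -i wrf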

Each WRF build hosted in the public software repository of Discoverer depends on external libraries such as zlib, zstd, libaec, xz, lz4, bzip2, libzip, curl, hdf5, netcdf-c, and netcdf-fortran. Those libraries are provided as a bundle that comes with the WRF installation and is built intentionally to match the requirements of that particular WRF installation. You can check out the recipes we created and followed to compile and optimize the external libraries. Those recipes are published online here:

https://gitlab.discoverer.bg/vkolev/recipes/-/blob/main/WRF

Each such recipe contains “bundle” as part of its file name.
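If you want to verify which bundled libraries a given WRF executable is linked against, one possible check is to load the corresponding module and inspect the shared-library dependencies (a sketch only; the module name is the one used in the batch template below, and wrf.exe is assumed to be on the PATH once the module is loaded):

module load wrf/4/4.5.2-em-gcc-openmpi
ldd "$(which wrf.exe)" | grep -Ei 'netcdf|hdf5|zstd|curl'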

Important

On the supported parallelism: The WRF builds available in the Discoverer HPC software repository employ dual “dm+sm” parallelism (Distributed Memory + Shared Memory). In that mode, the number of OpenMP threads per MPI process corresponds to the number of tiles requested during WRF execution. Refer to the WRF documentation for more details on that topic, if you are interested. You can always turn off Shared Memory parallelism by setting the number of OpenMP threads to one, which is equivalent to using Distributed Memory parallelism alone.
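For instance, to fall back to Distributed Memory parallelism only, request a single OpenMP thread per MPI process before launching the executable (a minimal sketch based on the batch template below):

export OMP_NUM_THREADS=1   # one thread per MPI process turns off Shared Memory parallelism
mpirun wrf.exe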

User-supported versions

Users are welcome to bring, compile, and use their own builds of WRF, but those builds will not be supported by the Discoverer HPC team.

Running WRF

Warning

You MUST NOT execute simulations directly upon the login node (login.discoverer.bg). You have to run your simulations as Slurm jobs only.

Warning

Write the results only inside your Personal scratch and storage folder (/discofs/username) and DO NOT use for that purpose (under any circumstances) your Home folder (/home/username)!

Slurm batch template

To run WRF as a Slurm batch job, you may use the following template:

#!/bin/bash
#
#SBATCH --partition=cn         # Name of the partition of nodes (ask the support team if unsure)
#SBATCH --job-name=wrf_1
#SBATCH --time=00:50:00        # The job should complete in ~6 min

#SBATCH --nodes           2    # Two nodes will be used
#SBATCH --ntasks-per-node 128  # Use all 128 CPU cores on each node
#SBATCH --ntasks-per-core 1    # Run only one MPI process per CPU core
#SBATCH --cpus-per-task   2    # Number of OpenMP threads per MPI process
                               # That means Shared Memory parallelism is involved.

#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

ulimit -Hs unlimited
ulimit -Ss unlimited

module purge
module load wrf/4/4.5.2-em-gcc-openmpi

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PROC_BIND=false
export OMP_SCHEDULE='STATIC'
export OMP_WAIT_POLICY='ACTIVE'
export UCX_NET_DEVICES=mlx5_0:1

mpirun -np 4 real.exe # Process the input NC data sets, here you do not
                      # need to request the use of more than 4 MPI tasks

mpirun wrf.exe        # Run the actual simulation

Specify the parameters and resources required for successfully running and completing the job (see also the resource sketch after this list):

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • number of threads (OpenMP threads) per MPI process (--cpus-per-task)
  • version of WRF to run after module load (see Supported versions)
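As a rough sanity check, the settings in the template above translate into the following resources; the echo lines use environment variables Slurm exports inside the job (a sketch, not an exhaustive list):

# --nodes 2  x  --ntasks-per-node 128  ->  256 MPI processes in total
# --cpus-per-task 2                    ->  2 OpenMP threads per MPI process
echo "Nodes allocated:      ${SLURM_NNODES:-unset}"
echo "Threads per MPI task: ${SLURM_CPUS_PER_TASK:-unset}"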

Note

The requested number of MPI processes per node should not be greater than 128 (128 is the number of CPU cores per compute node, see Resource Overview).

You need to submit the Slurm batch job script to the queue from within the folder where the input NC and namelist.input files reside. Check the provided working example (see below) to find more details about how to create a complete Slurm batch job script for running WRF.
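For example (the run directory name and the batch script name below are hypothetical placeholders):

cd /discofs/`whoami`/my_wrf_run   # hypothetical directory holding namelist.input and the input NC files
sbatch wrf_job.sh                 # hypothetical name of a batch script built from the template above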

Working example

The goal of this working example is to show one possible way WRF can run on Discoverer HPC by means of a Slurm batch job. Running the example is simple: just execute on the login node (login.discoverer.bg):

mkdir /discofs/`whoami`/wrf-test
cd /discofs/`whoami`/wrf-test
cp /opt/software/WRF/4/4.4-nvidia-openmpi/examples/1.batch .
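Inside the copied 1.batch, the account and QOS directives to fill in look like the two placeholder lines from the template above (replace the placeholders with your real Slurm account and QOS names):

#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>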

At this point, edit 1.batch and add the mandatory account name and QOS therein. Afterwards, submit the job to the queue:

cd /discofs/`whoami`/wrf-test
sbatch 1.batch

Once started successfully by Slurm, the job will first create a directory under your Personal scratch and storage folder (/discofs/username). The name of that directory will be similar to this one: wrf_2022-06-21-22-21-06-52.1655836192 (the numbers will differ in your case). You may check the progress of the simulation by entering that directory and following the event messages that the running WRF processes emit to the error log:

tail -f em_real/rsl.error.0000

Getting help

See Getting help