SW4¶
About¶
SW4 (Seismic Waves, 4th order) is a program for simulating seismic wave propagation on parallel computers.
Supported versions¶
To check which SW4 versions and build types are currently supported on Discoverer, execute on the login node:
module avail sw4
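To see the details of a particular build (the compiler and MPI library it was built against, and the environment it sets), you may inspect the corresponding module, for example:
module show sw4/3/3.0-gcc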
The recipe followed to build the source code is available at:
Running SW4 on Discoverer¶
To run SW4 on Discoverer, you need to compose a Slurm batch script, for example:
#!/bin/bash
#
#SBATCH --partition=cn ### Partition (you may need to change this)
#SBATCH --job-name=sw4_on_8_nodes
#SBATCH --time=512:00:00 ### WallTime - set it accordingly
#SBATCH --account=<specify_your_slurm_account_name_here>
#SBATCH --qos=<specify_the_qos_name_here_if_it_is_not_the_default_one_for_the_account>
#SBATCH --nodes 8 # May vary
#SBATCH --ntasks-per-core 1 # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 128 # Must be less than or equal to the number of CPU cores per node
#SBATCH --cpus-per-task 1 # Must be 1 or 2, unless you have a better guess
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
module purge
module load sw4/3/3.0-gcc
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export UCX_NET_DEVICES=mlx5_0:1
cd $SLURM_SUBMIT_DIR
mpirun sw4 input.file
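The template above runs one single-threaded MPI process per CPU core. If the SW4 build you load supports OpenMP (an assumption you should verify for the particular build), a hybrid layout with two OpenMP threads per MPI process is a possible alternative; only the following two directives in the script need to change (OMP_NUM_THREADS is already derived from SLURM_CPUS_PER_TASK):
#SBATCH --ntasks-per-node 64 # Half as many MPI processes per node
#SBATCH --cpus-per-task 2 # Two OpenMP threads per MPI process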
Specify the parameters and resources required for successfully running and completing the job (most of them can also be overridden at submission time, as shown after this list):
- Slurm partition of compute nodes, based on your project resource reservation (--partition)
- job name, under which the job will be seen in the queue (--job-name)
- wall time for running the job (--time)
- number of occupied compute nodes (--nodes)
- number of MPI processes per node (--ntasks-per-node)
- number of OpenMP threads per MPI process (--cpus-per-task)
- version of SW4 to run after module load (see Supported versions)
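Most of these parameters can also be passed directly to sbatch on the command line, which overrides the corresponding directives in the script without editing it. The values below are for illustration only:
sbatch --nodes=4 --ntasks-per-node=128 --time=24:00:00 sw4.batch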
Note
The total number of MPI processes (across all nodes) should not exceed the number assumed by the domain decomposition. Using this template, one may achieve maximum thread affinity on AMD Zen2 CPUs.
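As a quick consistency check: with the template above the job requests 8 nodes × 128 tasks per node = 1024 MPI processes in total, so the domain decomposition of your run has to accommodate 1024 ranks.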
Save the complete Slurm job description as a file, for example /discofs/$USER/sw4.batch, and submit it to the queue:
cd /discofs/$USER
sbatch sw4.batch
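Once the job is submitted, its status can be followed with the standard Slurm tools:
squeue -u $USER # list your pending and running jobs
scontrol show job <jobid> # show detailed information about a particular job
scancel <jobid> # cancel the job, if necessary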
Getting help¶
See Getting help