SW4
===

.. toctree::
   :maxdepth: 1
   :caption: Contents:

About
-----

`SW4`_ (Seismic Waves, 4th order) is a program for simulating seismic wave propagation on parallel computers.

Supported versions
------------------

To check which SW4 versions and build types are currently supported on Discoverer, execute on the login node:

.. code-block:: bash

   module avail sw4

The recipe followed to build the source code is available at:

https://gitlab.discoverer.bg/vkolev/recipes/-/blob/main/sw4

Running SW4 on Discoverer
-------------------------

To run SW4 on Discoverer, you need to compose a Slurm batch script.

.. code:: bash

   #!/bin/bash
   #
   #SBATCH --partition=cn           ### Partition (you may need to change this)
   #SBATCH --job-name=sw4_on_8_nodes
   #SBATCH --time=512:00:00         ### WallTime - set it accordingly
   #SBATCH --account=               # Specify your account
   #SBATCH --qos=                   # Specify your QoS
   #SBATCH --nodes 8                # May vary
   #SBATCH --ntasks-per-core 1      # Bind one MPI task to one CPU core
   #SBATCH --ntasks-per-node 128    # Must be less than or equal to the number of CPU cores
   #SBATCH --cpus-per-task 1        # Must be 1 or 2, unless you have a better guess
   #SBATCH -o slurm.%j.out          # STDOUT
   #SBATCH -e slurm.%j.err          # STDERR

   module purge
   module load sw4/3/3.0-gcc

   export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
   export OMP_PLACES=cores
   export OMP_PROC_BIND=spread
   export UCX_NET_DEVICES=mlx5_0:1

   cd $SLURM_SUBMIT_DIR

   mpirun sw4 input.file

Specify the parameters and resources required for successfully running and completing the job:

- Slurm partition of compute nodes, based on your project resource reservation (``--partition``)
- job name, under which the job will be seen in the queue (``--job-name``)
- wall time for running the job (``--time``)
- number of occupied compute nodes (``--nodes``)
- number of MPI processes per node (``--ntasks-per-node``)
- number of threads (OpenMP threads) per MPI process (``--cpus-per-task``)
- version of SW4 to run after ``module load`` (see `Supported versions`_)

.. note::

   The total number of MPI processes (across all nodes) should not exceed the number assumed by the domain decomposition. Using this template, one may achieve maximum thread affinity on AMD Zen2 CPUs.

Save the complete Slurm job description as a file, for example ``/discofs/$USER/sw4.batch``, and submit it to the queue:

.. code:: bash

   cd /discofs/$USER
   sbatch sw4.batch

After the submission, the status of the job can be checked as sketched at the end of this page.

Getting help
------------

See :doc:`help`

.. _`SW4`: https://github.com/geodynamics/sw4
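As referenced above, the following is a minimal sketch for checking the status of a submitted job. It assumes that the standard Slurm client tools ``squeue`` and ``sacct`` are available on the login node and that job accounting is enabled; the job ID 12345 is a placeholder for the value reported by ``sbatch``.

.. code:: bash

   # List your jobs that are currently pending or running in the queue
   squeue -u $USER

   # After the job finishes, query its accounting record
   # (replace 12345 with the job ID reported by sbatch)
   sacct -j 12345 --format=JobID,JobName,State,Elapsed,ExitCode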