Quantum ESPRESSO

Versions available

Supported versions

Note

The versions of Quantum ESPRESSO installed in the public software repository are compiled and supported by the Discoverer HPC support team.

Warning

Currently, we do not offer access to an Intel oneAPI build of QE 7.2, because of an error that occurs whenever the QE Fortran code is compiled with ifort or ifx. We will release an Intel oneAPI-based build of QE 7.2 once we manage to get rid of that compile-time error. Meanwhile, use the NVIDIA HPC SDK-based QE 7.2 builds. We support both Open MPI and MPICH parallelization for QE.

To display the list of supported versions, execute on the login node:

module avail q-e-qe/
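To inspect what a particular build sets up before loading it, you may use module show (the module name below is only an example; pick one from the module avail listing):

module show q-e-qe/7/latest-nvidia-openmpi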

Important

OpenMP threading is available for each of the supported versions.
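The batch template further below keeps one OpenMP thread per MPI task, which is the recommended setting. If you nevertheless decide to experiment with OpenMP threading, raise --cpus-per-task and OMP_NUM_THREADS together; a minimal sketch with purely illustrative values (not a tested recommendation):

#SBATCH --ntasks          1024
#SBATCH --cpus-per-task   4     # 4 OpenMP threads per MPI task (illustrative value)

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread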

Those versions are compiled against the 4.1.X branch of Open MPI, as well as against the external libraries OpenBLAS and FFTW3. The recipes employed for compiling the Quantum ESPRESSO source code are publicly available at:

https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/q-e-qe/

User-supported installations

Important

Users are welcome to bring, compile, and install within their scratch folders any versions of Quantum ESPRESSO, but those installations will not be supported by the Discoverer HPC team.
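If you decide to build your own copy, the following is a minimal sketch of a build inside scratch, assuming the QE 7.2 source tree has already been unpacked under your scratch folder (the paths and the make target are only an illustration):

cd /discofs/$USER/qe-7.2
./configure                # picks up the compilers and libraries visible in your environment
make -j 8 pw               # builds pw.x; other targets include ph, pp, all
# the resulting executables are placed under the bin/ subdirectory of the source tree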

Running simulations

Running simulations means invoking Quantum ESPRESSO executables to process the instructions and data supplied in the input files.

Warning

You MUST NOT execute simulations directly on the login node (login.discoverer.bg). You have to run your simulations as Slurm jobs only.

Warning

Write your results only inside your Personal scratch and storage folder (/discofs/username) and DO NOT, under any circumstances, use your Home folder (/home/username) for that purpose!
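For example, you may create a dedicated run directory under your scratch folder and work from there (the directory name is only an illustration; it matches the one used later in this document):

mkdir -p /discofs/$USER/run_qe
cd /discofs/$USER/run_qe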

To run QE you may adopt the following Slurm batch template (the input file here is ph.in):

#!/bin/bash
#
#SBATCH --partition=cn
#SBATCH --job-name=qe-7.2-nvidia-openmpi-env-mod
#SBATCH --time=512:00:00

#SBATCH --ntasks          12288 # 12288 tasks == 96 nodes
#SBATCH --ntasks-per-core 1     # Bind one MPI task to one CPU core
#SBATCH --cpus-per-task   1     # Must be 1, unless you have a better guess
                                # So far we haven't estimated which of the
                                # QE tools works well in hybrid parallelization mode.

#SBATCH --account=<add_here_your_slurm_account_name>
#SBATCH --qos=<add_here_the_name_of_the_qos>

#SBATCH -o slurm.qe-7.2-nvidia-openmpi-env-mod.%j.out         # STDOUT
#SBATCH -e slurm.qe-7.2-nvidia-openmpi-env-mod.%j.err         # STDERR

module purge
module load q-e-qe/7/latest-nvidia-openmpi

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export UCX_NET_DEVICES=mlx5_0:1   # restrict UCX to port 1 of the mlx5_0 InfiniBand adapter

export ROOT_DIR=/discofs/vkolev/qe/qe-7.2/test-mpi/

export ESPRESSO_PSEUDO=${ROOT_DIR}/pseudo   # directory holding the pseudopotential files
export ESPRESSO_TMPDIR=${ROOT_DIR}/run/tmp  # directory for temporary files written during the run

cd $SLURM_SUBMIT_DIR

mpirun ph.x -npool ${SLURM_NTASKS} -input ph.in
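The template above invokes ph.x. The same structure applies to other Quantum ESPRESSO executables; for instance, a pw.x run (with a hypothetical input file pw.in) would only change the last line, adjusting -npool to the k-point sampling of your calculation:

mpirun pw.x -npool ${SLURM_NTASKS} -input pw.in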

Specify the parameters and resources required for successfully running and completing the job:

  • the Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • the job name, under which the job will be seen in the queue (--job-name)
  • the wall time for running the job (--time)
  • the number of tasks to run (--ntasks), see Notes on the parallelization
  • the number of MPI processes per core (--ntasks-per-core, keep it 1)
  • specify the version of QE to run after module load
  • do not change the export declarations unless you are told to do so

Save the complete Slurm job description as a file, for example /discofs/$USER/run_qe/run.batch, and submit it to the queue:

cd /discofs/$USER/run_qe
sbatch run.batch

Upon successful submission, the standard output will be directed into the file specified by the -o directive (here /discofs/$USER/run_qe/slurm.qe-7.2-nvidia-openmpi-env-mod.%j.out, where %j stands for the Slurm job ID). The standard error messages will be stored inside the file specified by the -e directive (/discofs/$USER/run_qe/slurm.qe-7.2-nvidia-openmpi-env-mod.%j.err).
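To follow the progress of a submitted job, you may check its state in the queue and watch the output file as it grows (replace <jobid> with the actual Slurm job ID):

squeue -u $USER
tail -f /discofs/$USER/run_qe/slurm.qe-7.2-nvidia-openmpi-env-mod.<jobid>.out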

Notes on the parallelization

QE expects the number of requested parallel MPI tasks (set through --ntasks) to be consistent with specific input parameters, which means --ntasks cannot take an arbitrary value. Also, it is better not to allocate MPI tasks based on a number of compute nodes (avoid specifying --nodes in combination with --ntasks) and to rely on --ntasks only. In other words, let Slurm decide how to distribute the --ntasks parallel MPI tasks over the nodes available in the partition.
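For instance, when k-point pools are used (the -npool option), the number of MPI tasks is typically chosen as a multiple of the number of pools; a rough sketch of that bookkeeping with purely illustrative numbers:

#SBATCH --ntasks 1024              # 8 pools x 128 tasks per pool (illustrative values)

mpirun ph.x -npool 8 -input ph.in  # the task count is typically a multiple of the pool count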

Getting help

See Getting help