License required


You can gain access to the VASP installation and run simulations on the compute nodes of Discoverer only if you can provide a valid license (it will be checked/verified). That license determines which installed version of VASP you may run; you may not run any of the other installed versions.

Supported versions


The versions of VASP installed in the software repository are built, tested, and supported by the Discoverer HPC team.

To check which VASP versions are currently officially supported on Discoverer, execute on the login node:

module avail vasp

User-supported versions

Users are welcome to bring, compile, and use their own builds of VASP (covered by the license), but those builds will not be supported by the Discoverer HPC team.

Running VASP


You MUST NOT execute simulations directly upon the login node. You have to run your simulations as Slurm jobs only.


Write your results only inside your Personal scratch and storage folder (/discofs/username); DO NOT, under any circumstances, use your Home folder (/home/username) for that purpose!
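As a quick sketch of the rule above, the scratch path follows the pattern /discofs/&lt;username&gt;; the run directory name below is only an example, not a required layout:

```shell
# Derive the scratch path for the current user ("vasp_run" is an
# illustrative per-job directory name, not a required one).
scratch="/discofs/${USER}"
run_dir="${scratch}/vasp_run"
echo "Write job output under: ${run_dir}"
```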

Slurm batch template

To run VASP executables as a Slurm batch job, you may copy and modify the following template:

#!/bin/bash

#SBATCH --partition=cn         # Name of the partition of nodes (ask the support team)
#SBATCH --job-name=vasp
#SBATCH --time=02:00:00        # Set a wall time limit for the job

#SBATCH --nodes           4    # Four nodes will be used
#SBATCH --ntasks-per-node 32   # Number of MPI processes per node
#SBATCH --ntasks-per-core 1    # Run only one MPI process per CPU core
#SBATCH --cpus-per-task   4    # OpenMP threads per MPI process (32 x 4 = 128 cores)

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

ulimit -Hs unlimited
ulimit -Ss unlimited

module purge
module load vasp/5/5.4.4-nvidia-openmpi

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # match the --cpus-per-task value
export OMP_PROC_BIND=false
export UCX_NET_DEVICES=mlx5_0:1

mpirun vasp_std
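Assuming the script above is saved as vasp.sbatch (the filename is only an example), it can be submitted and monitored with the standard Slurm tools:

```shell
sbatch vasp.sbatch     # submit the job; prints "Submitted batch job <jobid>"
squeue -u $USER        # list your pending and running jobs
scancel <jobid>        # cancel the job, if needed (use the printed job ID)
```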

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • number of threads (OpenMP threads) per MPI process (--cpus-per-task)
  • version of VASP to run after module load (see Supported versions)


The requested number of MPI processes per node must not exceed 128 (the number of CPU cores per compute node; see Resource Overview). Run a series of short simulations to find the most efficient combination of allocated nodes, MPI processes (tasks), and OpenMP threads per MPI task for your system, then apply that combination to your production simulations.
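As a starting point for such a scaling test, the hybrid combinations that exactly fill a 128-core node can be enumerated; the candidate task counts below are illustrative only:

```shell
#!/bin/bash
# Print the MPI/OpenMP combinations that together occupy all 128 cores
# of one Discoverer compute node (the task counts tried are examples).
total_cores=128
for ntasks in 16 32 64 128; do
    threads=$(( total_cores / ntasks ))
    echo "--ntasks-per-node ${ntasks}  --cpus-per-task ${threads}"
done
```

Each printed pair keeps ntasks-per-node multiplied by cpus-per-task equal to 128, so every core stays busy while the MPI/OpenMP balance varies.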

The example above shows how to invoke the vasp_std executable. You might replace it with vasp_gam or vasp_ncl, depending on your goals.

Getting help

See Getting help