CP2K¶
Versions available¶
Supported versions¶
Note
The versions of CP2K installed in the software repository are built and supported by the Discoverer HPC team.
To check which CP2K versions are currently supported on Discoverer, execute on the login node:
module avail cp2k
and inspect the listing for the available “cp2k” builds (or grep the full module list, as shown below).
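If you prefer to filter the complete module list yourself, a minimal grep-based variant looks like this (on most module systems the listing is written to standard error, hence the redirection):
module avail 2>&1 | grep -i cp2k   # filter the full module list for CP2K builds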
Important
The CP2K builds available in the Discoverer HPC software repository employ dual “dm+sm” parallelism (Distributed Memory + Shared Memory). In that case, the number of OpenMP threads defined per MPI process sets the degree of shared-memory parallelism during the CP2K execution. It is recommended to use the dm+sm type of parallelism whenever possible (it presumes the use of the cp2k.psmp executable).
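As an illustration only (the actual split between MPI processes and OpenMP threads depends on your input and allocation), dm+sm means pairing MPI processes with OpenMP threads so that their product covers the cores of a node, e.g. 32 x 4 = 128:
#SBATCH --ntasks-per-node=32                    # 32 MPI processes (distributed memory)
#SBATCH --cpus-per-task=4                       # 4 OpenMP threads per MPI process (shared memory)

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # threads per process follow --cpus-per-task
mpirun cp2k.psmp -i input.inp                   # cp2k.psmp is the dm+sm executable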
User-supported versions¶
Users are welcome to bring in or compile their own builds of CP2K and use them, but those builds will not be supported by the Discoverer HPC team.
To show interested users all the steps involved in compiling the CP2K versions available in the software repository of Discoverer HPC, we publish the build recipes and patches online:
https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/cp2k
Running CP2K¶
Warning
You MUST NOT execute simulations directly on the login node (login.discoverer.bg). You have to run your simulations as Slurm jobs only.
Warning
Write the results only to your Personal scratch and storage folder (/discofs/username) and DO NOT, under any circumstances, use your Home folder (/home/username) for that purpose!
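For example (the directory name run_cp2k is arbitrary), prepare a run directory on the scratch filesystem and work from there:
mkdir -p /discofs/$USER/run_cp2k   # keep inputs and results on scratch, not in /home
cd /discofs/$USER/run_cp2k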
Slurm batch template¶
To run CP2K as a Slurm batch job, you may use the following template:
#!/bin/bash
#
#SBATCH --partition=cn # Name of the partition of compute nodes (ask the support team)
#SBATCH --job-name=cp2k
#SBATCH --time=00:50:00 # The example job completes in ~6 min
#SBATCH --nodes 1 # One compute node will be used
#SBATCH --ntasks-per-node 64 # Number of MPI processes per node
#SBATCH --ntasks-per-core 1 # Run only one MPI process per CPU core
#SBATCH --cpus-per-task 4 # Number of OpenMP threads per MPI process
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
ulimit -Hs unlimited
ulimit -Ss unlimited
module purge
module load cp2k/2022/latest-intel-openmpi
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PROC_BIND=false
export OMP_SCHEDULE='STATIC'
export OMP_WAIT_POLICY='ACTIVE'
export UCX_NET_DEVICES=mlx5_0:1
mpirun cp2k.psmp -i input.inp | tee output.out
Specify the parameters and resources required for successfully running and completing the job:
- Slurm partition of compute nodes, based on your project resource reservation (--partition)
- job name, under which the job will be seen in the queue (--job-name)
- wall time for running the job (--time)
- number of occupied compute nodes (--nodes)
- number of MPI processes per node (--ntasks-per-node)
- number of threads (OpenMP threads) per MPI process (--cpus-per-task)
- version of CP2K to run after module load (see Supported versions)
Note
The requested number of MPI processes per node should not be greater than 128 (128 is the number of CPU cores per compute node, see Resource Overview).
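As an illustrative sanity check only (the shell variables below simply mirror the values passed to the Slurm options), you may verify the per-node request before submitting:
ntasks_per_node=64   # value given to --ntasks-per-node
cpus_per_task=4      # value given to --cpus-per-task

# MPI processes per node must not exceed the 128 CPU cores of a compute node
if [ "${ntasks_per_node}" -gt 128 ]; then
    echo "Too many MPI processes per node: ${ntasks_per_node} > 128" >&2
fi
echo "CPUs requested per node: $(( ntasks_per_node * cpus_per_task ))"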
Save the complete Slurm batch job description to a file inside the folder containing the input configuration, for example /discofs/$USER/run_cp2k/run.batch, and afterwards submit it to the queue:
cd /discofs/$USER/run_cp2k/
sbatch run.batch
Upon successful submission, the standard output will be directed by Slurm into the file /discofs/$USER/run_cp2k/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_cp2k/slurm.%j.err.
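To follow the job after submission, you may query the queue and tail the standard output file as it grows (replace <jobid> with the actual Slurm job ID):
squeue -u $USER                                      # show the state of your jobs
tail -f /discofs/$USER/run_cp2k/slurm.<jobid>.out    # follow the standard output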
Check the provided working example (see below) to find more details about how to create a complete Slurm batch job script for running CP2K.
Working example¶
The goal of this working example is to show one possible way CP2K can be run on Discoverer HPC by means of a Slurm batch job. Running the example is simple: just execute on the login node (login.discoverer.bg):
sbatch /opt/software/cp2k/2022.1/example/argon.sbatch
Once started successfully by Slurm, that job will first create a directory under your Personal scratch and storage folder (/discofs/username). The name of the directory will be similar to this one: cp2k_2022-06-21-22-21-06-52.1655836192 (the numbers will be different in your case). You may check the progress of the simulation by entering that directory and executing there:
tail -f argon.out
Getting help¶
See Getting help