NAMD¶
Versions available¶
Supported versions¶
Note
The versions of NAMD installed in the software repository are built and supported by the Discoverer HPC team.
To check which NAMD versions and build types are currently supported on Discoverer, execute on the login node:
module avail
and grep the output for “NAMD”.
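For example, a pipeline like the following lists the currently installed NAMD modules (the module names it returns will vary with the state of the software repository; module avail may print to standard error, hence the redirection):
module avail 2>&1 | grep -i namd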
Important
Consider using the SMP build (Shared-Memory and Network-Based Parallelism), as it should provide more efficient memory management.
User-supported versions¶
Users are welcome to bring, or compile, and use their own builds of NAMD, but those builds will not be supported by the Discoverer HPC team.
They might find it useful to check or adopt the build recipes used by the Discoverer HPC support team.
Running simulations¶
Running simulations means invoking namd2 to generate trajectories based on a given set of input files.
Warning
You MUST NOT execute simulations directly on the login node (login.discoverer.bg). Run your simulations as Slurm jobs only.
Warning
Write your trajectories and analysis results only inside your Personal scratch and storage folder (/discofs/username) and DO NOT, under any circumstances, use your Home folder (/home/username) for that purpose!
SMP build¶
The benefit of running the SMP build of NAMD, compared to the non-SMP build, is its more efficient use and management of memory (RAM). You might need to run benchmarks with both builds and decide which one is more productive in your case, based on the estimated computational cost.
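One simple way to compare the two builds is to run the same input with each of them and inspect the timing lines NAMD writes to its log. Assuming the Slurm templates on this page, which send the standard output to slurm.<jobid>.out in the submission directory, something like the following can be used:
grep "Benchmark time" slurm.*.out   # per-step timings NAMD reports early in the run
grep "WallClock" slurm.*.out        # total wall-clock time reported at the end of the run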
To run your NAMD SMP simulations as a job, use the following Slurm batch template:
#!/bin/bash
#SBATCH --partition=cn # Partition name
## Ask the Discoverer HPC support team which partition
## to employ for your production runs
#SBATCH --job-name=namd_smp # Job Name
#SBATCH --time=06:00:00 # WallTime
#SBATCH --nodes 1 # Number of nodes
#SBATCH --cpus-per-task 64 # Number of OpenMP threads per MPI process (SMP goes here)
# You may vary this number during the benchmarking simulations
#SBATCH --ntasks-per-core 1 # Run one MPI task per CPU core
#SBATCH --ntasks-per-node 4 # Number of MPI tasks per node
# You may vary this number during the benchmarking simulations
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
module purge
module load NAMD/latest-intelmpi-smp
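# The exports below size and pin the OpenMP threads (OMP_*), make Intel MPI
# pin the ranks accordingly (I_MPI_*), and select the InfiniBand interconnect
# for the MPI traffic (FI_PROVIDER, UCX_NET_DEVICES).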
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export I_MPI_PIN=1
export I_MPI_PIN_DOMAIN=omp:platform
export FI_PROVIDER=verbs
export UCX_NET_DEVICES=mlx5_0:1
cd $SLURM_SUBMIT_DIR
NUM_CPU_CORES_PER_NODE=128 # Do not change this number, unless you are absolutely
# sure you have to do that
AFFINITY=`echo $NUM_CPU_CORES_PER_NODE $SLURM_NTASKS_PER_NODE | awk '{p=($1-$2)/$2; c=$1-1; f=p+1; print "+ppn",p,"+commap ",0"-"c":"f,"+pemap 1-"c":"f"."p}'`
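# With the defaults above (NUM_CPU_CORES_PER_NODE=128 and --ntasks-per-node 4)
# the awk expression expands, apart from spacing, to:
#   +ppn 31 +commap 0-127:32 +pemap 1-127:32.31
# i.e. one communication thread per MPI rank (cores 0, 32, 64, 96) and 31
# worker threads (PEs) per rank on the remaining cores of each 32-core block.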
mpirun namd2 ${AFFINITY} --outputname /discofs/$USER/taskname-smp /path/to/the/input/file.namd
Warning
Do not edit the NUM_CPU_CORES_PER_NODE and AFFINITY declarations, unless you have the respective expertise in running NAMD.
Specify the parameters and resources required for successfully running and completing the job:
- Slurm partition of compute nodes, based on your project resource reservation (--partition)
- job name, under which the job will be seen in the queue (--job-name)
- wall time for running the job (--time)
- number of occupied compute nodes (--nodes)
- number of MPI processes per node (--ntasks-per-node)
- number of OpenMP threads per MPI process (--cpus-per-task)
- version of NAMD to run, after module load (see Supported versions)
Save the complete Slurm job description as a file, for example /discofs/$USER/run_namd/run_smp_mpi.batch, and submit it to the queue afterwards:
cd /discofs/$USER/run_namd
sbatch run_smp_mpi.batch
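Once the job has been accepted, its state in the queue can be followed with the standard Slurm tools, for example (replace <jobid> with the job ID printed by sbatch):
squeue -u $USER
scontrol show job <jobid>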
Upon successful submission, the standard output will be directed by Slurm into the file /discofs/$USER/run_namd/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_namd/slurm.%j.err.
non-SMP build¶
The non-SMP parallel build of NAMD adopts MPI parallelization only. While that type of NAMD build requires a simpler Slurm job description, it is expected to be less memory (RAM) efficient than the SMP build. You might need to run benchmarks with both builds and decide which one is more productive in your case, based on the estimated computational cost.
To run your NAMD non-SMP simulations as a job, use the following Slurm batch template:
#!/bin/bash
#SBATCH --partition=cn # Partition name
## Ask the Discoverer HPC support team which partition
## to employ for your production runs
#SBATCH --job-name=namd_non_smp # Job Name
#SBATCH --time=06:00:00 # WallTime
#SBATCH --nodes 1 # Number of nodes
# You may vary this number during the benchmarking
#SBATCH --ntasks-per-core 1 # Run one MPI task per CPU core
#SBATCH --ntasks-per-node 128 # Number of MPI tasks per node
# You may vary this number during the benchmarking
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
module purge
module load NAMD/latest-intelmpi-nosmp
export I_MPI_PIN=1
export I_MPI_PIN_DOMAIN=omp:platform
export FI_PROVIDER=verbs
export UCX_NET_DEVICES=mlx5_0:1
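# With --ntasks-per-node 128 and --ntasks-per-core 1 each MPI rank is expected
# to be pinned to its own CPU core on the 128-core compute nodes.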
cd $SLURM_SUBMIT_DIR
mpirun namd2 --outputname /discofs/$USER/taskname-nonsmp /path/to/the/input/file.namd
Specify the parameters and resources required for successfully running and completing the job:
- Slurm partition of compute nodes, based on your project resource reservation (--partition)
- job name, under which the job will be seen in the queue (--job-name)
- wall time for running the job (--time)
- number of occupied compute nodes (--nodes)
- number of MPI processes per node (--ntasks-per-node)
- version of NAMD to run, after module load (see Supported versions)
Save the complete Slurm job description as a file, for example /discofs/$USER/run_namd/run_nonsmp_mpi.batch, and submit it to the queue:
cd /discofs/$USER/run_namd
sbatch run_nonsmp_mpi.batch
Upon successful submission, the standard output will be directed by Slurm into the file /discofs/$USER/run_namd/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_namd/slurm.%j.err.
Getting help¶
See Getting help