Tinker

Versions available

Supported versions

Note

The versions of Tinker installed in the software repository are built and supported by the Discoverer HPC team.

To check which Tinker versions and build types are currently supported on Discoverer, execute on the login node:

module avail tinker

Important

Tinker supports SMP (shared-memory parallelism based on OpenMP threads). MPI parallelism is not supported by design.

User-supported versions

Users are welcome to bring or compile their own builds of Tinker and use them, but those builds will not be supported by the Discoverer HPC team.

Users who intend to compile Tinker themselves may download and modify the build recipes developed by the Discoverer HPC support team:

https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/tinker/
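
For example, one way to obtain those recipes for local modification (a sketch, assuming the GitLab instance allows anonymous HTTPS cloning) is:

git clone https://gitlab.discoverer.bg/vkolev/recipes.git   # clone the whole recipes repository
ls recipes/tinker/                                          # the Tinker build recipes live here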

Running simulations

Running a simulation means invoking one of the installed Tinker tools to process your input files.

Warning

You MUST NOT execute simulations directly on the login node (login.discoverer.bg/login.bg.discoverer.bg). You have to run your simulations as Slurm jobs only.

Warning

Write your trajectories and analysis results only inside your personal scratch and storage folder (/discofs/username) and DO NOT, under any circumstances, use your home folder (/home/username) for that purpose!

The following Tinker tools are available as executable files in the software repository of Discoverer HPC:

alchemy
analyze
anneal
archive
bar
correlate
critical
crystal
diffuse
distgeom
document
dynamic
freefix
gda
intedit
intxyz
minimize
minirot
minrigid
mol2xyz
molxyz
monte
newton
newtrot
nucleic
optimize
optirot
optrigid
path
pdbxyz
polarize
poledit
potential
prmedit
protein
pss
pssrigid
pssrot
radial
saddle
scan
sniffer
spacefill
spectrum
superpose
testgrad
testhess
testpair
testpol
testrot
testvir
timer
timerot
torsfit
valence
vibbig
vibrate
vibrot
xtalfit
xtalmin
xyzedit
xyzint
xyzmol2
xyzpdb
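
After loading the corresponding module (see Supported versions), you can quickly check on the login node that a given tool is resolved from the software repository, for example:

module load tinker/8/latest-nvidia
command -v analyze    # prints the full path to the analyze executable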

To run any of the Tinker tools as a Slurm job, use the following batch script template:

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to use for your production runs

#SBATCH --job-name=tinker      # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --nodes           1    # Number of nodes to use
#SBATCH --cpus-per-task   32   # Number of OpenMP threads

#SBATCH --ntasks-per-node 1    # Number of Tinker processes per compute node (must be 1)

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

module purge

module load tinker/8/latest-nvidia

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread

cd $SLURM_SUBMIT_DIR

tinker_tool_name input_file

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes, which must be 1 per job (--nodes 1)
  • number of Tinker processes per node, which must be 1 per job (--ntasks-per-node 1)
  • number of OpenMP threads per process (--cpus-per-task)
  • version of Tinker to run, after module load (see Supported versions)
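
For illustration only, the generic line tinker_tool_name input_file in the template above may be replaced by a concrete call. The sketch below assumes a hypothetical coordinate file waterbox.xyz (with a matching waterbox.key) and uses the dynamic tool; the numeric arguments answer the usual dynamic prompts (number of MD steps, time step in fs, coordinate dump interval in ps, ensemble selector, temperature in K):

# Hypothetical example: 10000 NVT steps at 298 K, 1.0 fs time step, frames every 10 ps
dynamic waterbox.xyz 10000 1.0 10.0 2 298.0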

Save the complete Slurm job description as a file, for example /discofs/$USER/run_tinker/run_tinker.batch, and submit it to the queue afterwards:

cd /discofs/$USER/run_tinker
sbatch run_tinker.batch

Upon successful submission, the standard output will be directed by Slurm into the file /discofs/$USER/run_tinker/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_tinker/slurm.%j.err.
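
To verify that the job has been accepted and to follow its state in the queue, you can use the standard Slurm query tools, for example:

squeue -u $USER    # list your pending and running jobs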

Important

Always try to estimate the effective degree of SMP parallelism with a short test run before starting the production simulation. Be aware that employing more than 32 OpenMP threads per job might not improve the speed of execution.
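
A simple way to run such a test (a sketch, assuming the batch script above has been saved as run_tinker.batch) is to submit the same short job several times while overriding the thread count from the sbatch command line; the script picks the value up through SLURM_CPUS_PER_TASK:

# Submit short test jobs with different OpenMP thread counts and compare their run times
for threads in 8 16 32 64; do
    sbatch --cpus-per-task=${threads} --job-name=tinker-scale-${threads} run_tinker.batch
done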

Running benchmarks

Discoverer HPC offers the possibility to execute any of the benchmarks originally shipped with the source code distribution of Tinker. Below is an example of how to run the benchmarks.

Create a directory under your Personal scratch and storage folder (/discofs/username):

mkdir /discofs/$USER/tinker

and copy into it both the force field parameters (params) and the benchmark input data (bench) folders:

cp -pr /opt/software/tinker/8/8.10.2-nvidia/params /discofs/$USER/tinker
cp -pr /opt/software/tinker/8/8.10.2-nvidia/bench /discofs/$USER/tinker

Adjust the benchmark scripts (the files with extension .run) under the bench folder so that they invoke the Tinker executables provided by the loaded module instead of the relative ../bin/ path:

cd /discofs/$USER/tinker/bench
sed -i 's/..\/bin\///g' *.run

Compose a Slurm batch script file and place it under the /discofs/$USER/tinker/bench/ folder. For instance, this is the content of a batch script that runs the benchmark script bench9.run:

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to use for your production runs

#SBATCH --job-name=tinker      # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --nodes           1    # Number of nodes to use
#SBATCH --cpus-per-task   32   # Number of OpenMP threads

#SBATCH --ntasks-per-node 1    # Number of Tinker processes per compute node (must be 1)

#SBATCH -o slurm.bench9.out        # STDOUT
#SBATCH -e slurm.bench9.err        # STDERR

module purge

module load tinker/8/latest-nvidia

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread

cd $SLURM_SUBMIT_DIR

bash bench9.run

(you can replace bench9.run inside the batch with any of the other benchmark run scripts). Once stored in the file /discofs/$USER/tinker/bench/bench9.sbatch, it can be submitted as a job to the Slurm queue:

cd /discofs/$USER/tinker/bench/
sbatch bench9.sbatch

Note that the benchmark job timing will be reported in /discofs/$USER/tinker/bench/slurm.bench9.err upon completion. The standard output generated during the execution will be collected and stored in /discofs/$USER/tinker/bench/slurm.bench9.out. During the execution of the job you can follow the progress by tailing the standard output:

tail -f /discofs/$USER/tinker/bench/slurm.bench9.out

Getting help

See Getting help