LAMMPS

Versions available

Supported versions

Note

The versions of LAMMPS installed in the software repository are built and supported by the Discoverer HPC team.

To check which LAMMPS versions and build types are currently supported on Discoverer, execute on the login node:

module avail

and grep the output for “lammps”.
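
For example (module avail writes its listing to standard error, hence the redirection):

module avail 2>&1 | grep -i lammps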

Important

The versions of LAMMPS installed in the software repository are built against the Intel OpenMP library to support SMP runs on top of MPI. That is the most effective way of running LAMMPS in parallel.

Along with the lmp executable, the LAMMPS installation on Discoverer HPC includes the following tools:

binary2txt
chain.x
micelle2d.x
msi2lmp
stl_bin2txt
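
These tools become available in the shell search path once the module is loaded. For instance, binary2txt converts binary LAMMPS dump files into plain text; below is a minimal sketch, assuming a binary dump file named dump.bin (the file name is only an illustration):

module load lammps/latest
binary2txt dump.bin        # the converted text file is expected to appear next to the input (dump.bin.txt)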

Note

The LAMMPS shell is not part of the installation!

The recipe developed for compiling the LAMMPS source code is available online at:

https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/lammps

It is based on the DPC++/C++ compilers (the new LLVM-based Intel compilers) provided by the latest Intel oneAPI. Feel free to copy, modify, and comment on those recipes.

User-supported versions

Users are welcome to bring, or compile, and use their own builds of LAMMPS, but those builds will not be supported by the Discoverer HPC team.

Running simulations

Running a simulation means invoking the lmp executable to generate trajectories based on a given set of input files.

Warning

You MUST NOT execute simulations directly on the login node (login.discoverer.bg). Run your simulations as Slurm jobs only.

Warning

Write your trajectories and analysis results only inside your Personal scratch and storage folder (/discofs/username) and DO NOT (under any circumstances) use your Home folder (/home/username) for that purpose!

To run your LAMMPS simulations as a job, use the following Slurm batch template:

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to use for your production runs

#SBATCH --job-name=lammps      # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --nodes           1    # Number of nodes
#SBATCH --cpus-per-task   4    # Number of OpenMP threads per MPI process (SMP goes here)
                               # You may vary this number during the benchmarking
                               # simulations

#SBATCH --ntasks-per-core 1    # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 16   # Number of MPI tasks per node
                               # You may vary this number during the benchmarking

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

module purge

module load lammps/latest

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # One OpenMP thread per allocated CPU core
export OMP_PLACES=cores                         # Pin OpenMP threads to cores
export OMP_PROC_BIND=spread                     # Spread the threads across the allocated cores
export UCX_NET_DEVICES=mlx5_0:1                 # Restrict MPI (UCX) traffic to the InfiniBand device

cd $SLURM_SUBMIT_DIR

mpirun lmp -sf omp -pk omp ${SLURM_CPUS_PER_TASK} -in in.file

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • number of OpenMP threads per MPI process (--cpus-per-task)
  • version of LAMMPS to run, after module load (see Supported versions)

Save the complete Slurm job description as a file, for example /discofs/$USER/run_lammps/run_lammps.batch, and submit it to the queue afterwards:

cd /discofs/$USER/run_lammps
sbatch run_lammps.batch
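
With the template values above (1 node, 16 MPI tasks per node, 4 OpenMP threads per task), the job occupies 64 CPU cores. Since sbatch accepts the same options on the command line, overriding the directives in the script, you can try a different MPI/OpenMP split during benchmarking without editing the batch file; the values below are only an illustration:

sbatch --ntasks-per-node=32 --cpus-per-task=4 run_lammps.batch   # keep tasks x threads within the physical core count of a node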

Upon successful submission, the standard output will be directed by Slurm into the file /discofs/$USER/run_lammps/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_lammps/slurm.%j.err.
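
To inspect the state of the job while it waits in the queue or runs, use the standard Slurm client tools, for example:

squeue -u $USER              # list your pending and running jobs
scontrol show job <jobid>    # detailed information about a job, where <jobid> is the ID reported by sbatch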

Using the examples

The installation of LAMMPS available in the software repository of Discoverer HPC comes with the official examples. To print the location of the folder containing the examples, execute on the login node:

module load lammps/latest
LMP=$(which lmp)
EXAMPLE_FOLDER=$(dirname $(dirname $LMP))/share/lammps/examples
echo $EXAMPLE_FOLDER

Then you can select any of the examples there and try running them using the Slurm batch script from above.
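
To see which examples are shipped with the installation, list that folder (the variable comes from the snippet above):

ls $EXAMPLE_FOLDER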

For instance, copy the VISCOSITY example into your Personal scratch and storage folder (/discofs/username) as a sub-folder:

mkdir /discofs/$USER/run_lammps
cp -r $EXAMPLE_FOLDER/VISCOSITY /discofs/$USER/run_lammps
cd /discofs/$USER/run_lammps/VISCOSITY

create the following Slurm batch script there (it is almost the same as the one from the example above, but it points to the specific input file):

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to use for your production runs

#SBATCH --job-name=lammps      # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --nodes           1    # Number of nodes
#SBATCH --cpus-per-task   4    # Number of OpenMP threads per MPI process (SMP goes here)
                               # You may vary this number during the benchmarking
                               # simulations

#SBATCH --ntasks-per-core 1    # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 16   # Number of MPI tasks per node
                               # You may vary this number during the benchmarking

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

module purge

module load lammps/latest

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export UCX_NET_DEVICES=mlx5_0:1

cd $SLURM_SUBMIT_DIR

mpirun lmp -sf omp -pk omp ${SLURM_CPUS_PER_TASK} -in in.nemd.2d

and save it as /discofs/$USER/run_lammps/VISCOSITY/run.nemd.2d. Once ready, submit it to the queue:

cd /discofs/$USER/run_lammps/VISCOSITY
sbatch run.nemd.2d

and follow the updates to the slurm.%j.out and slurm.%j.err files.
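
For instance, you can watch the progress of the running job by following its standard output (replace <jobid> with the job ID reported by sbatch):

tail -f slurm.<jobid>.out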

Getting help

See Getting help