LAMMPS

Versions available

Supported versions

Note

The versions of LAMMPS installed in the software repository are built and supported by the Discoverer HPC team.

To check which LAMMPS versions and build types are currently supported on Discoverer, execute on the login node:

module avail lammps

We highly recommend using the latest version of LAMMPS available in the software repository. You can load the latest version by simply typing:

module load lammps
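
If you want to verify which executable a loaded module actually provides, a quick check on the login node could look like this (printing the built-in help is harmless there):

module load lammps
which lmp                # path to the LAMMPS executable supplied by the module
lmp -h | head -n 20      # LAMMPS version and the first part of its built-in help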

Important

The latest version of LAMMPS installed in the software repository is built against the GCC OpenMP library to support SMP runs on top of OpenMPI. This is currently the most effective way to run LAMMPS in parallel on the Discoverer Petascale Supercomputer compute nodes.

Along with the executable lmp, some old installations of LAMMPS, still available on Discoverer HPC, may include the following tools:

binary2txt
chain.x
micelle2d.x
msi2lmp
stl_bin2txt

Note

The latest version of LAMMPS does not provide those tools anymore.

The recipe developed for compiling the programming code of LAMMPS is available online at:

https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/lammps

Feel free to copy, modify, and comment on those recipes. Log files are also available in the same repository.

User-supported versions (bring your own builds)

You are welcome to use your own LAMMPS builds, whether you bring a pre-compiled LAMMPS version or choose to compile it yourself. However, please note that such user-installed builds are not supported by the Discoverer HPC team, so you cannot ask our team to help you with any issues related to them.
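
If you choose to compile LAMMPS yourself, a minimal CMake-based build could look like the sketch below. It is an illustration only: the module names, the selected packages, and the installation prefix are assumptions that you should adapt to your project (the recipes linked above show how the supported builds are produced).

# Illustrative user build of LAMMPS (adapt modules, packages, and paths):
module purge
module load cmake gcc openmpi      # assumed module names; check "module avail" first

git clone -b stable https://github.com/lammps/lammps.git
cd lammps && mkdir build && cd build

cmake ../cmake \
      -D CMAKE_INSTALL_PREFIX=/valhalla/projects/your_project_name/lammps \
      -D BUILD_MPI=yes \
      -D PKG_OPENMP=yes            # enable further PKG_* options as required

cmake --build . -j 8
cmake --install .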

Running LAMMPS simulations

To run a simulation with LAMMPS, execute the lmp program with your prepared set of input files to generate trajectories and analysis output.
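
If you do not yet have an input deck at hand, the snippet below writes a minimal Lennard-Jones melt input (modelled on the standard LAMMPS melt example) that can serve as the in.file used in the job templates further down; the file name and run length are arbitrary choices:

cat > in.file << 'EOF'
# Minimal Lennard-Jones melt (based on the standard LAMMPS melt example)
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no
fix             1 all nve
thermo          100
run             1000
EOF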

Warning

Never run LAMMPS production or test simulations directly on the login node (login.discoverer.bg). Such simulations must be submitted as Slurm jobs to the compute nodes or, whenever necessary, run interactively using srun.

Warning

Always store your simulation trajectories and analysis results in your Per-project scratch and storage folder. Do not use your home folder for this purpose under any circumstances. The home folder has limited storage capacity, is not suitable for storing large files, and does not handle parallel I/O operations well.

Using OpenMP (Shared-memory parallelism)

Important

Be aware that not all simulation protocols and methods included in LAMMPS support shared-memory parallelism based on the OpenMP library. The use of OpenMP threading must be consistent with the input configuration. Consult the official LAMMPS documentation.
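
In the template below the OPENMP-accelerated styles are requested on the command line via the -sf omp and -pk omp switches. The same can be requested from within the input script itself; a brief sketch, assuming the loaded build includes the OPENMP package:

# Command-line activation, as used in the job template below:
lmp -sf omp -pk omp 8 -in in.file

# Equivalent activation from within in.file (placed near the top of the script):
#   package omp 8
#   suffix  omp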

The following template can be used to create a Slurm batch job script that runs LAMMPS in shared-memory parallelism mode:

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to employ for your production runs

#SBATCH --job-name=lammps_OMP  # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>

#SBATCH --nodes           1    # Number of nodes
#SBATCH --cpus-per-task   8    # Number of OpenMP threads per MPI task (SMP goes here)
                               # You may vary this number during the benchmarking
                               # simulations
#SBATCH --ntasks-per-node 1    # Number of MPI tasks per node
                               # You may vary this number during the benchmarking

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

module purge

module load lammps/20250722

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export UCX_NET_DEVICES=mlx5_0:1

cd $SLURM_SUBMIT_DIR

lmp -sf omp -pk omp ${SLURM_CPUS_PER_TASK} -in in.file

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • number of OpenMP threads per MPI process (--cpus-per-task); a quick way to check the CPUs available per compute node is shown after this list
  • version of LAMMPS to run, after module load (see Supported versions)
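
One quick way to check how many CPUs Slurm exposes per compute node in the chosen partition (partition name as in the template; adjust if yours differs):

sinfo -p cn -o "%n %c %m"      # node hostname, CPUs per node, memory per node (MB)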

Save the complete Slurm job description as a file, for example /valhalla/projects/your_project_name/run_lammps/run_lammps_OMP.batch, and submit it to the queue afterwards:

cd /valhalla/projects/your_project_name/run_lammps
sbatch run_lammps_OMP.batch

Upon successful submission, the standard output will be directed by Slurm into the file /valhalla/projects/your_project_name/run_lammps/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /valhalla/projects/your_project_name/run_lammps/slurm.%j.err.
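
Once the job is submitted, its state and output can be followed with the standard Slurm tools; the job ID used below is hypothetical:

squeue -u $USER                                         # list your pending and running jobs
tail -f slurm.1234567.out                               # follow the LAMMPS screen output
grep -i "openmp thread" slurm.1234567.out               # LAMMPS usually reports the threads per MPI task here
sacct -j 1234567 --format=JobID,State,Elapsed,MaxRSS    # accounting summary once the job has finished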

Using MPI (Distributed-memory parallelism)

The following template can be used to create a Slurm batch job script that runs LAMMPS in distributed-memory parallelism mode:

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to employ for your production runs

#SBATCH --job-name=lammps_MPI  # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>

#SBATCH --nodes           1    # Number of nodes
#SBATCH --cpus-per-task   1    # Number of OpenMP threads per MPI task (SMP goes here)
                               # You may vary this number during the benchmarking
                               # simulations

#SBATCH --ntasks-per-core 1    # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 128  # Number of MPI tasks per node
                               # You may vary this number during the benchmarking

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

module purge

module load lammps/20250722

export UCX_NET_DEVICES=mlx5_0:1

cd $SLURM_SUBMIT_DIR

mpirun lmp -in in.file

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • number of OpenMP threads per MPI process (--cpus-per-task)
  • version of LAMMPS to run, after module load (see Supported versions)

Save the complete Slurm job description as a file, for example /valhalla/projects/your_project_name/run_lammps/run_lammps_MPI.batch, and submit it to the queue afterwards:

cd /valhalla/projects/your_project_name/run_lammps
sbatch run_lammps_MPI.batch
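
The template above requests a single node with 128 MPI ranks; under Slurm, mpirun normally picks up the whole allocation, so the total number of ranks is nodes × ntasks-per-node. To scale the same script out, you can override the node count at submission time, since command-line options take precedence over the #SBATCH directives in the file:

sbatch --nodes=4 run_lammps_MPI.batch   # 4 nodes x 128 tasks per node = 512 MPI ranks in total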

Upon successful submission, the standard output will be directed by Slurm into the file /valhalla/projects/your_project_name/run_lammps/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /valhalla/projects/your_project_name/run_lammps/slurm.%j.err.

Using hybrid parallelization (OpenMP + MPI)

Important

Be aware that not all simulation protocols and methods included in LAMMPS support hybrid parallelization based on the combined use of the OpenMP and MPI libraries. The use of OpenMP threading must be consistent with the input configuration. Consult the official LAMMPS documentation.

The following template can be used to create a Slurm batch job script that runs LAMMPS in hybrid parallelization mode:

#!/bin/bash

#SBATCH --partition=cn         # Partition name
                               ## Ask the Discoverer HPC support team which
                               ## partition to employ for your production runs

#SBATCH --job-name=lammps_hybrid  # Job Name
#SBATCH --time=06:00:00        # WallTime

#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>

#SBATCH --nodes           1    # Number of nodes
#SBATCH --cpus-per-task   2    # Number of OpenMP threads per MPI task (SMP goes here)
                               # You may vary this number during the benchmarking
                               # simulations
#SBATCH --ntasks-per-core 1    # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 128  # Number of MPI tasks per node
                               # You may vary this number during the benchmarking

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

module purge

module load lammps/20250722

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export UCX_NET_DEVICES=mlx5_0:1

cd $SLURM_SUBMIT_DIR

mpirun lmp -sf omp -pk omp ${SLURM_CPUS_PER_TASK} -in in.file
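
With this hybrid template the per-node CPU request is ntasks-per-node multiplied by cpus-per-task (here 128 x 2 = 256). When benchmarking different combinations, it can be helpful to record the layout in the job output; a small optional addition to the script, using standard Slurm environment variables:

echo "Nodes allocated:          ${SLURM_NNODES}"
echo "MPI tasks per node:       ${SLURM_NTASKS_PER_NODE}"
echo "OpenMP threads per task:  ${SLURM_CPUS_PER_TASK}"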

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • number of OpenMP threads per MPI process (--cpus-per-task)
  • version of LAMMPS to run, after module load (see Supported versions)

Save the complete Slurm job description as a file, for example /valhalla/projects/your_project_name/run_lammps/run_lammps_hybrid.batch, and submit it to the queue afterwards:

cd /valhalla/projects/your_project_name/run_lammps
sbatch run_lammps_hybrid.batch

Upon successful submission, the standard output will be directed by Slurm into the file /valhalla/projects/your_project_name/run_lammps/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /valhalla/projects/your_project_name/run_lammps/slurm.%j.err.

Getting help

See Getting help