LAMMPS¶
Versions available¶
Supported versions¶
Note
The versions of LAMMPS installed in the software repository are built and supported by the Discoverer HPC team.
To check which LAMMPS versions and build types are currently supported on Discoverer, execute on the login node:
module avail lammps
We highly recommend using the latest version of LAMMPS available in the software repository. Note that you can load the latest version by simply typing:
module load lammps
Important
The latest version of LAMMPS installed in the software repository is built against the GCC OpenMP library to support SMP runs on top of OpenMPI. This is currently the most effective way to run LAMMPS in parallel on the Discoverer Petascale Supercomputer compute nodes.
Along with the lmp executable, some older installations of LAMMPS still available on Discoverer HPC may include the following tools:
binary2txt
chain.x
micelle2d.x
msi2lmp
stl_bin2txt
Note
The latest version of LAMMPS does not provide those tools anymore.
The recipe used for compiling the LAMMPS source code is available online at:
https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/lammps
Feel free to copy, modify, and comment on those recipes. Log files are also available in the same repository.
User-supported versions (bring your own builds)¶
You are welcome to use your own LAMMPS builds, whether you bring a pre-compiled LAMMPS version or choose to compile it yourself. However, please note that user-installed builds are not supported by the Discoverer HPC team, and you cannot ask our team for help with any issues related to them.
Running LAMMPS simulations¶
To run a simulation with LAMMPS, execute the lmp program with your prepared set of input files to generate trajectories and analysis output.
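For reference, the input file passed to lmp is a plain-text LAMMPS script. A minimal example is the canonical 3d Lennard-Jones melt, adapted from the benchmark suite shipped with LAMMPS (the file name in.file used in the templates below is arbitrary):

```
# 3d Lennard-Jones melt (adapted from the LAMMPS bench suite)
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 1.44 87287 loop geom
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no
fix             1 all nve
run             100
```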
Warning
Never run LAMMPS production or test simulations directly on the login node (login.discoverer.bg). Those simulations must be submitted as Slurm jobs to the compute nodes, or run via srun in interactive mode whenever necessary.
Warning
Always store your simulation trajectories and analysis results in your Per-project scratch and storage folder. Do not use your home folder for this purpose under any circumstances: it has limited storage capacity, is not suitable for storing large files, and does not handle parallel I/O operations well.
Using MPI (Distributed-memory parallelism)¶
The following template can be used as a Slurm batch job script that runs LAMMPS in distributed-memory parallel (MPI-only) mode:
#!/bin/bash
#SBATCH --partition=cn # Partition name
## Ask the Discoverer HPC support team which
## partition to employ for your production runs
#SBATCH --job-name=lammps_MPI # Job Name
#SBATCH --time=06:00:00 # WallTime
#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>
#SBATCH --nodes 1 # Number of nodes
#SBATCH --cpus-per-task 1 # Number of OpenMP threads per MPI (SMP goes here)
# You may vary this number during the benchmarking
# simulations
#SBATCH --ntasks-per-core 1 # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 128 # Number of MPI tasks per node
# You may vary this number during the benchmarking
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
module purge
module load lammps/20250722
export UCX_NET_DEVICES=mlx5_0:1
cd $SLURM_SUBMIT_DIR
mpirun lmp -in in.file
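In this MPI-only mode, mpirun launches one MPI rank per Slurm task, so the total rank count is nodes × ntasks-per-node. A quick sketch of the arithmetic, using the hypothetical values from the template above:

```shell
# Total MPI ranks launched by mpirun = nodes * ntasks-per-node
# (values copied from the template above; adjust to your own job)
nodes=1
ntasks_per_node=128
total_ranks=$(( nodes * ntasks_per_node ))
echo "Total MPI ranks: ${total_ranks}"   # prints: Total MPI ranks: 128
```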
Specify the parameters and resources required for successfully running and completing the job:

- Slurm partition of compute nodes, based on your project resource reservation (--partition)
- job name, under which the job will be seen in the queue (--job-name)
- wall time for running the job (--time)
- number of occupied compute nodes (--nodes)
- number of MPI processes per node (--ntasks-per-node)
- number of OpenMP threads per MPI process (--cpus-per-task)
- version of LAMMPS to run, after module load (see Supported versions)
Save the complete Slurm job description as a file, for example /valhalla/projects/your_project_name/run_lammps/run_lammps_MPI.batch, and submit it to the queue afterwards:
cd /valhalla/projects/your_project_name/run_lammps
sbatch run_lammps_MPI.batch
Upon successful submission, the standard output will be directed by Slurm into the file /valhalla/projects/your_project_name/run_lammps/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /valhalla/projects/your_project_name/run_lammps/slurm.%j.err.
Using hybrid parallelization (OpenMP + MPI)¶
Important
Be aware that not all simulation protocols and methods included in LAMMPS support hybrid parallelization, based on the combined use of OpenMP and MPI. The number of OpenMP threads should be consistent with the input configuration. Consult the official LAMMPS documentation.
The following template can be used as a Slurm batch job script that runs LAMMPS in hybrid (OpenMP + MPI) parallelization mode:
#!/bin/bash
#SBATCH --partition=cn # Partition name
## Ask the Discoverer HPC support team which
## partition to employ for your production runs
#SBATCH --job-name=lammps_hybrid # Job Name
#SBATCH --time=06:00:00 # WallTime
#SBATCH --account=<your_slurm_account_name>
#SBATCH --qos=<the_qos_name_you_want_to_follow>
#SBATCH --nodes 1 # Number of nodes
#SBATCH --cpus-per-task 2 # Number of OpenMP threads per MPI (SMP goes here)
# You may vary this number during the benchmarking
# simulations
#SBATCH --ntasks-per-core 1 # Bind one MPI task to one CPU core
#SBATCH --ntasks-per-node 128 # Number of MPI tasks per node
# You may vary this number during the benchmarking
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
module purge
module load lammps/20250722
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export UCX_NET_DEVICES=mlx5_0:1
cd $SLURM_SUBMIT_DIR
mpirun lmp -sf omp -pk omp ${SLURM_CPUS_PER_TASK} -in in.file
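In hybrid mode, a node is oversubscribed when ntasks-per-node × cpus-per-task exceeds the CPUs the node offers. A minimal sketch of that check, using the template values above; the figure of 256 hardware threads per node is an assumption you should verify for your partition (e.g. with scontrol show node):

```shell
# Sanity check: requested CPUs per node must not exceed what the node offers.
# hw_threads_per_node=256 is an assumption; verify with: scontrol show node
ntasks_per_node=128
cpus_per_task=2
hw_threads_per_node=256
requested=$(( ntasks_per_node * cpus_per_task ))
if [ "$requested" -le "$hw_threads_per_node" ]; then
    echo "OK: requesting ${requested} of ${hw_threads_per_node} CPUs per node"
else
    echo "Oversubscribed: ${requested} > ${hw_threads_per_node}"
fi
```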
Specify the parameters and resources required for successfully running and completing the job:

- Slurm partition of compute nodes, based on your project resource reservation (--partition)
- job name, under which the job will be seen in the queue (--job-name)
- wall time for running the job (--time)
- number of occupied compute nodes (--nodes)
- number of MPI processes per node (--ntasks-per-node)
- number of OpenMP threads per MPI process (--cpus-per-task)
- version of LAMMPS to run, after module load (see Supported versions)
Save the complete Slurm job description as a file, for example /valhalla/projects/your_project_name/run_lammps/run_lammps_hybrid.batch, and submit it to the queue afterwards:
cd /valhalla/projects/your_project_name/run_lammps
sbatch run_lammps_hybrid.batch
Upon successful submission, the standard output will be directed by Slurm into the file /valhalla/projects/your_project_name/run_lammps/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /valhalla/projects/your_project_name/run_lammps/slurm.%j.err.
Running Free Energy calculations¶
This section explains how to run free energy calculations using LAMMPS on the Discoverer HPC cluster. The example scripts and input files are taken from the following repository:
https://github.com/freitas-rodrigo/FreeEnergyLAMMPS
Clone the repository into your project directory before proceeding:
cd /valhalla/projects/<your_slurm_account_name>/
git clone https://github.com/freitas-rodrigo/FreeEnergyLAMMPS.git
Two methods are covered here: the Frenkel–Ladd method and the reversible scaling method. Each follows the same two-stage pattern: first, run LAMMPS via a SLURM job to produce raw data; then, run the post-processing Python scripts (also via SLURM) to integrate the data and produce plots.
1. Frenkel–Ladd method¶
1.1 Fixing job.sh and submitting the LAMMPS job¶
The job.sh script in the repository assumes a locally compiled LAMMPS binary called lmp_serial. On Discoverer the LAMMPS executable is simply called lmp, so this must be corrected before submitting anything. Navigate to the Frenkel–Ladd directory and apply the fix with sed:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd
sed -i 's/lammps="..\/..\/lammps\/src\/lmp_serial"/lammps\="lmp"/g' job.sh
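If you want to see what the substitution does before touching the real job.sh, the same sed expression can be tried on a throwaway file first (a sketch; the temporary file is arbitrary):

```shell
# Demonstrate the substitution on a scratch copy before editing job.sh
tmpfile=$(mktemp)
printf 'lammps="../../lammps/src/lmp_serial"\n' > "$tmpfile"
sed -i 's/lammps="..\/..\/lammps\/src\/lmp_serial"/lammps\="lmp"/g' "$tmpfile"
result=$(cat "$tmpfile")
echo "$result"    # prints: lammps="lmp"
rm -f "$tmpfile"
```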
Next, create the following SLURM batch script and save it as run.sh in the same directory:
#!/bin/bash
#SBATCH --partition cn
#SBATCH --job-name frenkel_ladd
#SBATCH --time 00:15:00
#SBATCH --account <your_slurm_account_name>
#SBATCH --qos <your_slurm_account_name>
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 1
#SBATCH --ntasks-per-core 1
#SBATCH --cpus-per-task 4
#SBATCH --mem 16G
#SBATCH -o frenkel_ladd.%j.out
#SBATCH -e frenkel_ladd.%j.err
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PROC_BIND=false
module purge || exit 1
module load lammps/20250722 || exit 2
cd ${SLURM_SUBMIT_DIR}
bash job.sh
Save this file as:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/run.sh
Then submit the job to the queue:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/
sbatch run.sh
You can monitor progress by watching the SLURM output files as they are written:
tail -f frenkel_ladd.<jobid>.out
tail -f frenkel_ladd.<jobid>.err
If the job completes successfully, the directory data/ will be created inside the Frenkel–Ladd folder and will contain the following files:
backward_100K.dat forward_100K.dat lammps_100K.log
backward_400K.dat forward_400K.dat lammps_400K.log
backward_700K.dat forward_700K.dat lammps_700K.log
backward_1000K.dat forward_1000K.dat lammps_1000K.log
backward_1300K.dat forward_1300K.dat lammps_1300K.log
backward_1600K.dat forward_1600K.dat lammps_1600K.log
1.2 Post-processing: integrating the data and plotting¶
This step must only be attempted once the files listed above are present in the data/ directory. Running it before that will cause it to fail.
Navigate to the post-processing directory:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/post_processing
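Optionally, before submitting the post-processing job, you can verify that all expected data files are in place. A sketch of such a guard, run from the post_processing directory, with the temperatures taken from the file list above:

```shell
# Check for the forward/backward data files produced by the Frenkel-Ladd job
missing=0
for T in 100 400 700 1000 1300 1600; do
    for f in ../data/forward_${T}K.dat ../data/backward_${T}K.dat; do
        [ -s "$f" ] || { echo "missing or empty: $f"; missing=$(( missing + 1 )); }
    done
done
echo "missing files: ${missing}"    # 0 means it is safe to proceed
```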
The integration script uses SciPy, and the version installed on Discoverer is newer than the one the code was written for. Apply the following two fixes exactly once:
sed -i '/^import scipy\.constants as sc/a import scipy' integrate.py
sed -i 's/trapz/scipy.integrate.trapezoid/g' integrate.py
Do not run these commands a second time, as they will corrupt the file.
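The substitution points integrate.py at the current SciPy API: the trapz function was deprecated and then removed from recent SciPy releases in favour of scipy.integrate.trapezoid. A minimal sketch of the renamed function (the integrand here is arbitrary, chosen only because its exact integral is known):

```python
import numpy as np
from scipy.integrate import trapezoid

# Integrate y = x**2 on [0, 1]; the exact answer is 1/3
x = np.linspace(0.0, 1.0, 101)
y = x**2
area = trapezoid(y, x)
print(area)   # close to 0.3333
```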
Create the following SLURM batch script and save it as run.sh in the post-processing directory:
#!/bin/bash
#SBATCH --partition cn
#SBATCH --job-name plotting
#SBATCH --time 00:15:00
#SBATCH --account <your_slurm_account_name>
#SBATCH --qos <your_slurm_account_name>
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 1
#SBATCH --ntasks-per-core 1
#SBATCH --cpus-per-task 4
#SBATCH --mem 16G
#SBATCH -o plotting.%j.out
#SBATCH -e plotting.%j.err
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PROC_BIND=false
module purge || exit 1
module load lammps/20250722 || exit 2
module load lammps/freeenergy || exit 3
cd ${SLURM_SUBMIT_DIR}
python3 integrate.py && python3 plot.py
Save it as:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/post_processing/run.sh
Then submit it:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/post_processing
sbatch run.sh
A successful run will produce the following output files:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/data/free_energy.dat
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/frenkel_ladd/post_processing/fig_free_energy_vs_temperature.png
Transfer fig_free_energy_vs_temperature.png to your local machine to visualise it.
2. Reversible scaling method¶
2.1 Fixing job.sh and submitting the LAMMPS job¶
As with the Frenkel–Ladd case, the executable name in job.sh must be corrected first:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/
sed -i 's/lammps="..\/..\/lammps\/src\/lmp_serial"/lammps\="lmp"/g' job.sh
Create the following SLURM batch script:
#!/bin/bash
#SBATCH --partition cn
#SBATCH --job-name reversible_scaling
#SBATCH --time 00:15:00
#SBATCH --account <your_slurm_account_name>
#SBATCH --qos <your_slurm_account_name>
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 1
#SBATCH --ntasks-per-core 1
#SBATCH --cpus-per-task 4
#SBATCH --mem 16G
#SBATCH -o reversible_scaling.%j.out
#SBATCH -e reversible_scaling.%j.err
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PROC_BIND=false
module purge || exit 1
module load lammps/20250722 || exit 2
cd ${SLURM_SUBMIT_DIR}
bash job.sh
Save it as:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/run.sh
Submit it to the queue:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/
sbatch run.sh
If execution is successful, the data/ directory will be created and will contain:
backward.dat
forward.dat
lammps.log
2.2 Post-processing: integrating the data and plotting¶
Navigate to the post-processing directory and apply the SciPy compatibility fix:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/post_processing
sed -i 's/cumtrapz/cumulative_trapezoid/g' integrate.py
Again, apply this fix only once.
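As before, this substitution tracks a SciPy rename: cumtrapz became scipy.integrate.cumulative_trapezoid. A minimal sketch of the renamed function (the integrand is arbitrary, chosen so the cumulative integral is known exactly):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Cumulative integral of y = 1 on [0, 1] is F(x) = x, so F(1) = 1
x = np.linspace(0.0, 1.0, 101)
y = np.ones_like(x)
F = cumulative_trapezoid(y, x, initial=0.0)   # initial=0 keeps len(F) == len(x)
print(F[-1])   # close to 1.0
```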
Create the following SLURM batch script:
#!/bin/bash
#SBATCH --partition cn
#SBATCH --job-name plotting
#SBATCH --time 00:15:00
#SBATCH --account <your_slurm_account_name>
#SBATCH --qos <your_slurm_account_name>
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 1
#SBATCH --ntasks-per-core 1
#SBATCH --cpus-per-task 4
#SBATCH --mem 16G
#SBATCH -o plotting.%j.out
#SBATCH -e plotting.%j.err
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PROC_BIND=false
module purge || exit 1
module load lammps/20250722 || exit 2
module load lammps/freeenergy || exit 3
cd ${SLURM_SUBMIT_DIR}
python3 integrate.py && python3 plot.py
Save it as:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/post_processing/run.sh
Then submit it:
cd /valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/post_processing/
sbatch run.sh
A successful run will create the following plot:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/post_processing/fig_free_energy_vs_temperature.png
The numerical data underlying the plot will be found at:
/valhalla/projects/<your_slurm_account_name>/FreeEnergyLAMMPS/reversible_scaling/data/free_energy.dat
Transfer the image to your local machine to visualise it.
Getting help¶
See Getting help