ABINIT¶
Versions available¶
Supported versions¶
Note
The versions of ABINIT installed in the public software repository are compiled and supported by the Discoverer HPC support team.
To display the list of supported versions, execute on the login node:
module avail abinit/
Those versions are compiled against the 4.1.X branch of Open MPI, as well as against a bundle of external libraries (HDF5, NetCDF C, NetCDF Fortran, LibXC, OpenBLAS, BLAS, LAPACK, FFTW3). The recipe employed for compiling ABINIT is publicly available at:
https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/abinit/
Important
The FFTW3 and linear algebra libraries included in the bundle are not part of Intel oneMKL. FFTW3 is the one compiled following the procedure published previously, while the linear algebra libraries are compiled separately to support the ABINIT build. LibXC is built with support for the computation of third derivatives.
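To check what a given ABINIT module provides and which libraries the executable is linked against, you can inspect the module and the binary on the login node (the module name below is only an example; use one of the names reported by module avail abinit/):

module show abinit/9/latest-intel-openmpi
module load abinit/9/latest-intel-openmpi
ldd $(which abinit) | grep -Ei 'hdf5|netcdf|xc|fftw|openblas|lapack|mpi'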
User-supported installations¶
Important
Users are welcome to bring, compile, and install within their scratch folders any versions of ABINIT, but those installations will not be supported by the Discoverer HPC team.
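If you decide to build your own copy, the following is only a minimal sketch of a typical autotools build inside your scratch folder (the version number, modules, and configure options are placeholders; consult the official ABINIT installation notes and the recipe linked above for a complete procedure):

cd /discofs/$USER
tar xzf abinit-<version>.tar.gz
cd abinit-<version>
module load openmpi   # placeholder: load the compiler/MPI stack of your choice
./configure CC=mpicc FC=mpifort --prefix=/discofs/$USER/abinit-<version>
make -j 8
make install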
Running simulations¶
Running simulations means invoking ABINIT executables to process the instructions and data supplied as input files.
Warning
You MUST NOT execute simulations directly on the login node (login.discoverer.bg). You have to run your simulations as Slurm jobs only.
Warning
Write your results only inside your Personal scratch and storage folder (/discofs/username) and DO NOT, under any circumstances, use your Home folder (/home/username) for that purpose!
To run ABINIT you may adopt the following Slurm batch template (the input file is input.abi):
#!/bin/bash
#SBATCH --job-name=abinit_test
#SBATCH --partition cn
#SBATCH --ntasks=400
#SBATCH --ntasks-per-core=1
#SBATCH --time=1-0:0:0
#SBATCH -o slurm.%j.out # STDOUT
#SBATCH -e slurm.%j.err # STDERR
export UCX_NET_DEVICES=mlx5_0:1
ulimit -s unlimited
module purge
module load abinit/9/latest-intel-openmpi
cd $SLURM_SUBMIT_DIR
mpirun abinit input.abi
Specify the parameters and resources required for successfully running and completing the job:
- the Slurm partition of compute nodes, based on your project resource reservation (--partition)
- the job name, under which the job will be seen in the queue (--job-name)
- the wall time for running the job (--time)
- the number of tasks to run (--ntasks), see Notes on the parallelization
- the number of MPI processes per core (--ntasks-per-core, keep that 1)
- the version of ABINIT to run, specified after module load
- do not change the export declarations unless you are told to do so
Save the complete Slurm job description as a file, for example /discofs/$USER/run_abinit/run.batch, and submit it to the queue:
cd /discofs/$USER/run_abinit
sbatch run.batch
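If needed, some of the #SBATCH directives can be overridden at submission time without editing the script; options passed on the sbatch command line take precedence over the directives inside the batch file (the values below are placeholders):

sbatch --job-name=abinit_prod --time=12:00:00 --ntasks=200 run.batch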
Upon successful submission, the standard output will be directed into the file /discofs/$USER/run_abinit/slurm.%j.out (where %j stands for the Slurm job ID). The standard error messages will be stored inside /discofs/$USER/run_abinit/slurm.%j.err.
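To follow the progress of the job, you can query the queue and inspect the standard output file while the job is running (replace 123456 with the actual job ID reported by sbatch):

squeue -u $USER
tail -f /discofs/$USER/run_abinit/slurm.123456.out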
Notes on the parallelization¶
ABINIT expects the number of requested parallel MPI tasks (set through --ntasks) to match a value determined by specific input parameters, which means --ntasks cannot take an arbitrary value. Also, it is better not to allocate MPI tasks based on a number of compute nodes (avoid specifying --nodes in combination with --ntasks) and to rely on --ntasks only. In other words, let Slurm decide how to distribute the --ntasks parallel MPI tasks over the nodes available in the partition.
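As a hedged illustration only (the input variables below are standard ABINIT variables, but the values are placeholders, not a recommendation): for a ground-state run with paral_kgb 1, the number of MPI tasks must equal the product of the process-grid variables declared in the input file, so --ntasks has to be chosen to match them. Alternatively, setting autoparal 1 lets ABINIT propose suitable task distributions.

# If input.abi contains, for example:
#   paral_kgb 1
#   npkpt 10  npband 10  npfft 4
# then the job must request 10 * 10 * 4 = 400 MPI tasks:
sbatch --ntasks=400 run.batch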
Getting help¶
See Getting help