Lua

Versions available

Supported versions

Note

The versions of Lua installed in the software repository are built and supported by the Discoverer HPC team. They are faster than the Lua packages available in Linux distributions.

To check which Lua versions are currently supported on Discoverer, execute on the login node:

module avail lua

The output lists the Lua modules currently installed in the software repository.

The recipes used for compiling the code are publicly available at:

https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/lua

User-supported versions

Users are welcome to bring in or compile their own builds of Lua, but those builds will not be supported by the Discoverer HPC team.
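
A minimal sketch of such a user-local build follows. The version number, download URL, and install prefix below are illustrative only; adjust them to your needs:

wget https://www.lua.org/ftp/lua-5.4.6.tar.gz
tar xf lua-5.4.6.tar.gz
cd lua-5.4.6
make linux                          # platform target from the Lua source Makefile
make install INSTALL_TOP=$HOME/lua  # install into a user-writable prefix

The resulting interpreter is then available as $HOME/lua/bin/lua.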

Loading Lua

To load Lua in your Slurm batch scripts, use the template:

#!/bin/bash
#
#SBATCH --partition=cn         # Name of the partition of nodes (ask the support team if unsure)
#SBATCH --job-name=lua
#SBATCH --time=00:01:00        # The job should complete in about 1 min

#SBATCH --nodes           1    # One node will be used
#SBATCH --ntasks-per-node 1    # Number of processes per node
#SBATCH --ntasks-per-core 1    # Run only one process per CPU core

#SBATCH -o slurm.%j.out        # STDOUT
#SBATCH -e slurm.%j.err        # STDERR

ulimit -Hs unlimited           # Remove the hard limit on the stack size
ulimit -Ss unlimited           # Remove the soft limit on the stack size

module purge
module load lua/latest-intel

lua script.lua
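
The last line of the template runs a Lua program named script.lua from the submission directory. The file name and its contents here are illustrative only; a minimal example could be:

-- script.lua: minimal test program for the batch job above
-- _VERSION is a standard Lua global holding the interpreter version
print("Running " .. _VERSION)
print("Hello from Discoverer!")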

Specify the parameters and resources required for successfully running and completing the job:

  • Slurm partition of compute nodes, based on your project resource reservation (--partition)
  • job name, under which the job will be seen in the queue (--job-name)
  • wall time for running the job (--time)
  • number of occupied compute nodes (--nodes)
  • number of MPI processes per node (--ntasks-per-node)
  • version of Lua to run after module load (see Supported versions and the example after this list)
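
To pin a specific build instead of the latest one, load the corresponding module by its full name. The version string below is hypothetical; use one reported by module avail lua:

module purge
module load lua/5.4-intel      # hypothetical module name; check "module avail lua"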

Save the complete Slurm job description as a file inside the folder with the input configuration, for example /discofs/$USER/run_lua/run.batch, and submit it to the queue afterwards:

cd /discofs/$USER/run_lua/
sbatch run.batch
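
After submission, the job can be monitored with the standard Slurm tools, for example:

squeue -u $USER                # list your pending and running jobs
scontrol show job <jobid>      # detailed information about one job (replace <jobid>)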

Upon successful submission, the standard output will be directed by Slurm into the file /discofs/$USER/run_lua/slurm.%j.out (where %j stands for the Slurm job ID), while the standard error output will be stored in /discofs/$USER/run_lua/slurm.%j.err.
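
Once the job completes, those files can be inspected directly. Assuming, purely as an illustration, that the job ID was 1234:

cat /discofs/$USER/run_lua/slurm.1234.out
cat /discofs/$USER/run_lua/slurm.1234.err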

Getting help

See Getting help