Where and how to compile code¶
About¶
This document describes how and where to compile code on the Discoverer CPU cluster. Discoverer provides access to several compiler suites that support C, C++, and Fortran programming languages, each with different features, optimizations, and use cases.
Understanding which compiler to use and how to properly configure your build environment is essential for successful code compilation and optimal performance on Discoverer CPU cluster.
Where to compile code¶
Warning
The programming code (single source files or code projects with hundreds of files) must be compiled only on the compute nodes. The login nodes (login.discoverer.bg for the CPU cluster, and login-plus.discoverer.bg for the GPU cluster) are not suitable for compiling code because they are shared systems, and the heavy I/O load created by compilation tasks may degrade performance to the point where other users can no longer work effectively. Also note that the login nodes may have a different CPU architecture than the compute nodes, which may cause compatibility issues when code is compiled on the login nodes with specific optimizations or CPU features enabled. That is especially true for the GPU cluster (Discoverer+), where the login node has no GPUs installed and its CPU model differs from that of the compute nodes.
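A quick way to see that difference is to compare the CPU model reported on the login node with the one reported inside a job on a compute node (a minimal check; the gcc query only works where GCC is available):

# Run on the login node, then again inside a compute-node job (see the sections below on how to start one):
lscpu | grep 'Model name'

# With GCC, inspect which -march value 'native' would resolve to on the current host:
gcc -march=native -Q --help=target | grep -- '-march='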
On the compute nodes, compilation tasks should be performed within SLURM-backed interactive Bash sessions or through SLURM batch job scripts.
Note
The interactive sessions are suitable for quick tests and debugging, while the SLURM jobs are more suitable for longer compilation tasks or when compiling large projects and performing optimisations (e.g. link-time optimization, profile-guided optimization, etc.).
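For reference, here is a minimal sketch of what such optimisation workflows look like with Clang inside a job (assuming the llvm module is loaded and program.c is your source; these are standard Clang/LLVM flags, not Discoverer-specific options):

# Link-time optimization (LTO)
clang -O3 -flto -o program program.c

# Profile-guided optimization (PGO): instrument, run a representative workload, then rebuild
clang -O3 -fprofile-generate -o program program.c
./program                                        # run with representative input
llvm-profdata merge -output=program.profdata default*.profraw
clang -O3 -fprofile-use=program.profdata -o program program.c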
Compilers available on Discoverer clusters¶
The compiler collections available on Discoverer CPU and GPU clusters are listed in Compilers. Check if your compiler of choice is in that list. If it is not, you can request a new compiler collection to be installed on the clusters by contacting the Discoverer HPC team (see Getting help).
Note that in most cases the baseline GCC compiler collection is used. It is installed directly on the compute nodes and there is no need to load any environment modules to access it.
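For example, a simple C program can be compiled with the baseline GCC right away, without loading any modules (a minimal sketch, assuming a source file hello.c):

gcc --version
gcc -O2 -o hello hello.c
./hello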
Running compilation tasks using SLURM job scripts¶
Important
This is the recommended way to run compilation tasks on the Discoverer clusters.
Discoverer CPU cluster¶
Create a job script that runs the compilation on a compute node of the Discoverer CPU cluster by specifying the cn partition and the number of CPUs and the amount of memory to use (the example below requests 8 CPUs and 32 GB of memory; adjust these values to your needs):
#!/bin/bash
#
#SBATCH --partition=cn         # Partition
#SBATCH --job-name=compile     # Job name
#SBATCH --time=02:00:00        # WallTime - set it accordingly
#
#SBATCH --account=<specify_your_slurm_account_name_here>
#SBATCH --qos=<specify_the_qos_name_here_if_it_is_not_the_default_one_for_the_account>
#
#SBATCH --nodes=1              # Single node
#SBATCH --ntasks=1             # Single task
#SBATCH --cpus-per-task=8      # Number of parallel compilation threads
#SBATCH --mem=32G              # Memory for compilation
#
#SBATCH -o compile.%j.out      # STDOUT
#SBATCH -e compile.%j.err      # STDERR

# Load required modules
module purge || exit
module load llvm/21/21.1.3 || exit
module load cmake/3/3.31.6 || exit
# Add other dependencies as needed

# Set compilation directory
cd ${SLURM_SUBMIT_DIR}

# Set number of parallel jobs for make/cmake
export MAKEFLAGS="-j${SLURM_CPUS_PER_TASK}"
export CMAKE_BUILD_PARALLEL_LEVEL=${SLURM_CPUS_PER_TASK}

# Configure (if using CMake)
tar xvf package-1.1.1.tar.gz || exit
cd package-1.1.1 || exit
cmake -B build-llvm -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release || exit
cmake --build build-llvm -j${SLURM_CPUS_PER_TASK} || exit
ctest --test-dir build-llvm --output-on-failure || exit
cmake --install build-llvm || exit
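The CMake commands above are only an example. If your project uses Autotools instead, the build portion of the same job script could look like this (a sketch, assuming a standard configure script and an installation prefix you are allowed to write to):

tar xvf package-1.1.1.tar.gz || exit
cd package-1.1.1 || exit
./configure --prefix=/valhalla/projects/<your_project_name>/install || exit
make -j${SLURM_CPUS_PER_TASK} || exit
make check || exit
make install || exit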
Save the script into a file, e.g. compile.sh, and submit that script on the login node (login.discoverer.bg) using the following command:
sbatch compile.sh
Then follow the compilation progress by monitoring the output of the job:
tail -f compile.XXXXXX.out
where XXXXXX is the job ID of the compilation job. You may also check the standard error output of the job:
tail -f compile.XXXXXX.err
where XXXXXX is the job ID of the compilation job.
Once the compilation is finished, you can check the output of the job:
cat compile.XXXXXX.out
where XXXXXX is the job ID of the compilation job.
Discoverer GPU cluster¶
CPU-only compilation task¶
In order to do that, you must use a QoS that allows CPU-only jobs, because the default QoS only permits jobs that use GPUs. The example below uses the QoS 2cpu-single-host, which allows the execution of CPU-only jobs on a single host. If you need more CPUs for parallel compilation, contact our team to request a different QoS (see Getting help).
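If you are not sure which QoS values are assigned to your SLURM account, you can list them on the login node (a sketch based on the standard SLURM accounting tools; the output depends on your account settings):

sacctmgr show associations user=$USER format=Account,User,Partition,QOS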
Below is an example of a job script that runs the compilation on a compute node of the Discoverer GPU cluster by specifying the common partition and the number of CPUs and the amount of memory to use (the example requests 2 CPUs and 32 GB of memory; adjust these values to your needs):
#!/bin/bash
#
#SBATCH --partition=common # Partition
#SBATCH --job-name=compile # Job name
#SBATCH --time=02:00:00 # WallTime - set it accordingly
#
#SBATCH --account=<specify_your_slurm_account_name_here>
#SBATCH --qos=2cpu-single-host # QoS that allows the execution of CPU-only jobs
#
#SBATCH --nodes=1 # Single node
#SBATCH --ntasks=2 # Two tasks (2 CPUs in total, as permitted by this QoS)
#SBATCH --cpus-per-task=1 # One CPU per task
#SBATCH --mem=32G # Memory for compilation
#
#SBATCH -o compile.%j.out # STDOUT
#SBATCH -e compile.%j.err # STDERR
# Load required modules
module purge || exit
module load llvm/21/21.1.3 || exit
module load cmake/3/3.31.6 || exit
# Add other dependencies as needed
# Set compilation directory
cd ${SLURM_SUBMIT_DIR}
# Set number of parallel jobs for make/cmake
export MAKEFLAGS="-j2"
export CMAKE_BUILD_PARALLEL_LEVEL=2
# Configure (if using CMake)
tar xvf package-1.1.1.tar.gz || exit
cd package-1.1.1 || exit
cmake -B build-llvm -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release || exit
cmake --build build-llvm -j2 || exit
ctest --test-dir build-llvm --output-on-failure || exit
cmake --install build-llvm || exit
Save the script into a file, e.g. compile.sh, and submit that script on the login node (login-plus.discoverer.bg) using the following command:
sbatch compile.sh
Then follow the compilation progress by monitoring the output of the job:
tail -f compile.XXXXXX.out
where XXXXXX is the job ID of the compilation job. You may also check the standard error output of the job:
tail -f compile.XXXXXX.err
where XXXXXX is the job ID of the compilation job.
Once the compilation is finished, you can check the output of the job:
cat compile.XXXXXX.out
where XXXXXX is the job ID of the compilation job.
GPU-demanding compilation task¶
Below is an example of a job script that runs the compilation on a compute node of the Discoverer GPU cluster by specifying the common-gpu partition and the number of GPUs and the amount of memory to use (the example requests 1 GPU and 16 GB of memory; adjust these values to your needs). Be sure that you really need access to the GPUs during the build. If the compilation does not use the GPUs (for example, for running tests or for compute capability discovery), use the CPU-only compilation task instead (see above).
#!/bin/bash
#
#SBATCH --partition=common-gpu # Partition
#SBATCH --job-name=compile     # Job name
#SBATCH --time=02:00:00        # WallTime - set it accordingly
#
#SBATCH --account=<specify_your_slurm_account_name_here>
#SBATCH --qos=<specify_the_qos_name_here_if_it_is_not_the_default_one_for_the_account>
#
#SBATCH --nodes=1              # Single node
#SBATCH --ntasks=1             # Single task
#SBATCH --cpus-per-task=8      # Number of parallel compilation threads
#SBATCH --mem=16G              # Memory for compilation
#SBATCH --gres=gpu:1           # Request 1 GPU
#
#SBATCH -o compile.%j.out      # STDOUT
#SBATCH -e compile.%j.err      # STDERR

# Load required modules
module purge || exit
module load llvm/21/21.1.3 || exit
module load cmake/3/3.31.6 || exit
module load cuda/12/12.8.0 || exit
# Add other dependencies as needed

# Set compilation directory
cd ${SLURM_SUBMIT_DIR}

# Set number of parallel jobs for make/cmake
export MAKEFLAGS="-j8"
export CMAKE_BUILD_PARALLEL_LEVEL=8

# Configure (if using CMake)
tar xvf package-1.1.1.tar.gz || exit
cd package-1.1.1 || exit
cmake -B build-llvm -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release || exit
cmake --build build-llvm -j8 || exit
ctest --test-dir build-llvm --output-on-failure || exit
cmake --install build-llvm || exit
Save the script into a file, e.g. compile.sh, and submit that script on the login node (login-plus.discoverer.bg) using the following command:
sbatch compile.sh
Then follow the compilation progress by monitoring the output of the job:
tail -f compile.XXXXXX.out
where XXXXXX is the job ID of the compilation job. You may also check the standard error output of the job:
tail -f compile.XXXXXX.err
where XXXXXX is the job ID of the compilation job.
Once the compilation is finished, you can check the output of the job:
cat compile.XXXXXX.out
where XXXXXX is the job ID of the compilation job.
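If the only reason for requesting a GPU at build time is automatic compute capability detection, you can usually avoid it by pinning the target architectures explicitly and running the build as a CPU-only job (see above). A minimal CMake sketch (the value 80 is only an example; set it to match the GPUs you actually target):

cmake -B build-llvm -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
      -DCMAKE_CUDA_COMPILER=nvcc -DCMAKE_CUDA_ARCHITECTURES=80 \
      -DCMAKE_BUILD_TYPE=Release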
Running compilation tasks using SLURM interactive Bash sessions¶
Note
The interactive sessions are suitable for quick tests and debugging, while the SLURM jobs are more suitable for longer compilation tasks or when compiling large projects and performing optimisations (e.g. link-time optimization, profile-guided optimization, etc.).
Discoverer CPU cluster¶
On the login node of the Discoverer CPU cluster (login.discoverer.bg), you can request the execution of an interactive Bash session on a compute node using the following command:
srun --partition=cn --time=01:00:00 --nodes=1 \
     --account=<specify_your_slurm_account_name_here> \
     --qos=<specify_the_qos_name_here_if_it_is_not_the_default_one_for_the_account> \
     --ntasks=1 --cpus-per-task=8 --mem=16G --pty bash
Wait until the SLURM job starts and you are logged into the interactive session. You will see that the hostname shown in the Bash prompt (displayed between the square brackets, after the username) has changed to the name of the compute node you are logged into. For instance:
[username@cn0001 ~]$ hostname
cn0001
shows that the Bash session is running on the compute node cn0001.discoverer.bg.
From that moment on, until the interactive session is terminated, you can use the compute node as if you were logged in directly to it. You can run any commands needed to compile your code, as well as load the necessary environment modules:
module load llvm/21/21.1.3
clang++ -o program program.cpp
or
module load llvm/21/21.1.3
module load cmake/3/3.31.6
tar xvf package-1.1.1.tar.gz
cd package-1.1.1
cmake -B build-llvm -DCMAKE_C_COMPILER=clang \
      -DCMAKE_CXX_COMPILER=clang++ \
      -DCMAKE_Fortran_COMPILER=flang \
      -DCMAKE_INSTALL_PREFIX=/valhalla/projects/<your_project_name>/install
cmake --build build-llvm -j8
ctest --test-dir build-llvm --output-on-failure
cmake --install build-llvm
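After installing into a custom prefix like the one above, you usually need to add it to your search paths before using the installed software (a sketch; depending on the package the library directory may be lib64 instead of lib):

export PATH=/valhalla/projects/<your_project_name>/install/bin:$PATH
export LD_LIBRARY_PATH=/valhalla/projects/<your_project_name>/install/lib:$LD_LIBRARY_PATH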
Once you are done with the compilation, you can terminate the interactive session before its wall time expires by pressing Ctrl+D, or by typing exit or logout. That will return you to the Bash session already running on the login node.
Discoverer GPU cluster¶
Requesting an interactive session on the login node of the Discoverer GPU cluster (login-plus.discoverer.bg) is similar to doing so on the CPU cluster, but here you may also request GPU resources through a GRES allocation.
CPU-only compilation task¶
srun --partition=common --time=01:00:00 --nodes=1 \
     --account=<specify_your_slurm_account_name_here> \
     --qos=2cpu-single-host \
     --ntasks=1 --cpus-per-task=2 --mem=16G --pty bash
GPU-demanding compilation task¶
Be sure that you really need access to the GPUs. If you only need to compile code that does not use GPUs for testing or compute capability discovery, you can use the CPU-only compilation task (see above).
srun --partition=common-gpu --time=01:00:00 --nodes=1 \
     --account=<specify_your_slurm_account_name_here> \
     --qos=<specify_the_qos_name_here_if_it_is_not_the_default_one_for_the_account> \
     --ntasks=1 --cpus-per-task=8 --mem=16G --gres=gpu:1 --pty bash
Wait until the SLURM job starts and you are logged into the interactive session. You will see that the hostname shown in the Bash prompt (displayed between the square brackets, after the username) has changed to the name of the compute node you are logged into. For instance:
[username@dgx1 ~]$ hostname
dgx1
shows that the Bash session is running on the compute node dgx1.
From that moment on, until the interactive session is terminated, you can use the compute node as if you were logged in directly to it. You can run any commands needed to compile your code, as well as load the necessary environment modules:
module load llvm/21/21.1.3
module load cuda/12/12.8.0
nvcc -o program program.cu
or
module load llvm/21/21.1.3
module load cmake/3/3.31.6
module load cuda/12/12.8.0
tar xvf package-1.1.1.tar.gz
cd package-1.1.1
cmake -B build-llvm -DCMAKE_C_COMPILER=clang \
      -DCMAKE_CXX_COMPILER=clang++ \
      -DCMAKE_Fortran_COMPILER=flang \
      -DCMAKE_CUDA_COMPILER=nvcc \
      -DCMAKE_INSTALL_PREFIX=/valhalla/projects/<your_project_name>/install
cmake --build build-llvm -j8
ctest --test-dir build-llvm --output-on-failure
cmake --install build-llvm
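Since the interactive session has a GPU allocated to it, you can also verify which device you received, for example when deciding which architecture flags to pass to nvcc:

nvidia-smi -L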
Once you are done with the compilation, you can terminate the interactive session before its wall time expires by pressing Ctrl+D, or by typing exit or logout. That will return you to the Bash session already running on the login node.
Help¶
If you need help with the compilation process, you can contact the Discoverer HPC team (see Getting help).