LLVM Compiler Infrastructure¶
Warning
The package provided under the name “llvm” in the software repository of Discoverer HPC is built from the unmodified source code of the LLVM Compiler Infrastructure project. If you need a specific LLVM build provided and supported by other compiler vendors, check Intel oneAPI, AMD Optimized Compiler Collection (AOCC), and NVIDIA HPC SDK (formerly PGI Compilers).
Supported versions¶
To check which versions of LLVM Compiler Infrastructure are currently supported on Discoverer, execute on the login node:
module avail llvm
The LLVM build available in the software repository is supported by the Discoverer HPC team. Below is the complete list of LLVM projects and runtimes included:
- Projects
- bolt
- clang
- clang-tools-extra
- flang
- libclc
- lld
- lldb
- mlir
- openmp
- polly
- pstl
- Runtimes
- libcxx
- libcxxabi
- libunwind
- compiler-rt
- libc
The LLVM compilers are equipped with internal code generators for producing code optimized for the major NVIDIA, Intel, and AMD GPU accelerators with HPC compute capabilities. While such GPU accelerators are not yet available on Discoverer’s compute nodes, a new GPU partition is on the way.
Important
Our LLVM builds are self-sufficient at compile time. For each new version of the vanilla LLVM code, we use GCC for an initial compilation of that code, and then use the compilers and libraries produced by that compilation to compile the code again. It is important to note that the LLVM compilers, tools, and libraries we host depend only on the system libraries that come with the Linux distribution (including libstdc++). The only exception is the loading of an alternative version of binutils to support the LLVMgold plugin, but that binutils package also falls back to the system libraries.
The implemented compilation recipe is publicly available at https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/llvm
Loading¶
To access the latest LLVM compilers, load the environment module llvm/latest:
module load llvm/latest
or select a particular version.
LLVM compilers¶
The LLVM build includes the following C, C++, and Fortran compiler executables:
clang
clang++
flang
Warning
Contrary to the approach adopted in the AMD AOCC package, we do not symlink flang to clang. In our installation, flang is a symlink to flang-new.
Compiler optimization flags for AMD Zen2 CPU¶
Note
The compute nodes of Discoverer HPC are equipped with AMD EPYC 7H12 64-Core processors, which implement the AMD Zen2 CPU microarchitecture. There, AVX2 (256-bit) is the highest supported SIMD instruction set (no AVX-512 is available).
Sticking to the following compiler flags can be profitable for compile-time optimization on AMD Zen2:
-march=znver2 -mtune=native
and it is up to you to add -mfma to that set.
For example:
clang -march=znver2 -mtune=native ...
clang++ -march=znver2 -mtune=native ...
flang -march=znver2 -mtune=native ...
More on the supported compiler flags and types of optimizations: LLVM User Guides / Optimizations
Interaction with CMake¶
It is recommended to specify explicitly the compiler executables:
-DCMAKE_C_COMPILER=clang
-DCMAKE_CXX_COMPILER=clang++
-DCMAKE_Fortran_COMPILER=flang
The corresponding optimization compiler flags can be passed to cmake in the usual way:
-DCMAKE_C_FLAGS="-march=znver2 -mtune=native"
-DCMAKE_CXX_FLAGS="-march=znver2 -mtune=native"
-DCMAKE_Fortran_FLAGS="-march=znver2 -mtune=native"
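Putting the definitions above together, a complete configure invocation might look like the following sketch (the source directory path is illustrative):

```shell
cmake -DCMAKE_C_COMPILER=clang \
      -DCMAKE_CXX_COMPILER=clang++ \
      -DCMAKE_Fortran_COMPILER=flang \
      -DCMAKE_C_FLAGS="-march=znver2 -mtune=native" \
      -DCMAKE_CXX_FLAGS="-march=znver2 -mtune=native" \
      -DCMAKE_Fortran_FLAGS="-march=znver2 -mtune=native" \
      /path/to/source
```

Specifying the compilers on the cmake command line, rather than via CC/CXX/FC environment variables, keeps the choice recorded in the CMake cache for subsequent reconfigurations.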
Notes on using llvm-bolt¶
The following presentation illustrates the benefits of using LLVM Bolt:
https://llvm.org/devmtg/2024-03/slides/practical-use-of-bolt.pdf
Notes on linking¶
We employ ld.gold as the default linker (through binutils). If you want to rely on a different linker, you have to specify its executable explicitly; with clang and clang++ this can be done through the -fuse-ld= option. See also:
https://cmake.org/cmake/help/latest/variable/CMAKE_LINKER_TYPE.html
https://www.gnu.org/software/automake/manual/html_node/How-the-Linker-is-Chosen.html
Getting help¶
See Getting help