Math Kernel Library (MKL)¶
Supported versions¶
To check which Intel® oneAPI Math Kernel Library (MKL) versions and build types are currently supported on Discoverer, execute on the login node:
module load intel
module avail mkl/
Warning
Many of our users experience problems when running parallel applications linked against MKL. Such problems occur whenever MPI-based parallel numerical subroutines provided by MKL are invoked within external applications running on the compute nodes of Discoverer HPC. This is most likely caused by running the Intel MPI library on AMD EPYC 7H12 CPUs. Our recommendation is to avoid the use of Intel MPI along with MKL on Discoverer until we find a solution to that problem. The simplest way to follow that recommendation is to avoid passing the -mkl=parallel and -qmkl=parallel flags to the Intel compilers during compilation. Replacing them with -mkl=sequential and -qmkl=sequential may provide a workaround, if the sequential MKL subroutines can substitute for the MPI-based ones. In case your code mandates the adoption of the MPI-based FFTW3 and ScaLAPACK libraries, see FFTW3 and ScaLAPACK.
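For illustration only, a compile line that follows this recommendation could look as shown below (solver.c and the optimization level are placeholders; adapt the rest of the options to your code):
icx -O2 -qmkl=sequential solver.c -o solver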
Loading¶
To obtain access to the latest MKL version, load the following environment modules in your Slurm batch script:
module load intel
module load mkl/latest
Otherwise, replace latest with the desired version number.
Check the MKL documentation for the available MKL threading controls. Whenever necessary, add the corresponding MKL-related environment variables to your Slurm batch scripts (see the sketch below).
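As a minimal sketch, a Slurm batch script that loads MKL and limits its threading to the allocated cores could look like the fragment below (the job name, partition, core count, and executable are placeholders, not recommendations for Discoverer):
#!/bin/bash
#SBATCH --job-name=mkl_job              # placeholder job name
#SBATCH --partition=cn                  # placeholder partition; use the one assigned to your project
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8               # cores available for MKL threading

module load intel
module load mkl/latest

export MKL_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # keep the MKL thread count within the allocation

./my_program                            # placeholder executable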
Using the Link Line Advisor¶
Important
MKL’s primary goal is to accelerate code execution on genuine Intel processors, which includes the use of AVX-512 SIMD instructions. Unfortunately, AMD EPYC 7H12 processors do not support AVX-512; the widest SIMD instructions they support are the 256-bit AVX2 ones.
Before compiling and/or linking your code against MKL, visit the Intel® oneAPI Math Kernel Library Link Line Advisor and obtain from there the appropriate set of compiler and linker flags.
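For orientation only, the advisor typically produces something similar to the line below for a C program linked dynamically against the sequential LP64 interface of MKL on Linux (my_code.c is a placeholder; MKLROOT is set by loading the mkl module, and the exact library list should always be taken from the advisor itself):
icx -O2 -I${MKLROOT}/include my_code.c -o my_code \
    -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl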
MKL and Conda¶
Users may bring separate installations of Intel® oneAPI Math Kernel Library into their Personal scratch and storage folder (/discofs/username), mainly by using Conda, but those kinds of installations will not be supported by the Discoverer HPC team!
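For example, such an unsupported private installation might be created as follows (my-mkl-env is a hypothetical environment name):
conda create -n my-mkl-env mkl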
Getting help¶
See Getting help