DNN (Deep Neural Network)


Only CPU DNN engines are supported on Discoverer HPC nodes. GPU DNN engines are not yet supported!

Intel® oneDNN

The Intel® oneAPI Deep Neural Network Library (oneDNN) is included in the Intel® oneAPI installation supported on Discoverer HPC.

oneDNN is primarily optimized for Intel CPU families, but it also performs well on the AMD EPYC 7H12 64-core processors installed in the Discoverer HPC compute nodes, unless AVX-512 support is essential for running the code (the EPYC 7H12 does not implement AVX-512).

Loading the environment for supporting the use of oneAPI is a two-stage process. First, the basic environment modules have to be loaded:

module load intel
module load dnnl/latest

Then load the module providing access to the threading library adopted by the code:

  • GNU OpenMP library
module load dnnl-cpu-gomp/latest
  • Intel OpenMP library
module load dnnl-cpu-iomp/latest
  • Intel oneTBB (Threading Building Blocks) library
module load dnnl-cpu-tbb/latest
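The two-stage loading above can be sketched as a Slurm batch script. This is a minimal illustration only: the job name, partition name, CPU counts, and the executable name are all assumptions, not values prescribed by Discoverer HPC.

```shell
#!/bin/bash
#SBATCH --job-name=onednn-test   # hypothetical job name
#SBATCH --partition=cn           # partition name is an assumption; check sinfo
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16

# Stage 1: load the basic oneDNN environment modules
module load intel
module load dnnl/latest

# Stage 2: load the variant matching the threading library the code uses
# (here the GNU OpenMP build; use dnnl-cpu-iomp or dnnl-cpu-tbb instead
# if the code was built against Intel OpenMP or oneTBB)
module load dnnl-cpu-gomp/latest

# Match the OpenMP thread count to the CPUs allocated by Slurm
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

./my_onednn_app                  # hypothetical executable
```

Load exactly one of the three threading variants per job; they provide alternative runtimes for the same library.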


AMD ZenDNN

AMD ZenDNN is built (against AMD AOCC) and supported by the Discoverer HPC team. It is available in the software repository.

To gain access to the installation, load the corresponding environment module:

module load amd/zendnn/3/latest
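After loading the module, it can be useful to inspect what it provides before compiling against it. The sketch below assumes the module sets the usual include/library paths; the library name libamdZenDNN and the use of the AOCC clang++ driver are assumptions to verify against the module output, not documented facts from this page.

```shell
# Load ZenDNN and inspect the paths the module exports (site-specific)
module load amd/zendnn/3/latest
module show amd/zendnn/3/latest

# Hypothetical compile line: link against ZenDNN using the AOCC compiler
# it was built with; -lamdZenDNN is an assumed library name
clang++ -O2 my_app.cpp -lamdZenDNN -fopenmp -o my_app
```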


MagmaDNN

MagmaDNN is built (against a variety of compilers and BLAS/LAPACK libraries) and supported by the Discoverer HPC team. It is available in the software repository.

To gain access to the installation that corresponds to the employed compiler, load the specific environment module:

Intel® oneAPI

module load magmadnn/1/latest-intel

NVIDIA HPC SDK

module load magmadnn/1/latest-nvidia

AMD AOCC

module load magmadnn/1/latest-aocc

GCC

module load magmadnn/1/latest-gcc
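Since each variant corresponds to a different compiler toolchain, only one should be loaded at a time. A minimal sketch of selecting and verifying a build (the choice of the GCC variant here is just an example):

```shell
# Start from a clean environment so builds from different toolchains
# do not shadow each other's libraries
module purge

# Load exactly one MagmaDNN build, matching the compiler used for your code
module load magmadnn/1/latest-gcc   # or latest-intel / latest-nvidia / latest-aocc

# Verify that the expected module (and its dependencies) are loaded
module list
```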

User-supported DNN

Users are welcome to install (using Conda, for instance) other DNN types or builds, but they will not be supported by the Discoverer HPC team.
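A minimal sketch of such a user-managed installation via Conda. The environment name is arbitrary, and the conda-forge package name "onednn" is an assumption to verify; any problems with such installations fall outside Discoverer HPC support.

```shell
# Create an isolated environment and install a DNN library from conda-forge
# ("onednn" package name is an assumption; check `conda search` first)
conda create -n mydnn -c conda-forge python=3.11 onednn
conda activate mydnn
```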

Getting help

See Getting help