oneTBB
======

Intel Threading Building Blocks - C++ Template Library for Parallel Programming

.. toctree::
   :maxdepth: 1
   :caption: Contents:

Overview
--------

`oneTBB <https://github.com/oneapi-src/oneTBB>`_ (formerly Intel Threading Building Blocks) is a C++ template library for parallel programming that provides high-level abstractions for task-based parallelism. It offers a rich set of parallel algorithms, concurrent data structures, and synchronization primitives that enable efficient utilization of multi-core processors.

Key features:

* Task-based parallelism with automatic load balancing
* Parallel algorithms: ``parallel_for``, ``parallel_reduce``, ``parallel_scan``, ``parallel_sort``, and more
* Concurrent containers: thread-safe queues, hash maps, vectors, and other data structures
* Pipeline parallelism for efficient data-flow processing
* NUMA awareness: optional HWLOC support for NUMA-optimized scheduling (tbbbind)
* High-performance scalable memory allocator (tbbmalloc)

Available versions
------------------

To view available oneTBB versions:

.. code-block:: bash

   $ module avail onetbb

Build recipes and configuration details are maintained in our GitLab repository:

* `Build Recipes `_

Compiler support
----------------

.. warning::

   This oneTBB installation is compiled against **libc++** (LLVM's C++ standard
   library). Your code **must also use libc++** when linking against this
   installation. Mixing libc++ and libstdc++ in the same application can cause
   ABI incompatibilities and runtime errors.

.. note::

   We compile oneTBB from source to avoid the complexity of using Intel oneAPI
   installations, which provide oneTBB compiled against libstdc++. This ensures
   consistency and compatibility with our LLVM-based toolchain.

.. important::

   The **llvm-rt** module must be loaded in advance, as oneTBB is linked
   against libc++, which the llvm-rt module provides. If the **llvm** module is
   already loaded, there is no need to load llvm-rt separately, since the llvm
   module includes the runtime libraries.
Load either llvm or llvm-rt before loading the onetbb module.

Supported builds:

.. code-block:: bash

   $ module avail onetbb   # LLVM build (recommended, uses libc++)

Linking your code against oneTBB installation
---------------------------------------------

.. warning::

   **You must use libc++** when compiling your code, since this oneTBB
   installation is linked against libc++. Use the ``-stdlib=libc++`` flag with
   clang++.

When compiling code that uses oneTBB, load the appropriate modules and link with ``-ltbb`` (and optionally ``-ltbbmalloc`` for the scalable memory allocator):

.. code-block:: bash

   # Option 1: If using the llvm module (includes the runtime)
   $ module load llvm/
   $ module load onetbb/
   $ clang++ -std=c++17 -stdlib=libc++ your_code.cpp -ltbb -ltbbmalloc

   # Option 2: If only the runtime is needed (llvm module not loaded)
   $ module load llvm-rt/   # Provides the libc++ that oneTBB is linked against
   $ module load onetbb/
   $ clang++ -std=c++17 -stdlib=libc++ your_code.cpp -ltbb -ltbbmalloc

Example usage:

.. code-block:: cpp

   #include <tbb/parallel_for.h>
   #include <tbb/blocked_range.h>
   #include <vector>

   int main() {
       std::vector<size_t> data(1000000);

       tbb::parallel_for(
           tbb::blocked_range<size_t>(0, data.size()),
           [&](const tbb::blocked_range<size_t>& r) {
               for (size_t i = r.begin(); i < r.end(); ++i) {
                   data[i] = i * 2;
               }
           }
       );

       return 0;
   }

.. note::

   Remember to load either the **llvm** module (which includes the runtime) or
   the **llvm-rt** module (runtime only) before loading onetbb, as they provide
   the libc++ that oneTBB is linked against. Then compile with
   ``-stdlib=libc++`` to match the oneTBB library's C++ standard library.
   Failure to do so may result in ABI mismatches and runtime errors.

.. note::

   Both dynamic and static linking are supported; the actual linking method
   depends on your compilation configuration.

oneTBB vs OpenMP: When to Use Which?
------------------------------------

Use **OpenMP** when:

* You have simple loop parallelism (data parallelism)
* You want minimal code changes (just add pragmas)
* You're working with regular, predictable workloads
* You need quick parallelization of existing sequential code
* You're doing scientific computing with regular loops

Use **oneTBB** when:

* You need complex parallel algorithms (task graphs, pipelines)
* You require concurrent data structures (queues, hash maps)
* You have irregular or dynamic workloads
* You need nested parallelism that scales well
* You're building recursive parallel algorithms
* You want better load balancing for heterogeneous workloads

.. note::

   Many projects use both: OpenMP for simple data parallelism and oneTBB for
   complex parallel algorithms, task dependencies, and concurrent data
   structures.

Exception Handling with libc++ Build
------------------------------------

.. warning::

   When using this libc++ build, you **cannot catch oneTBB exceptions by their
   specific types** across library boundaries. Instead, catch by base types.

Example of correct exception handling:

.. code-block:: cpp

   try {
       // oneTBB code
   } catch (const std::runtime_error& e) {
       // Catches tbb::unsafe_wait, etc.
       // Handle runtime errors
   } catch (const std::bad_alloc& e) {
       // Catches tbb::bad_last_alloc
       // Handle allocation errors
   } catch (const std::exception& e) {
       // Catches other oneTBB exceptions
       // Handle other exceptions
   }

See `detailed information about this limitation and workarounds `_ for more details.

Getting Help
------------

* See :doc:`help`

Resources
---------

* Official documentation: https://oneapi-src.github.io/oneTBB/
* GitHub repository: https://github.com/oneapi-src/oneTBB
* API reference: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/index.html