Reputation: 175
I don't understand why Intel MPI uses DAPL if native ibverbs are faster than DAPL; Open MPI uses native ibverbs. However, in this benchmark Intel MPI achieves better performance.
http://www.hpcadvisorycouncil.com/pdf/AMBER_Analysis_and_Profiling_Intel_E5_2680.pdf
Upvotes: 1
Views: 3694
Reputation: 94245
Intel MPI uses several interfaces to interact with the hardware, and DAPL is not the default in all cases. Open MPI also selects an interface based on the current hardware, and it will not always be ibverbs: there is a shared-memory API for intra-node communication and TCP for Ethernet-only hosts.
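As an illustration of how Open MPI's selection can be overridden, the transport can be pinned through MCA parameters. This is only a sketch: the BTL component names (sm/vader, openib) depend on the Open MPI version, and ./app and the process count are placeholders:

# Restrict Open MPI to loopback, shared memory and TCP, bypassing InfiniBand verbs
mpirun --mca btl self,sm,tcp -np 16 ./app
# Or explicitly allow the InfiniBand verbs BTL for inter-node traffic
mpirun --mca btl self,sm,openib -np 16 ./app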
The list of supported fabrics for Intel MPI (Linux):
https://software.intel.com/en-us/get-started-with-mpi-for-linux
Getting Started with Intel® MPI Library for Linux* OS. Last updated on August 24, 2015
Support for any combination of the following interconnection fabrics:
- Shared memory
- Network fabrics with tag matching capabilities through Tag Matching Interface (TMI), such as Intel® True Scale Fabric, Infiniband*, Myrinet* and other interconnects
- Native InfiniBand* interface through OFED* verbs provided by Open Fabrics Alliance* (OFA*)
- OpenFabrics Interface* (OFI*)
- RDMA-capable network fabrics through DAPL*, such as InfiniBand* and Myrinet*
- Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet*, and other interconnects
The fabric interface can be selected with the I_MPI_FABRICS environment variable: https://software.intel.com/en-us/node/535584
Selecting Fabrics. Last updated on February 22, 2017
Intel® MPI Library enables you to select a communication fabric at runtime without having to recompile your application. By default, it automatically selects the most appropriate fabric based on your software and hardware configuration. This means that in most cases you do not have to bother about manually selecting a fabric.
However, in certain situations specifying a particular communication fabric can boost performance of your application. You can specify fabrics for communications within the node and between the nodes (intra-node and inter-node communications, respectively). The following fabrics are available:
Fabric - Network hardware and software used:
- shm - Shared memory (for intra-node communication only).
- dapl - Direct Access Programming Library* (DAPL)-capable network fabrics, such as InfiniBand* and iWarp* (through DAPL).
- tcp - TCP/IP-capable network fabrics, such as Ethernet and InfiniBand* (through IPoIB*).
- tmi - Tag Matching Interface (TMI)-capable network fabrics, such as Intel® True Scale Fabric, Intel® Omni-Path Architecture and Myrinet* (through TMI).
- ofa - OpenFabrics Alliance* (OFA)-capable network fabrics, such as InfiniBand* (through OFED* verbs).
- ofi - OpenFabrics Interfaces* (OFI)-capable network fabrics, such as Intel® True Scale Fabric, Intel® Omni-Path Architecture, InfiniBand* and Ethernet (through OFI API).
For inter-node communication, it uses the first available fabric from the default fabric list. This list is defined automatically for each hardware and software configuration (see I_MPI_FABRICS_LIST for details).
For most configurations, this list is as follows:
dapl,ofa,tcp,tmi,ofi
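As a usage sketch (the binary name ./app and the process counts are placeholders, and the exact fabric names available depend on your Intel MPI version and installed drivers):

# Use shared memory within a node and OFA (OFED verbs) between nodes
export I_MPI_FABRICS=shm:ofa
mpirun -n 32 -ppn 16 ./app

# Alternatively, reorder the automatic selection list so ofa is tried before dapl
export I_MPI_FABRICS_LIST=ofa,dapl,tcp,tmi,ofi
mpirun -n 32 -ppn 16 ./app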
Upvotes: 1