MadMPI benchmark

MPI overlap benchmark


Download


» Latest release (2016-01-12)

» MadMPI benchmark source code is hosted as part of the PM2 project at GForge.


MadMPI benchmark contains a series of point-to-point benchmarks, running on two nodes. Collective operations are not benchmarked yet.
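The communication/computation overlap pattern that such point-to-point benchmarks measure can be sketched as follows. This is a minimal illustration, not the actual MadMPI benchmark code; the message size and compute duration are arbitrary assumptions:

```c
/* Hedged sketch of a point-to-point overlap measurement (not the MadMPI
 * benchmark source): rank 0 posts a non-blocking send, computes for a
 * fixed duration, then waits for completion; rank 1 mirrors the pattern
 * with a non-blocking receive. Run with: mpirun -n 2 ./overlap */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Busy-loop for roughly `usec` microseconds to emulate computation. */
static void compute(double usec)
{
    double t0 = MPI_Wtime();
    while ((MPI_Wtime() - t0) * 1e6 < usec)
        ;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int len = 1 << 20;       /* 1 MB message; arbitrary choice */
    char *buf = malloc(len);
    MPI_Request req;

    double t0 = MPI_Wtime();
    if (rank == 0) {
        MPI_Isend(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
        compute(500.0);            /* computation overlapped with the send */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Irecv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);
        compute(500.0);            /* computation overlapped with the receive */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    double total = MPI_Wtime() - t0;

    if (rank == 0)
        printf("total time with overlapped compute: %g s\n", total);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

With a fully overlapping MPI library, the total time stays close to max(communication time, compute time); without overlap it approaches their sum.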

For installation instructions, see the README.

Documentation

» MadMPI benchmark installation and documentation

» The reference published paper (ICPP 2016)

» The presentation slides

Results

To interpret results, see the reference article and the presentation slides.

MadMPI

MadMPI with Pioman/pthread on InfiniBand FDR (william, ConnectX-3 MT27500, ibverbs)

MadMPI with Pioman/pthread on InfiniBand EDR (joe, ConnectX-4 MT27700, ibverbs)

OpenMPI

OpenMPI 1.8 on InfiniBand FDR (inti haswell)

OpenMPI 1.10.1 on InfiniBand FDR (william, ConnectX-3 MT27500, ibverbs)

OpenMPI 1.10.1 on InfiniBand EDR (joe, ConnectX-4 MT27700, ibverbs)

OpenMPI 1.10 on InfiniBand FDR, Mellanox MXM (plafrim mistral, Mellanox ConnectX-3, MXM)

OpenMPI 1.10.1 on shared memory (william)

OpenMPI v2.x snapshot, on InfiniBand FDR (william, ConnectX-3 MT27500, ibverbs)

OpenMPI v2.x snapshot, on InfiniBand EDR (joe, ConnectX-4 MT27700, ibverbs)

MVAPICH2

MVAPICH2 2.0b on InfiniBand FDR (william, ConnectX-3 MT27500, ibverbs)

MVAPICH2 2.2a on InfiniBand FDR (william, ConnectX-3 MT27500, ibverbs)

MVAPICH2 2.2a on InfiniBand EDR (joe, ConnectX-4 MT27700, ibverbs)

MPICH3

MPICH 3.2 on InfiniBand, Mellanox MXM (plafrim mistral, Mellanox ConnectX-3, MXM)

MPICH 3.2 on TCP/Ethernet (william)

MPICH 3.2 on shared memory (william)

Intel MPI

Intel MPI 5.1 on InfiniBand QDR TrueScale (plafrim miriel, QLogic IBA7322)

Intel MPI 5.1 on InfiniBand QDR (plafrim mistral, Mellanox ConnectX-3)

Intel MPI 5.1 on shared memory (plafrim miriel, QLogic IBA7322)

Intel MPI 5.1.3 on Intel Omni-Path (plafrim devel, Omni-Path HFI Adapter 100 Series)

MPC

MPC 2.5.2 on InfiniBand FDR (william, ConnectX-3 MT27500, ibverbs)

Cray XE (Blue Waters)

Cray XE (Blue Waters), using Gemini network

(thanks to François Tessier and Emmanuel Jeannot, JLESC)

Cray XE (Blue Waters), shared memory

(thanks to François Tessier and Emmanuel Jeannot, JLESC)

Fujitsu K computer

Fujitsu K computer using Tofu network

(thanks to Balazs Gerofi, Riken)

IBM Blue Gene/Q

IBM Blue Gene/Q (Juqueen), default parameters

(thanks to Benedikt Steinbusch, Forschungszentrum Jülich)

IBM Blue Gene/Q (Juqueen), thread multiple

(thanks to Benedikt Steinbusch, Forschungszentrum Jülich)

Run the benchmark on your machine, and send me your own results!

Contact

For any questions regarding MadMPI benchmark, please contact Alexandre Denis
	alexandre.denis@inria.fr