Testing the InfiniBand Interconnect Performance with Intel MPI Benchmark (Part I)


This write-up focuses on verifying the performance of InfiniBand interconnects, and it applies equally to RDMA/iWARP interconnects. The material is adapted from the IBM article "Setting up an HPC cluster with Red Hat Enterprise Linux".

A. Building Intel MPI Benchmark (“IMB”)

IMB can be run on a single node or across several nodes; two or more nodes are required to test message passing between nodes.
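For orientation, once IMB-MPI1 has been built (see the steps below), a minimal two-node PingPong run might look like the following. This is only a sketch: it assumes Intel MPI, and node1 and node2 are placeholder hostnames. Running the benchmark is covered in detail in Part II.

# mpirun -np 2 -ppn 1 -hosts node1,node2 /home/hpc/imb/src/IMB-MPI1 PingPong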

Step 1: Download the IMB

1. Go to Intel® MPI Benchmarks 3.2.2 and download the software

2. Untar the package to a shared directory used by the nodes

# tar -zxvf IMB_3.2.2.tar.gz -C /home/hpc
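The remaining steps assume the archive unpacks to /home/hpc/imb; if the extracted top-level directory is named differently (for example imb_3.2.2), adjust the paths below accordingly. A quick listing confirms the location of the source directory:

# ls /home/hpc/imb/src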

3. Change to the source directory

# cd /home/hpc/imb/src

4. Edit the make_ict makefile and change the assignment of CC from mpiicc to mpicc, as shown below (a one-line sed alternative is sketched after the listing)

LIB_PATH    =
LIBS        =
CC          = mpicc
ifeq (,$(shell which ${CC}))
$(error ${CC} is not defined through the PATH environment variable setting. Please try sourcing an Intel(r) Cluster Tools script file such as "mpivars.[c]sh" or "ictvars.[c]sh")
endif
OPTFLAGS    =
CLINKER     = ${CC}
LDFLAGS     =
CPPFLAGS    =

export CC LIB_PATH LIBS OPTFLAGS CLINKER LDFLAGS CPPFLAGS
include Makefile.base
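If you prefer not to open an editor, the same change can be made non-interactively with sed. This is only a convenience sketch and assumes the stock make_ict line reads "CC = mpiicc":

# sed -i 's/^CC.*=.*mpiicc/CC          = mpicc/' /home/hpc/imb/src/make_ict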

5. Run "make" from /home/hpc/imb/src

# make

You should see an IMB-MPI1 executable. If you cannot find it, use the "locate" or "find" command to locate the executable.
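For example, assuming the paths used above, a find from the extraction directory will print the full path of the freshly built binary:

# find /home/hpc/imb -name IMB-MPI1 -type f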

See Testing the InfiniBand Interconnect Performance with Intel MPI Benchmark (Part II) for the second part of this article.
