Wednesday 2 February 2011

Using the MPI profiling interface

How does the MPI profiling interface work? The answer is almost too easy. Finding out how to use it is more complex.

The basic idea of the MPI profiling interface is simple: every MPI function actually provides two entry points. One has the classical MPI_ prefix, the other has a PMPI_ prefix. The whole idea is therefore to overload the MPI_ functions and call the corresponding PMPI_ ones in the middle, which gives full access to both the parameters and the return code.
Moreover, the PMPI_ calls are part of the MPI standard definition (as far as I know...) and are therefore common to every implementation.
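As a small illustration of that access (a sketch only, not part of the test files below), a wrapper could for instance time MPI_Barrier and report both the elapsed time and the return code:

#include <mpi.h>
#include <stdio.h>

/* Sketch only: time the barrier and report its result. */
int MPI_Barrier(MPI_Comm comm)
{
    double start = MPI_Wtime();
    int ret = PMPI_Barrier(comm);   /* forward to the real implementation */

    fprintf(stderr, "Prof: MPI_Barrier took %f s (ret=%d)\n",
            MPI_Wtime() - start, ret);

    return ret;
}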


In order to test the MPI profiling interface I wrote the simplest MPI code possible. Two files were needed: one for the wrapper, one for the program.


mpi_wrap.h

#ifndef MPI_WRAP
#define MPI_WRAP

int MPI_Init(int *argc, char ***argv);

#endif

mpi_wrap.c

#include "mpi_wrap.h"

#include <mpi.h>
#include <stdio.h>

int MPI_Init(int *argc, char ***argv)
{
    int ret;

    fprintf(stderr, "Prof: MPI_Init(...)\n");

    ret = PMPI_Init(argc, argv);

    return ret;
}

mpi_hello.c

#include <mpi.h>
#include <stdio.h>

#include "mpi_wrap.h"

int main(void)
{
    int rank = 0, pop = 0;

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &pop);

    if (rank == 0)
        printf("%d: I'm the master of %d puppets.\n", rank, pop);

    MPI_Finalize();

    return 0;
}
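The same pattern extends to any other MPI call. Since the body of the wrapper runs around the PMPI_ call, it can also use other PMPI_ functions itself; as a sketch (not part of the two files above), the MPI_Init wrapper could query the rank once PMPI_Init has returned and attach it to the trace message:

#include <mpi.h>
#include <stdio.h>

/* Sketch only: a variant of the MPI_Init wrapper that reports the rank. */
int MPI_Init(int *argc, char ***argv)
{
    int ret, rank = -1;

    ret = PMPI_Init(argc, argv);

    /* Once PMPI_Init has returned, the rank is available. */
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    fprintf(stderr, "Prof: %d: MPI_Init(...)\n", rank);

    return ret;
}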

On Ness


Ness is the EPCC cluster used by MSc students, running Scientific Linux.

MPI installed: mpich-2

MPI C Compiler: pgcc


The first problem came from the compilation of the library: for a reason yet unknown, ld refuses to link it as a shared object.

mpicc -c -fPIC mpi_wrap.c -o mpi_wrap.o
mpicc -shared -soname=libmpi_wrap.so -o libmpi_wrap.so mpi_wrap.o
Compilation ended with:

/usr/bin/ld: /opt/local/packages/mpich2/1.0.5p4-ch3_sock-pgi7.0-7/lib/libmpich.a(init.o): relocation R_X86_64_32 against `MPIR_Process' can not be used when making a shared object; recompile with -fPIC
/opt/local/packages/mpich2/1.0.5p4-ch3_sock-pgi7.0-7/lib/libmpich.a: could not read symbols: Bad value


Thus the static library approach was taken.

mpicc -c -fpic mpi_wrap.c -o mpi_wrap.o
ar rcs libmpi_wrap.a mpi_wrap.o
That compiled well.


Then comes the program compilation, which is straightforward.

mpicc -c -I. mpi_hello.c -o mpi_hello.o
mpicc mpi_hello.o -L. -lmpi_wrap -o mpi_hello
And the result worked fine:
$> mpiexec -n 2 mpi_hello
mpiexec: running on ness front-end; timings will not be reliable.
Prof: MPI_Init(...)
Prof: MPI_Init(...)
0: I'm the master of 2 puppets.

At home


My home desktop machine runs a Gentoo Linux installation.

MPI installed: OpenMPI

MPI C Compiler: gcc



Compiling the dynamic library worked:

mpicc -c -fpic mpi_wrap.c -o mpi_wrap.o
mpicc -shared -Wl,-soname,libmpi_wrap.so mpi_wrap.o -o libmpi_wrap.so

And compiling the executable too:
mpicc -c -I. mpi_hello.c -o mpi_hello.o
mpicc -L. -lmpi_wrap mpi_hello.o -o mpi_hello

Of course, as the dynamic library approach is used, the LD_LIBRARY_PATH environment variable has to include the directory where the .so is:
export LD_LIBRARY_PATH=`pwd`

Finally running works as well:
$> mpiexec -n 2 mpi_hello
Prof: MPI_Init(...)
Prof: MPI_Init(...)
0: I'm the master of 2 puppets.


Discussion


It is rather strange that Ness refuses to build the wrapper as a dynamic library. The error message suggests that the libmpich.a installed there was not compiled with -fPIC, so its objects cannot be folded into a shared object; further investigation is needed to confirm this.

Using a statically linked library offers the advantage of simplicity: there is no need to set up LD_LIBRARY_PATH. On the other hand it increases the size of the executable, especially once the tool includes the graphical interface.

The advantages of the dynamically linked library are the reverse: it saves executable size at the expense of a little configuration.


As far as possible I will try to use the dynamically linked library approach, as the graphical interface will certainly contain a lot of code that is not directly needed by the program. But the library has to be present somewhere common if used on a cluster, and this is something I will need to investigate further.


References


No real references here, just some websites that helped me remember how to create libraries, and of course how to use the MPI profiling interface.


Creating a shared and static library with the gnu compiler [gcc] - René Nyffenegger

Open MPI FAQ: Performance analysis tools
