Parallel Computing Support User Guide

From EGIWiki

Summary

This page discusses support for generic parallel computing jobs on the EGI infrastructure. We consider using the MPI-START framework as a means of launching multiple processes on a cluster. There are several clearly apparent application areas:

* Hadoop-On-Demand/myHadoop
* Charm++
* Parallel R

For MPI applications, check also the MPI User Guide.


This is a work in progress.

JDL requirements

As we are using the MPI-START framework, the format of the JDL is the same as for an MPI job. However, the executable hello_bin may launch any process, not just an MPI binary.

JobType       = "Normal";
CPUNumber     = 4;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_bin hello";
InputSandbox  = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);


A simple hello_bin for testing:

#!/bin/bash
echo "My hostname is `hostname`"
echo "My name is `whoami`"
echo "Systype `uname -a`"

#which mpirun
#which mpiexec

echo "My directory is $PWD"
pwd
ls

The starter.sh script is based on the code from http://grid.ifca.es/wiki/Middleware/MpiStart/UserDocumentation
The effect of this code is simply to run hello_bin on all the allocated nodes.
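For reference, a minimal starter.sh along the lines of that documentation might look like the following. This is a sketch, not the authoritative script: the I2G_* variable names are MPI-START's documented interface, but the argument defaults and the guards are added here only so the fragment can run standalone.

```shell
#!/bin/bash
# starter.sh -- sketch of a generic MPI-START wrapper. $1 is the MPI
# flavour (e.g. OPENMPI), $2 the executable, the rest are its arguments.
# The defaults below exist only so this sketch runs standalone; a real
# job always supplies the arguments via the JDL Arguments attribute.
MPI_FLAVOR=${1:-OPENMPI}
MPI_FLAVOR_LOWER=$(echo "$MPI_FLAVOR" | tr '[:upper:]' '[:lower:]')
MY_EXECUTABLE=$(pwd)/${2:-hello_bin}
if [ -f "$MY_EXECUTABLE" ]; then chmod +x "$MY_EXECUTABLE"; fi

# These I2G_* variables are MPI-START's interface: they tell it what
# to run and with which arguments.
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER
export I2G_MPI_APPLICATION=$MY_EXECUTABLE
shift 2 2>/dev/null || true
export I2G_MPI_APPLICATION_ARGS="$*"

# On a worker node, $I2G_MPI_START points at the site's mpi-start
# script, which launches the application on every allocated slot.
if [ -n "${I2G_MPI_START:-}" ]; then
    $I2G_MPI_START
fi
```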

Using MPI-START and mpiexec to perform non-MPI workloads

Both OpenMPI and MPICH2 define a number of environment variables that are available to every MPI process. In particular, they export variables which relate to the number of process slots allocated to the job and to the MPI Rank of the processes. Using this information one can nominate a "master" or coordinator process in the set of processes. This allows us to accommodate some master/slave use-cases.


MPI Distribution   Rank Variable           Communicator Size Variable
OpenMPI            OMPI_COMM_WORLD_RANK    OMPI_COMM_WORLD_SIZE
MPICH2             MPIRUN_RANK             MPIRUN_NPROCS


#!/bin/bash
if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    ### Code for the coordinating master process
else
    ### Code for the slave processes
fi
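Since only one of the two rank variables is set, depending on which MPI flavour launched the job, the test can also be written flavour-agnostically. A minimal sketch (the mpi_rank helper is our own illustration, not part of MPI-START):

```shell
#!/bin/bash
# Flavour-agnostic rank lookup (sketch): OpenMPI sets
# OMPI_COMM_WORLD_RANK, MPICH2 sets MPIRUN_RANK; whichever is
# present wins, with a fallback of 0 for standalone testing.
mpi_rank() {
    echo "${OMPI_COMM_WORLD_RANK:-${MPIRUN_RANK:-0}}"
}

if [ "$(mpi_rank)" = "0" ]; then
    echo "rank 0: acting as coordinator"
else
    echo "rank $(mpi_rank): slave, nothing to do"
fi
```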

For example, our hello_bin script could be replaced by:

#!/bin/bash

if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    echo "I am the one true master"
    echo "My hostname is `hostname`"
    echo "My name is `whoami`"
    echo "Systype `uname -a`"
    echo "I ran with arguments $*"

    echo "My directory is $PWD"
    pwd
    ls

    echo Machine_file ${MPI_START_MACHINEFILE}
    cat ${MPI_START_MACHINEFILE}
fi
Note: In this case, I am checking to see whether I am the master process or one of the slaves. If I am the master process, I run a simple set of commands; if I am a slave, I do not do anything.

Using other Parallel Computing Frameworks - Charm++

In this example we will see how to use the MPI-START framework to launch Charm++ code on the allocated processors. The actual Charm++ code files hello.C and hello.ci are based on the 3darray example in the Charm++ source code distribution.

Charm++ JDL example

JobType       = "Normal";
CPUNumber     = 10;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_master2";
InputSandbox  = {"starter.sh", "hello_master2","Makefile","hello.C","hello.ci"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);


hello_master2

This example assumes that Charm++ has been installed on the worker nodes under /opt/charm, and that passwordless ssh between worker nodes works (a check for this could be added to the JDL requirements). The script converts the list of allocated nodes into a nodelist file that charmrun can use. The effect on the accounting data has not yet been investigated.

#!/bin/bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/charm/lib
export PATH=$PATH:/opt/charm/bin
if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    echo "Now building Charm++ 3darray example"
    make -f Makefile
    cat ${MPI_START_MACHINEFILE} | sed 's/^/host /g' > charm_nodelist
    charmrun ++remote-shell ssh ++nodelist charm_nodelist \
        ./hello +p${OMPI_UNIVERSE_SIZE} 32 +x2 +y2 +z2 +isomalloc_sync
fi
# The slave nodes do not actively do anything; their processes are
# launched by the Charm++ "remote-shell" invocation.
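The machinefile-to-nodelist conversion used in hello_master2 can be checked locally. A quick sketch with a fabricated two-node machinefile (the hostnames are placeholders):

```shell
#!/bin/bash
# Fake an MPI-START machinefile: one allocated host per line.
printf 'wn01.example.org\nwn02.example.org\n' > machinefile

# Prefix each line with "host ", the format charmrun's ++nodelist expects.
sed 's/^/host /' machinefile > charm_nodelist
cat charm_nodelist
```

Running this prints one "host wn01.example.org" style line per allocated node, which is exactly what charmrun consumes via ++nodelist.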