The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

Parallel Computing Support User Guide

From EGIWiki

Revision as of 15:46, 27 April 2012

Summary

This page discusses support for generic parallel computing jobs on the EGI infrastructure. We consider using the MPI-START framework as a means of launching multiple jobs on a cluster. There are several clearly apparent application areas:

* Hadoop-On-Demand/myHadoop
* Charm++
* Parallel R

This is a work in progress.

JDL requirements

As we are using the MPI-START framework, the format of the JDL is the same as for an MPI job. However, the executable hello_bin may launch any process.

JobType       = "Normal";
CPUNumber     = 4;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_bin hello";
InputSandbox  = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);


#!/bin/bash
echo "My hostname is `hostname`"
echo "My name is `whoami`"
echo "Systype `uname -a`"

# Uncomment to check which MPI launchers are on the path:
#which mpirun
#which mpiexec

echo "My directory is $PWD"
pwd
ls

The starter.sh script is based on the code from http://grid.ifca.es/wiki/Middleware/MpiStart/UserDocumentation
The effect of this code is simply to run hello_bin on all the allocated nodes.

Using MPI-START and mpiexec to perform non-MPI workloads

Both OpenMPI and MPICH2 define a number of environment variables that are available to every MPI process. In particular, they export variables which give the number of process slots allocated to the job and the MPI rank of each process. Using this information one can nominate a "master" or coordinator process within the set of processes, which allows us to accommodate some master/slave use-cases.


MPI Distribution    Rank Variable            Communicator Size Variable
OpenMPI             OMPI_COMM_WORLD_RANK     OMPI_COMM_WORLD_SIZE
MPICH2              MPIRUN_RANK              MPIRUN_NPROCS
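Since the variable names differ between the two distributions, a script meant to run under either flavour can try both names from the table above. A minimal sketch (the fallback to rank 0 of 1 when neither distribution has set its variables is an assumption, useful for local testing):

```shell
#!/bin/bash
# Pick up the rank and size from whichever MPI distribution set them,
# using the variable names in the table above; assume a single local
# process (rank 0 of 1) when neither is set.
RANK="${OMPI_COMM_WORLD_RANK:-${MPIRUN_RANK:-0}}"
SIZE="${OMPI_COMM_WORLD_SIZE:-${MPIRUN_NPROCS:-1}}"
echo "process $RANK of $SIZE"
```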


#!/bin/bash
if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    ### Code for the coordinating master process
else
    ### Code for the slave processes
fi

For example, our hello_bin script could be replaced by

 
#!/bin/bash

if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    echo "I am the one true master"
    echo "My hostname is `hostname`"
    echo "My name is `whoami`"
    echo "Systype `uname -a`"
    echo "I ran with arguments $*"

    echo "My directory is $PWD"
    pwd
    ls

    echo Machine_file ${MPI_START_MACHINEFILE}
    cat ${MPI_START_MACHINEFILE}
fi




Note: In this case, I am checking to see whether I am the master process or one of the slaves. If I am the master process, I run a simple set of commands; if I am a slave, I do nothing.
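The branch taken depends only on the rank variable, so the logic can be exercised without any MPI launcher at all. A quick local simulation (plain bash, no grid involvement) of the branch each process in a four-slot allocation would take:

```shell
#!/bin/bash
# Local simulation of the master/slave branch for a 4-process job:
# loop over the rank values OpenMPI would assign and apply the same
# test used in the hello_bin script above.
for OMPI_COMM_WORLD_RANK in 0 1 2 3; do
    if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
        echo "rank $OMPI_COMM_WORLD_RANK: master, running commands"
    else
        echo "rank $OMPI_COMM_WORLD_RANK: slave, doing nothing"
    fi
done
```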

Using other Parallel Computing Frameworks - Charm++

In this example we will see how to use the MPI-START framework to launch Charm++ code on the allocated processors.

Charm++ JDL example

CPUNumber     = 10;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_master2";
InputSandbox  = {"starter.sh", "hello_master2","Makefile","hello.C","hello.ci"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
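The InputSandbox above ships the Charm++ sources (hello.C, hello.ci) together with a Makefile. A hedged sketch of how such a Makefile might look, assuming the standard Charm++ toolchain is available on the worker node: charmc translates the .ci interface file into hello.decl.h/hello.def.h, then compiles and links the program. The target name hello_master2 is taken from the Arguments line; everything else here is an illustration, not the original file.

```make
# Hypothetical Makefile for the Charm++ hello example above.
# Assumes charmc (the Charm++ compiler wrapper) is on the PATH.
# Note: recipe lines must begin with a tab character.
CHARMC = charmc

hello_master2: hello.C hello.decl.h hello.def.h
	$(CHARMC) -o hello_master2 hello.C

# charmc translates the interface file into the generated headers.
hello.decl.h hello.def.h: hello.ci
	$(CHARMC) hello.ci

clean:
	rm -f hello_master2 hello.decl.h hello.def.h charmrun
```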