Parallel Computing Support User Guide
Revision as of 16:37, 27 April 2012
Summary
This page discusses support for generic parallel computing jobs on the EGI infrastructure. We consider using the MPI-START framework as a means for launching multiple jobs on a cluster. There are several clearly apparent application areas:
* Hadoop-On-Demand/myHadoop
* Charm++
* Parallel R
This is a work in progress.
JDL requirements
As we are using the MPI-START framework, the format of the JDL is the same as for an MPI job. However, the executable hello_bin may launch any process.
JobType = "Normal";
CPUNumber = 4;
Executable = "starter.sh";
Arguments = "OPENMPI hello_bin hello";
InputSandbox = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput = "std.out";
StdError = "std.err";
Requirements = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
            && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
echo "My hostname is `hostname`"
echo "My name is `whoami`"
echo "Systype `uname -a`"
#which mpirun
#which mpiexec
echo "My directory is $PWD"
pwd
ls
The starter.sh is based on the code from http://grid.ifca.es/wiki/Middleware/MpiStart/UserDocumentation
The effect of this code is simply to run hello_bin on all the allocated nodes.
hello_bin
This is a very simple code that shows how we detect and execute specific code on the master process only.
Using MPI-START and mpiexec to perform non-MPI workloads
Both OpenMPI and MPICH2 define a number of environment variables that are available to every MPI process. In particular, they export variables which relate to the number of process slots allocated to the job and to the MPI Rank of the processes. Using this information one can nominate a "master" or coordinator process in the set of processes. This allows us to accommodate some master/slave use-cases.
| MPI Distribution | Rank variable        | Communicator size variable |
|------------------|----------------------|----------------------------|
| OpenMPI          | OMPI_COMM_WORLD_RANK | OMPI_COMM_WORLD_SIZE       |
| MPICH2           | MPIRUN_RANK          | MPIRUN_NPROCS              |
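Because the variable names differ between distributions, a wrapper script can probe both before deciding which role to take. The following is a minimal sketch (not part of the original guide) that falls back to rank 0 when neither variable is set, so it also runs unmodified outside of mpiexec:

```shell
#!/bin/bash
# Determine this process's rank in a distribution-independent way.
# Check the OpenMPI variables first, then the MPICH2 ones; default to
# rank 0 of a single-process job when neither is present.
if [ -n "$OMPI_COMM_WORLD_RANK" ]; then
    RANK=$OMPI_COMM_WORLD_RANK
    NPROCS=$OMPI_COMM_WORLD_SIZE
elif [ -n "$MPIRUN_RANK" ]; then
    RANK=$MPIRUN_RANK
    NPROCS=$MPIRUN_NPROCS
else
    RANK=0
    NPROCS=1
fi

if [ "$RANK" = "0" ]; then
    echo "master of $NPROCS processes"
else
    echo "slave $RANK"
fi
```

Run under OpenMPI's mpiexec, exactly one process prints the master line; run as a plain script, it reports itself as the master of one process.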
#!/bin/bash
if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    ### Code for coordinating master process
else
    ### Code for slave processes
fi
For example, our hello_bin script could be replaced by
#!/bin/bash
if test x"$OMPI_COMM_WORLD_RANK" = x"0" ; then
    echo "I am the one true master"
    echo "My hostname is `hostname`"
    echo "My name is `whoami`"
    echo "Systype `uname -a`"
    echo "I ran with arguments $*"
    echo "My directory is $PWD"
    pwd
    ls
    echo Machine_file ${MPI_START_MACHINEFILE}
    cat ${MPI_START_MACHINEFILE}
fi
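In the script above only the master produces output. A variant in which the slave processes also do useful work might have every rank write a per-rank report file, with the master additionally summarising what was produced. This is a hypothetical sketch, not taken from the guide; the report_N.txt file names are assumptions:

```shell
#!/bin/bash
# Every rank (master and slaves alike) writes a small report file named
# after its rank; OMPI_COMM_WORLD_RANK defaults to 0 outside of mpiexec.
RANK=${OMPI_COMM_WORLD_RANK:-0}
echo "report from rank $RANK on `hostname`" > "report_${RANK}.txt"

# Only the master (rank 0) lists the reports it can see locally.
if [ "$RANK" = "0" ]; then
    echo "Master collected the following reports:"
    ls report_*.txt
fi
```

Note that on a real cluster the per-rank files land on each node's local working directory, so collecting them on the master would additionally require a shared filesystem or an explicit copy step.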