

MPI User Guide

This document is intended to help the EGI user community execute MPI applications on the Infrastructure.


MPI Support at EGI

Site Support

Executing MPI applications requires sites that properly support the submission and execution of parallel jobs and that provide an MPI implementation. Site administrators should consult the MPI-Start Installation and Configuration manual for the relevant site configuration information. Since not all sites have this support enabled, special tags are published via the information system so that users can discover which sites they can use for their executions. Sites may also install different implementations (or flavours) of MPI, so it is important that users query the information system to locate sites with the software they require.
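On a gLite User Interface these tags can be queried with the lcg-info command. A minimal sketch, assuming membership in a VO named biomed (the VO name is only a placeholder):

  # List the computing elements that publish MPI-Start support
  lcg-info --vo biomed --list-ce --query 'Tag=MPI-START' --attrs 'CE'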

Inter-Site Support

Application Execution

MPI-Start is the recommended way of starting MPI jobs in the infrastructure. The user documentation contains a complete (more technical) description of how to run MPI jobs. The documentation focuses on gLite resources, although MPI-Start can also be used with ARC and UNICORE if it is installed and configured by the site administrator.

Examples can also be found in the tutorial materials prepared by the EGI-InSPIRE SA3 Support for parallel computing (MPI) task:

  • MPI Use in General
  • MPI-START Use

Using MPI-Start is the recommended way of starting MPI applications in the EGI Infrastructure. Sites that support MPI-Start must include the tag MPI-START in their GlueHostApplicationSoftwareRunTimeEnvironment attribute.
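In a gLite JDL file this tag can be required with a Member expression; a minimal sketch:

  Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment);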

MPI-START Features

MPI Implementations supported

Discovering information about MPI


Discovering the available resources is the first step before executing an application. This can be done using the GlueHostApplicationSoftwareRunTimeEnvironment attribute, which should include all the relevant MPI support information and allows users to locate sites with an adequate software environment. The following sections describe the tags that sites may publish.
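The attribute can also be inspected directly in the information system. A minimal sketch using ldapsearch against a top-level BDII, where <top-bdii> is a placeholder for a real host:

  ldapsearch -x -H ldap://<top-bdii>:2170 -b "o=grid" \
      '(&(objectClass=GlueSubCluster)(GlueHostApplicationSoftwareRunTimeEnvironment=MPI-START))' \
      GlueSubClusterUniqueID GlueHostApplicationSoftwareRunTimeEnvironment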

MPI Flavours
Network Interconnections

Sites may publish the network interconnect available for the execution of MPI applications with a variable of the form:

MPI-<interconnect>

Currently the valid interconnects are: Ethernet, Infiniband, SCI, and Myrinet.

Examples

  • GlueHostApplicationSoftwareRunTimeEnvironment: MPI-Infiniband
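In a JDL Requirements expression the interconnect tag can be combined with the other MPI tags; a sketch assuming a site publishing the OPENMPI flavour tag:

  Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
              && Member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment)
              && Member("MPI-Infiniband", other.GlueHostApplicationSoftwareRunTimeEnvironment);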


Shared Homes

Sites supporting a shared filesystem for the execution of MPI applications publish the MPI_SHARED_HOME variable. If your application needs such a feature, you should check for the availability of that variable.
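For example, a job that relies on a shared home could add to its JDL (a sketch):

  Requirements = Member("MPI_SHARED_HOME", other.GlueHostApplicationSoftwareRunTimeEnvironment);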

Submitting an MPI Job

Basic
Advanced (Hooks Framework)
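A minimal sketch of the basic case for a gLite WMS, assuming an OpenMPI site; the executable name mpi-test, the wrapper name mpi-start-wrapper.sh, and the 4-process count are placeholders:

  JobType        = "Normal";
  CpuNumber      = 4;
  Executable     = "mpi-start-wrapper.sh";
  Arguments      = "mpi-test OPENMPI";
  InputSandbox   = {"mpi-start-wrapper.sh", "mpi-test"};
  StdOutput      = "std.out";
  StdError       = "std.err";
  OutputSandbox  = {"std.out", "std.err"};
  Requirements   = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
                && Member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);

The hypothetical wrapper only sets the MPI-Start environment and invokes it; the I2G_* variables are the ones described in the MPI-Start documentation:

  #!/bin/bash
  # mpi-start-wrapper.sh <executable> <flavour>
  MY_EXECUTABLE=$1
  MPI_FLAVOUR=$2
  # MPI-Start expects the flavour name in lowercase
  MPI_FLAVOUR_LOWER=`echo $MPI_FLAVOUR | tr '[:upper:]' '[:lower:]'`
  export I2G_MPI_TYPE=$MPI_FLAVOUR_LOWER
  export I2G_MPI_APPLICATION=$MY_EXECUTABLE
  export I2G_MPI_APPLICATION_ARGS=""
  # The site exports the location of MPI-Start as I2G_MPI_START
  $I2G_MPI_START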

Debugging an MPI Job
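As a starting point, MPI-Start provides verbosity switches that can be exported (for example in the wrapper script above) before it is invoked; a sketch based on the variables described in the MPI-Start user documentation:

  export I2G_MPI_START_VERBOSE=1   # print information messages
  export I2G_MPI_START_DEBUG=1     # print debug messages
  export I2G_MPI_START_TRACE=1     # trace every command that is executed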

Examples

MPI User Interfaces

Upcoming Features