Difference between revisions of "VT MPI within EGI:Nagios"

From EGIWiki
Jump to navigation Jump to search

Revision as of 16:50, 27 February 2012

MPI VT Nagios Specifications

The present SAM MPI testing infrastructure is completely dependent on the information published by each individual site. If a site publishes the MPI-START tag, the resource is tested; otherwise it is not. This dependency on the information system makes it impossible to test sites which offer MPI functionality but do not broadcast it, or sites which broadcast their MPI/Parallel support in an incorrect way. The introduction of a new service type in GOCDB (MPI or Parallel) would break this dependency and would allow the definition of an MPI test profile to probe:

  • The information published by the (MPI or Parallel) service.
    • org.sam.mpi.EnvSanityCheck
  • The (MPI or Parallel) functionality offered by the site.
    • org.sam.mpi.SimpleJob
    • org.sam.mpi.ComplexJob


org.sam.mpi.EnvSanityCheck

  • Name: org.sam.mpi.EnvSanityCheck
  • Requirements: The service should be registered in GOCDB as an MPI (or Parallel) Service Type.
  • Purpose: Test the information published by the (MPI or Parallel) service.
  • Description: The probe should test if the service:
    • Publishes MPI-START tag under GlueHostApplicationSoftwareRunTimeEnvironment
    • Publishes MPI flavour tag under GlueHostApplicationSoftwareRunTimeEnvironment according to one of the following formats:
      • <MPI flavour>
      • <MPI flavour>-<MPI version>
      • <MPI flavour>-<MPI version>-<Compiler>
    • Has the GlueCEPolicyMaxSlotsPerJob variable set to a reasonable value (not 0, 1, or 999999999) for the queue where the MPI job is supposed to run.
    • Publishes reasonable GlueCEPolicyMaxCPUTime and GlueCEPolicyMaxWallClockTime values (not 0 nor 999999999), with GlueCEPolicyMaxCPUTime large enough to execute a parallel application that requests 4 slots and runs for the full GlueCEPolicyMaxWallClockTime.
  • Dependencies: None.
  • Frequency: Each hour.
  • Timeout: 120s (Do you think this is too much for an LDAP query?)
  • Expected behaviour:
Use Case | Probe Result
MPI-START tag is not present under GlueHostApplicationSoftwareRunTimeEnvironment | CRITICAL
One MPI flavour tag (following any of the proposed formats) is not present under GlueHostApplicationSoftwareRunTimeEnvironment | CRITICAL
The probe reaches a timeout and the probe execution is canceled | CRITICAL
GlueCEPolicyMaxSlotsPerJob is equal to 0, 1, or 999999999 | WARNING
GlueCEPolicyMaxCPUTime or GlueCEPolicyMaxWallClockTime is equal to 0 or 999999999, or GlueCEPolicyMaxCPUTime / GlueCEPolicyMaxWallClockTime < 4 | WARNING
MPI-START tag and an MPI flavour tag are present under GlueHostApplicationSoftwareRunTimeEnvironment, GlueCEPolicyMaxSlotsPerJob is not 0, 1, or 999999999, and GlueCEPolicyMaxCPUTime / GlueCEPolicyMaxWallClockTime >= 4 | OK
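The decision table above can be sketched as a small check function. This is an illustrative sketch only, not the actual probe: the function name, the input dictionary format, and the list of accepted MPI flavour names are assumptions; only the Glue attribute names and the thresholds come from the table.

```python
# Hypothetical sketch of the org.sam.mpi.EnvSanityCheck decision table.
# The input is assumed to be a dict of already-queried Glue values.

SENTINELS = {0, 1, 999999999}  # "unreasonable" values for MaxSlotsPerJob

def env_sanity_check(info):
    """Map published Glue values to a Nagios-style status string."""
    tags = info.get("GlueHostApplicationSoftwareRunTimeEnvironment", [])
    if "MPI-START" not in tags:
        return "CRITICAL"
    # Assumed flavour names; the spec only fixes the <flavour>[-<version>[-<compiler>]] format.
    flavours = ("OPENMPI", "MPICH", "MPICH2", "LAM")
    if not any(t.split("-")[0] in flavours for t in tags):
        return "CRITICAL"  # no MPI flavour tag in any accepted format
    if info.get("GlueCEPolicyMaxSlotsPerJob") in SENTINELS:
        return "WARNING"
    cpu = info.get("GlueCEPolicyMaxCPUTime", 0)
    wall = info.get("GlueCEPolicyMaxWallClockTime", 0)
    # wall is tested before the division, so wall == 0 cannot divide by zero
    if cpu in (0, 999999999) or wall in (0, 999999999) or cpu / wall < 4:
        return "WARNING"
    return "OK"

site = {
    "GlueHostApplicationSoftwareRunTimeEnvironment": ["MPI-START", "OPENMPI-1.4.3"],
    "GlueCEPolicyMaxSlotsPerJob": 8,
    "GlueCEPolicyMaxCPUTime": 4000,
    "GlueCEPolicyMaxWallClockTime": 1000,
}
print(env_sanity_check(site))  # → OK (ratio is exactly 4)
```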

Comments

  • Enol:
Timeout of the BDII: I would stay on the safe side and keep the 120s; it is not that much, and there are surely other probes checking the BDII status that may detect if it is too slow. Related to
that, the use case "The probe reaches a timeout and the probe execution is canceled" may be a WARNING instead of CRITICAL, because it is unrelated to the objective of the probe. Is there any kind of
policy for this kind of thing?
  • Gonçalo:
Not that I'm aware of, but Emir should have a last look when we agree on this.
  • Enol:
I would change "One MPI flavour tag (following any of the proposed formats) is not present under GlueHostApplicationSoftwareRunTimeEnvironment" to "No MPI flavour tag
(following any of the proposed formats) is present under..."; it seems easier to understand to me.
  • Enol:
GlueCEPolicyMaxCPUTime could be infinite; at least at IFCA we had limits only on the wall clock time. It may be necessary to adjust the use case to allow the 99999999 value.
  • Gonçalo:
Not sure if that is really a good policy, but I guess it is OK if GlueCEPolicyMaxWallClockTime has a limit! For now, I'll remove this restriction.
  • Enol:
Ideally GlueCEPolicyMaxCPUTime / GlueCEPolicyMaxWallClockTime >= GlueCEPolicyMaxSlotsPerJob, but maybe that's too much. If we had accounting, we could check the typical size of the parallel
jobs and use that value. For the time being, a factor of 4 sounds small to me: 4-core machines are not that big anymore, and I guess MPI users would like to go beyond one machine.
  • Gonçalo:
I thought of this, but there is no easy way out. Administrators can set GlueCEPolicyMaxCPUTime=GlueCEPolicyMaxWallClockTime, and that is also a valid option. Imagine that they have
GlueCEPolicyMaxCPUTime=1000
GlueCEPolicyMaxWallClockTime=1000
This will allow you to run 100 instances of an MPI job, each of them spending 10 units of WallClockTime. That is also a valid approach, and administrators may not like to see a WARNING when those are exactly the settings they want.
The value of 4 seemed reasonable (not too high) and coherent with org.sam.mpi.ComplexJob, where I request 4 slots.




org.sam.mpi.SimpleJob

  • Name: org.sam.mpi.SimpleJob
  • Requirements: The service should be registered in GOCDB as an MPI (or Parallel) Service Type; job submission requesting two slots in different machines (JobType="Normal"; CpuNumber=2; NodeNumber=2).
  • Purpose: Test the MPI functionality with a minimum set of resources.
  • Description: The probe should check if:
    • MPI-START is able to find the type of scheduler.
    • MPI-START is able to determine if the environment for the MPI flavour under test is correctly set.
    • The application correctly compiles.
    • MPI-START is able to distribute the application binaries.
    • The application executes with the number of requested slots and finishes correctly.
  • Dependencies: Executed after org.sam.mpi.EnvSanityCheck, and only if it exits with WARNING or OK status.
  • Frequency: ? (What is the execution frequency of the current probe? It should be the same!)
  • Timeout: ? (What is the timeout of the current probe? It should be the same!)
  • Expected behaviour:
Use Case | Probe Result
MPI-START is not able to determine which kind of scheduler is used at the site. | WARNING
MPI-START is not able to determine if the environment for the MPI flavour under test is correctly set. | WARNING
The compilation of the parallel application fails. | CRITICAL
MPI-START fails to distribute the application binaries. | CRITICAL
The MPI application execution fails. | CRITICAL
MPI-START fails to collect the application results in the master node. | CRITICAL
The application executes successfully with fewer slots than requested. | CRITICAL
The probe reaches a timeout and the probe execution is canceled. | WARNING
The probe reaches a timeout in two successive attempts and the probe execution is canceled. | CRITICAL
The application executes successfully with the requested slots and MPI-START collects the application results in the master node. | OK
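The submission described in the requirements can be sketched as a JDL fragment. Only JobType, CpuNumber, and NodeNumber come from the text above; the executable, arguments, and sandbox entries are hypothetical placeholders.

```
[
  JobType    = "Normal";
  CpuNumber  = 2;
  NodeNumber = 2;
  // The attributes below are illustrative placeholders, not part of the spec
  Executable    = "mpi-start-wrapper.sh";
  Arguments     = "mpi-test OPENMPI";
  InputSandbox  = {"mpi-start-wrapper.sh", "mpi-test.c"};
  StdOutput     = "std.out";
  StdError      = "std.err";
  OutputSandbox = {"std.out", "std.err"};
]
```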


Comments


org.sam.mpi.ComplexJob

  • Name: org.sam.mpi.ComplexJob
  • Requirements: The service should be registered in GOCDB as an MPI (or Parallel) Service Type; job submission requesting 4 slots with 2 instances running in different dedicated machines (JobType="Normal"; CpuNumber=4; NodeNumber=2; SMPGranularity=2; WholeNodes=True).
  • Purpose: Test the MPI functionality and check that the recommendations from the EGEE MPI WG are being implemented.
  • Description: The probe should check if:
    • MPI-START is able to find the type of scheduler.
    • MPI-START is able to determine if the environment for the MPI flavour under test is correctly set.
    • The application correctly compiles.
    • MPI-START is able to distribute the application binaries.
    • The application executes with the number and characteristics of requested slots and finishes correctly.
    • MPI-START is able to collect the application results in the master node.
  • Dependencies: Executed after org.sam.mpi.EnvSanityCheck, and only if it exits with WARNING or OK status.
  • Frequency: Once per day?
  • Timeout: One day?
  • Expected behaviour:
Use Case | Probe Result
MPI-START is not able to determine which kind of scheduler is used at the site. | WARNING
MPI-START is not able to determine if the environment for the MPI flavour under test is correctly set. | WARNING
The compilation of the parallel application fails. | CRITICAL
MPI-START fails to distribute the application binaries. | CRITICAL
The MPI application execution fails. | CRITICAL
MPI-START fails to collect the application results in the master node. | CRITICAL
The application executes successfully with fewer slots than requested. | CRITICAL
The probe reaches a timeout and the probe execution is canceled. | WARNING (this should not become CRITICAL, because we do not know how long the job may stay queued while requesting 4 slots)
The application executes successfully with the requested slots and MPI-START collects the application results in the master node. | OK
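The whole-node submission described in the requirements can likewise be sketched as a JDL fragment. Only JobType, CpuNumber, NodeNumber, SMPGranularity, and WholeNodes come from the text above; the executable, arguments, and sandbox entries are hypothetical placeholders.

```
[
  JobType        = "Normal";
  CpuNumber      = 4;
  NodeNumber     = 2;
  SMPGranularity = 2;
  WholeNodes     = True;
  // The attributes below are illustrative placeholders, not part of the spec
  Executable    = "mpi-start-wrapper.sh";
  Arguments     = "mpi-test OPENMPI";
  InputSandbox  = {"mpi-start-wrapper.sh", "mpi-test.c"};
  StdOutput     = "std.out";
  StdError      = "std.err";
  OutputSandbox = {"std.out", "std.err"};
]
```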


Comments

  • Enol:
Frequency: no need for this to be daily; maybe every two or three days is enough. The probe requests empty nodes, so emptying them means decreasing the throughput of the site. Although here we could
argue that a good MPI service means that MPI jobs enter execution more or less promptly.
  • Enol:
Timeout: the easy value: just until the next probe is to be submitted (1 day if daily, 2 days if every two days).
  • Enol:
One thing that is missing and was discussed on the EVO is what the MPI application will be. It is not really important for the definition of the probes and the statuses, but we have to agree on a
minimum set of functionality to be tested.
  • Gonçalo:
That is another discussion. In I2G times we used a pi calculation (remember?!). It is simple enough, and we can enhance it with MPI directives. Maybe we can reuse it?!
The drawback is that it tests only communication and CPU between instances, not I/O. But do we want to go in that direction?
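The pi calculation mentioned above can be sketched as follows, assuming the classic midpoint-rule integration of 4/(1+x^2) on [0,1]. Plain Python is used here to show only the per-rank decomposition; in the real MPI version each instance would obtain its rank and size from the communicator and the partial sums would be combined with a reduction (e.g. MPI_Reduce).

```python
# Sketch of the pi kernel: each of `size` instances sums every size-th
# midpoint-rule interval, and the partial sums are combined at the end.

def partial_pi(rank, size, n=100000):
    """Partial sum computed by one of `size` instances (ranks 0..size-1)."""
    h = 1.0 / n
    s = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(rank, n, size))
    return h * s

size = 4  # e.g. the 4 slots requested by org.sam.mpi.ComplexJob
pi_estimate = sum(partial_pi(r, size) for r in range(size))
print(round(pi_estimate, 6))  # → 3.141593
```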