How to define values for SI00, HEP-SPEC, MaxCPUTime, MaxWallClockTime
The GLUE 1.3 schema defines 3 essential objects for computing resources:
- GlueCE - represents a queue in the batch system
- GlueCluster - aggregates GlueCE and GlueSubCluster objects
- GlueSubCluster - represents a disjoint set of sufficiently homogeneous worker nodes
The gLite/EMI WMS currently cannot explicitly direct a job to a particular SubCluster (by forwarding job requirements via the CE to the batch system); sites are therefore generally advised to aggregate different SubClusters under different GlueCluster objects, typically one per CE host. In other words, a GlueCE should be associated with a single GlueCluster, which in turn is associated with a single GlueSubCluster. A future revision of the EMI WMS is foreseen to allow job requirements to be forwarded, at least to CREAM CEs.
For each SubCluster the admin needs to determine reasonably correct values at least for the GlueHostBenchmarkSI00 and GlueHostProcessorOtherDescription attributes; the GlueHostBenchmarkSF00 attribute is less important and is often set to zero to indicate it has not been determined.
These days it is preferred to derive the value of the GlueHostBenchmarkSI00 attribute from an actual measurement of the average HEP-SPEC06 benchmark per core at your site, published as part of GlueHostProcessorOtherDescription.
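As a rough sketch, the conversion commonly used between the two benchmarks is 1 HEP-SPEC06 ≈ 250 SpecInt2000, which is consistent with the example numbers further down this page (7.67 HS06 per core giving an SI00 of 1918). The function name below is illustrative, not part of any tool:

```python
# Sketch: derive GlueHostBenchmarkSI00 from a measured average
# HEP-SPEC06 score per core, assuming the common convention
# 1 HEP-SPEC06 ~= 250 SpecInt2000 (SI00).
def si00_from_hs06(hs06_per_core: float) -> int:
    """Convert an average HEP-SPEC06 per-core score to SI00."""
    return round(hs06_per_core * 250)

# A site measuring 7.67 HS06 per core would publish SI00 = 1918:
print(si00_from_hs06(7.67))  # 1918
```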
Batch system scaling
Your batch system may have been configured to scale the values for a job's maximum CPU and wall-clock time according to the performance of the worker node on which it is running. In that case the published MaxCPUTime / MaxWallClockTime values must refer to the reference machine (for which the scaling factor would be exactly one). In other cases they must refer to the slowest machine in the SubCluster. If the performance of the worker nodes in a particular SubCluster varies widely, consider splitting them into multiple SubClusters in the batch system and configure your CE hosts correspondingly.
In any case each CE queue needs to publish a GlueCECapability that indicates the power of the reference machine:
GlueCECapability: CPUScalingReferenceSI00=N
Here N is the value of GlueHostBenchmarkSI00 (see above) scaled to the reference machine.
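The effect of such scaling can be sketched as follows. This is an illustration of the arithmetic, not the API of any particular batch system; the function and constant names are hypothetical, and the reference value is taken from the example numbers below:

```python
# Sketch (illustrative, not any particular batch system's interface):
# a batch system that scales CPU usage by node speed charges a job
#   charged_time = actual_time * node_si00 / reference_si00
# so the published MaxCPUTime is expressed in reference-machine units.
REFERENCE_SI00 = 1866  # the site's CPUScalingReferenceSI00

def charged_cpu_seconds(actual_seconds: float, node_si00: float) -> float:
    """CPU time charged against the queue limit, in reference-machine units."""
    return actual_seconds * node_si00 / REFERENCE_SI00

# A job burning 3600 CPU seconds on a node benchmarked at 1918 SI00
# is charged somewhat more than an hour against the scaled limit:
print(round(charged_cpu_seconds(3600, 1918)))  # 3700
```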
Realistic example numbers
GlueSubClusterPhysicalCPUs: 1894
GlueSubClusterLogicalCPUs: 9375
GlueHostBenchmarkSI00: 1918
GlueHostProcessorOtherDescription: Cores=4.95,Benchmark=7.67-HEP-SPEC06
GlueCECapability: CPUScalingReferenceSI00=1866
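A quick arithmetic check shows how these example values hang together (assuming the 1 HS06 ≈ 250 SI00 convention mentioned above):

```python
# Consistency check of the example numbers above.
physical_cpus = 1894
logical_cpus = 9375
hs06_per_core = 7.67

# Average cores per physical CPU, as published in
# GlueHostProcessorOtherDescription (Cores=4.95):
print(round(logical_cpus / physical_cpus, 2))  # 4.95

# GlueHostBenchmarkSI00 from the HEP-SPEC06 per-core average,
# assuming 1 HS06 ~= 250 SI00:
print(round(hs06_per_core * 250))  # 1918
```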
MaxCPUTime and MaxWallClockTime
Your batch system may define various queues with different values for the maximum CPU and wall-clock time that apply to the jobs in each queue. The number of queues and their properties are defined per site according to site-specific policies and the needs of the supported communities. A supported VO may negotiate the batch queue properties with the site.
For a particular value of the MaxCPUTime there is no universal recipe to determine a corresponding value for MaxWallClockTime, but here are some considerations:
- Jobs may occasionally or often need to wait for I/O to complete, depending on the type of job, the VO, concurrent activities, etc.
- The higher the number of concurrent jobs on a worker node or in the batch system, the higher the competition for shared resources:
- local disk
- storage element
- A job might get stuck due to a mistake in the job's code or a problem at the site: in those cases it may be counterproductive to allow the job to sit idle and keep its slot occupied for a long time.
To allow for normal I/O wait times while still guarding against stuck jobs keeping their slots occupied, in practice the MaxWallClockTime for a particular queue is usually set to 1.5 or 2 times its MaxCPUTime.
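The rule of thumb above can be sketched as a one-liner (the function name and default factor are illustrative):

```python
# Sketch of the common rule of thumb:
# MaxWallClockTime = 1.5 to 2 times MaxCPUTime.
def max_wallclock(max_cpu_seconds: int, factor: float = 1.5) -> int:
    """Wall-clock limit derived from the CPU limit of a queue."""
    return int(max_cpu_seconds * factor)

# e.g. a queue with a 48-hour MaxCPUTime and a factor of 2
# would publish a 96-hour MaxWallClockTime:
print(max_wallclock(48 * 3600, 2.0) // 3600)  # 96
```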