
High Energy Physics


Revision as of 19:56, 28 February 2011


The High Energy Physics (HEP) HUC represents the four LHC experiments at CERN, which rely fully on grid computing for their offline data distribution, processing and analysis. The HEP computing systems are probably the most complex grid-integrated applications currently in production. The services run by the HEP HUC can be classified into:

*Experiment computing systems and services: Software stacks developed by the experiments on top of the WLCG middleware to implement their particular computing models. Nevertheless, successful examples of experiment computing system reuse exist, e.g. the Linear Collider Detector (see Section 5.3.2).
*Middleware services: VO-independent, high-level grid services:
**Data Management: Services for data discovery and data transfer, e.g. the LCG File Catalogue, the gLite File Transfer Service, the WLCG Disk Pool Manager, etc. See Section 6.7 for a complete description of each service; a usage sketch follows after this list.
**Workload Management: Services that allow users to submit and manage generic batch jobs on grid resources, e.g. Ganga (see Section 6.2), the gLite Workload Management System, etc.; a minimal submission sketch follows below.
**Persistency: A framework providing a uniform interface to database access for storing and retrieving different types of scientific data, such as event and conditions data [R 4]; a conceptual sketch of a conditions lookup follows below.
**Monitoring: Application and site monitoring to follow the experiment activities and the state of the grid infrastructure, respectively, e.g. Dashboards (see Section 6.1), SAM/Nagios monitoring and HammerCloud (see Section 6.6); a probe-style sketch follows below.
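
As an illustration of the data-management layer, the sketch below drives two of the standard gLite command-line tools from Python: lfc-ls to browse the LCG File Catalogue namespace and lcg-cp to copy a catalogued file to local disk. The VO name, catalogue path and logical file name are placeholders, it assumes a valid grid proxy and an LFC_HOST environment variable, and tool options can vary between gLite releases, so read this as a hedged sketch rather than reference usage.

 import subprocess
 
 # Placeholder VO and logical file name (LFN); not real data.
 VO = "atlas"
 LFN = "lfn:/grid/atlas/user/example/data.root"
 
 # Browse a directory in the LCG File Catalogue (requires a grid proxy
 # and LFC_HOST pointing at the catalogue server).
 subprocess.run(["lfc-ls", "-l", "/grid/atlas/user/example"], check=True)
 
 # Copy the catalogued file to local disk with lcg-cp.
 subprocess.run(
     ["lcg-cp", "--vo", VO, LFN, "file:///tmp/data.root"],
     check=True,
 )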
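For the workload-management layer, a minimal Ganga job looks roughly as follows. Ganga exposes a Python interface, and this snippet assumes it is typed inside a Ganga session, where Job, Executable and LCG are pre-defined; the executable and its arguments are placeholders.

 # Minimal Ganga job; meant to be run inside a Ganga session,
 # where Job, Executable and LCG are part of the user interface.
 j = Job(name="hello-grid")
 j.application = Executable(exe="/bin/echo", args=["Hello from the grid"])
 j.backend = LCG()   # route the job through the gLite workload management
 j.submit()
 
 # The status moves through submitted/running/completed; outputs are
 # fetched into the job's output directory when the job finishes.
 print(j.status)

Changing the backend attribute (e.g. to a local or batch backend) resubmits the same job description elsewhere, which is the point of a generic workload-management front end.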
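The persistency entry is essentially about keyed access to data such as conditions, which are valid over an interval of runs or times. The snippet below is only a conceptual sketch of that interval-of-validity lookup in plain Python; it is not the API of the persistency framework itself, and the payload values and run numbers are invented.

 import bisect
 
 # Conditions keyed by interval of validity (IOV): each entry is valid
 # from its start run until the start of the next entry.
 # (Invented example values; real conditions live in a database.)
 iov_starts = [1000, 1500, 2000]
 payloads = [
     {"pixel_hv": 150.0},
     {"pixel_hv": 140.0},
     {"pixel_hv": 145.0},
 ]
 
 def lookup(run):
     """Return the conditions payload valid for the given run."""
     i = bisect.bisect_right(iov_starts, run) - 1
     if i < 0:
         raise KeyError(f"no conditions for run {run}")
     return payloads[i]
 
 print(lookup(1750))  # -> {'pixel_hv': 140.0}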
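On the monitoring side, SAM-style site tests follow the Nagios probe convention: a check prints a one-line status and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). The sketch below is a generic probe of that shape against a hypothetical HTTP endpoint; the URL and the latency threshold are invented placeholders, not part of any official SAM probe.

 import sys
 import time
 import urllib.request
 
 # Nagios plugin exit codes.
 OK, WARNING, CRITICAL = 0, 1, 2
 
 # Hypothetical endpoint and latency threshold; placeholders only.
 ENDPOINT = "http://example-site.example.org:8080/status"
 WARN_AFTER = 2.0  # seconds
 
 def main():
     start = time.time()
     try:
         # urlopen raises on connection failure or HTTP error status.
         with urllib.request.urlopen(ENDPOINT, timeout=10):
             pass
     except Exception as exc:
         print(f"CRITICAL: {ENDPOINT} unreachable ({exc})")
         return CRITICAL
     elapsed = time.time() - start
     if elapsed > WARN_AFTER:
         print(f"WARNING: {ENDPOINT} answered in {elapsed:.1f}s")
         return WARNING
     print(f"OK: {ENDPOINT} answered in {elapsed:.1f}s")
     return OK
 
 if __name__ == "__main__":
     sys.exit(main())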