EGI Workload Manager

Overview

EGI Workload Manager (also known as DIRAC4EGI) is a service provided to the EGI community as

  • a workload management service used to distribute the users' computing tasks among the available resources, both HTC and cloud;
  • a service for managing massively distributed data.

Main features

Workload Manager provides a Workload Management Service (WMS) for High Throughput Computing resources based on DIRAC, which improves overall job throughput compared with native management of grid computing resources. Cloud computing resources are managed in the same uniform way, transparently to the users (a minimal job submission sketch follows the feature list below).

  • The Workload Manager configuration allows computing and storage resources to be chosen appropriately, maximising their usage efficiency for particular user requirements.
  • The Workload Manager File Catalogue includes replica, metadata and provenance functionality, simplifying the development of scientific applications that access data in distributed environments (see the data-handling sketch further below).
  • All Workload Manager functionality is accessible through user-friendly interfaces, including a Web Portal. The service has an open architecture and allows easy extensions for the needs of particular applications.
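
As an illustration of how jobs reach the service, the following minimal sketch uses the standard DIRAC Python API (the Job and Dirac classes) to describe and submit a simple job. It assumes a configured DIRAC client installation and a valid proxy for a VO supported by the EGI Workload Manager; the job name and executable are arbitrary examples, not service defaults.

  # Minimal job submission sketch using the standard DIRAC Python API.
  # Assumes a configured DIRAC client and a valid proxy for a supported VO.
  from DIRAC.Core.Base import Script
  Script.parseCommandLine()  # initialise the DIRAC environment

  from DIRAC.Interfaces.API.Dirac import Dirac
  from DIRAC.Interfaces.API.Job import Job

  job = Job()
  job.setName("hello-workload-manager")  # arbitrary example name
  job.setExecutable("/bin/echo", arguments="Hello from EGI Workload Manager")
  job.setCPUTime(300)  # requested CPU time in seconds

  dirac = Dirac()
  result = dirac.submitJob(job)
  if result["OK"]:
      print("Submitted job with ID:", result["Value"])
  else:
      print("Submission failed:", result["Message"])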

The DIRAC data and job management systems have proven production scalability, with peaks of more than 100,000 concurrently running jobs for the LHCb experiment. This is by far large enough for the computing requirements of environmental science in any sensible time horizon.
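
To illustrate the data-handling side, the sketch below uploads a local file to a storage element, registers it in the DIRAC File Catalogue under a logical file name (LFN), and later downloads a replica by that LFN. The LFN path and the storage element name are hypothetical placeholders, not actual EGI Workload Manager settings.

  # Data management sketch with the standard DIRAC Python API.
  # The LFN and the storage element name below are illustrative placeholders.
  from DIRAC.Core.Base import Script
  Script.parseCommandLine()

  from DIRAC.Interfaces.API.Dirac import Dirac

  dirac = Dirac()
  lfn = "/myvo/user/j/jdoe/data/sample.dat"  # hypothetical logical file name
  storage_element = "MYVO-USER-SE"           # hypothetical storage element name

  # Upload the local file and register it in the File Catalogue under the LFN
  result = dirac.addFile(lfn, "sample.dat", storage_element)
  if not result["OK"]:
      print("Upload failed:", result["Message"])

  # Later, download a replica of the file by its LFN
  result = dirac.getFile(lfn)
  if result["OK"]:
      print("Downloaded", lfn)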

Target User Groups

The service is suited to established Virtual Organization communities, the long tail of users, SMEs and industry:

  • EGI and EGI Federation participants
  • Research communities

This service platform eases scientific computing by overlaying distributed computing resources in a manner transparent to the end user. For example, WeNMR, a structural biology community, uses DIRAC for a number of community services and reported an improvement in its job success rate from 70% to 99% after adopting DIRAC job submission. The benefits of using this service include, but are not limited to:

  • Maximises usage efficiency by choosing computing and storage resources appropriately, in real time
  • Provides a large-scale distributed environment to manage data storage, movement, access and processing
  • Handles job submission and workload distribution in a transparent way
  • Interoperable: handles different storage systems, supporting both cloud and grid capacity
  • Offers a user-friendly interface for choosing among the different DIRAC services and for managing the complete lifecycle, from data discovery to processing and analysis (a status-monitoring sketch follows this list)
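
As a sketch of how that lifecycle can be followed programmatically, the snippet below checks the status of a previously submitted job and fetches its output sandbox once it has finished. The job ID is a placeholder and would normally come from an earlier submission such as the one sketched above.

  # Sketch: monitoring a job and retrieving its output sandbox.
  # job_id is a placeholder; it would come from an earlier submitJob() call.
  from DIRAC.Core.Base import Script
  Script.parseCommandLine()

  from DIRAC.Interfaces.API.Dirac import Dirac

  dirac = Dirac()
  job_id = 12345  # placeholder job ID from a previous submission

  status = dirac.getJobStatus(job_id)
  if status["OK"]:
      print("Status:", status["Value"])

  # Once the job is in a final state, download its output sandbox
  output = dirac.getOutputSandbox(job_id)
  if output["OK"]:
      print("Output sandbox retrieved to the local directory")
  else:
      print("Could not retrieve output:", output["Message"])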


Get Started

  • Requests for service:
  • User Manual: HOWTO22 (https://wiki.egi.eu/wiki/HOWTO22)
  • FAQ: GGUS:DIRAC FAQ

