The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

FedCloudCMSVCycle

Latest revision as of 15:30, 7 May 2015

General Information

  • Status: Pre-production
  • Start Date: 17/12/2014
  • End Date: -
  • EGI.eu contact: Diego Scardaci / diego.scardaci@egi.eu
  • External contact: Hassen Riahi / Hassen.Riahi@cern.ch, Laurence Field / Laurence.Field@cern.ch

Short Description

The CMS community would like to use EGI Federated Cloud resources to absorb workload peaks in the CMS grid.

Use Case

The Compact Muon Solenoid (CMS) is a general-purpose detector at the Large Hadron Collider (LHC). It is designed to investigate a wide range of physics, including the search for the Higgs boson, extra dimensions, and particles that could make up dark matter. Although it has the same scientific goals as the ATLAS experiment, it uses different technical solutions and a different magnet-system design. This use case foresees the use of the Vac/Vcycle cloud infrastructure brokers, developed by the University of Manchester. The CERN community has developed an OCCI connector for Vcycle to access the EGI Federated Cloud resources.

More information on Vac/Vcycle is available below.

Vac

Vac is a self-managing system for controlling virtual machines running on hypervisors that are not managed by an IaaS system. It is an implementation of the vacuum model, whereby a VM factory runs on each physical machine. Each factory independently decides to start a VM instance, or several instances on a multi-core node. The factory takes care of VM contextualization based on the predetermined configuration for the VO. Currently, one instance is started per job and is automatically shut down when the job terminates and no further payloads are available. Information is exchanged between the host and the guest via a directory on the host which is mounted by the guest. One key piece of information shared is the exit status of the job. If the exit status is "No work available", the factory backs off from creating machines and tries again later. Because there is no central service, this approach avoids a central point of failure and is horizontally scalable. Factories may communicate with each other to achieve target shares for the specific Vac space at the site. With this approach, each VM factory can decide which VO's VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running, and therefore identify which VO's VMs should be started as nodes become available again. For sites where most of the resources are dedicated to a few VOs, this approach provides a straightforward solution that no longer depends on all the Grid or cloud machinery for these jobs.
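The back-off and target-share logic described above can be sketched as follows. This is a minimal illustration of the vacuum model, not Vac's actual code: the class name, the 600-second back-off interval, and the "furthest below target share" heuristic are assumptions made for the example.

```python
BACKOFF_SECONDS = 600  # illustrative back-off interval; Vac's real policy differs

class VMFactory:
    """Sketch of one vacuum-model VM factory running on a hypervisor."""

    def __init__(self, target_shares):
        # target_shares: site-wide share of capacity per VO,
        # e.g. {"cms": 0.6, "atlas": 0.4}
        self.target_shares = target_shares
        # Per-VO time before which no new VMs should be created.
        self.backoff_until = {vo: 0.0 for vo in target_shares}

    def record_exit(self, vo, exit_status, now):
        # The guest reports its job's exit status through the directory
        # the host shares with it; "No work available" triggers a back-off.
        if exit_status == "No work available":
            self.backoff_until[vo] = now + BACKOFF_SECONDS

    def choose_vo(self, peer_counts, now):
        """Pick which VO's VM to start as a node becomes free.

        peer_counts maps VO -> VMs currently running site-wide, as
        learned by querying the other factories peer-to-peer.
        """
        total = sum(peer_counts.values()) or 1
        candidates = [vo for vo in self.target_shares
                      if now >= self.backoff_until[vo]]
        if not candidates:
            return None  # every VO is backing off; try again later
        # Start the VO furthest below its site-wide target share.
        return max(candidates,
                   key=lambda vo: self.target_shares[vo]
                                  - peer_counts.get(vo, 0) / total)
```

For example, with shares of 60% CMS and 40% ATLAS but three CMS VMs and one ATLAS VM running site-wide, a factory would start an ATLAS VM next, unless ATLAS had recently reported "No work available".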

Vcycle

Vcycle is an alternative implementation of the vacuum model which can be used in conjunction with IaaS providers. Whereas an instance of Vac resides on each physical host, a centralized Vcycle service uses the IaaS interface to manage the VM lifecycle following the same logic as implemented in Vac. It supervises the VMs and instantiates or shuts down VMs depending on the load coming from the experiment's central task queue. As with Vac, if the exit status is "No work available", the factory backs off from recreating machines and tries again later. Used this way, Vcycle can provide elastic capacity from the resource providers it has at its disposal.

Additional Files

  • CMS: http://cms.web.cern.ch/
  • Vac: http://www.gridpp.ac.uk/vac/
  • Vcycle: http://www.gridpp.ac.uk/vcycle/