FedCloudDIRAC
Revision as of 12:33, 7 May 2015
- Status: Use case 1 finished, Use case 2 in progress, Use case 3 in progress
- Start Date: Use case 1 July 2012, Use case 2 April 2013, Use case 3 September 2013
- End Date: Use case 1 March 2013
- EGI.eu contact: Gergely Sipos / email@example.com
- External contact: Víctor Méndez / firstname.lastname@example.org
The DIRAC interware project provides a framework for building ready-to-use distributed computing systems. It has proven to be a useful tool for large international scientific collaborations, integrating their computing activities and distributed computing resources (Grids, Clouds and HTC clusters) into a single system. For Cloud resources, DIRAC is currently integrated with Amazon EC2, OpenNebula, OpenStack and CloudStack. Monte Carlo (MC) simulation campaigns have been run at large scale for the Belle II project, consuming over 10,000 CPU days on Amazon. Until this use case in the FedCloud task force, every deployment had used a single cloud at a time. This work integrates the resources provided by the multiple private clouds of the EGI Federated Cloud together with additional WLCG resources, providing high-level scientific services on top of them through the DIRAC framework. A new design based on a federated hybrid cloud architecture (Rafhyc) has been adopted. Initial integration and scaling tests demonstrate that the architecture is valid for managing federated hybrid cloud IaaS to provide eScience SaaS. The solution has been adopted by LHCb DIRAC for LHCb computing on federated clouds, using cloud endpoints just like any other computing resource.
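The idea of treating each cloud endpoint as just another computing resource and dispatching work across the federation can be sketched as follows. This is a simplified, self-contained illustration: the endpoint names, the `CloudEndpoint` class and the greedy placement policy are hypothetical, not part of DIRAC's actual API, and the real VMDIRAC scheduler handles far more (images, contextualization, priorities).

```python
# Illustrative sketch of federated endpoint selection: each IaaS provider
# is modelled as a generic endpoint with a capacity, and jobs go to the
# endpoint with the most free slots. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class CloudEndpoint:
    name: str          # e.g. an OpenStack, OpenNebula or CloudStack site
    capacity: int      # maximum concurrent VMs
    running: int = 0   # VMs currently active

    @property
    def free_slots(self) -> int:
        return self.capacity - self.running

def dispatch(job_count: int, endpoints: list[CloudEndpoint]) -> dict[str, int]:
    """Greedily place jobs on the endpoint with the most free slots."""
    placement = {ep.name: 0 for ep in endpoints}
    for _ in range(job_count):
        target = max(endpoints, key=lambda ep: ep.free_slots)
        if target.free_slots <= 0:
            break  # the whole federation is saturated
        target.running += 1
        placement[target.name] += 1
    return placement

endpoints = [
    CloudEndpoint("openstack-site", capacity=4),
    CloudEndpoint("opennebula-site", capacity=2),
    CloudEndpoint("cloudstack-site", capacity=2),
]
print(dispatch(6, endpoints))
# {'openstack-site': 4, 'opennebula-site': 1, 'cloudstack-site': 1}
```

The point of the sketch is only that, once every provider is wrapped behind a uniform endpoint interface, scheduling across a federation of heterogeneous clouds reduces to an ordinary resource-ranking problem.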
Use Case 1: Running LHCb Monte Carlo simulation jobs using IaaS in a federated manner, for integration and scaling tests.
Including multiple IaaS providers running OpenStack, OpenNebula and CloudStack (finished)
Use Case 2: VMDIRAC as a portal for VM scheduling, with a third-party job broker.
In September 2013 a collaboration with the EGI FedCloud WeNMR project started, aiming to use the VMDIRAC portal as a VM scheduler. The DIRAC broker is not involved in this use case because the job payload for the VCing VM is provided by the ToPoS server as described above. The DIRAC team used ssh contextualization to run the VM Monitor Agent inside the VMs; this agent is in charge of updating the VM status for VM management in the DIRAC portal, and at the same time checks CPU activity in the VM: if there is no activity within a certain time window, it stops the VM automatically. The goal of this effort is twofold:
WeNMR can take advantage of the VM machinery in DIRAC, together with its web monitoring and management
VMDIRAC can be proven as a tool for VM scheduling without using the DIRAC broker
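The idle-VM halting behaviour described above, where the agent watches CPU activity over a time window and stops the VM when nothing is running, can be sketched as follows. This is a simplified, self-contained sketch: the `IdleWatchdog` class, its parameters and the load values are illustrative assumptions, and the real VM Monitor Agent additionally reports VM status back to the DIRAC portal and reads actual CPU counters.

```python
# Simplified sketch of the VM Monitor Agent's idle check: keep a sliding
# window of CPU-utilisation samples and flag the VM for shutdown once every
# sample in a full window falls below a threshold. Status reporting to the
# DIRAC portal, which the real agent also performs, is omitted here.
from collections import deque

class IdleWatchdog:
    def __init__(self, window_size: int = 5, threshold: float = 0.05):
        self.samples = deque(maxlen=window_size)  # recent CPU loads, 0.0-1.0
        self.threshold = threshold

    def record(self, cpu_load: float) -> bool:
        """Record one sample; return True when the VM should be stopped."""
        self.samples.append(cpu_load)
        window_full = len(self.samples) == self.samples.maxlen
        return window_full and all(s < self.threshold for s in self.samples)

watchdog = IdleWatchdog(window_size=3, threshold=0.05)
for load in (0.80, 0.01, 0.02, 0.00):
    stop = watchdog.record(load)
print(stop)  # True: the last three samples were all below the 5% threshold
```

Keeping a full window of samples, rather than acting on a single reading, avoids shutting down a VM that is merely between payloads for a moment.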