https://wiki.egi.eu/w/api.php?action=feedcontributions&user=Zashah&feedformat=atomEGIWiki - User contributions [en]2024-03-29T07:51:38ZUser contributionsMediaWiki 1.37.1https://wiki.egi.eu/w/index.php?title=Federated_AAI_Requirements&diff=59655Federated AAI Requirements2013-09-06T13:06:00Z<p>Zashah: </p>
<hr />
<div>This page tracks resource providers' requirements regarding AAI. RPs may have different requirements for authenticating users. We'll try to track the pieces of information that are required at each site to guide decisions about which solutions and configurations are required for the federation. Optional attributes are marked in parentheses.<br> <br />
<br />
{| border="1" width="200" cellspacing="1" cellpadding="1"<br />
|-<br />
! scope="col" | RP<br> <br />
! scope="col" | Full Name<br> <br />
! scope="col" | Email<br> <br />
! scope="col" | Nationality<br> <br />
! scope="col" | ePPN <br />
! scope="col" | Organization <br />
! scope="col" | Other (Please add column before this one)<br> <br />
! scope="col" | Attributes may be derived<br />
|-<br />
| <br />
BSC<br> <br />
<br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| CESGA<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| CESNET<br> <br />
| align="center" | x<br> <br />
| align="center" | x<br> <br />
| align="center" | <br> <br />
| align="center" | x <br />
| align="center" | (x) <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| CETA-CIEMAT<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| Cyfronet<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| FZ Jülich<br> <br />
| align="center" | x<br> <br />
| align="center" | x<br> <br />
| align="center" | x<br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| GRIF<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| GRNET<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| GWDG<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| IFCA<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| IGI <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | &nbsp;?<br />
|-<br />
| IPHC<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| CC-IN2P3<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| Oxford<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| SARA<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| STFC<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| TCD<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| KTH<br> <br />
| align="center" | x<br> <br />
| align="center" | x<br> <br />
| align="center" | x<br> <br />
| align="center" | x<br />
| align="center" | (x)<br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| SZTAKI<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| INFN-Napoli<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| IISAS<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| PLOCAN<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|-<br />
| 100 Percent IT Ltd<br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br> <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br> <br />
| align="center" | &nbsp;?<br />
|}<br />
<br />
<br></div>Zashahhttps://wiki.egi.eu/w/index.php?title=FedClouds_QR13&diff=59178FedClouds QR132013-08-19T09:37:48Z<p>Zashah: </p>
<hr />
<div>[[Category: Technology ]]<br />
[[Category: Fedcloud-tf]]<br />
This wiki page collects the contributions for the '''EGI Quarterly Report 13''', covering ''May 2013'' to ''July 2013''.<br> Reports must be submitted by all partners who accounted effort on TSA2.6N.<br />
<br />
Please report your activities in the table below. If your institution/NGI is missing from the table, add a row or contact peter.solagna(at)egi.eu.<br />
<br />
<br />
== Federated cloud partners report ==<br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|-<br />
! Partner accounting effort for TSA2.6N <br />
! NGI <br />
! Contact person <br />
! Resource centre(s) contributing to the task force (if any) <br />
! Activities carried out during the quarter <br />
! Plans for the next quarter<br />
|-<br />
| CESNET <br />
| NGI_CZ <br />
| Miroslav Ruda <br />
| <br />
| <br />
*Contributions to the Blueprint Document <br />
*Improvements in cross-platform compatibility of the rOCCI client <br />
*Deployment of the vmcatcher tool <br />
*Supporting Almere Summer School <br />
*Supporting OpenModeller/BioVel use case <br />
*Participation in weekly task meetings <br />
*Participation in meetings of the accounting work group <br />
*Supporting users of the Perun management server <br />
*Maintenance of local computing and storage resources committed to the TF<br />
<br />
| <br />
|-<br />
| CNRS <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| FCTSG <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| JUELICH <br />
| NGI_DE <br />
| Bjorn Hagemeier <br />
| FZJ <br />
| <br />
*Blueprint Document <br />
*OCCI deployment and resolving issues with various OCCI implementations <br />
*Support CSGF use case to get access to FedCloud resources <br />
*Enabling glancepush and vmcatcher <br />
*Advertising Swift object storage at FZJ <br />
*Discussions about requirements for use cases using CDMI <br />
*Supporting Almere Summer School by deploying their image <br />
*Updating APEL SSM client to send new format usage records<br />
<br />
| <br />
|-<br />
| KIT <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH <br />
| <br />
NDGF<br />
<br />
| Zeeshan Ali Shah <br />
| KTH <br />
| <br />
*Contributions to the Blueprint Document <br />
*Testing and deployment of the rOCCI interface <br />
*Deployment of the vmcatcher tool <br />
*Deployment of Federated Authentication interface <br />
*Participation in weekly task meetings <br />
*Supporting users of the Perun management server <br />
*Maintenance of local computing and storage resources committed to the TF<br />
<br />
| <br />
|-<br />
| LUH <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| OXFORD <br />
| NGI_UK <br />
| David Wallom <br />
| <br />
| <br />
*Chairing the task force, including all meetings and other activities. <br />
*Representing the EGI FCT at the Almere Summer School<br />
<br />
| <br />
|-<br />
| UI SAV <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:WorkGroups:Scenario3&diff=53371Fedcloud-tf:WorkGroups:Scenario32013-03-27T13:58:57Z<p>Zashah: /* Resource providers LDAP servers */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{Fedcloud-tf:WorkGroups:Menu}} {{TOC_right}} <br />
<br />
== Scenario 3: Integrating information from multiple resource providers ==<br />
<br />
<font color="red">Leader: David Wallom, OeRC</font> <br />
<br />
== Scenario collaborators ==<br />
<br />
{| border="1"<br />
|-<br />
! Role <br />
! Institution <br />
! Name<br />
|-<br />
| Scenario leader <br />
| OeRC <br />
| David Wallom<br />
|-<br />
| Collaborator <br />
| OeRC <br />
| Matteo Turilli<br />
|-<br />
| Collaborator <br />
| EGI.eu <br />
| Peter Solagna<br />
|-<br />
| Collaborator <br />
| INFN <br />
| Elisabetta Ronchieri<br />
|}<br />
<br />
== Information that should be published by a cloud service ==<br />
<br />
The following information was identified during the TF F2F meeting: <br />
<br />
'''Please add more points or edit/comment on the list.''' <br />
<br />
#What is the name of the resource and what type of interface can I use to manage instances on the resource? <br />
##What is the endpoint I should contact to interact with the cloud management interface? (e.g. the URL of the web service/portal) <br />
#What are the AuthN and AuthZ rules that operate on your cloud? <br />
#What instances are already installed on the resource and am I allowed to upload my own instances? <br />
#If I am able to upload instances what format of instances does the resource accept? <br />
#Is there a data interface available and if so what is it? <br />
#What is the overall size of the resource? <br />
#Are instance templates defined that limit the choice of instance scales I am able to run? <br />
#What type of virtual network can I establish on the resource? <br />
#Does the resource support cloud scalability through managed bursting to another external provider?<br />
<br />
The following are questions about dynamic information: <br />
<br />
#I have a virtual instance that requires X, Y, Z resources; does your cloud have A&gt;X, B&gt;Y, C&gt;Z resources available? <br />
#My instance is short-lived; is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur? <br />
#What is the charging scheme, and how much will using your cloud cost?<br />
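The first dynamic-information question above amounts to a simple capacity comparison. A minimal sketch in Python, with made-up attribute names (`cores`, `ram_gb`, `disk_gb`) standing in for whatever quantities the information system ends up publishing:

```python
def fits(request, advertised):
    """True if every requested quantity is covered by the advertised capacity."""
    return all(advertised.get(key, 0) >= needed for key, needed in request.items())

# A VM request (X, Y, Z) checked against one provider's advertised free
# resources (A, B, C); both sides are illustrative values.
request = {"cores": 4, "ram_gb": 8, "disk_gb": 100}
provider = {"cores": 24, "ram_gb": 96, "disk_gb": 500}

print(fits(request, provider))  # True: A>X, B>Y and C>Z all hold
```

An attribute the provider does not advertise at all counts as unavailable, which errs on the safe side for scheduling decisions.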
<br />
== More information to publish==<br />
=== Storage capabilities===<br />
In this section the storage capability information to be published is analyzed, without (necessarily) being limited to what is possible in the currently available GLUE2 schema.<br />
<br />
==== Relevant information ====<br />
The following table contains what it is possible to inspect through the OCCI 1.2 specification:<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|occi.storage.size<br />
|Size of the storage resource instance<br />
|-<br />
|occi.storage.status<br />
|Status of the storage instance (online,offline,backup,snapshot..)<br />
|}<br />
<br />
These attributes describe an actual instance, which is not going to be published in the information system. What we want to advertise are the capabilities that can be requested from the cloud service.<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|Max Storage installed in the site<br />
|This is the total amount of disk space that the cloud site provides as virtualized storage resource.<br />
|-<br />
|Max size of a single virtual storage resource<br />
|This is the maximum size of a single virtual storage resource that can be requested. <br />
|-<br />
|Interfaces<br />
|How users and VMs can interact with the storage resources. E.g. CDMI.<br />
|-<br />
|Storage throughput<br />
|Max I/O speed allowed to VMs writing/reading to the storage area.<br />
|-<br />
|Capabilities<br />
|These are additional capabilities of a storage service, on top of create/delete/link. Examples could be backup or snapshot.<br />
|}<br />
<br />
''Please add more options to the table'', or participate in the discussion on the FedCloud task force mailing list.<br />
<br />
=== Network capabilities ===<br />
<br />
The following table contains some of the Network capabilities that could be advertised through the information system:<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|Internal Bandwidth<br />
|The maximum bandwidth available between the virtual machines in the cloud<br />
|-<br />
|Outbound bandwidth<br />
|Bandwidth that can be allocated to each virtual machine for traffic outside the cloud<br />
|-<br />
|Average latency<br />
|If the VMs are deployed at different physical sites, the latency between instances can be higher and affect network performance. Low network latency values indicate that the virtual machines are physically instantiated in the same network.<br />
|-<br />
|IPv6 enabled<br />
|Can the virtual network be configured for IPv6?<br />
|-<br />
|Virtual private network enabled<br />
|Is it possible to set up a virtual private network, in order to increase the security and the isolation of the instantiated machines?<br />
|}<br />
<br />
== How to render those information in GLUE2 ==<br />
<br />
'''Note''': the BDII service speaks only GLUE2. The cloud information needs to fit into the current set of GLUE2 entities. If the schema is extended to include cloud-specific entities, the extension needs to be officially approved by OGF and implemented in the ''glue-schema'' and ''glue-validator'' components deployed with the BDII. <br />
<br />
=== Use the currently available GLUE2.0 entities ===<br />
<br />
Currently GLUE2 includes two main conceptual models, for Computing Elements and Storage Elements. These should be used to model cloud capabilities while remaining compliant with the current GLUE2.0 schema. <br />
<br />
==== Capabilities for cloud services ====<br />
<br />
''Note: '''bold''' capabilities are new, i.e. not already in the GLUE2 specification. Adding new capabilities does not require an extension of the GLUE2 schema.<br>'' ''Please'' add new ''high-level'' capabilities if you feel that something is missing. These capabilities are used in the following entities. <br />
<br />
{| border="1"<br />
|-<br />
! Capability <br />
! Description<br />
|-<br />
| '''cloud.VMmanagement''' <br />
| This is the '''standard''' capability that every cloud service should publish if it allows users to instantiate/suspend/delete virtual machines<br />
|-<br />
| '''cloud.virtualImagesUpload''' <br />
| This is the capability that allows users to upload their own virtual images through the cloud interface<br />
|-<br />
| security.authentication/security.authorization <br />
| I would keep these capabilities, given that every cloud provider has authentication<br />
|-<br />
| <br />
| <br />
|}<br />
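As a sketch of how a consumer might use these capability strings once they are published, the following hypothetical Python snippet filters services by capability. The service names and the exact capability sets attached to them are illustrative, not real published data:

```python
# Illustrative service records carrying GLUE2-style open-enum capability strings.
services = [
    {"name": "site-a", "capabilities": {"cloud.VMmanagement", "cloud.virtualImagesUpload"}},
    {"name": "site-b", "capabilities": {"cloud.VMmanagement", "security.authentication"}},
]

def with_capability(services, capability):
    """Return the names of the services advertising the given capability."""
    return [s["name"] for s in services if capability in s["capabilities"]]

print(with_capability(services, "cloud.virtualImagesUpload"))  # ['site-a']
```

Because Capability_t is an open enumeration, a consumer written this way keeps working unchanged when new capability strings are added.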
<br />
==== Computing Service entity description ====<br />
<br />
*This Service is used to describe the computing resource itself, decoupled from the Grid endpoint. <br />
*Attributes that need to be provided by the resource providers are in '''bold'''<br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| Creation time <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| Validity <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| ID <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| '''Name''' <br />
| String <br />
| 1 <br />
| Human-readable name. It can be used to answer the question "what is the name of the resource?"<br />
|-<br />
| OtherInfo <br />
| String <br />
| n <br />
| Placeholder to add information that does not fit into any other attribute. Cloud information that cannot be mapped in other attributes could be added here.<br />
|-<br />
| '''Capability''' <br />
| Capability_t <br />
| n <br />
| This attribute lists the capabilities available for this service; currently the type ''Capability_t'' does not include specific cloud capabilities. Being an open enum type, it can be extended with additional capabilities. Some of the already available capabilities are security.accounting, security.authentication and information.logging. We could consider adding capabilities like "''cloud.vm.uploadImage''" to cover the question "am I allowed to upload my own instances?". To identify cloud services there would be a need for a new capability, common to all cloud services regardless of their specific capabilities, like "cloud.managementSystem" (nb: just an example). ''Resource providers, at this design stage, could provide just descriptions of the capabilities they would like to publish. I (Peter) will try to group them, proposing labels for the different capabilities.''<br />
|-<br />
| '''Type''' <br />
| ServiceType_t <br />
| 1 <br />
| Type of service in a reverse namespace model, e.g.: org.glite.lb or org.glite.wms. It could be ''org.opennebula'', ''org.stratuslab'' or ''com.cloudsigma''<br />
|}<br />
<br />
There are a number of further attributes (static and dynamic) that could be used by cloud services, such as StatusInfo, TotalJobs, RunningJobs, etc. Please note that '''Location''' is a GLUE2 entity that can be linked to the Service entity; this could answer the ''"where is the cloud facility located?"'' question. <br />
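As an illustration of how such a Service entry might look in the BDII's LDAP rendering, here is a Python sketch that serialises the attributes discussed above to LDIF text. The attribute names (`GLUE2ServiceID`, `GLUE2ServiceType`, `GLUE2ServiceCapability`, ...) follow the GLUE2 LDAP naming convention, but the DN layout and all values are invented for the example:

```python
def to_ldif(dn, attributes):
    """Render one LDAP entry as LDIF text; multi-valued attributes repeat the key."""
    lines = [f"dn: {dn}"]
    for key, values in attributes.items():
        for value in (values if isinstance(values, list) else [values]):
            lines.append(f"{key}: {value}")
    return "\n".join(lines)

entry = to_ldif(
    "GLUE2ServiceID=cloud.example.org,o=glue",   # hypothetical DN under the GLUE base
    {
        "objectClass": ["GLUE2Entity", "GLUE2Service"],
        "GLUE2ServiceID": "cloud.example.org",
        "GLUE2EntityName": "Example Cloud",       # the human-readable "Name" attribute
        "GLUE2ServiceType": "org.opennebula",     # reverse-namespace service type
        "GLUE2ServiceCapability": [               # open-enum capabilities, one per line
            "cloud.VMmanagement",
            "cloud.virtualImagesUpload",
        ],
    },
)
print(entry)
```

Multi-valued attributes such as Capability simply repeat the attribute name on successive lines, which is how an LDAP information system publishes a list.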
<br />
=== ComputingEndpoint description ===<br />
<br />
Every ComputingService has '''one or more''' associated ComputingEndpoints. The endpoint is used to create, control and monitor computational activities.<br> <br />
<br />
*Resource providers should provide the information to create one endpoint for each interface they're exposing for the cloud service.<br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| CreationTime <br />
| .. <br />
| .. <br />
| I will skip the most general attributes, like OtherInfo and Capability (described above).<br />
|-<br />
| '''URL''' <br />
| URI <br />
| 1 <br />
| Network location of the endpoint.<br />
|-<br />
| '''Capability''' <br />
| Capability_t <br />
| 0..n <br />
| It is the same field as in the Service entity. Some capabilities could be interface-specific. I would replicate all the general capabilities for this entity as well.<br />
|-<br />
| '''Technology''' <br />
| EndpointTechnology_t <br />
| 1 <br />
| Examples are "webservice" and "corba". We could add "webportal" or something like this to clarify that the endpoint refers to a web application.<br />
|-<br />
| '''InterFaceName''' <br />
| InterFaceName_t <br />
| 1 (mandatory) <br />
| The interface in the cloud case could be ''OCCI'', ''EC2'', ''jclouds'' or "webinterface". This can answer the question: "what type of interface can I use to manage instances on the resource?"<br />
|-<br />
| '''InterfaceVersion''' <br />
| .. <br />
| .. <br />
| No description needed.<br />
|-<br />
| '''Supported profile''' <br />
| URI <br />
| * <br />
| Here we can define a set of profiles for the authN/authZ of the users, like ''uri:sec:x509''.<br />
|}<br />
<br />
==== ExecutionEnvironment ====<br />
<br />
The ExecutionEnvironment class describes the hardware and operating system environment in which a job will run. It could be used to describe the VM images already available in the Cloud service. <br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| '''Platform''' <br />
| Platform_t <br />
| 1 <br />
| The platform architecture; possible values are: amd64, i386, itanium, powerpc, sparc<br />
|-<br />
| TotalInstances/used instances <br />
| - <br />
| - <br />
| These attributes are not relevant in a cloud environment, where execution environments are deployed dynamically.<br />
|-<br />
| PhysicalCPUs <br />
| UInt32 <br />
| 0..1 <br />
| The physical CPUs are, I would say, not relevant in a virtualised environment.<br />
|-<br />
| '''LogicalCPUs''' <br />
| UInt32 <br />
| 0..1 <br />
| This attribute could be used to express the '''maximum''' number of cores that can be instantiated in a single VM of this type (likely common to all execution environments of the same cloud service).<br />
|-<br />
| '''MainMemorySize''' <br />
| UInt64 <br />
| 1 <br />
| Maximum physical memory that can be instantiated on a single VM.<br />
|-<br />
| <br />
*'''OSFamily''' <br />
*'''OSName''' <br />
*'''OSVersion'''<br />
<br />
| (*) <br />
| 1 <br />
| Attributes which define the available operating system. There will be an ExecutionEnvironment for every virtual machine image available in the cloud service. We should define placeholders to create an ExecutionEnvironment ''stub'' describing the max cores/memory for virtual machines uploaded by a user.<br />
|}<br />
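A consumer could use these ExecutionEnvironment attributes to select a suitable image. The sketch below is hypothetical; the image records are modelled on the CESNET entries in the EGI Community Forum 2012 demo table later on this page, with `MainMemorySize` expressed in MB per the GLUE2 convention (96 GB = 98304 MB):

```python
# Hypothetical published images, modelled on the CESNET demo-table entries.
images = [
    {"OSFamily": "linux", "OSName": "opensuse", "OSVersion": "11.4",
     "LogicalCPUs": 24, "MainMemorySize": 98304},
    {"OSFamily": "linux", "OSName": "debian", "OSVersion": "6.0.3",
     "LogicalCPUs": 24, "MainMemorySize": 98304},
]

def matching_images(images, os_name, min_cpus=1, min_memory_mb=0):
    """Select execution environments by OS name and minimum size requirements."""
    return [img for img in images
            if img["OSName"] == os_name
            and img["LogicalCPUs"] >= min_cpus
            and img["MainMemorySize"] >= min_memory_mb]

print([img["OSVersion"] for img in matching_images(images, "debian")])  # ['6.0.3']
```

Since LogicalCPUs and MainMemorySize express maxima rather than fixed sizes, the same filter works both for picking a pre-installed image and for checking whether a user-uploaded VM would fit the provider's stub environment.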
<br />
=== Deploy a new set of entities ===<br />
<br />
This is the next step: define cloud-specific GLUE entities that extend the GLUE2 schema in order to publish cloud services in a standard way. <!-- What to model?<br />
What is the name of the resource and what type of interface can I use to manage instances on the resource?<br />
What is the endpoint I should contact to interact with the cloud management interface? (E.g. the url of the web-service/portal) <br />
What are the AuthN and AuthZ rules that operate on your cloud?<br />
What instances are already installed on the resource and am I allowed to upload my own instances?<br />
If I am able to upload instances what format of instances does the resource accept?<br />
Is there a data interface available and if so what is it?<br />
What is the overall size of the resource?<br />
Are instance templates defined that limit the choice of instance scales I am able to run?<br />
What type of virtual network can I establish on the resource?<br />
Does the resource support cloud scalability through managed bursting to another external provider? <br />
<br />
The following are questions on the dynamic information;<br />
<br />
I have a virtual instance that requires X,Y,Z resources, does your cloud have A>X, B>Y,C>Z resource available?<br />
My instance is short lived is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur?<br />
What is the charging scheme and how much will using your cloud cost? <br />
--> <br />
<br />
= Technical implementation =<br />
<br />
For a first demo the best technical choice is OpenLDAP, which is available on almost all *nix machines. Moreover, OpenLDAP is the server used by the gLite BDIIs, so it would be easy to reuse the configuration file set-up used for the GRIS or the GIIS. <br />
<br />
* Host of the ldap server: '''ldap://fedclouds-is.hellasgrid.gr:2170'''<br />
** Backup hosted by CESGA: '''fedclouds-is2.hellasgrid.gr'''<br />
*Use the GLUE20.schema in the ''slapd.conf'' file to enable all the GLUE2.0 entities.<br />
<br />
== Resource providers LDAP servers ==<br />
<br />
'''!!! NEW&nbsp;!!!:''' ''IMPORTANT'': <span style="color:#DC143C">Fill the table with the address of the LDAP server set up as an information provider for your test bed.</span> <br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! Resource Centre <br />
! Address of the LDAP server "ldap://hostname:2170"<br />
|-<br />
| CESNET <br />
| CESNET Cloud <br />
| ldap://carach5.ics.muni.cz:2170<br />
|-<br />
| KTH <br />
| KTH Cloud <br />
| ldap://egi.cloud.pdc.kth.se:2170<br />
|-<br />
| GWDG <br />
| GWDG Cloud <br />
| ldap://one.cloud.gwdg.de:2170<br />
|-<br />
| SARA <br />
| SARA Cloud <br />
| ldap://bdii.cloud.sara.nl:2170<br />
|-<br />
| CESGA<br />
| CESGA Cloud<br />
| ldap://ui.egi.cesga.es:2170<br />
|-<br />
| CYFRONET<br />
| CYFRONET Cloud<br />
| ldap://cloud-lab.grid.cyf-kr.edu.pl:2170&nbsp;<br />
|-<br />
| TCD<br />
| TCD Cloud<br />
| ldap://cagnode42.cs.tcd.ie:2170<br />
|-<br />
|GRNET<br />
|GRNET_OKEANOS<br />
|ldap://okeanos-is.hellasgrid.gr:2170 <br />
|-<br />
| FZJ<br />
| FZJ Testbed<br />
| ldap://egi-cloud.zam.kfa-juelich.de:2170<br />
|-<br />
| CC-IN2P3<br />
| CC-IN2P3 Cloud<br />
| ldap://cccldbdii01.in2p3.fr:2170<br />
|-<br />
| INFN CNAF<br />
| WNoDeS Cloud<br />
| ldap://test-wnodes-is.cnaf.infn.it:2170<br />
|}<br />
<br />
'''EGI Community Forum 2012 demo''' <br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! RP contact name <br />
! Resource Centre name to be published (was Site Name) <br />
! Country <br />
! Capabilities to be published (specify the endpoints supporting the capabilities!) <br />
! Other info to publish <br />
! VM Manager <br />
! V.Images available (OSFamily,OSName,OSVersion) <br />
! Max cores <br />
! Max CPU speed <br />
! Max RAM<br />
|-<br />
| CESNET <br />
| Miroslav Ruda <br />
| CESNET Cloud <br />
| Czech Republic <br />
| cloud.managementSystem, cloud.vm.uploadImage, cloud.data.cdmi <br />
| <br />
| XEN <br />
| 1.) Linux, OpenSUSE, 11.4<br>2.) Linux, Debian, 6.0.3 <br />
| 24 <br />
| <br />
| 96GB<br />
|-<br />
| KTH <br />
| Zeeshan Ali&nbsp;Shah <br />
| KTH-PDC Cloud <br />
| Sweden <br />
| cloud.managementSystem, cloud.vm.customimage, cloud.data.cdmi <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GWDG <br />
| Piotr Kasprzak <br />
| GWDG Cloud <br />
| Germany <br />
| cloud.managementSystem, cloud.vm.uploadImage <br />
| <br />
| KVM <br />
| 1.) Linux, Scientific Linux, 6.1<br>2.) Linux, Ubuntu, 11.10 <br />
| 8 <br />
| 2.4 GHZ <br />
| 16GB<br />
|-<br />
| CYFRONET <br />
| Jan Meizner <br />
| CYFRONET Cloud <br />
| Poland <br />
| cloud.managementSystem, cloud.vm.uploadImage <br />
| <br />
| KVM <br />
| <br />
| 24 <br />
| <br />
| 48GB<br />
|-<br />
| CESGA <br />
| Alvaro Simon <br />
| CESGA Cloud <br />
| Spain <br />
| cloud.managementSystem, cloud.vm.customimage <br />
| <br />
| KVM <br />
| 1.) Linux, Scientific Linux, 5.5 <br />
| 264<br />
| 2.6 GHZ<br />
| 264GB<br />
|}<br />
<br />
== Distributed implementation ==<br />
<br />
Publishing correct information in the information system must be the responsibility of the resource provider. Building a decentralized information system requires:<br />
* A central information system instance that pulls the information from the resource information providers<br />
* A distributed information system to be queried by the central one (one instance for every cloud provider)<br />
<br />
<br />
A possible strategy is to base everything on the Top-BDII, which is the currently available technology. Ideally a single LDAP server is sufficient per resource provider. The Top-BDII can be configured to pull the data from the different LDAP servers and merge it.<br />
<br />
Pros:<br />
* No need to develop transport and updating mechanisms <br />
* Resource providers need only to produce one LDIF file and load it into an LDAP server.<br />
Cons:<br />
* The Top-BDII has shown poor performance when publishing dynamic data (not an issue for testing as long as there are few resource providers)<br />
* The published information must be 100% GLUE2-compliant<br />
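In this set-up, the Top-BDII's list of source LDAP servers would simply enumerate the providers' endpoints, one per line. A sketch follows; the file name and the GLUE2 base DN are illustrative, so check the BDII package documentation for the exact format:<br />

```bash
# Illustrative Top-BDII URL list: one line per resource provider's
# LDAP server; the addresses are taken from the table above.
cat > cloud-urls.conf <<'EOF'
CESNET ldap://carach5.ics.muni.cz:2170/GLUE2GroupID=grid,o=glue
GWDG ldap://one.cloud.gwdg.de:2170/GLUE2GroupID=grid,o=glue
KTH ldap://egi.cloud.pdc.kth.se:2170/GLUE2GroupID=grid,o=glue
EOF
grep -c 'ldap://' cloud-urls.conf   # one URL per provider: 3
```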
<br />
'''Some guidelines for the LDAP installation can be found [[Fedclouds BDII instructions|here]].'''<br />
<br />
=== Example queries ===<br />
<br />
# Get all the endpoints published by the resource providers, with the interface name and the version.<br />
<source lang=bash><br />
$ ldapsearch -x -H ldap://test03.egi.cesga.es:2170 -b o=glue '(objectClass=GLUE2Endpoint)' | perl -p00e 's/\r?\n //g' | grep -E 'GLUE2EndpointURL|GLUE2EndpointInterfaceName|GLUE2EndpointInterfaceVersion|dn\:' | awk '{printf("%s%s", $0, (NR%4 ? " === " : "\n"))}' | awk '{print ""$2" "$5" "$8" "$11}' | awk -F "GLUE2DomainID=" '{print $2}' | awk -F "," '{print $1 " "$3}' | awk '{print $1" "$4" "$3" "$5}' | sort<br />
<br />
CC-IN2P3 OCCI 1.1 https://ccocci.in2p3.fr:8788<br />
CESGA OCCI 1.1 http://cloud.cesga.es:3200<br />
CESGA OCCI 1.1 http://meghacloud.cesga.es:3200<br />
CESGA OCCI 1.1 https://cloud.cesga.es:3202<br />
CESGA OCCI 1.1 https://meghacloud.cesga.es:3202<br />
CESNET CDMI 1.0 https://carach3.ics.muni.cz:8080/<br />
CESNET OCA 3.4.1 https://carach5.ics.muni.cz:6443/RPC2<br />
CESNET OCCI 0.8 https://carach5.ics.muni.cz:9443/<br />
CESNET OCCI 1.1 http://carach5.ics.muni.cz:3333/<br />
CESNET OCCI 1.1 https://carach5.ics.muni.cz:10443/<br />
CESNET Sunstone 3.4.1 https://carach5.ics.muni.cz/<br />
csTCDie OCCI 1.1 https://cagnode42.cs.tcd.ie<br />
csTCDie XML-RPC 1.4 https://cagnode42.cs.tcd.ie:2634<br />
CYFRONET OCCI 1.1 http://cloud-lab.grid.cyf-kr.edu.pl:3200/<br />
CYFRONET OCCI 1.1 https://cloud-lab.grid.cyf-kr.edu.pl:3443/<br />
FZJ OCCI 1.1 https://egi-cloud.zam.kfa-juelich.de:8788/<br />
GRNET_OKEANOS OCCI 1.1 http://okeanos-occi.hellasgrid.gr:8888<br />
GWDG CDMI 1.0.1 http://cdmi.cloud.gwdg.de:4001<br />
GWDG CDMI 1.0.1 https://cdmi.cloud.gwdg.de:4000<br />
GWDG OCCI 0.8 http://occi.cloud.gwdg.de:3400<br />
GWDG OCCI 1.1 http://occi.cloud.gwdg.de:3200<br />
GWDG OCCI 1.1 http://occi.cloud.gwdg.de:5000<br />
GWDG OCCI 1.1 https://occi.cloud.gwdg.de:3100<br />
INFN_CNAF OCCI 1.1 https://test-wnodes-web01.cnaf.infn.it:8443/<br />
SARA OCCI 0.8 https://occi.cloud.sara.nl/<br />
SARA OCCI 1.1 https://occi11.cloud.sara.nl/<br />
</source><br />
<br />
= Implementation of the cloud service types in GOCDB =<br />
<br />
GOCDB is the EGI service registry. It contains the service endpoints ([https://goc.egi.eu/portal/index.php?Page_Type=View_Object&object_id=23483&grid_id=0 example]), the grid site topology, and other information such as the downtime register and contact lists. GOCDB does not contain dynamic information such as the number of cores available or resource capacity.<br />
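GOCDB also exposes the registry through its public programmatic interface (PI). Assuming the ''get_service_endpoint'' method of the public PI, the endpoints registered under one of the service types proposed below could be listed with a query composed like this:<br />

```bash
# Compose a GOCDB PI query URL for all endpoints of a given service type.
# The get_service_endpoint method is assumed from the public GOCDB PI;
# the actual fetch is left commented out.
base='https://goc.egi.eu/gocdbpi/public/'
stype='org.ogf.OCCI'
url="${base}?method=get_service_endpoint&service_type=${stype}"
echo "$url"
# curl -s "$url"   # would return an XML list of matching endpoints
```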
<br />
The plan for the inclusion in GOCDB could be the following:<br />
# Definition of the new service types:<br />
## Resource providers service types:<br />
##* ''org.ogf.OCCI'': service exposing OCCI interface<br />
##* ''org.snia.CDMI'': service exposing CDMI interface <br />
##* ''org.opennebula.OCA'': OpenNebula management interface<br />
##* ''eu.egi.cloud-site-bdii'': cloud site information provider<br />
##* ''eu.egi.cloud-accounting'': Accounting data parser<br />
##* ..more?<br />
## Infrastructure services<br />
##* ''org.stratuslab.marketplace'': StratusLab marketplace<br />
##* .. more?<br />
# Registration of the cloud resource centres<br />
## As non-EGI for the first iteration (the flag can be changed easily)<br />
## Fill the resource provider contacts (site manager, security officer..)<br />
## Add the service endpoints to the site</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=53370Fedcloud-tf:Testbed2013-03-27T13:58:15Z<p>Zashah: /* Endpoints */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
'''This page is for members of the Federated Cloud Task Force. If you wish to access resources from the EGI Federated Cloud, then please consult with the [https://wiki.egi.eu/wiki/Fedcloud-tf:UserCommunities User Communities section]''' <br />
<br />
<br> <br />
<br />
== Technologies ==<br />
<br />
The federation test bed does not mandate which VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<br />
<!--<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meanwhile, you may create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#subject should contain "[FedCloud registration]"; <br />
#body should contain a name/organization, a contact email address and the DN from their x509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource which gives users the flexibility to customize the system according to their needs. This includes the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support]. Except for the fedcloud testbed demos run by the taskforce, the cloud is only available to Dutch scientific institutions.<br />
|-<br />
| CC-IN2P3 <br />
| Connect to [http://cctools.in2p3.fr/cclogon/?lang=en CClogon] and request an account for the department/laboratory "CC Cloud/CLOUDCC". You have to select "egifctf" as belonging group in the second step.<br />
|-<br />
| SZTAKI<br />
| Send an e-mail to [mailto:] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|}<br />
<br />
--> <br />
<br />
== Endpoints ==<br />
<br />
The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]: <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| BSC <br />
| EMOTIVE OCCI+OVF<br>CDMI Proxy (user/pass)<br>CDMI Proxy (x509) <br />
| https://bscgrid20.bsc.es/DRP/compute/ <br> http://bscgrid20.bsc.es:2365<br>https://bscgrid05.bsc.es:443<br />
|-<br />
| CC-IN2P3 <br />
| Keystone<br>Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3<br>OCCI 1.1 (X509)<br>LDAP server <br />
| https://cckeystone.in2p3.fr:5000<br>http://ccec2.in2p3.fr:8773/services/Cloud<br>https://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br>https://ccocci.in2p3.fr:8787<br>ldap://cccldbdii01.in2p3.fr:2170.<br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone<br>LDAP server <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br>ldap://ui.egi.cesga.es:2170 -b o=glue<br />
|-<br />
| CESNET <br />
| Sunstone<br>CDMI Proxy<br>LDAP server<br>OCCI 1.1 (X.509, VOMS) <br />
| https://carach5.ics.muni.cz/<br>https://carach3.ics.muni.cz:8080/<br>ldap://carach5.ics.muni.cz:2170<br>https://carach5.ics.muni.cz:11443/<br />
|-<br />
| Cyfronet <br />
| OCCI 1.1 (X.509)<br>OCCI 1.1 (user/pass)<br>LDAP Server<br>OCCI 1.1 (rOCCI-0.5) <br />
| https://cloud-lab.grid.cyf-kr.edu.pl:3443/ <br>http://cloud-lab.grid.cyf-kr.edu.pl:3200/<br>ldap://cloud-lab.grid.cyf-kr.edu.pl:2170<br>https://cloud-lab.grid.cyf-kr.edu.pl:11443/<br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3<br>OCCI 1.1 SSL<br>OCCI 1.1 <br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br>https://egi-cloud.zam.kfa-juelich.de:8788<br>https://egi-cloud.zam.kfa-juelich.de:8787<br> ldap://egi-cloud.zam.kfa-juelich.de:2170<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| https://onehost-2.lal.in2p3.fr<br />
|-<br />
| GRNET <br />
| occi (x509 voms) <br> occi web interface <br> Ldap Server <br />
| okeanos-occi.hellasgrid.gr 8888<br> http://okeanos-occi.hellasgrid.gr:8888/ <br> ldap://okeanos-is.hellasgrid.gr:2170 -b o=glue<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509)<br>CDMI proxy (user/pass)<br>CDMI proxy (X.509)<br>LDAP Server <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br>https://occi.cloud.gwdg.de:3100<br>http://cdmi.cloud.gwdg.de:4001<br>https://cdmi.cloud.gwdg.de:4000<br>ldap://one.cloud.gwdg.de:2170<br />
|-<br />
| IFCA <br />
| Keystone (VOMS, user/pass)<br>Openstack EC2<br>Openstack Nova API 1.1<br>OCCI 1.1<br>LDAP Server <br />
| https://keystone.ifca.es:5000/v2.0<br>https://cloud.ifca.es/services/Cloud<br>http://cloud.ifca.es:8774/v1.1/<br>http://cloud.ifca.es:8787<br>ldap://cloud.ibergrid.eu:2170<br />
|-<br />
| IGI/INFN <br />
| WNoDeS<br>OCCI 1.1 (X509)<br> <br />
| https://test-wnodes-web01.cnaf.infn.it:8443/<br />
|-<br />
| KTH <br />
| OCCI 1.1 (x.509 auth) <br>LDAP <br />
| https://egi.cloud.pdc.kth.se:443/<br>ldap://egi.cloud.pdc.kth.se:2170/<br />
|-<br />
| SARA <br />
| Sunstone<br> OCCI 0.8<br> OCCI 1.1 (X.509)<br> LDAP<br> <br />
| http://ui.cloud.sara.nl/<br> https://occi.cloud.sara.nl/<br> https://occi11.cloud.sara.nl/<br>ldap://bdii.cloud.sara.nl:2170<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br>ldap://cagnode42.cs.tcd.ie:2170<br />
|-<br />
| SZTAKI <br />
| Sunstone<br>OCCI 1.1 (X.509)<br>OpenNebula EC2<br>OCCI 1.1 (X.509, rOCCI-server v0.5)<br> <br />
| http://cfe2.lpds.sztaki.hu/<br>http://cfe2.lpds.sztaki.hu:4568/<br>http://cfe2.lpds.sztaki.hu:4567<br>https://cfe2.lpds.sztaki.hu:3333/<br><br />
|-<br />
| IISAS <br />
| Openstack EC2 <br> OCCI 1.1 <br> Keystone (VOMS supported) <br> LDAP Server <br />
| http://nova.ui.savba.sk:8773 <br> http://nova.ui.savba.sk:8787 <br> https://keystone.ui.savba.sk:5000 <br> ldap://nova.ui.savba.sk:2170<br />
|}<br />
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="BSC">'''BSC'''</div> (Daniele Lezzi) <br />
| In production with VENUS-C middleware <br />
| 96 cores (4 bi-processor Intel Xeon 6 cores, 24GB RAM; 3 bi-processor AMD Opteron 8 cores, 32GB RAM) <br />
| Emotive Cloud (BSC), planned to move to OpenNebula and OpenStack <br />
| Shared GlusterFS 3.6TB total; CDMI Proxy and FTP <br />
| Emotive Cloud <br />
| n/a <br />
| VENUS-C Accounting system <br />
| N/A <br />
| OCCI+OVF <br />
| OCCI provided by OpenNebula/OpenStack <br />
| X.509 with VPN <br />
| SSH keys<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Production <br />
| 10x (24 cores, 96GB RAM) + 44TB shared storage <br />
| OpenNebula 3.8.1 <br />
| Shared NFS filesystem, GridFTP, S3 Cumulus, CDMI Proxy <br />
| OpenNebula + OCCI v1.1 (rOCCI) <br />
| Nagios infrastructure is ready, custom probes for OpenNebula's OCCI, ECONE, OCA. Ganglia / Munin can be added on request. <br />
| OpenNebula accounting daemon + SSM <br />
| LB notification + STOMP based EGI messaging infrastructure is available on the site <br />
| OCCI v1.1 (rOCCI-server v0.4-v0.5) <br />
| Open for discussion <br />
| Username and password, X.509 certificates for OCCI <br />
| In general up to the user, currently registered SSH keys for root access to the VMs<br />
|-<br />
| <div id="ceta">'''CETA-CIEMAT'''</div> (Abel Paz) <br />
| Testbed, under construction <br />
| 14 servers (8 cores, 16 GB each one)<br />
| OpenStack Essex<br />
| Shared NFS filesystem<br />
| N/A<br />
| Nagios <br />
| N/A <br />
| Nagios notifications for admins (not users) <br />
| EC2/Nova/OCCI <br />
| Open for discussion <br />
| Username and password, implemented by OpenStack <br />
| SSH keys<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Kostas Koumantaros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Synefo (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible; VOMS-enabled OCCI 1.1 interface <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with Dual-Proc AMD Quad-Core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More beginning 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="ifca">'''IFCA'''</div> (Enol Fernandez) <br />
| Testbed <br />
| 32 x 8 core servers, 16GB RAM <br />
| Openstack Essex <br />
| Local disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| EC2/Nova/OCCI <br />
| - <br />
| user/password, VOMS <br />
| SSH with user defined keys<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| A new dedicated testbed is under configuration. <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Essex <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2/Nova/OCCI <br />
| OCCI <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password and X509&nbsp; <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Jhon Masschelein, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="sara">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| Testing <br />
| 5 x dual quad core with 16GB RAM <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem, 1.5 TB <br />
| StratusLab web-monitor, Sunstone <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| StratusLab, OpenNebula <br />
| <br> <br />
| X509, Username and password <br />
| User SSH key for root access<br />
|-<br />
| <div id="sztaki">'''SZTAKI'''</div> (Sandor Acs, Peter Kotcauer, Mark Gergely) <br />
| Testing <br />
| 128 cores, 308GB RAM <br />
| OpenNebula 3.8.1 <br />
| 33TB (RAID5) iSCSI/AoE storage + local storages (~10TB) <br />
| OpenNebula OCCI/ECONE <br />
| Nagios, Munin, Zabbix <br />
| OpenNebula (adapted) <br />
| N/A <br />
| OCCI (both of standard OpenNebula and rOCCI) and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| Username and password, X.509 certificates for OCCI <br />
| SSH with password or with user key are preferred<br />
|-<br />
| <div id="iisas">'''IISAS'''</div> (Viet Tran, Binh Minh Nguyen) <br />
| Testing <br />
| Initially 2 servers with 16 cores, 48GB RAM, extension after testing <br />
| OpenStack Folsom <br />
| Shared NFS <br />
| N/A <br />
| N/A (Nagios planned) <br />
| N/A <br />
| N/A <br />
| EC2/Nova/OCCI <br />
| Open for discussion <br />
| user name/password, x509 being tested <br />
| SSH keys<br />
|}<br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using the XML-RPC interface (eventually OCCI); standard OpenNebula VM description files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies; other methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a resource monitoring solution for the cloud federation based on Nagios. Meanwhile, here is a table showing the current status of the cloud resources made available by the resource providers that have joined the Task Force. This table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black; text-align:center;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone] [https://carach5.ics.muni.cz:9443/ OCCI v0.8] [http://carach5.ics.muni.cz:3333/ OCCI v1.1]<br />
<br />
<br />
<br />
<br />
</span><br />
| <br> <span style="background:green">debian6</span> <br />
| <br> <span style="background:green">Yes</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#ifca|IFCA]] <br />
| <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|CC-IN2P3 (NGI FR)]] <br />
| [http://cctools.in2p3.fr/cclogon/?lang=en CC-IN2P3 account request] <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OerC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl cloud-support@sara.nl]</span> <br />
| <span style="background:green">[https://ui.cloud.sara.nl ui.cloud.sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|SZTAKI (NGI HU)]] <br />
| <span style="background:green">[mailto:cloud-support@lpds.sztaki.hu cloud-support@lpds.sztaki.hu]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=FedClouds_QR9&diff=39172FedClouds QR92012-08-01T13:20:10Z<p>Zashah: </p>
<hr />
<div>This wiki page collects the contributions for the '''EGI Quarterly Report 9''', covering ''May 2012'' to ''July 2012''.<br><br />
Reports must be submitted by:<br />
* All the work group leaders<br />
* All the resource providers willing to account effort on the newly created TSA2.6 task<br />
* All the resource providers willing to have their contribution recorded in the QR, an official EGI document<br />
<br />
== Workbench reports ==<br />
{| border="1" cellspacing="5" cellpadding="5" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
! Workbench<br />
! Work group leader (NGI affiliation) <br />
! Activities carried out during Q9 <br>Please specify other partners collaborating (if any)<br />
! Plans for the next quarter<br />
|-<br />
|VM Management<br />
|<br />
|<br />
|<br />
|-<br />
|Data Management<br />
|<br />
|<br />
|<br />
|-<br />
|Information System<br />
|<br />
|<br />
|<br />
|-<br />
|Accounting<br />
|<br />
|<br />
|<br />
|-<br />
|Monitoring<br />
|<br />
|<br />
|<br />
|-<br />
|Notification<br />
|<br />
|<br />
|<br />
|-<br />
|Federated AAI<br />
|<br />
|<br />
|<br />
|-<br />
|VM Marketplace<br />
|<br />
|<br />
|<br />
|-<br />
|Brokering<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Resource providers report ==<br />
<br />
{| border="1" cellspacing="5" cellpadding="5" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|-<br />
! Resource Provider site <br />
! NGI <br />
! Contact person <br />
! Activities carried out during the quarter <br />
! Plans for the next quarter<br />
|-<br />
| BSC <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| CESNET <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GRNET <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| CESGA <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Cyfronet <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GRIF <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GWDG <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH <br />
| SWEDEN <br />
| ZEESHAN ALI SHAH <br />
| OCCI interface, CDMI interface, accounting <br />
| Publish image into Marketplace, support scenarios of EGI TF<br />
|-<br />
| SARA <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| FZ Jülich <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| TCD <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| CC-IN2P3 <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| IGI/INFN <br />
| <br />
| <br />
| <br />
| <br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=FedClouds_QR9&diff=39170FedClouds QR92012-08-01T12:48:15Z<p>Zashah: /* Resource providers report */</p>
<hr />
<div>This wiki page collects the contributions for the '''EGI Quarterly Report 9''', covering ''May 2012'' to ''July 2012''.<br><br />
Reports must be submitted by:<br />
* All the work group leaders<br />
* All the resource providers willing to account effort on the newly created TSA2.6 task<br />
* All the resource providers willing to have their contribution recorded in the QR, an official EGI document<br />
<br />
== Workbench reports ==<br />
{| border="1" cellspacing="5" cellpadding="5" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
! Workbench<br />
! Work group leader (NGI affiliation) <br />
! Activities carried out during Q9 <br>Please specify other partners collaborating (if any)<br />
! Plans for the next quarter<br />
|-<br />
|VM Management<br />
|<br />
|<br />
|<br />
|-<br />
|Data Management<br />
|<br />
|<br />
|<br />
|-<br />
|Information System<br />
|<br />
|<br />
|<br />
|-<br />
|Accounting<br />
|<br />
|<br />
|<br />
|-<br />
|Monitoring<br />
|<br />
|<br />
|<br />
|-<br />
|Notification<br />
|<br />
|<br />
|<br />
|-<br />
|Federated AAI<br />
|<br />
|<br />
|<br />
|-<br />
|VM Marketplace<br />
|<br />
|<br />
|<br />
|-<br />
|Brokering<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Resource providers report ==<br />
<br />
{| border="1" cellspacing="5" cellpadding="5" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|-<br />
! Resource Provider site <br />
! NGI <br />
! Contact person <br />
! Activities carried out during the quarter <br />
! Plans for the next quarter<br />
|-<br />
| BSC <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| CESNET <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GRNET <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| CESGA <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Cyfronet <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GRIF <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GWDG <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH <br />
| <br><br />
| ZEESHAN ALI SHAH<br />
| OCCI interface, CDMI interface, accounting<br />
| Publish image into Marketplace, support scenarios of EGI TF<br />
|-<br />
| SARA <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| FZ Jülich <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| TCD <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| CC-IN2P3 <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| IGI/INFN <br />
| <br />
| <br />
| <br />
| <br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:WorkGroups:Scenario3&diff=38723Fedcloud-tf:WorkGroups:Scenario32012-07-26T10:06:16Z<p>Zashah: /* Resource providers to be published for the demo */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{Fedcloud-tf:WorkGroups:Menu}} {{TOC_right}} <br />
<br />
== Scenario 3: Integrating information from multiple resource providers ==<br />
<br />
<font color="red">Leader: David Wallom, OeRC</font> <br />
<br />
== Scenario collaborators ==<br />
<br />
{| border="1"<br />
|-<br />
! Role <br />
! Institution <br />
! Name<br />
|-<br />
| Scenario leader <br />
| OeRC <br />
| David Wallom<br />
|-<br />
| Collaborator <br />
| OeRC <br />
| Matteo Turilli<br />
|-<br />
| Collaborator <br />
| EGI.eu <br />
| Peter Solagna<br />
|-<br />
| Collaborator <br />
| INFN <br />
| Elisabetta Ronchieri<br />
|}<br />
<br />
== Information that should be published by a cloud service ==<br />
<br />
The following are the information identified during the TF F2F meeting: <br />
<br />
'''Please add more points, or edit/comment the list''' <br />
<br />
#What is the name of the resource and what type of interface can I use to manage instances on the resource? <br />
##What is the endpoint I should contact to interact with the cloud management interface? (E.g. the url of the web-service/portal) <br />
#What are the AuthN and AuthZ rules that operate on your cloud? <br />
#What instances are already installed on the resource and am I allowed to upload my own instances? <br />
#If I am able to upload instances what format of instances does the resource accept? <br />
#Is there a data interface available and if so what is it? <br />
#What is the overall size of the resource? <br />
#Are instance templates defined that limit the choice of instance scales I am able to run? <br />
#What type of virtual network can I establish on the resource? <br />
#Does the resource support cloud scalability through managed bursting to another external provider?<br />
<br />
The following are questions on the dynamic information; <br />
<br />
#I have a virtual instance that requires X,Y,Z resources; does your cloud have A&gt;X, B&gt;Y, C&gt;Z resources available? <br />
#My instance is short-lived; is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur? <br />
#What is the charging scheme and how much will using your cloud cost?<br />
<br />
== More information to publish==<br />
=== Storage capabilities===<br />
In this section the storage capability information to be published is analysed, without (necessarily) considering the possibilities currently available in the GLUE2 schema.<br />
<br />
==== Relevant information ====<br />
The following table contains what can be inspected through the OCCI 1.2 spec:<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|occi.storage.size<br />
|Size of the storage resource instance<br />
|-<br />
|occi.storage.status<br />
|Status of the storage instance (online,offline,backup,snapshot..)<br />
|}<br />
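For concreteness, a hypothetical ''text/occi'' rendering of a storage instance exposing the two attributes above could look as follows (the category scheme follows the OCCI infrastructure spec; the attribute values are purely illustrative):<br />

```text
Category: storage; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"
X-OCCI-Attribute: occi.storage.size=10
X-OCCI-Attribute: occi.storage.status="online"
```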
<br />
These attributes describe an actual instance, which is not going to be published in the information system. What we want to advertise are the capabilities that can be requested from the cloud service.<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|Max Storage installed in the site<br />
|This is the total amount of disk space that the cloud site provides as virtualized storage resource.<br />
|-<br />
|Max size of a single virtual storage resource<br />
|This is the maximum size of a single virtual storage resource that can be requested.<br />
|-<br />
|Interfaces<br />
|How users and VMs can interact with the storage resources. E.g. CDMI.<br />
|-<br />
|Storage throughput<br />
|Max I/O speed allowed to VMs writing/reading to the storage area.<br />
|-<br />
|Capabilities<br />
|These are additional capabilities of a storage service, on top of create/delete/link. Examples could be: backup or snapshot.<br />
|}<br />
<br />
''Please add more options in the table'', or participate in the discussion in the FedCloud task force mailing list.<br />
<br />
=== Network capabilities ===<br />
<br />
The following table contains some of the Network capabilities that could be advertised through the information system:<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|Internal bandwidth<br />
|The maximum bandwidth available between the virtual machines in the cloud<br />
|-<br />
|Outbound bandwidth<br />
|Bandwidth that can be allocated to each virtual machine outside the cloud<br />
|-<br />
|Average latency<br />
|If the VMs are deployed on different physical sites, the latency between the instances can be higher and affect network performance. Low values of network latency assure that the virtual machines are physically instantiated on the same network.<br />
|-<br />
|IPv6 enabled<br />
|Can the virtual network be configured for IPv6?<br />
|-<br />
|Virtual private network enabled<br />
|Is it possible to set up a virtual private network, in order to increase the security and the isolation of the instantiated machines?<br />
|}<br />
<br />
== How to render those information in GLUE2 ==<br />
<br />
'''Note''': the BDII service speaks only GLUE2. The Cloud information needs to be squeezed into the current set of GLUE2 entities. If the schema is extended to include Cloud-specific entities, the extension needs to be officially approved by OGF and implemented in the ''glue-schema'' and ''glue-validator'' components deployed with the BDII. <br />
<br />
=== Use the currently available GLUE2.0 entities ===<br />
<br />
Currently GLUE2 includes two main conceptual models, for Computing Elements and Storage Elements. These elements should be used to model the Cloud capabilities while remaining compliant with the current GLUE2.0 schema. <br />
<br />
==== Capabilities for cloud services ====<br />
<br />
''Note: '''bold''' capabilities are new, not already in the GLUE2 specification. Adding new capabilities does not require an extension of the GLUE2 schema.<br>'' ''Please:'' add new ''high level'' capabilities if you feel that something is missing. These capabilities are used in the following entities. <br />
<br />
{| border="1"<br />
|-<br />
! Capability <br />
! Description<br />
|-<br />
| '''cloud.VMmanagement''' <br />
| This is the '''standard''' capability that every cloud service should publish if it allows users to instantiate/suspend/delete virtual machines<br />
|-<br />
| '''cloud.virtualImagesUpload''' <br />
| This is the capability that allows users to upload their own virtual images through the cloud interface<br />
|-<br />
| security.authentication/security.authorization <br />
| I would leave those capabilities, given that every cloud provider has authentication<br />
|-<br />
| <br />
| <br />
|}<br />
<br />
==== Computing Service entity description ====<br />
<br />
*This Service is used to describe the computing resource itself, decoupled from the Grid endpoint. <br />
*Attributes that need to be provided by the resource providers are in '''bold'''<br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| Creation time <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| Validity <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| ID <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| '''Name''' <br />
| String <br />
| 1 <br />
| Human-readable name. It could be used to provide the information: "What is the name of the resource?"<br />
|-<br />
| OtherInfo <br />
| String <br />
| n <br />
| Placeholder to add information that does not fit into any other attribute. Cloud information that cannot be mapped to other attributes could be added here.<br />
|-<br />
| '''Capability''' <br />
| Capability_t <br />
| n <br />
| This attribute lists the capabilities available for this service; currently the type ''Capability_t'' does not include specific cloud capabilities. Being an open enum type, it can be extended with additional capabilities. Some of the capabilities already available are: security.accounting, security.authentication or information.logging. We could consider adding capabilities like "''cloud.vm.uploadImage''" to answer the question: "am I allowed to upload my own instances?". To identify cloud services there would be the need for a new capability, common to all cloud services regardless of their specific capabilities, like "cloud.managementSystem" (nb: stupid example). ''Resource providers, in this design stage, could provide just descriptions of the capabilities they would like to publish. I (Peter) will try to group them, proposing some labels for the different capabilities.''<br />
|-<br />
| '''Type''' <br />
| ServiceType_t <br />
| 1 <br />
| Type of service in a reverse namespace model, e.g.: org.glite.lb or org.glite.wms. It could be ''org.opennebula'', ''org.stratuslab'' or ''com.cloudsigma''<br />
|}<br />
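As a sketch of how such a service could be rendered in the BDII's LDAP tree, the following LDIF fragment shows a GLUE2Service entry carrying the cloud capabilities proposed above. The DN, service ID, service type and site name are illustrative assumptions, not an agreed profile:<br />

```text
dn: GLUE2ServiceID=cloud.example.org_service,GLUE2GroupID=resource,o=glue
objectClass: GLUE2Service
GLUE2ServiceID: cloud.example.org_service
GLUE2EntityName: Example Cloud Service
GLUE2ServiceType: org.opennebula
GLUE2ServiceCapability: cloud.VMmanagement
GLUE2ServiceCapability: cloud.virtualImagesUpload
GLUE2ServiceAdminDomainForeignKey: EXAMPLE-SITE
```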
<br />
There are then a number of additional attributes (static and dynamic) that could be used by cloud services, like StatusInfo, TotalJobs, RunningJobs, etc. Please note that '''Location''' is a GLUE2 entity that can be linked to the Service entity; this could answer the ''"Where is the cloud facility located?"'' question. <br />
<br />
=== ComputingEndpoint description ===<br />
<br />
Every ComputingService has '''one or more''' associated ComputingEndpoints. The endpoint is used to create, control and monitor computational activities.<br> <br />
<br />
*Resource providers should provide the information to create one endpoint for each interface they're exposing for the cloud service.<br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| CreationTime <br />
| .. <br />
| .. <br />
| I will skip the most general attributes, like OtherInfo and Capability (described above).<br />
|-<br />
| '''URL''' <br />
| URI <br />
| 1 <br />
| Network location of the endpoint.<br />
|-<br />
| '''Capability''' <br />
| Capability_t <br />
| 0..n <br />
| It's the same field as in the Service entity. Some capabilities could be interface-specific. I would replicate all the general capabilities also for this instance.<br />
|-<br />
| '''Technology''' <br />
| EndpointTechnology_t <br />
| 1 <br />
| Examples are "webservice" and "corba". We could add "webportal" or something like this to clarify that the endpoint refers to a web application.<br />
|-<br />
| '''InterFaceName''' <br />
| InterFaceName_t <br />
| 1 (mandatory) <br />
| The interface in the cloud case could be ''OCCI'', ''EC2'', ''jclouds'' or "webinterface". This can answer the question: "what type of interface can I use to manage instances on the resource?"<br />
|-<br />
| '''InterfaceVersion''' <br />
| .. <br />
| .. <br />
| No description needed.<br />
|-<br />
| '''Supported profile''' <br />
| URI <br />
| * <br />
| We can define, here, a set of profiles for the authN/authZ of the users, like ''uri:sec:x509''.<br />
|}<br />
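Analogously, one endpoint entry would be published per exposed interface. A hypothetical LDIF rendering of an OCCI 1.1 endpoint for the example service above might look like this (DN, URL and IDs are illustrative):<br />

```text
dn: GLUE2EndpointID=cloud.example.org_occi,GLUE2ServiceID=cloud.example.org_service,GLUE2GroupID=resource,o=glue
objectClass: GLUE2Endpoint
GLUE2EndpointID: cloud.example.org_occi
GLUE2EndpointURL: https://cloud.example.org:3202/
GLUE2EndpointInterfaceName: OCCI
GLUE2EndpointInterfaceVersion: 1.1
GLUE2EndpointTechnology: webservice
GLUE2EndpointServiceForeignKey: cloud.example.org_service
```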
<br />
==== ExecutionEnvironment ====<br />
<br />
The ExecutionEnvironment class describes the hardware and operating system environment in which a job will run. It could be used to describe the VM images already available in the Cloud service. <br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| '''Platform''' <br />
| Platform_t <br />
| 1 <br />
| The platform architecture; can be: amd64, i386, itanium, powerpc, sparc<br />
|-<br />
| TotalInstances/used instances <br />
| - <br />
| - <br />
| These attributes are not relevant in a cloud environment, where the execution environments are deployed dynamically.<br />
|-<br />
| PhysicalCPUs <br />
| UInt32 <br />
| 0..1 <br />
| The physical CPUs are not relevant, I would say, in a virtualised environment.<br />
|-<br />
| '''LogicalCPUs''' <br />
| UInt32 <br />
| 0..1 <br />
| This attribute could be used to express the '''maximum''' number of cores that can be instantiated in a single VM of this type (likely it will be common to all the execution environments of the same cloud service).<br />
|-<br />
| '''MainMemorySize''' <br />
| UInt64 <br />
| 1 <br />
| Max physical memory that can be instantiated on a single VM.<br />
|-<br />
| <br />
*'''OSFamily''' <br />
*'''OSName''' <br />
*'''OSVersion'''<br />
<br />
| (*) <br />
| 1 <br />
| Attributes which define the available operating system. There will be an execution environment for every virtual machine image available in the cloud service. We should define some placeholders to create an ExecutionEnvironment ''stub'' describing the max cores/memory for the virtual machines uploaded by a user.<br />
|}<br />
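Putting the bold attributes above together, a pre-installed VM image could be advertised with an ExecutionEnvironment entry along these lines; the DN, resource ID and the limits are illustrative values, not a real site's data:<br />

```text
dn: GLUE2ResourceID=cloud.example.org_debian6,GLUE2ServiceID=cloud.example.org_service,GLUE2GroupID=resource,o=glue
objectClass: GLUE2ExecutionEnvironment
GLUE2ExecutionEnvironmentID: cloud.example.org_debian6
GLUE2ExecutionEnvironmentPlatform: amd64
GLUE2ExecutionEnvironmentLogicalCPUs: 24
GLUE2ExecutionEnvironmentMainMemorySize: 98304
GLUE2ExecutionEnvironmentOSFamily: linux
GLUE2ExecutionEnvironmentOSName: debian
GLUE2ExecutionEnvironmentOSVersion: 6.0.3
```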
<br />
=== Deploy a new set of entities ===<br />
<br />
This is the next step: define cloud-specific GLUE entities to extend the GLUE2 schema in order to publish the cloud services in a standard way. <!-- What to model?<br />
What is the name of the resource and what type of interface can I use to manage instances on the resource?<br />
What is the endpoint I should contact to interact with the cloud management interface? (E.g. the url of the web-service/portal) <br />
What are the AuthN and AuthZ rules that operate on your cloud?<br />
What instances are already installed on the resource and am I allowed to upload my own instances?<br />
If I am able to upload instances what format of instances does the resource accept?<br />
Is there a data interface available and if so what is it?<br />
What is the overall size of the resource?<br />
Are instance templates defined that limit the choice of instance scales I am able to run?<br />
What type of virtual network can I establish on the resource?<br />
Does the resource support cloud scalability through managed bursting to another external provider? <br />
<br />
The following are questions on the dynamic information;<br />
<br />
I have a virtual instance that requires X,Y,Z resources, does your cloud have A>X, B>Y,C>Z resource available?<br />
My instance is short lived is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur?<br />
What is the charging scheme and how much will using your cloud cost? <br />
--> <br />
<br />
= Technical implementation =<br />
<br />
For a first demo the best technical choice is OpenLDAP, which is available on almost all *nix machines. On top of that, OpenLDAP is the server used by the gLite BDIIs, so it would be easy to reuse the configuration file set-up used for the GRIS or the GIIS. <br />
<br />
* Host of the ldap server: '''ldap://fedclouds-is.hellasgrid.gr:2170'''<br />
** Backup hosted by CESGA: '''fedclouds-is2.hellasgrid.gr'''<br />
*Use the GLUE20.schema in the ''slapd.conf'' file to enable all the GLUE2.0 entities.<br />
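A minimal ''slapd.conf'' along these lines could serve the GLUE2 entries; the schema paths, database backend and directory are assumptions for illustration (a production BDII ships its own configuration):<br />

```text
# Illustrative slapd.conf fragment -- paths and credentials are assumptions
include   /etc/openldap/schema/core.schema
include   /opt/glue/schema/ldap/GLUE20.schema

database  hdb
suffix    "o=glue"
rootdn    "cn=admin,o=glue"
rootpw    secret
directory /var/lib/ldap/fedcloud
```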
<br />
== Resource providers to be published for the demo ==<br />
<br />
'''!!! NEW&nbsp;!!!:''' ''IMPORTANT'': <span style="color:#DC143C">Fill the table with the address of the LDAP server set up as an information provider for your testbed.</span> <br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! Resource Centre <br />
! Address of the LDAP server "ldap://hostname:2170"<br />
|-<br />
| CESNET <br />
| CESNET Cloud <br />
| ldap://carach5.ics.muni.cz:2170<br />
|-<br />
| KTH <br />
| KTH Cloud <br />
| ldap://front.redcloud.pdc.kth.se:2170<br />
|-<br />
| GWDG <br />
| GWDG Cloud <br />
| ldap://one.cloud.gwdg.de:2170<br />
|-<br />
| SARA <br />
| SARA Cloud <br />
| ldap://bdii.cloud.sara.nl:2170<br />
|-<br />
| CESGA<br />
| CESGA Cloud<br />
| ldap://ui.egi.cesga.es:2170<br />
|-<br />
| CYFRONET<br />
| CYFRONET Cloud<br />
| ldap://cloud-lab.grid.cyf-kr.edu.pl:2170<br />
|-<br />
| TCD<br />
| TCD Cloud<br />
| ldap://cagnode42.cs.tcd.ie:2170<br />
|-<br />
|GRNET<br />
|GRNET_OKEANOS<br />
|ldap://okeanos-is.hellasgrid.gr:2170 <br />
|-<br />
| FZJ<br />
| FZJ Testbed<br />
| ldap://egi-cloud.zam.kfa-juelich.de:2170<br />
|-<br />
| CC-IN2P3<br />
| CC-IN2P3 Cloud<br />
| ldap://cccldbdii01.in2p3.fr:2170<br />
|}<br />
<br />
'''Old table for the CF2012 demo''' <br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! RP contact name <br />
! Resource Centre name to be published (was Site Name) <br />
! Country <br />
! Capabilities to be published (specify the endpoints supporting the capabilities!) <br />
! Other info to publish <br />
! VM Manager <br />
! V.Images available (OSFamily,OSName,OSVersion) <br />
! Max cores <br />
! Max CPU speed <br />
! Max RAM<br />
|-<br />
| CESNET <br />
| Miroslav Ruda <br />
| CESNET Cloud <br />
| Czech Republic <br />
| cloud.managementSystem, cloud.vm.uploadImage, cloud.data.cdmi <br />
| <br />
| XEN <br />
| 1.) Linux, OpenSUSE, 11.4<br>2.) Linux, Debian, 6.0.3 <br />
| 24 <br />
| <br />
| 96GB<br />
|-<br />
| KTH <br />
| Zeeshan Ali&nbsp;Shah <br />
| KTH-PDC Cloud <br />
| Sweden <br />
| cloud.managementSystem, cloud.vm.customimage, cloud.data.cdmi <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GWDG <br />
| Piotr Kasprzak <br />
| GWDG Cloud <br />
| Germany <br />
| cloud.managementSystem, cloud.vm.uploadImage <br />
| <br />
| KVM <br />
| 1.) Linux, Scientific Linux, 6.1<br>2.) Linux, Ubuntu, 11.10 <br />
| 8 <br />
| 2.4 GHZ <br />
| 16GB<br />
|-<br />
| CYFRONET <br />
| Jan Meizner <br />
| CYFRONET Cloud <br />
| Poland <br />
| cloud.managementSystem, cloud.vm.uploadImage <br />
| <br />
| KVM <br />
| <br />
| 24 <br />
| <br />
| 48GB<br />
|-<br />
| CESGA <br />
| Alvaro Simon <br />
| CESGA Cloud <br />
| Spain <br />
| cloud.managementSystem, cloud.vm.customimage <br />
| <br />
| KVM <br />
| 1.) Linux, Scientific Linux, 5.5 <br />
| 264<br />
| 2.6 GHZ<br />
| 264GB<br />
|}<br />
<br />
== Distributed implementation ==<br />
<br />
Publishing correct information in the information system must be the responsibility of the resource provider. To build a decentralized information system there is the need for:<br />
* Central information system instance that can pull the information from the resource information provider<br />
* Distributed information system to be queried by the central one (one instance for every cloud provider)<br />
<br />
<br />
A possible strategy is to base everything on the Top-BDII, which is the currently available technology. Ideally, a single LDAP server per resource provider is sufficient. The Top-BDII can be configured to get the data from the different LDAP servers and merge it.<br />
<br />
Pros:<br />
* No need to develop transport and updating mechanisms <br />
* Resource providers need only to produce one LDIF file and load it into an LDAP server.<br />
Cons:<br />
* The Top-BDII showed poor performance in publishing dynamic data (not an issue for testing as long as there are few resource providers)<br />
* We must publish 100% GLUE2 compliant information<br />
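As a rough sketch of the merge step (not the actual Top-BDII code), the aggregation amounts to concatenating the per-provider LDIF trees and filtering out the attributes of interest. The two fragments and all DNs/URLs below are illustrative, not real testbed data:<br />

```shell
# Sketch: merge two providers' LDIF fragments and list the advertised endpoints.
cat > siteA.ldif <<'EOF'
dn: GLUE2EndpointID=a1,GLUE2DomainID=SiteA,o=glue
GLUE2EndpointURL: https://cloud.site-a.example.org:3202/
EOF
cat > siteB.ldif <<'EOF'
dn: GLUE2EndpointID=b1,GLUE2DomainID=SiteB,o=glue
GLUE2EndpointURL: http://cloud.site-b.example.org:3333/
EOF
# A Top-BDII would load both trees under one suffix; here we just merge and filter.
awk '/^GLUE2EndpointURL:/ {print $2}' siteA.ldif siteB.ldif
# prints:
# https://cloud.site-a.example.org:3202/
# http://cloud.site-b.example.org:3333/
```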
<br />
'''Find [[Fedclouds BDII instructions|here]] some guidelines for the ldap installation.'''<br />
<br />
=== Example queries ===<br />
<br />
# Get all the endpoints published by the resource providers, with the interface name and the version.<br />
<source lang=bash><br />
$ ldapsearch -x -H ldap://test03.egi.cesga.es:2170 -b o=glue '(objectClass=GLUE2Endpoint)' | perl -p00e 's/\r?\n //g' | grep -E 'GLUE2EndpointURL|GLUE2EndpointInterfaceName|GLUE2EndpointInterfaceVersion|dn\:' | awk '{printf("%s%s", $0, (NR%4 ? " === " : "\n"))}' | awk '{print ""$2" "$5" "$8" "$11}' | awk -F "GLUE2DomainID=" '{print $2}' | awk -F "," '{print $1 " "$3}' | awk '{print $1" "$4" "$3" "$5}' | sort<br />
<br />
CESGA OCCI 1.1 http://meghacloud.cesga.es:3200<br />
CESGA OCCI 1.1 https://meghacloud.cesga.es:3202<br />
CESNET CDMI 1.0 https://carach3.ics.muni.cz:8080/<br />
CESNET OCA 3.0 https://carach5.ics.muni.cz:6443/RPC2<br />
CESNET OCCI 0.8 https://carach5.ics.muni.cz:9443/<br />
CESNET OCCI 1.1 http://carach5.ics.muni.cz:3333/<br />
CESNET OCCI 1.1 https://carach5.ics.muni.cz:10443/<br />
CESNET Sunstone 3.0 https://carach5.ics.muni.cz/<br />
CYFRONET OCCI 1.1 http://cloud-lab.grid.cyf-kr.edu.pl:3200/<br />
CYFRONET OCCI 1.1 https://cloud-lab.grid.cyf-kr.edu.pl:3443/<br />
GRNET_OKEANOS INTERFACE2 NA https://endpoint2<br />
GRNET_OKEANOS OCCI XX http://okeanos-occi.hellasgrid.gr:8888<br />
GWDG CDMI 1.0.1 http://cdmi.cloud.gwdg.de:4001<br />
GWDG CDMI 1.0.1 https://cdmi.cloud.gwdg.de:4000<br />
GWDG OCCI 0.8 http://occi.cloud.gwdg.de:3400<br />
GWDG OCCI 1.1 http://occi.cloud.gwdg.de:3200<br />
GWDG OCCI 1.1 https://occi.cloud.gwdg.de:3100<br />
SARA INTERFACE XX https://occi.cloud.sara.nl<br />
SARA INTERFACE2 NA https://ui.cloud.sara.nl<br />
</source></div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=38642Fedcloud-tf:Testbed2012-07-23T10:40:21Z<p>Zashah: /* Endpoints */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
== Technologies ==<br />
<br />
The federation test bed does not mandate what VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<pPie labels exploded 3d><br />
OpenNebula,7<br />
StratusLab,2<br />
OpenStack,3<br />
WNoDeS,1<br />
Okeanos,1<br />
</pPie><br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meantime, you may create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#subject should contain "[FedCloud registration]"; <br />
#body should contain your name/organization, a contact email address and the DN from your x509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource which gives users the flexibility to customize the system according to their needs. This includes the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] requesting an account and mentioning the Federated Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|-<br />
| CC-IN2P3 <br />
| Connect to [http://cctools.in2p3.fr/cclogon/?lang=en CClogon] and request an account for the department/laboratory "CC Cloud/CLOUDCC". In the second step, you have to select "egifctf" as the group you belong to.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The following management interface endpoints are made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| BSC <br />
| CDMI Proxy <br />
| <br />
http://bscgrid20.bsc.es:2365/ <br />
<br />
|-<br />
| CESNET <br />
| OCA<br>Sunstone<br>OCCI 0.8<br>OCCI 1.1<br>X509 OCCI 1.1<br>CDMI Proxy<br>LDAP server<br />
| https://carach5.ics.muni.cz:6443/RPC2<br>https://carach5.ics.muni.cz/<br>https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br>https://carach5.ics.muni.cz:10443/<br>https://carach3.ics.muni.cz:8080/<br>ldap://carach5.ics.muni.cz:2170<br />
|-<br />
| GRNET <br />
| OCCI (user/pass) <br> OCCI web interface <br> LDAP Server<br />
| okeanos-occi.hellasgrid.gr:8888<br> http://okeanos-occi.hellasgrid.gr:8888/ <br> ldap://okeanos-is.hellasgrid.gr:2170 -b o=glue (TBC)<br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone<br>LDAP Server <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br>ldap://ui.egi.cesga.es:2170 -b o=glue<br />
|-<br />
| Cyfronet <br />
| OCCI 1.1 (X.509)<br>OCCI 1.1 (user/pass)<br>LDAP Server <br />
| https://cloud-lab.grid.cyf-kr.edu.pl:3443/ <br>http://cloud-lab.grid.cyf-kr.edu.pl:3200/<br>ldap://cloud-lab.grid.cyf-kr.edu.pl:2170<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| cloud-lal.stratuslab.eu<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509)<br>CDMI proxy (user/pass)<br>CDMI proxy (X.509)<br>LDAP Server <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br>https://occi.cloud.gwdg.de:3100<br>http://cdmi.cloud.gwdg.de:4001<br>https://cdmi.cloud.gwdg.de:4000<br>ldap://one.cloud.gwdg.de:2170<br />
|-<br />
| KTH <br />
| OCCI 0.8<br>OVF<br>OCA<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (x.509 auth) <br>CDMI Proxy<br>LDAP <br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br>http://front.redcloud.pdc.kth.se:3000/<br>https://front.redcloud.pdc.kth.se:3043/<br>http://cdmi.pdc2.pdc.kth.se:3300/<br>ldap://front.redcloud.pdc.kth.se:2170/<br />
|-<br />
| SARA <br />
| Sunstone<br> OCCI 0.8<br><br />
| http://ui.cloud.sara.nl/<br> https://occi.cloud.sara.nl/<br><br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3<br>OCCI 1.1 SSL<br>OCCI 1.1<br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br>https://egi-cloud.zam.kfa-juelich.de:8788<br>http://egi-cloud.zam.kfa-juelich.de:8787<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br />
|-<br />
| CC-IN2P3 <br />
| Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3<br>OCCI 1.1 (X509)<br />
| http://ccec2.in2p3.fr:8773/services/Cloud<br>http://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br>https://ccocci.in2p3.fr:8787<br />
|}<br />
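Many of the OCCI 1.1 endpoints listed above expose the standard OCCI discovery interface at the path <code>/-/</code>, which returns the categories (compute, storage, network, OS templates) the site supports. A minimal sketch of such a query in Python; the helper name is illustrative, and real endpoints typically also require username/password or X.509 client credentials, which are omitted here:

```python
import http.client

def occi_discovery(host, port, use_tls=False, timeout=5):
    """Fetch the OCCI category discovery document via GET /-/ (sketch).

    Real FedCloud endpoints may additionally require HTTP Basic auth or
    an X.509 client certificate; both are omitted in this illustration.
    """
    conn_cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = conn_cls(host, port, timeout=timeout)
    try:
        # text/plain is the mandatory OCCI text rendering
        conn.request("GET", "/-/", headers={"Accept": "text/plain"})
        resp = conn.getresponse()
        return resp.status, resp.read().decode(errors="replace")
    finally:
        conn.close()

# e.g. occi_discovery("carach5.ics.muni.cz", 3333) against the CESNET
# OCCI 1.1 endpoint from the table above; unauthenticated calls may be
# rejected with 401 depending on site policy.
```

The same pattern applies to the other OCCI 1.1 endpoints in the table; the LDAP endpoints are instead queried with an LDAP client against the <code>o=glue</code> base.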
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | Capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Production <br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage <br />
| OpenNebula 3.4.1 <br />
| Shared NFS filesystem, GridFTP, S3 Cumulus, CDMI Proxy <br />
| OpenNebula OCA/OCCI/ECONE + OGF-OCCI v1.1 <br />
| Nagios infrastructure is ready, custom probes for OpenNebula's OCCI, ECONE, OCA. Ganglia / Munin can be added on request. <br />
| OpenNebula accounting daemon. If reporter for standard usage records is implemented, it can be deployed. <br />
| N/A? A STOMP-based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 <br />
| Open for discussion <br />
| Username and password, X.509 certificates for OGF-OCCI <br />
| In general up to the user, currently registered SSH keys for root access to the VMs<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| For the initial setup 12 servers are ready; extensions depend on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona" CPUs, 2.4 GHz, 16 GB RAM, 250 GB HD. More at the beginning of 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giacinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gilles Mathieu, Matthieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Diablo <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2, Openstack API 1.1 <br />
| OCCI when available <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password and X509&nbsp; <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Jhon Masschelein, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| Testing <br />
| 5 x dual quad core with 16GB RAM <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem, 1.5 TB <br />
| StratusLab web-monitor, Sunstone <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| StratusLab, OpenNebula <br />
| <br> <br />
| X509, Username and password <br />
| User SSH key for root access<br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--> <br><br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies; other methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the cloud federation. Meanwhile, here is a table showing the current status of the cloud resources made available by the resource providers that have joined the Task Force. This table is updated weekly by the resource providers. <br />
<br />
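Until the Nagios-based monitor is in place, a basic reachability check against the endpoints listed in the Endpoints section can be scripted directly. A minimal sketch follows (a TCP connect test only, so it says nothing about authentication or the health of the cloud service behind the port):

```python
import socket

def endpoint_reachable(host, port, timeout=3.0):
    """Return True when a TCP connection to host:port succeeds within timeout.

    This is only a coarse liveness probe; a port that accepts connections
    may still reject API calls (e.g. for missing credentials).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. endpoint_reachable("occi.cloud.gwdg.de", 3200) for the GWDG
# OCCI 1.1 endpoint from the Endpoints table.
```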
{| cellspacing="0" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black; text-align:center;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br />
<br />
[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br />
[http://carach5.ics.muni.cz:3333/ OCCI v1.1] <br />
</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span> <br />
| <br> <span style="background:red">No</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|CC-IN2P3 (NGI FR)]] <br />
| [http://cctools.in2p3.fr/cclogon/?lang=en CC-IN2P3 account request]<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OeRC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">[https://ui.cloud.sara.nl ui.cloud.sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=34451Fedcloud-tf:Testbed2012-03-20T08:29:29Z<p>Zashah: /* Enpoints */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
== Technologies Distribution ==<br />
<br />
The federation test bed does not mandate which VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meantime, you may create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#the subject should contain "[FedCloud registration]"; <br />
#the body should contain your name/organization and a contact email address, and optionally the DN from your X.509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customize the system according to their needs, including the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account, please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] requesting an account and mentioning the Federated Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|-<br />
| CC-IN2P3 <br />
| Still in design/validation by the support teams. Likely to be available before the end of March.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The following management interface endpoints are made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| BSC <br />
| CDMI Proxy <br />
| <br />
http://bscgrid20.bsc.es:2365/ <br />
<br />
|-<br />
| CESNET <br />
| OCA<br>Sunstone<br>OCCI 0.8<br>OCCI 1.1<br>X509 OCCI 1.1<br>CDMI Proxy <br />
| https://carach5.ics.muni.cz:6443/RPC2<br>https://carach5.ics.muni.cz/<br>https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br>https://carach5.ics.muni.cz:10443/<br>https://carach3.ics.muni.cz:8080/<br />
|-<br />
| GRNET <br />
| <br> <br />
| <br><br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br />
|-<br />
| Cyfronet <br />
| StratusLab <br>X509 OCCI 1.1 <br />
| https://149.156.10.30:2634/ <br>https://149.156.10.30:3443/<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| cloud-lal.stratuslab.eu<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509)<br>CDMI proxy (user/pass)<br>CDMI proxy (X.509) <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br>https://occi.cloud.gwdg.de:3100<br>http://cdmi.cloud.gwdg.de:4001<br>https://cdmi.cloud.gwdg.de:4000<br />
|-<br />
| KTH <br />
| OCCI 0.8<br>OVF<br>OCA<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (x.509 auth) <br>CDMI Proxy <br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br>http://front.redcloud.pdc.kth.se:3000/<br>https://front.redcloud.pdc.kth.se:3043/<br>http://cdmi.pdc2.pdc.kth.se:3300/<br />
|-<br />
| SARA <br />
| <br> <br />
| <br><br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3 <br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br />
|-<br />
| CC-IN2P3 <br />
| Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3 <br />
| http://ccec2.in2p3.fr:8773/services/Cloud<br>http://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br />
|}<br />
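The EC2-style endpoints above (FZ Jülich, CC-IN2P3) expect AWS-query requests signed with signature version 2: the sorted, percent-encoded query string is signed with HMAC-SHA256 and the base64 signature is appended. A sketch of the signing step using only the standard library; the key values are placeholders, and the mandatory Timestamp parameter is left out to keep the example deterministic:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def ec2_signed_query(host, path, params, access_key, secret_key):
    """Build an EC2 signature-version-2 signed query string (sketch).

    Real requests must also carry a Timestamp (or Expires) parameter;
    it is omitted here so the output stays deterministic.
    """
    p = dict(params)
    p.update({
        "AWSAccessKeyId": access_key,
        "SignatureMethod": "HmacSHA256",
        "SignatureVersion": "2",
    })
    # Canonical query string: keys sorted, RFC 3986 percent-encoding
    canonical = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(p.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical])
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return canonical + "&Signature=" + quote(signature, safe="")

# e.g. against the FZ Jülich endpoint from the table above:
# ec2_signed_query("egi-cloud.zam.kfa-juelich.de:8773", "/services/Cloud",
#                  {"Action": "DescribeInstances", "Version": "2010-08-31"},
#                  "ACCESS_KEY", "SECRET_KEY")
```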
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | Capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Production <br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage <br />
| OpenNebula 3.0<br />
| Shared NFS filesystem, GridFTP, S3 Cumulus, CDMI Proxy<br />
| OpenNebula OCA/OCCI/ECONE + OGF-OCCI v1.1 <br />
| Nagios infrastructure is ready, custom probes for OpenNebula's OCCI, ECONE, OCA. Ganglia / Munin can be added on request. <br />
| OpenNebula accounting daemon. If reporter for standard usage records is implemented, it can be deployed. <br />
| N/A? A STOMP-based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 <br />
| Open for discussion <br />
| Username and password, X.509 certificates for OGF-OCCI <br />
| In general up to the user, currently registered SSH keys for root access to the VMs<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| For the initial setup 12 servers are ready; extensions depend on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More in early 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4 to 6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gilles Mathieu, Matthieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Diablo <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2, Openstack API 1.1 <br />
| OCCI when available <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password and X509&nbsp; <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 2 and 8 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Redmine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| Testing<br />
| 5 x dual quad core with 16GB RAM <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem, 1.5 TB<br />
| StratusLab web-monitor, Sunstone<br />
| Nagios <br />
| n/a <br />
| n/a <br />
| StratusLab, OpenNebula<br />
| <br> <br />
| X509, Username and password<br />
| User SSH key for root access<br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--> <br><br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| POSIX I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the cloud federation. Meanwhile, here is a table showing the current status of the cloud resources made available by the resource providers that have joined the Task Force. This table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black; text-align:center;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br />
<br />
[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br />
[http://carach5.ics.muni.cz:3333/ OCCI v1.1] <br />
<br />
</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span> <br />
| <br> <span style="background:red">No</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#oerc|OerC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| align="left" nowrap="nowrap" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:WorkGroups:Scenario3&diff=34114Fedcloud-tf:WorkGroups:Scenario32012-03-13T11:54:59Z<p>Zashah: </p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{Fedcloud-tf:WorkGroups:Menu}} {{TOC_right}} <br />
<br />
== Scenario 3: Integrating information from multiple resource providers ==<br />
<br />
<font color="red">Leader: David Wallom, OeRC</font><br />
<br />
== Scenario collaborators ==<br />
{| border="1" <br />
!Role<br />
!Institution<br />
!Name<br />
|-<br />
|Scenario leader<br />
|OeRC<br />
|David Wallom<br />
|-<br />
|Collaborator<br />
| OeRC<br />
| Matteo Turilli<br />
|-<br />
|Collaborator<br />
|EGI.eu<br />
|Peter Solagna<br />
|-<br />
|Collaborator<br />
|INFN<br />
|Elisabetta Ronchieri <br />
|}<br />
<br />
== Information that should be published by a cloud service ==<br />
The following are the pieces of information identified during the TF F2F meeting:<br />
<br />
'''Please add more points and edit/comment on the list'''<br />
<br />
#What is the name of the resource and what type of interface can I use to manage instances on the resource?<br />
## What is the endpoint I should contact to interact with the cloud management interface? (e.g. the URL of the web service/portal)<br />
#What are the AuthN and AuthZ rules that operate on your cloud?<br />
#What instances are already installed on the resource and am I allowed to upload my own instances?<br />
#If I am able to upload instances what format of instances does the resource accept?<br />
#Is there a data interface available and if so what is it?<br />
#What is the overall size of the resource?<br />
#Are instance templates defined that limit the choice of instance scales I am able to run?<br />
#What type of virtual network can I establish on the resource?<br />
#Does the resource support cloud scalability through managed bursting to another external provider?<br />
<br />
The following are questions on the dynamic information:<br />
#I have a virtual instance that requires X,Y,Z resources, does your cloud have A>X, B>Y,C>Z resource available?<br />
#My instance is short-lived; is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur?<br />
#What is the charging scheme and how much will using your cloud cost?<br />
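Collected for a single provider, the static answers above could be held as a simple record. A minimal sketch; every value below is an illustrative placeholder, not any real provider's published data:<br />

```python
# Illustrative only: a static "provider record" answering the questions above.
# All names and values are invented placeholders.
provider = {
    "name": "EXAMPLE-Cloud",
    "management_interface": {"type": "OCCI", "endpoint": "https://cloud.example.org:3333/"},
    "authn": ["username/password", "X.509"],
    "preinstalled_images": ["Debian 6", "SL5"],
    "user_images_allowed": True,
    "accepted_image_formats": ["raw", "qcow2"],
    "data_interface": "S3",
    "total_cores": 264,
    "instance_templates": ["small: 1 core / 1 GB", "large: 4 cores / 8 GB"],
    "virtual_networks": ["public", "private"],
    "supports_bursting": False,
}

# A consumer can then answer e.g. "am I allowed to upload my own instances?"
print(provider["user_images_allowed"])
```

The GLUE2 rendering discussed below is essentially a standardised, queryable version of such a record.<br />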
<br />
== How to render those information in GLUE2 ==<br />
'''Note''': the BDII service speaks only GLUE2. The cloud information needs to be squeezed into the current set of GLUE2 entities. If the schema is extended to include cloud-specific entities, the extension needs to be officially approved by OGF and implemented in the various ''glue-schema'' and ''glue-validator'' components deployed with the BDII.<br />
<br />
=== Use the currently available GLUE2.0 entities ===<br />
<br />
Currently GLUE2 includes two main conceptual models, for Computing Elements and Storage Elements. These elements should be used to model the cloud capabilities while remaining compliant with the current GLUE2.0 schema.<br />
<br />
==== Capabilities for cloud services ====<br />
''Note: '''bold''' capabilities are new, not already in the GLUE2 specification. Adding new capabilities does not require an extension of the GLUE2 schema.''<br><br />
''Please:'' add new ''high level'' capabilities if you feel that something is missing. These capabilities are used in the following entities.<br />
<br />
{| border='1'<br />
!Capability<br />
!Description<br />
|-<br />
|'''cloud.VMmanagement'''<br />
|This is the '''standard''' capability that every cloud service should publish if it allows users to instantiate/suspend/delete virtual machines<br />
|-<br />
|'''cloud.virtualImagesUpload'''<br />
|This is the capability that allows users to upload their own virtual images through the cloud interface<br />
|-<br />
|security.authentication/security.authorization<br />
|I would keep these capabilities, given that every cloud provider has authentication <br />
|-<br />
|<br />
|<br />
|}<br />
<br />
====Computing Service entity description====<br />
* This Service is used to describe the computing resource itself, decoupling from the Grid endpoint.<br />
* Attributes that need to be provided by the resource providers are in '''bold'''<br />
{| border='1'<br />
!Attribute<br />
!Type<br />
!Multiplicity<br />
!Description <br />
|-<br />
|Creation time<br />
|..<br />
|..<br />
|..<br />
|-<br />
|Validity<br />
|..<br />
|..<br />
|..<br />
|-<br />
|ID<br />
|..<br />
|..<br />
|..<br />
|-<br />
|'''Name'''<br />
|String<br />
|1<br />
|Human readable name. It could be used to fill the information: "what is the name of the resource"<br />
|-<br />
|OtherInfo<br />
|String<br />
|n<br />
|Placeholder to add information that does not fit into any other attribute. Cloud information that cannot be mapped in other attributes could be added here.<br />
|-<br />
|'''Capability'''<br />
|Capability_t<br />
|n<br />
|This attribute lists the capabilities available for this service; currently the type ''Capability_t'' does not include specific cloud capabilities. Being an open enum type, it can be extended with additional capabilities. Some of the already available capabilities are security.accounting, security.authentication and information.logging. We could consider adding capabilities like "''cloud.vm.uploadImage''" to cover the question "am I allowed to upload my own instances?". To identify cloud services, a new capability common to all cloud services, regardless of their specific capabilities, would need to be added, like "cloud.managementSystem" (nb: just a placeholder name). ''Resource providers, at this design stage, could provide just descriptions of the capabilities they would like to publish. I (Peter) will try to group them, proposing some labels for the different capabilities.''<br />
|-<br />
|'''Type'''<br />
|ServiceType_t<br />
|1<br />
|Type of service in a reverse namespace model, e.g.: org.glite.lb or org.glite.wms. It could be ''org.opennebula'', ''org.stratuslab'' or ''com.cloudsigma''<br />
|}<br />
<br />
There are then a number of further attributes (static and dynamic) that could be used by cloud services, such as StatusInfo, TotalJobs, RunningJobs, etc.<br />
Please note that '''Location''' is a GLUE2 entity that can be linked to the Service entity; this could answer the ''"Where is the cloud facility located?"'' question.<br />
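As an illustration of how such a Service entry could look, the sketch below builds an LDIF entry. The attribute names follow the GLUE2 LDAP rendering; the DN layout and all values are invented examples, not a real provider's record:<br />

```python
# Sketch of a GLUE2 Service entry for a cloud provider, rendered as LDIF.
# DN layout, IDs and values are invented; attribute names follow the GLUE2
# LDAP schema as deployed with the BDII.
entry = {
    "dn": "GLUE2ServiceID=cloud.example.org_service,GLUE2GroupID=resource,o=glue",
    "objectClass": ["GLUE2Entity", "GLUE2Service"],
    "GLUE2ServiceID": "cloud.example.org_service",
    "GLUE2EntityName": "EXAMPLE-Cloud",
    "GLUE2ServiceType": "org.opennebula",
    "GLUE2ServiceCapability": ["cloud.VMmanagement", "cloud.virtualImagesUpload"],
}

def to_ldif(e):
    """Render the dict as LDIF, expanding multi-valued attributes."""
    lines = ["dn: " + e["dn"]]
    for key, value in e.items():
        if key == "dn":
            continue
        values = value if isinstance(value, list) else [value]
        for v in values:
            lines.append(f"{key}: {v}")
    return "\n".join(lines)

print(to_ldif(entry))
```

Note how the proposed cloud capabilities simply become extra GLUE2ServiceCapability values, with no schema extension needed.<br />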
<br />
==== ComputingEndpoint description ====<br />
<br />
Every ComputingService has '''one or more''' associated ComputingEndpoints. An endpoint is used to create, control and monitor computational activities.<br><br />
*Resource providers should provide the information to create one endpoint for each interface they're exposing for the cloud service.<br />
{| border='1'<br />
!Attribute<br />
!Type<br />
!Multiplicity<br />
!Description <br />
|-<br />
|CreationTime<br />
|..<br />
|..<br />
|I will skip the most general attributes, like OtherInfo and Capability (described above).<br />
|-<br />
|'''URL'''<br />
|URI<br />
|1<br />
|Network location of the endpoint.<br />
|-<br />
|'''Capability'''<br />
|Capability_t<br />
|0..n<br />
|It is the same field as in the Service entity. Some capabilities could be interface-specific. I would replicate all the general capabilities for this entity as well.<br />
|-<br />
|'''Technology'''<br />
|EndpointTechnology_t<br />
|1<br />
|Examples are "webservice" and "corba". We could add "webportal" or something like this to clarify that the endpoint refers to a web application.<br />
|-<br />
|'''InterFaceName'''<br />
|InterFaceName_t<br />
|1 (mandatory)<br />
|The interface in the cloud case could be ''OCCI'', ''EC2'', ''jclouds'' or "webinterface". This can answer the question: "what type of interface can I use to manage instances on the resource?"<br />
|-<br />
|'''InterfaceVersion'''<br />
|..<br />
|..<br />
|No description needed.<br />
|-<br />
|'''Supported profile'''<br />
|URI<br />
|*<br />
|We can define, here, a set of profiles for the authN/authZ of the users, like ''uri:sec:x509''.<br />
|}<br />
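Once endpoints are published this way, a client could select one by InterfaceName. A minimal sketch, using an invented endpoint list (the URLs and versions are placeholders, not real published data):<br />

```python
# Sketch: choosing an endpoint by its GLUE2 InterfaceName. The list mirrors
# what a provider might publish; all URLs and versions are invented examples.
endpoints = [
    {"InterfaceName": "OCCI", "InterfaceVersion": "1.1", "URL": "https://cloud.example.org:3333/"},
    {"InterfaceName": "EC2", "InterfaceVersion": "2009-08-15", "URL": "https://cloud.example.org:8773/"},
    {"InterfaceName": "webinterface", "InterfaceVersion": "n/a", "URL": "https://cloud.example.org:8443/"},
]

def find_endpoints(endpoints, interface_name):
    """Return the endpoints exposing the requested interface."""
    return [e for e in endpoints if e["InterfaceName"] == interface_name]

occi = find_endpoints(endpoints, "OCCI")
print(occi[0]["URL"])
```

This is exactly the query a federation-level broker would run against the information system to answer "what type of interface can I use on this resource?".<br />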
<br />
==== ExecutionEnvironment ====<br />
The ExecutionEnvironment class describes the hardware and operating system environment in which a job will run. It could be used to describe the VM images already available in the Cloud service. <br />
<br />
{| border='1'<br />
!Attribute<br />
!Type<br />
!Multiplicity<br />
!Description <br />
|-<br />
|'''Platform'''<br />
|Platform_t<br />
|1<br />
|The platform architecture; can be: amd64, i386, itanium, powerpc, sparc<br />
|-<br />
|TotalInstances/used instances<br />
| -<br />
| -<br />
|These attributes are not relevant in a cloud environment, where execution environments are deployed dynamically.<br />
|-<br />
|PhysicalCPUs<br />
|UInt32<br />
|0..1<br />
|The physical CPUs are, I would say, not relevant in a virtualised environment.<br />
|-<br />
|'''LogicalCPUs'''<br />
|UInt32<br />
|0..1<br />
|This attribute could be used to express the '''maximum''' number of cores that it is possible to instantiate in a single VM of this type (likely it will be common to all the execution environments of the same cloud service).<br />
|-<br />
|'''MainMemorySize'''<br />
|UInt64<br />
|1<br />
|Maximum physical memory that it is possible to instantiate on a single VM.<br />
|-<br />
|<br />
*'''OSFamily'''<br />
*'''OSName'''<br />
*'''OSVersion'''<br />
| (*)<br />
|1<br />
|Attributes which define the operating system available. There will be an execution environment for every virtual machine available in the cloud service. We should define some placeholders to create an ExecutionEnvironment ''stub'' to describe the max cores/memory for the virtual machines uploaded by a user.<br />
|-<br />
|}<br />
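A consumer could match a VM request against these published limits, which also addresses the dynamic question "does your cloud have A>X, B>Y, C>Z resources available?". A minimal sketch, assuming LogicalCPUs is the per-VM core limit and MainMemorySize is in MB, as described above (all values invented):<br />

```python
# Sketch: checking a VM request against published ExecutionEnvironment limits.
# LogicalCPUs = max cores per VM, MainMemorySize = max memory per VM in MB.
# The environment data is invented for illustration.
environments = [
    {"Platform": "amd64", "OSName": "ScientificLinux", "LogicalCPUs": 8, "MainMemorySize": 16384},
    {"Platform": "i386", "OSName": "Debian", "LogicalCPUs": 2, "MainMemorySize": 4096},
]

def can_host(env, cores, memory_mb):
    """True if a single VM of the requested size fits the published limits."""
    return cores <= env["LogicalCPUs"] and memory_mb <= env["MainMemorySize"]

matches = [e["OSName"] for e in environments if can_host(e, cores=4, memory_mb=8192)]
print(matches)
```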
<br />
=== Deploy a new set of entities ===<br />
<br />
This is the next step: define cloud-specific GLUE entities to extend the GLUE2 schema in order to publish the cloud services in a standard way.<br />
<br />
<!-- What to model?<br />
What is the name of the resource and what type of interface can I use to manage instances on the resource?<br />
What is the endpoint I should contact to interact with the cloud management interface? (E.g. the url of the web-service/portal) <br />
What are the AuthN and AuthZ rules that operate on your cloud?<br />
What instances are already installed on the resource and am I allowed to upload my own instances?<br />
If I am able to upload instances what format of instances does the resource accept?<br />
Is there a data interface available and if so what is it?<br />
What is the overall size of the resource?<br />
Are instance templates defined that limit the choice of instance scales I am able to run?<br />
What type of virtual network can I establish on the resource?<br />
Does the resource support cloud scalability through managed bursting to another external provider? <br />
<br />
The following are questions on the dynamic information;<br />
<br />
I have a virtual instance that requires X,Y,Z resources, does your cloud have A>X, B>Y,C>Z resource available?<br />
My instance is short lived is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur?<br />
What is the charging scheme and how much will using your cloud cost? <br />
--><br />
<br />
== Technical implementation ==<br />
<br />
For a first demo the best technical choice is OpenLDAP, which is available on virtually every *nix machine. Moreover, OpenLDAP is the server used by the gLite BDIIs, so it would be easy to reuse the configuration set-up used for the GRIS or the GIIS.<br />
* Use the GLUE20.schema in the ''slapd.conf'' file to enable all the GLUE2.0 entities.<br />
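A minimal slapd configuration along these lines might look as follows; the file paths, suffix and database backend are site-specific assumptions, not a tested recipe:<br />

```
# Illustrative slapd.conf fragment; adjust paths and suffix for the site.
include   /etc/openldap/schema/core.schema
include   /opt/glue/schema/ldap/GLUE20.schema   # enables the GLUE2.0 entities

database  bdb
suffix    "o=glue"                              # GLUE2 LDAP tree root used by the BDII
rootdn    "o=glue"
directory /var/lib/ldap/glue
```

The GLUE2 entries built as in the examples above could then be loaded with ldapadd and queried with ldapsearch against the "o=glue" suffix.<br />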
<br />
== Resource providers to be published for the demo ==<br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! RP contact name <br />
! Resource Centre name to be published (was Site Name) <br />
! Country <br />
! Capabilities to be published (specify the endpoints supporting the capabilities!) <br />
! Other info to publish<br />
|-<br />
| e.g. CESNET <br />
| e.g. Peter Solagna (not for info sys) <br />
| e.g. CESNET-Cloud-testbed <br />
| e.g. Czech Republic <br />
| e.g. cloud.managementSystem, cloud.vm.uploadImage <br />
| (I will try to publish them in glue2 if possible)<br />
|-<br />
| KTH<br />
| Zeeshan Ali&nbsp;Shah<br />
| KTH-PDC Cloud<br />
| Sweden<br />
| cloud.managementSystem, cloud.vm.customimage, cloud.data.cdmi<br />
| <br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=34113Fedcloud-tf:Testbed2012-03-13T11:53:17Z<p>Zashah: </p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
== Technologies Distribution ==<br />
<br />
The federation test bed does not mandate what VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meantime, you may create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#subject should contain "[FedCloud registration]"; <br />
#body should contain a name/organization and a contact email address, optionally the DN from your X.509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customize the system according to their needs, including the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] requesting an account and mentioning the Clouds Federation test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]: <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| CESNET <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1<br>X509 OCCI 1.1 <br />
| https://carach5.ics.muni.cz/<br>https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br>https://carach5.ics.muni.cz:10443/<br />
|-<br />
| GRNET <br />
| <br> <br />
| <br><br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br />
|-<br />
| Cyfronet <br />
| StratusLab <br>X509 OCCI 1.1 <br />
| https://149.156.10.30:2634/ <br>https://149.156.10.30:3443/<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| cloud-lal.stratuslab.eu<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br />
|-<br />
| KTH <br />
| OCCI 0.8<br>OVF<br>OCA<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (x.509 auth) <br>CDMI Proxy <br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br>http://front.pdc2.pdc.kth.se:3200/<br>https://front.pdc2.pdc.kth.se:3202/<br>http://front.pdc2.pdc.kth.se:3300/<br />
|-<br />
| SARA <br />
| <br> <br />
| <br><br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3 <br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br />
|-<br />
| CC-IN2P3 <br />
| Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3 <br />
| http://ccec2.in2p3.fr:8773/services/Cloud<br>http://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br />
|}<br />
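As an example of what a management request against one of the OCCI 1.1 endpoints above could look like, the sketch below composes (but does not send) a "create compute" request. The headers follow the OCCI HTTP rendering; the endpoint is taken from the table, the resource sizes are arbitrary, and authentication is omitted:<br />

```python
# Sketch: composing (not sending) an OCCI 1.1 request to create a compute
# resource. Headers follow the OCCI HTTP rendering; credentials are omitted
# and the cores/memory values are arbitrary examples.
import urllib.request

endpoint = "http://carach5.ics.muni.cz:3333/compute/"  # CESNET OCCI 1.1, from the table

req = urllib.request.Request(endpoint, method="POST")
req.add_header("Content-Type", "text/occi")
req.add_header(
    "Category",
    'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
)
req.add_header("X-OCCI-Attribute", "occi.compute.cores=2, occi.compute.memory=4.0")

# Only composed here; a real client would add credentials and call urlopen().
print(req.get_method(), req.full_url)
```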
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Beta (pre-production) <br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage <br />
| OpenNebula 3.0; to increase heterogeneity we could also add Eucalyptus 2.0 or a Nimbus + Cumulus interface <br />
| Shared NFS filesystem, GridFTP remote access, S3 Cumulus implementation <br />
| OpenNebula/OCCI <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP-based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 <br />
| Open for discussion <br />
| Username and password as temporary solution, in the future X509 certificates <br />
| In general up to user, plan to support registered user SSH keys for root access<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| For the initial setup 12 servers are ready; extensions depend on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More from the beginning of 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Diablo <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2, Openstack API 1.1 <br />
| OCCI when available <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password and X509&nbsp; <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| <br> <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--> <br><br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a resource monitoring solution for the cloud federation based on Nagios. Meanwhile, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. This table is updated weekly by the resource providers. <br />
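Nagios probes conventionally report through their exit status: 0 (OK), 1 (WARNING), 2 (CRITICAL), plus a one-line status message. As a minimal, hedged sketch of that convention (this is not the Task Force's actual probe; the reachability test is passed in as a callable so the decision logic can be shown without touching the network):

```python
# Sketch of a Nagios-style availability probe for a cloud endpoint.
# Exit codes follow the Nagios plugin convention: 0=OK, 1=WARNING, 2=CRITICAL.
# A real probe would replace the injected callable with a TCP/HTTP check
# against the provider's management interface.

OK, WARNING, CRITICAL = 0, 1, 2

def probe(endpoint, is_reachable):
    """Return (exit_code, status_line) for the given endpoint."""
    try:
        reachable = is_reachable(endpoint)
    except Exception as exc:
        return CRITICAL, f"CRITICAL - {endpoint}: probe error ({exc})"
    if reachable:
        return OK, f"OK - {endpoint} is available"
    return CRITICAL, f"CRITICAL - {endpoint} is not available"

# Stub standing in for a real socket/HTTP reachability test:
code, message = probe("http://carach5.ics.muni.cz:3333/", lambda url: True)
print(code, message)
```

A Nagios scheduler would run such a script per endpoint and colour the status table from the exit code.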
<br />
{| cellspacing="0" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br />
<br />
[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br />
[http://carach5.ics.muni.cz:3333/ OCCI v1.1] <br />
</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span> <br />
| <br> <span style="background:red">No</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OerC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=34112Fedcloud-tf:Testbed2012-03-13T11:52:16Z<p>Zashah: </p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
== Technologies Distribution ==<br />
<br />
The federation test bed does not mandate which VMM (virtual machine manager) its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meantime, you may create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#the subject should contain "[FedCloud registration]"; <br />
#the body should contain your name/organization and a contact e-mail address, and optionally the DN from your X.509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customize the system according to their needs, including the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account, please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] requesting an account, mentioning the Federation Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]] are listed below. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| CESNET <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1<br>X509 OCCI 1.1 <br />
| https://carach5.ics.muni.cz/<br>https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br>https://carach5.ics.muni.cz:10443/<br />
|-<br />
| GRNET <br />
| <br> <br />
| <br><br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br />
|-<br />
| Cyfronet <br />
| StratusLab <br>X509 OCCI 1.1 <br />
| https://149.156.10.30:2634/ <br>https://149.156.10.30:3443/<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| cloud-lal.stratuslab.eu<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br />
|-<br />
| KTH <br />
| OCCI 0.8<br>OVF<br>OCA<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (x.509 auth) <br>CDMI Proxy <br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br>http://front.pdc2.pdc.kth.se:3200/<br>https://front.pdc2.pdc.kth.se:3202/<br>http://front.pdc2.pdc.kth.se:3300/<br />
|-<br />
| SARA <br />
| <br> <br />
| <br><br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3 <br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br />
|-<br />
| CC-IN2P3 <br />
| Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3 <br />
| http://ccec2.in2p3.fr:8773/services/Cloud<br>http://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br />
|}<br />
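Most of the OCCI 1.1 endpoints above use the OGF text rendering, in which each resource is described by <code>Category</code> and <code>X-OCCI-Attribute</code> lines. As a hedged illustration of working with such a listing (the sample response below is fabricated for illustration, not captured from any of these endpoints; a real client would fetch it with an authenticated HTTP GET):

```python
# Sketch: parsing an OCCI 1.1 text rendering into Python structures.
# The sample response is illustrative only.

def parse_occi(text):
    """Collect 'Category' terms and 'X-OCCI-Attribute' key/value pairs."""
    categories, attributes = [], {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Category:"):
            # keep only the term, dropping scheme/class qualifiers
            term = line[len("Category:"):].strip().split(";")[0]
            categories.append(term)
        elif line.startswith("X-OCCI-Attribute:"):
            key, _, value = line[len("X-OCCI-Attribute:"):].partition("=")
            attributes[key.strip()] = value.strip().strip('"')
    return categories, attributes

sample = '''
Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"
X-OCCI-Attribute: occi.core.id="vm-42"
X-OCCI-Attribute: occi.compute.cores=2
X-OCCI-Attribute: occi.compute.state="active"
'''

cats, attrs = parse_occi(sample)
print(cats)                         # ['compute']
print(attrs["occi.compute.state"])  # active
```

Attribute names such as <code>occi.compute.cores</code> come from the OCCI infrastructure schema; providers may expose additional, provider-specific attributes.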
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make a small portion of their cloud infrastructure available in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to any user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Beta (pre-production) <br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage <br />
| OpenNebula 3.0; to increase heterogeneity we could also add Eucalyptus 2.0 or a Nimbus + Cumulus interface <br />
| Shared NFS filesystem, GridFTP remote access, S3 Cumulus implementation <br />
| OpenNebula/OCCI <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP-based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 <br />
| Open for discussion <br />
| Username and password as temporary solution, in the future X509 certificates <br />
| In general up to user, plan to support registered user SSH keys for root access<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| For the initial setup 12 servers are ready; extensions depend on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More from the beginning of 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Diablo <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2, Openstack API 1.1 <br />
| OCCI when available <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password (current); X509 (needs consensus within the Task Force) <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| <br> <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--> <br> <br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (except in a very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (e.g. LSF and PBS) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the cloud federation. In the meantime, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. The table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br />
<br />
[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br />
[http://carach5.ics.muni.cz:3333/ OCCI v1.1] <br />
<br />
</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span> <br />
| <br> <span style="background:red">No</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OerC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=34111Fedcloud-tf:Testbed2012-03-13T11:51:35Z<p>Zashah: </p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
== Technologies Distribution ==<br />
<br />
The federation test bed does not mandate which VMM (virtual machine manager) its resource providers use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meantime, you can create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#subject should contain "[FedCloud registration]"; <br />
#body should contain a name/organization and a contact email address, and optionally the DN from your X.509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customize the system according to their needs. This includes the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] requesting an account and mentioning the Federated Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]: <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| CESNET <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1<br>X509 OCCI 1.1 <br />
| https://carach5.ics.muni.cz/<br>https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br>https://carach5.ics.muni.cz:10443/<br />
|-<br />
| GRNET <br />
| <br> <br />
| <br><br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br />
|-<br />
| Cyfronet <br />
| StratusLab <br>X509 OCCI 1.1 <br />
| https://149.156.10.30:2634/ <br>https://149.156.10.30:3443/<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| cloud-lal.stratuslab.eu<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br />
|-<br />
| KTH <br />
| OCCI 0.8<br>OVF<br>OCA<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (x.509 auth) <br>CDMI Proxy <br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br>http://front.pdc2.pdc.kth.se:3200/<br>https://front.pdc2.pdc.kth.se:3202<br>http://front.pdc2.pdc.kth.se:3300/<br />
|-<br />
| SARA <br />
| <br> <br />
| <br><br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3 <br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br />
|-<br />
| CC-IN2P3 <br />
| Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3 <br />
| http://ccec2.in2p3.fr:8773/services/Cloud<br>http://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br />
|}<br />
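As a quick sanity check, the OCCI interfaces listed above can be queried over plain HTTP. The sketch below only assembles (and prints) a request against CESNET's OCCI 1.1 endpoint from the table; the username and password are placeholders for credentials obtained through the registration procedure in the previous section, and the <code>/compute/</code> path and <code>text/plain</code> accept header follow the usual OCCI 1.1 HTTP rendering, which individual deployments may vary. <br />

```shell
#!/bin/sh
# Sketch: list compute resources on an OCCI 1.1 endpoint (CESNET, from the table above).
# USER/PASS are placeholders -- request an account as described in the previous section.
ENDPOINT="http://carach5.ics.muni.cz:3333"
USER="your-username"
PASS="your-password"
# Assemble the curl invocation; printed instead of executed so the testbed
# is not contacted without valid credentials.
CMD="curl -s -u ${USER}:${PASS} -H 'Accept: text/plain' ${ENDPOINT}/compute/"
echo "$CMD"
```

Running the printed command with real credentials should return the endpoint's list of compute resource locations. <br />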
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and password; X.509 in the future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Beta (pre-production) <br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage <br />
| OpenNebula 3.0; to increase heterogeneity, Eucalyptus 2.0 or Nimbus with a Cumulus interface could be added too <br />
| Shared NFS filesystem, GridFTP remote access, S3 Cumulus implementation <br />
| OpenNebula/OCCI <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. <br />
| N/A? If a reporter for standard usage records is implemented, it can be deployed. <br />
| N/A? A STOMP-based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 <br />
| Open for discussion <br />
| Username and password as a temporary solution; X.509 certificates in the future <br />
| In general up to the user; planned support for registered users' SSH keys for root access<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| 12 servers ready for the initial setup; extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona" CPUs, 2.4 GHz, 16 GB RAM, 250 GB HDD. More in early 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web portal (authentication based on X.509) expected in the next 2 months. A federated single sign-on authentication service (based on Shibboleth) should be supported in the next 4&ndash;6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Diablo <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2, Openstack API 1.1 <br />
| OCCI when available <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January 2011 <br />
| Initially 2 servers with a total of 4 cores, 16 GB RAM and 1 TB storage <br />
| OpenNebula <br />
| Possibility to mount NFS storage <br />
| OpenNebula web interface with OCCI and OCA APIs <br />
| Ganglia (still to be tested) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and password (current); X.509 (needs consensus within the Task Force) <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 2 and 8 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production since 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| Username and password; X.509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| <br> <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--> <br> <br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (except in a very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (e.g. LSF and PBS) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the cloud federation. In the meantime, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. The table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br />
<br />
[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br />
[http://carach5.ics.muni.cz:3333/ OCCI v1.1] <br />
<br />
</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span> <br />
| <br> <span style="background:red">No</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OerC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=34110Fedcloud-tf:Testbed2012-03-13T11:50:08Z<p>Zashah: /* Enpoints */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br> <br />
<br />
== Technologies Distribution ==<br />
<br />
The federation test bed does not mandate which VMM (virtual machine manager) its resource providers use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently. <br />
<br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups: Federated AAI|federated AAI]] will be provided. In the meantime, you can create an account with each of the following providers. <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESGA <br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account: <br />
#subject should contain "[FedCloud registration]"; <br />
#body should contain a name/organization and a contact email address, and optionally the DN from your X.509 EGI certificate.<br />
<br />
|-<br />
| FZ Jülich <br />
| Send an email to [mailto:b.hagemeier@fz-juelich.de Björn Hagemeier] stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.<br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF <br />
| https://register.stratuslab.eu:8444<br />
|-<br />
| GWDG <br />
| Send an e-mail to [mailto:piotr.kasprzak@gwdg.de piotr.kasprzak@gwdg.de] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| KTH <br />
| PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customize the system according to their needs. This includes the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA <br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] with a request for an account, mentioning the Federation Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]] <br />
<br />
{| cellspacing="5" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Interface Type <br />
! Endpoint<br />
|-<br />
| CESNET <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1<br>X509 OCCI 1.1 <br />
| https://carach5.ics.muni.cz/<br>https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br>https://carach5.ics.muni.cz:10443/<br />
|-<br />
| GRNET <br />
| <br> <br />
| <br><br />
|-<br />
| CESGA <br />
| OCCI 0.8<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (X.509 Auth)<br>SunStone <br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:3200<br>https://meghacloud.cesga.es:3202<br>http://meghacloud.cesga.es:9869/<br />
|-<br />
| Cyfronet <br />
| StratusLab <br>X509 OCCI 1.1 <br />
| https://149.156.10.30:2634/ <br>https://149.156.10.30:3443/<br />
|-<br />
| GRIF <br />
| StratusLab <br />
| cloud-lal.stratuslab.eu<br />
|-<br />
| GWDG <br />
| Sunstone<br>OCCI 0.8<br>OCCI 1.1 <br />
| https://one.cloud.gwdg.de:8443<br>http://occi.cloud.gwdg.de:3400<br>http://occi.cloud.gwdg.de:3200<br />
|-<br />
| KTH <br />
| OCCI 0.8<br>OVF<br>OCA<br>OCCI 1.1 (user/pass)<br>OCCI 1.1 (x.509 auth) <br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br>http://front.pdc2.pdc.kth.se:3200/<br>https://front.pdc2.pdc.kth.se:3202/<br />
|-<br />
| SARA <br />
| <br> <br />
| <br><br />
|-<br />
| FZ Jülich <br />
| OpenStack EC2<br>OpenStack S3 <br />
| http://egi-cloud.zam.kfa-juelich.de:8773<br>http://egi-cloud.zam.kfa-juelich.de:3333<br />
|-<br />
| TCD <br />
| StratusLab OpenNebula proxy <br />
| https://cagnode42.cs.tcd.ie:2634<br />
|-<br />
| CC-IN2P3 <br />
| Openstack EC2<br>Openstack Nova API 1.1<br>Openstack S3 <br />
| http://ccec2.in2p3.fr:8773/services/Cloud<br>http://ccnovaapi.in2p3.fr:8774/v1.1/<br>http://ccs3.in2p3.fr:3333<br />
|}<br />
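As a quick sanity check of the endpoints listed above, an OCCI 1.1 endpoint can be asked for its category discovery document at the "/-/" path. The sketch below is illustrative only: it assumes the endpoint implements the OCCI 1.1 HTTP rendering and is reachable; the CESNET URL is taken from the table, and "text/plain" is the OCCI 1.1 fallback rendering.<br />

```python
# Illustrative OCCI 1.1 category-discovery sketch (assumes the endpoint
# follows the OCCI HTTP rendering, which serves category discovery at /-/).
import urllib.request


def discovery_request(endpoint):
    """Build the URL and headers for an OCCI 1.1 category discovery call."""
    url = endpoint.rstrip("/") + "/-/"
    # text/plain is the mandatory fallback rendering in OCCI 1.1
    headers = {"Accept": "text/plain"}
    return url, headers


def list_categories(endpoint):
    """Perform the live discovery request and return the raw category list."""
    url, headers = discovery_request(endpoint)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()


# Example (live call): list_categories("http://carach5.ics.muni.cz:3333")
```

Only list_categories() touches the network; discovery_request() merely builds the URL and headers, so it can be inspected without access to the test bed.<br />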
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for clouds federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br> <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Beta (pre-production) <br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage <br />
| OpenNebula 3.0, to increase heterogeneity we could add Eucalyptus 2.0 or Nimbus + Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, S3 Cumulus implementation <br />
| OpenNebula/OCCI <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 <br />
| Open for discussion <br />
| Username and password as temporary solution, in the future X509 certificates <br />
| In general up to user, plan to support registered user SSH keys for root access<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br> <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br> <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment <br />
| <br> <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with Dual-Proc AMD Quad-Core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More beginning 2012 <br />
| OpenNebula 3.2 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.2 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br> <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS) and integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Testbed <br />
| 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores <br />
| Openstack Diablo <br />
| Local disks <br />
| undef <br />
| Nagios, Collectd/Smurf <br />
| undef <br />
| undef <br />
| EC2, Openstack API 1.1 <br />
| OCCI when available <br />
| user/password, x509 when available <br />
| OpenSSH<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password (current), X509 (needs consensus within the Task Force) <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br> <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| <br> <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--> <br> <br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| cellspacing="1" cellpadding="1" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black;"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers NAGIOS probes integration Capability - No NAGIOS probes development is foreseen within JRA1 (with the exception of very few cases) - Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br><br />
|}<br />
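The StratusLab entry above drives OpenNebula through its XML-RPC interface. The following Python sketch illustrates what such a call looks like. The default port 2633, the "user:password" session string, and the one.vmpool.info method follow OpenNebula's published XML-RPC API, but argument details vary between OpenNebula versions; treat the host, credentials, and call below as illustrative assumptions rather than a tested client.<br />

```python
# Hedged sketch of an OpenNebula XML-RPC call, of the kind StratusLab uses
# for VM management. Assumptions: default XML-RPC endpoint on port 2633,
# plain "user:password" session string, and the one.vmpool.info method
# signature of the OpenNebula 3.x API.
import xmlrpc.client


def one_session(user, password):
    """OpenNebula passes credentials as a single 'user:password' string."""
    return f"{user}:{password}"


def vmpool_info(host, user, password):
    """Fetch the VM pool listing from an OpenNebula front-end (live call)."""
    proxy = xmlrpc.client.ServerProxy(f"http://{host}:2633/RPC2")
    # Arguments: session, filter flag (-2 = all VMs), id range start/end
    # (-1/-1 = no range restriction), VM state filter (-1 = any state)
    return proxy.one.vmpool.info(one_session(user, password), -2, -1, -1, -1)
```

Creating the ServerProxy does not contact the server; only the vmpool_info() call itself performs the XML-RPC request.<br />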
<br />
<br> <br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the clouds federation. Meanwhile, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. This table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" class="wikitable" style="border-collapse: collapse; border:1px solid black; text-align:center;"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span> <br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br />
<br />
[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br />
[http://carach5.ics.muni.cz:3333/ OCCI v1.1] <br />
</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span> <br />
| <br> <span style="background:red">No</span> <br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span> <br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <span style="background:green">[mailto:piotr.kasprzak@gwdg.de]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| Mail to Björn Hagemeier <br />
| <br />
*EC2 [http://egi-cloud.zam.kfa-juelich.de:8773 egi-cloud.zam.kfa-juelich.de:8773] <br />
*S3 [http://egi-cloud.zam.kfa-juelich.de:3333 egi-cloud.zam.kfa-juelich.de:3333]<br />
<br />
| <span style="background:green">&nbsp;&nbsp;&nbsp; </span> <br />
| 134.94.32.33 - 134.94.32.40 <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OeRC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=32465Fedcloud-tf:Testbed2012-02-07T12:07:50Z<p>Zashah: /* Enpoints */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br />
== Technologies Distribution ==<br />
The federation test bed does not mandate what VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently.<br />
<br />
<pPie labels exploded 3d><br />
OpenNebula,7<br />
StratusLab,3<br />
OpenStack,3<br />
WNoDeS,1<br />
</pPie><br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups:_Federated_AAI|federated AAI]] will be provided. In the meantime, you may create an account with each of the following providers.<br />
<br />
{| border="1" cellspacing="5" cellpadding="5" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account:<ol><li>the subject should contain "[FedCloud registration]";</li><li>the body should contain your name/organization and a contact email<br />
address, and optionally the DN from your X.509 EGI certificate.</li></ol> <br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESGA<br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF<br />
|<br />
|-<br />
| GWDG<br />
|<br />
|-<br />
| KTH<br />
| PDC Cloud (PDC2) is a cloud resource which gives users the flexibility to customize the system according to their needs. This includes the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA<br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] with a request for an account, mentioning the Federation Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|}<br />
<br />
== Endpoints ==<br />
<br />
The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]<br />
<br />
{| border="1" cellspacing="5" cellpadding="5" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider<br />
! Interface Type<br />
! Endpoint<br />
|-<br />
| CESNET<br />
| OCCI 0.8<br>OCCI 1.1<br />
| https://carach5.ics.muni.cz:9443/<br>http://carach5.ics.muni.cz:3333/<br />
|-<br />
| GRNET<br />
|<br />
|<br />
|-<br />
| CESGA<br />
| OCCI 0.8<br>SunStone<br />
| http://meghacloud.cesga.es:4569/<br>http://meghacloud.cesga.es:9869/<br />
|-<br />
| GRIF<br />
|<br />
|<br />
|-<br />
| GWDG<br />
|<br />
|<br />
|-<br />
| KTH<br />
|<br />
OCCI 0.8<br>OVF<br>OCA<br />
| http://front.pdc2.pdc.kth.se:4569/<br>https://front.pdc2.pdc.kth.se:8443/ovf4one<br>http://front.pdc2.pdc.kth.se:2633/<br />
|-<br />
| SARA<br />
|<br />
|<br />
|}<br />
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for clouds federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| border="1" cellspacing="1" cellpadding="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| Beta (pre-production)<br />
| 10x (24 cores, 100GB RAM) + 44TB shared storage<br />
| OpenNebula 3.0, to increase heterogeneity we could add Eucalyptus 2.0 or Nimbus + Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, S3 Cumulus implementation<br />
| OpenNebula/OCCI<br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP based EGI messaging infrastructure is available on the site <br />
| OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1<br />
| Open for discussion <br />
| Username and password as temporary solution, in the future X509 certificates <br />
| In general up to user, plan to support registered user SSH keys for root access<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment<br />
| <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with Dual-Proc AMD Quad-Core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More beginning 2012 <br />
| OpenNebula 3.0 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.0 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS) and integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Work in progress <br />
| 5 nodes for a total of 40 cores, 80GB RAM, 800GB local disk <br />
| Openstack <br />
| undef <br />
| undef <br />
| Nagios <br />
| undef <br />
| undef <br />
| undef <br />
| OCCI when available <br />
| user/pwd, x509 when possible <br />
| undef<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password (current), X509 (needs consensus within the Task Force) <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--><br />
<br><br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| border="1" cellspacing="1" cellpadding="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probes integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br />
|}<br />
<br />
<br><br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the cloud federation. Meanwhile, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. The table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black; text-align:center;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <span style="background:green">[mailto:fedcloud@metacentrum.cz]</span><br />
| <br> <span style="background:green">[https://carach5.ics.muni.cz/ Sunstone]<br/>[https://carach5.ics.muni.cz:9443/ OCCI v0.8]<br/>[http://carach5.ics.muni.cz:3333/ OCCI v1.1]</span><br />
| <br> <span style="background:green">VM suse (storage/16) with NET public (network/4)</span><br />
| <br> <span style="background:red">No</span><br />
| <br> <span style="background:green">Cumulus at carach3.ics.muni.cz:8888 </span><br />
| <br> <span style="background:green">GridFTP at carach4.ics.muni.cz:50000 </span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| <span style="background:green">[mailto:b.hagemeier@fz-juelich.de]</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br><br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OeRC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Testbed&diff=32365Fedcloud-tf:Testbed2012-02-06T15:34:25Z<p>Zashah: /* Cloud Resources Status */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
<br />
== Technologies Distribution ==<br />
The federation test bed does not mandate what VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently.<br />
<br />
<pPie labels exploded 3d><br />
OpenNebula,7<br />
StratusLab,3<br />
OpenStack,3<br />
WNoDeS,1<br />
</pPie><br />
<br />
== How to obtain an account for the test bed ==<br />
<br />
In the long term a [[Fedcloud-tf:WorkGroups:_Federated_AAI|federated AAI]] will be provided. In the meantime, you may create an account with each of the following providers.<br />
<br />
{| border="1" cellspacing="5" cellpadding="5" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Provider <br />
! Procedure to request an account<br />
|-<br />
| CESNET <br />
| Send an e-mail to [mailto:fedcloud@metacentrum.cz fedcloud@metacentrum.cz] asking for an account:<ol><li>the subject should contain "[FedCloud registration]";</li><li>the body should contain your name/organization and a contact e-mail address, and optionally the DN from your X.509 EGI certificate.</li></ol> <br />
|-<br />
| GRNET <br />
| Send an e-mail to [mailto:louridas@grnet.gr Panos Louridas] requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| CESGA<br />
| Send an e-mail to [mailto:grid-admin@cesga.es grid-admin@cesga.es] requesting an invitation specifying that you are a user of the EGI Clouds Federation Task Force.<br />
|-<br />
| GRIF<br />
|<br />
|-<br />
| GWDG<br />
|<br />
|-<br />
| KTH<br />
| PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customize the system according to their needs. This includes the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account, please follow [http://www.pdc.kth.se/resources/computers/pdc-cloud these instructions].<br />
|-<br />
| SARA<br />
| Send an e-mail to [mailto:cloud-support@sara.nl cloud-support] with a request for an account, mentioning the Federation Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.<br />
|}<br />
<br />
<!--<br />
|-<br />
| CloudSigma<br />
| <ol><li>Obtain a free trial account with [http://www.cloudsigma.com/en/our-cloud/free-trial/ CloudSigma].</li><li>Send an e-mail to [mailto:micheal.higgins@cloudsigma.com Micheal Higgins] specifying your chosen user name (not your password) and the amount of needed resources.</li></ol><br />
--><br />
<br><br />
<br />
== Resource Providers inventory ==<br />
<br />
The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are available to every user community interested in testing or using them. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| border="1" cellspacing="1" cellpadding="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! rowspan="2" | Status <br />
! rowspan="2" | capacity <br />
! colspan="6" | Capabilities <br />
! colspan="2" | Management Interface <br />
! colspan="2" | Authentication<br />
|- style="background-color:lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification <br />
! Supported <br />
! Planned <br />
! Service layer <br />
! VMs<br />
|-<br />
| <div id="cesga">'''CESGA (IBERgrid)'''</div> (Ivan Diaz, Esteban Freire) <br />
| Production <br />
| 33 octo-core servers (264 CPUs) <br />
| OpenNebula 3.0 <br />
| Shared NFS/SSH ~ 450GB per server <br />
| OpenNebula/OCCI <br />
| Ganglia <br />
| In-house WIP Development <br />
| N/A <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| <br />
| Username and Password X.509 future <br />
| As chosen by the users<br />
|-<br />
| <div id="cesnet">'''CESNET'''</div> (Miroslav Ruda) <br />
| <br />
| Several servers available immediately; more (~10) can be added later this year <br />
| OpenNebula 3.0, to increase heterogeneity we could add Eucalyptus 2.0 or Nimbus + Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too <br />
| N/A? OpenNebula web interface? <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP based EGI messaging infrastructure in available on the site <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| X509 certificates preferred, username and password as temporary solution may be possible <br />
| In general up to user, plan to support registered user SSH keys for root access<br />
|-<br />
| <div id="cyfronet">'''Cyfronet'''</div> (Tomasz Szepieniec, Marcin Radecki) <br />
| <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with zabbix <br />
| Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| <div id="fz">'''FZ Jülich'''</div> (B. Hagemeier) <br />
| <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack 'Diablo' <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| <div id="grif">'''GRIF''' </div> (Michel Jouvin) <br />
| Production <br />
| 10 servers (240 cores) <br />
| StratusLab <br />
| iSCSI-based permanent disks <br />
| n/a <br />
| n/a <br />
| n/a <br />
| n/a <br />
| Private (StratusLab) <br />
| OCCI <br />
| X509 certificates preferred, username and password also possible <br />
| User SSH keys for root access (configured when VM is launched)<br />
|-<br />
| <div id="grnet">'''GRNET'''</div> (Panos Louridas, Vangelis Floros) <br />
| Alpha <br />
| 25 servers (200 cores, 48 GB RAM each server), 22 TB storage<br> <br />
| Okeanos (GRNET OpenStack implementation) <br />
| Local disks <br />
| OpenStack compatible <br />
| Nagios, Munin, collectd, scripts <br />
| In house development <br />
| <br> <br />
| OpenStack, also complete web based environment<br />
| <br />
| Shibboleth, invitation tokens <br />
| User SSH<br />
|-<br />
| <div id="gwdg">'''GWDG'''</div> (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More at the beginning of 2012 <br />
| OpenNebula 3.0 with OCCI server <br />
| Shared NFS <br />
| OpenNebula Web interface (Sunstone) <br />
| tbd (most likely Nagios) <br />
| Currently n/a, usage of OpenNebula 3.0 accounting components planned for late 2011 <br />
| n/a <br />
| OCCI <br />
| <br />
| Username and password, additionally X.509 in the future <br />
| Up to the user, support for preregistered ssh keys in the future<br />
|-<br />
| <div id="igi">'''IGI'''</div> (Giancinto Donvito, Paolo Veronesi) <br />
| work in progress <br />
| 24 cores, 48 GB RAM, 2TB Disk <br />
| [http://web.infn.it/wnodes/ WNoDeS] <br />
| Shared NFS filesystem <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) <br />
| Nagios <br />
| Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy <br />
| notification based on Nagios for system administrator (not for end users) <br />
| [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] [http://www.eu-emi.eu/products/-/asset_publisher/z2MT/content/cream-1 CREAM] <br />
| Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4/6 months. <br />
| GSI (Grid Security Infrastructure based on X509 personal certificates and VO membership based on VOMS) <br />
| SSH keys for root access<br />
|-<br />
| <div id="in2p3">'''CC-IN2P3'''</div> (Helene Cordier, Gille Mathieu, Mattieu Puel) <br />
| Work in progress <br />
| 5 nodes for a total of 40 cores, 80GB RAM, 800GB local disk <br />
| OpenStack <br />
| undef <br />
| undef <br />
| Nagios <br />
| undef <br />
| undef <br />
| undef <br />
| OCCI when available <br />
| user/pwd, x509 when possible <br />
| undef<br />
|-<br />
| <div id="kth">'''KTH'''</div> (Zeeshan Ali Shah) <br />
| Accessible since January, 2011 <br />
| Initially 2 Servers with Total 4 cores, 16 GB RAM and 1TB storage <br />
| OpenNebula <br />
| Possibility to mount nfs storage <br />
| OpenNebula Web interface with OCCI and OCA api <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| Username and Password (current); X509 (needs consensus within the Task Force) <br />
| SSH Keys<br />
|-<br />
| <div id="oerc">'''OeRC (UK NGI)'''</div> (David Wallom, Matteo Turilli) <br />
| <br />
| 10 servers, between 8 and 2 VMs each <br />
| Deploying OpenStack <br />
| Data supplied through S3/EBS capable storage services <br />
| N/A <br />
| NAGIOS based <br />
| Developed service utilising extended OGF UR schema <br />
| N/A <br />
| Partial EC2 as implemented by OpenStack <br />
| OCCI when available <br />
| Username and password as implemented by OpenStack <br />
| As chosen by the users<br />
|-<br />
| <div id="sara">'''SARA'''</div> (Floris Sluiter, Maurice Bouwhuis, Machiel Jansen) <br />
| In production 1 January 2012 <br />
| 609 cores, 4.75 TB RAM <br />
| OpenNebula <br />
| 400 TB mountable storage, local disk 10 TB <br />
| Web interface and Red Mine portal <br />
| Nagios, Ganglia <br />
| OpenNebula (adapted) <br />
| Based on Nagios <br />
| OCCI and partial EC2 provided by OpenNebula <br />
| Open for discussion <br />
| username password, X509 planned <br />
| User defines<br />
|-<br />
| <div id="stfc">'''STFC'''</div> (Ian Collier) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| <div id="tcd">'''TCD'''</div> (David O'Callaghan, Stuart Kenny) <br />
| <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|}<br />
<br />
<!--<br />
|-<br />
| <div id="cloudsigma">CloudSigma</div> (Micheal Higgins) <br />
| Available since 24 October 2011 <br />
| 100Ghz CPU, 50GB RAM, 5TB Disk <br />
| Web Console, Our API and Jclouds <br />
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS <br />
| KVM <br />
| Any user supplied monitoring <br />
| Accounting in 5 minute intervals, downloadable in CSV <br />
| <br />
| Web interface, API, Jclouds <br />
| OCCI possible in future <br />
| Username and Password X.509 future <br />
| <br />
--><br />
<br><br />
<br />
== Technology Provider inventory ==<br />
<br />
The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap. <br />
<br />
For the description of the Capabilities please refer to the [https://documents.egi.eu/document/435 Cloud Integration Profile document]. <br />
<br />
Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted. <br />
<br />
{| border="1" cellspacing="1" cellpadding="1" style="border-collapse: collapse; border:1px solid black;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! rowspan="2" | Provider <br />
! colspan="6" | Capabilities<br />
|- style="background-color: lightgray;"<br />
! VM Management <br />
! Data <br />
! Information <br />
! Monitoring <br />
! Accounting <br />
! Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probes integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|-<br />
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri) <br />
| WNoDeS, with [http://occi-wg.org/2010/12/15/occi-in-infn/ OCCI] interface <br />
| Posix I/O planned on Lustre, NFS and GPFS as persistent storage <br />
| Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII <br />
| Internal monitoring system for hypervisors. Not yet integrated with NAGIOS probes <br />
| Accounting at batch system level (like lsf and pbs) and integration with the DGAS Accounting System used by the Italian Grid infrastructure <br />
| <br />
|}<br />
<br />
<br><br />
<br />
== Cloud Resources Status ==<br />
<br />
The Task Force is developing a Nagios-based resource monitoring solution for the cloud federation. Meanwhile, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. The table is updated weekly by the resource providers. <br />
<br />
{| cellspacing="0" cellpadding="5" border="1" style="border-collapse: collapse; border:1px solid black; text-align:center;" class="wikitable"<br />
|- style="background-color: lightgray;"<br />
! Providers <br />
! align="left" style="border-left:1px dotted silver" | <br />
<span style="background:green">&nbsp;&nbsp;&nbsp;</span> = Available<br> <span style="background:red">&nbsp;&nbsp;&nbsp;</span> = Not available<br> <br />
<br />
! User registration <br />
! User access <br />
! VM availability <br />
! Elastic IPs <br />
! Object Storage <br />
! Persistent Storage<br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesga|CESGA (IBERgrid)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cesnet|CESNET (NGI CZ)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#cyfronet|CYFRONET (NGI PL)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#gwdg|GWDG]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#fz|FZ Jülich]] <br />
| <span style="background:green">[mailto:b.hagemeier@fz-juelich.de]</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:red">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#igi|IGI]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#in2p3|IN2P3 (NGI FR)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#kth|KTH]] <br />
| <span style="background:green">[mailto:zashah@pdc.kth.se]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <br><br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#oerc|OeRC (UK NGI)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#sara|SARA (NGI NL)]] <br />
| <span style="background:green">[mailto:cloud-support@sara.nl]</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span> <br />
| <span style="background:green">&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
|-<br />
| nowrap="nowrap" align="left" colspan="2" | [[Fedcloud-tf:Resources#tcd|TCD (NGI IE)]] <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|}</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:WorkGroups:Scenario6&diff=28197Fedcloud-tf:WorkGroups:Scenario62011-11-22T12:22:24Z<p>Zashah: /* Step 1: List the notification systems currently adopted */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{Fedcloud-tf:WorkGroups:Menu}} {{TOC_right}}<br />
<br />
= Scenario 6: VM/Resource state change notification =<br />
<br />
<font color="red">Leader: Peter Solagna, EGI</font><br />
<br />
The scenario enables:<br />
* Resource providers to notify interested parties about changes in the user's VM state or about a proposed state change in the VMM.<br />
* End user to build automated VM management and reactive workflows in user space.<br />
<br />
Such a notification system must support automated notification consumers, so the standards to be used need to be identified:<br />
* Standard for the payload in the notification messages <br />
* Standard for the message transport<br />
<br />
If such standards are available in the federated cloud infrastructure, the users will be able to implement monitoring/management tools for their workflows.<br />
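As an illustration of the payload standard, the Python sketch below parses a hypothetical Atom 1.0 state-change entry of the kind proposed in the OpenStack notification blueprint. The entry layout, in particular the category element carrying the VM state, is an assumption made for illustration and not a published schema.<br />

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# Illustrative Atom 1.0 entry. The use of a category element with
# label="vm-state" to carry the new state is an assumption, not a
# published notification schema.
SAMPLE_ENTRY = """\
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
  <title>VM 42 state change</title>
  <updated>2011-11-22T12:00:00Z</updated>
  <category term="RUNNING" label="vm-state"/>
</entry>
"""

def parse_notification(xml_text):
    """Extract (entry title, new VM state, timestamp) from an Atom entry."""
    entry = ET.fromstring(xml_text)
    title = entry.findtext(ATOM_NS + "title")
    updated = entry.findtext(ATOM_NS + "updated")
    state = None
    for cat in entry.findall(ATOM_NS + "category"):
        if cat.get("label") == "vm-state":
            state = cat.get("term")
    return title, state, updated

print(parse_notification(SAMPLE_ENTRY))
```

A consumer built this way stays independent of the VM manager: only the parsing function changes if a provider adopts a different entry layout.<br />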
<br />
== Steps for the scenario fulfillment ==<br />
=== Information gathering steps ===<br />
The steps I identify to build a comprehensive description of the current situation:<br />
# List the notification systems adopted by the technologies currently deployed in the FedClouds testbeds.<br />
## Collect requirements and use cases from the users communities.<br />
# List the currently available standards, and map them - if possible - to the technologies implemented in the testbed.<br />
# Map the users' requirements to the existing implementations and standards. Assess whether there are missing features in the current implementations and standards. <br />
# Identify if there are standards eligible to be suggested as common standards.<br />
<br />
=== Operative steps ===<br />
When the points above are done, I suggest the following operative tasks:<br />
<ol start="5"><br />
<li>Technology providers should provide the status change notifications through standard formats and transport protocols.</li><br />
<li>User communities should test these notification mechanisms in their workflows, or implement a client prototype.</li><br />
</ol><br />
<br />
'''Comments to the steps list, or about any other aspect of the scenario are more than welcome'''<br />
<br />
== Scenario collaborators ==<br />
{| border="1" <br />
!Role<br />
!Institution<br />
!Name<br />
|-<br />
|Scenario leader<br />
|EGI.eu<br />
|Peter Solagna<br />
|-<br />
|Collaborator<br />
| OeRC<br />
| Matteo Turilli<br />
|-<br />
|Collaborator<br />
|<br />
|<br />
|}<br />
<br />
== Step 1: List the notification systems currently adopted ==<br />
{| border="1" <br />
!VM Manager<br />
!What is automatically notified<br />
!Any standard in the message format used<br />
!Message transport protocol<br />
|-<br />
|Open Nebula<br />
| states of VM<br />
| NO<br />
| Users have to query explicitly; the results are in XML<br />
|-<br />
|CloudSigma<br />
|<br />
|<br />
|<br />
|-<br />
|OpenStack<br />
| Not clear yet, still at blueprint stage<ref name="NovaNotification">http://wiki.openstack.org/NotificationSystem</ref><ref>https://blueprints.launchpad.net/nova/+spec/notification-system</ref><ref>https://blueprints.launchpad.net/openstack-devel/+spec/notification-and-statistics</ref><br />
| Atom 1.0<ref name="NovaNotification"/><br />
| PubSubHubbub (PSH)<ref name="NovaNotification"/><br />
|-<br />
|WNoDeS<br />
|<br />
|<br />
|<br />
|}<br />
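Since OpenNebula (per the table above) pushes no notifications and users have to query explicitly, a reactive workflow must be built in user space by polling. The sketch below shows a minimal poller; `fetch_state` is a placeholder for whatever query call the provider exposes (e.g. a wrapper around OpenNebula's XML-RPC query interface, whose XML reply the caller would parse), not a real API binding.<br />

```python
import time

def wait_for_state_change(fetch_state, vm_id, interval=5.0, max_polls=120):
    """Poll fetch_state(vm_id) until the reported state changes.

    fetch_state is a user-supplied callable; in practice it would wrap
    the VM manager's explicit query interface. Returns the pair
    (old_state, new_state), or raises TimeoutError after max_polls.
    """
    old_state = fetch_state(vm_id)
    for _ in range(max_polls):
        new_state = fetch_state(vm_id)
        if new_state != old_state:
            return old_state, new_state
        time.sleep(interval)
    raise TimeoutError("no state change observed for VM %s" % vm_id)

# Stubbed demonstration: the VM reports PENDING twice, then RUNNING.
states = iter(["PENDING", "PENDING", "RUNNING"])
result = wait_for_state_change(lambda vm_id: next(states), 42, interval=0.0)
print(result)  # ('PENDING', 'RUNNING')
```

The polling interval trades notification latency against load on the provider's query interface, which is exactly the cost a push-based standard would remove.<br />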
<br />
== Step 2: List of the standard available for notification message == <br />
{| border="1"<br />
! Standard name<br />
! Message Format/Transport<br />
! Comments and information<br />
|-<br />
|Atom 1.0<br />
|Message Format<br />
|Implemented by OpenStack<br />
|-<br />
|PubSubHubbub<br />
|Transport<br />
|Implemented by OpenStack<br />
|}<br />
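On the transport side, the PubSubHubbub protocol has a subscriber register with a hub by POSTing a small set of form-encoded hub.* parameters. The sketch below only builds that request body; all URLs are hypothetical and no network call is made.<br />

```python
from urllib.parse import urlencode, parse_qs

def build_subscription_request(callback_url, topic_url, verify="async"):
    """Form-encoded body for a PubSubHubbub subscription POST.

    A real subscriber would POST this body to the hub URL advertised
    in the notification feed; the hub then verifies the subscription
    by calling back the subscriber's callback URL.
    """
    params = {
        "hub.mode": "subscribe",
        "hub.callback": callback_url,
        "hub.topic": topic_url,
        "hub.verify": verify,
    }
    return urlencode(params)

body = build_subscription_request(
    "https://consumer.example.org/notify",   # hypothetical callback
    "https://cloud.example.org/vm/42/feed",  # hypothetical topic feed
)
print(body)
```

Paired with the Atom payload above, this would let a user community receive state changes without polling, which is the combination the OpenStack blueprint proposes.<br />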
<br />
== References ==<br />
<references/></div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:InventoryOfAvailableSoftware&diff=26257Fedcloud-tf:InventoryOfAvailableSoftware2011-10-20T11:51:36Z<p>Zashah: /* Resource Centre inventory */</p>
<hr />
<div>These two surveys aim to provide an inventory and a quick overview of the software used to provision existing virtualised infrastructures, and of the software actually available that implements specific Cloud Capabilities, together with the standardised access interfaces it exposes. <br />
<br />
These surveys are intended to be filled out by the participating Resource Centres and Technology Providers as indicated and as appropriate. <br />
<br />
== Resource Centre inventory ==<br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Resource Centre <br />
! scope="col" | Status <br />
! scope="col" | committed test bed capacity <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification <br />
| '''Management Interface (currently supported)''' <br />
| '''Management Interface (future consideration)''' <br />
| '''Authentication on Service layer''' <br />
| '''Authentication for Login into Cloud Instances'''<br />
|-<br />
| CESNET (Miroslav Ruda) <br />
| <br />
| several servers quickly, more (~10) can be added later this year <br />
| OpenNebula 3.0; to increase heterogeneity we could add a Eucalyptus 2.0 or Nimbus+Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too <br />
| N/A? OpenNebula web interface? <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia/Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP-based EGI messaging infrastructure is available on the site <br />
| OCCI and partial EC2 provided by OpenNebula<br />
| Open for discussion<br />
| X509 certificates preferred; login/password may be possible as a temporary solution<br />
| in general up to user, plan to support registered user SSH keys for root access<br />
|- style="background:red; color:white"<br />
| UK NGS (David Wallom) <br />
| <br />
| <br />
*&gt;10 servers<br />
<br />
| Currently open Eucalyptus 2.0, moving to 3.0 or OpenStack, both supplied by Canonical in Ubuntu Enterprise Cloud <br />
| Data supplied through S3 capable service <br />
| N/A <br />
| NAGIOS-based through mediated service-based probes <br />
| Developed service utilising extended OGF&nbsp;UR&nbsp;schema <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Cyfronet (Tomasz Szepieniec, Marcin Radecki) <br />
| <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with Zabbix <br />
| planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| SARA (Floris Sluiter, Maurice Bouwhuis) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH (Zeeshan Ali Shah) <br />
| Accessible since January 2011<br />
| Initially 2 servers with a total of 4 cores, 16 GB RAM and 1 TB storage <br />
| OpenNebula <br />
| Possibility to mount NFS storage <br />
| OpenNebula Web interface with OCCI and OCA API <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| <br />
User/Pass (current) <br />
<br />
X509 (need consensus within Taskforce) <br />
<br />
| SSH Keys<br />
|- style="background:red; color:white"<br />
| CloudSigma (Micheal Higgins) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| GWDG (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More at the beginning of 2012 <br />
| OpenNebula 3.0 with OCCI server <br />
| tbd <br />
| OpenNebula Web interface with OCCI <br />
| tbd (most likely Nagios) <br />
| N/A <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| Trinity College Dublin (David O'Callaghan, Stuart Kenny) <br />
| <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| FZ Jülich (B. Hagemeier) <br />
| <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack "Diablo" <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|}<br />
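Several Resource Centres above expose OCCI as the management interface (via OpenNebula's OCCI server). Under the OCCI HTTP text rendering, a compute resource is created by POSTing `Category` and `X-OCCI-Attribute` headers; a sketch of assembling such headers — the attribute values and any concrete endpoint are placeholders, and real deployments may differ in which attributes they accept:

```python
# Sketch of OCCI HTTP text-rendering headers for creating a compute
# resource, of the kind an OpenNebula OCCI server might accept. The
# attribute values are placeholders, not a tested deployment.
def occi_compute_headers(cores, memory_gb, hostname):
    scheme = "http://schemas.ogf.org/occi/infrastructure#"
    return {
        # Identifies the resource kind being instantiated
        "Category": 'compute; scheme="%s"; class="kind"' % scheme,
        # Resource attributes, comma-separated per the text rendering
        "X-OCCI-Attribute": ", ".join([
            "occi.compute.cores=%d" % cores,
            "occi.compute.memory=%.1f" % memory_gb,
            'occi.compute.hostname="%s"' % hostname,
        ]),
    }

headers = occi_compute_headers(2, 4.0, "testvm")
```

These headers would accompany a POST to the provider's compute collection URL, with authentication (e.g. X509, as listed in the table) handled at the HTTP layer.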
<br />
== Technology Provider inventory ==<br />
<br />
Likewise, an inventory survey for Technology Providers to fill in below: <br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Technology Provider <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username/password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration Capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of a very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability/Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|}<br />
<br />
--[[User:Michel|Michel]] 16:54, 13 September 2011 (UTC) <br />
<br />
[[Category:Fedcloud-tf]]</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:InventoryOfAvailableSoftware&diff=26176Fedcloud-tf:InventoryOfAvailableSoftware2011-10-18T10:14:30Z<p>Zashah: /* Resource Centre inventory */</p>
<hr />
<div>These two surveys aim to provide an inventory and a quick overview of the software used to provision existing virtualised infrastructures, of the software actually available that implements specific Cloud Capabilities, and of the standardised access interfaces it exposes. <br />
<br />
These surveys are intended to be filled out by the participating Resource Centres and Technology Providers as indicated and as appropriate. <br />
<br />
== Resource Centre inventory ==<br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Resource Centre <br />
! scope="col" | Status <br />
! scope="col" | committed test bed capacity <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification <br />
| '''Management Interface (currently supported)''' <br />
| '''Management Interface (future consideration)''' <br />
| '''Authentication on Service layer''' <br />
| '''Authentication for Login into Cloud Instances'''<br />
|-<br />
| CESNET (Miroslav Ruda) <br />
| <br />
| several servers quickly, more (~10) can be added later this year <br />
| OpenNebula 3.0; to increase heterogeneity we could add a Eucalyptus 2.0 or Nimbus+Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too <br />
| N/A? OpenNebula web interface? <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia/Munin can be added on request. <br />
| N/A? If reporter for standard usage records is implemented, can be deployed. <br />
| N/A? STOMP-based EGI messaging infrastructure is available on the site <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| UK NGS (David Wallom) <br />
| <br />
| <br />
*&gt;10 servers<br />
<br />
| Currently open Eucalyptus 2.0, moving to 3.0 or OpenStack, both supplied by Canonical in Ubuntu Enterprise Cloud <br />
| Data supplied through S3 capable service <br />
| N/A <br />
| NAGIOS-based through mediated service-based probes <br />
| Developed service utilising extended OGF&nbsp;UR&nbsp;schema <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Cyfronet (Tomasz Szepieniec, Marcin Radecki) <br />
| <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with Zabbix <br />
| planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| SARA (Floris Sluiter, Maurice Bouwhuis) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH (Zeeshan Ali Shah) <br />
| <br />
| Initially 2 servers with a total of 4 cores, 16 GB RAM and 1 TB storage <br />
| OpenNebula <br />
| Possibility to mount NFS storage <br />
| OpenNebula Web interface with OCCI and OCA API <br />
| Ganglia (need to experiment) <br />
| N/A <br />
| N/A <br />
| OCCI <br />
| Open for discussion <br />
| <br />
User/Pass (current) <br />
<br />
X509 (need consensus within Taskforce) <br />
<br />
| SSH Keys<br />
|- style="background:red; color:white"<br />
| CloudSigma (Micheal Higgins) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| GWDG (Philipp Wieder) <br />
| Accessible October 23, 2011 <br />
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD. More at the beginning of 2012 <br />
| OpenNebula 3.0 with OCCI server <br />
| tbd <br />
| OpenNebula Web interface with OCCI <br />
| tbd (most likely Nagios) <br />
| N/A <br />
| N/A <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| Trinity College Dublin (David O'Callaghan, Stuart Kenny) <br />
| <br />
| 6 servers <br />
| StratusLab, OpenNebula <br />
| Shared NFS filesystem <br />
| n/a <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|- style="background:red; color:white"<br />
| FZ Jülich (B. Hagemeier) <br />
| <br />
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk) <br />
| OpenStack "Diablo" <br />
| To be defined <br />
| n/a depends on solution above <br />
| Nagios <br />
| n/a <br />
| n/a <br />
| <br />
| <br />
| <br />
| <br />
|}<br />
<br />
== Technology Provider inventory ==<br />
<br />
Likewise, an inventory survey for Technology Providers to fill in below: <br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Technology Provider <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username/password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration Capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of a very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability/Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|}<br />
<br />
--[[User:Michel|Michel]] 16:54, 13 September 2011 (UTC) <br />
<br />
[[Category:Fedcloud-tf]]</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:InventoryOfAvailableSoftware&diff=25412Fedcloud-tf:InventoryOfAvailableSoftware2011-10-04T13:34:25Z<p>Zashah: /* Resource Centre inventory */</p>
<hr />
<div>These two surveys aim to provide an inventory and a quick overview of the software used to provision existing virtualised infrastructures, of the software actually available that implements specific Cloud Capabilities, and of the standardised access interfaces it exposes. <br />
<br />
These surveys are intended to be filled out by the participating Resource Centres and Technology Providers as indicated and as appropriate. <br />
<br />
== Resource Centre inventory ==<br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Resource Centre <br />
! scope="col" | committed test bed capacity <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| Happy Cloud, Bratislava <br />
| <br />
*10 servers each with the following capacity <br />
**2 x 8 core Intel <br />
**64 GB RAM <br />
**2 TB local physical disk space each <br />
**4 x dual port 10 Gb copper Ethernet(Intel® 82598EB)<br />
<br />
| OpenNebula, OCCI based access interface, OVF based VM Image management and provisioning<br> <br />
| FTP over TLS to external storage providers, no local data storage <br />
| BDII, using LDAP query interface and STOMP based data feed (ApacheMQ) <br />
| Nagios based with custom probes <br />
| n/a <br />
| Messaging infrastructure based on STOMP, also available as a service to our users<br />
|-<br />
| CESNET (Miroslav Luda) <br />
| 10 servers quickly, more can be added later this year <br />
| Current plan is to use OpenNebula 3.0; to increase heterogeneity we could add a Eucalyptus 2.0 or Nimbus+Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too <br />
| N/A? OpenNebula web interface? <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly <br />
| N/A <br />
| N/A<br />
|-<br />
| UK NGS (David Wallom) <br />
| <br />
*&gt;10 servers<br />
<br />
| Currently open Eucalyptus 2.0, moving to 3.0 or OpenStack, both supplied by Canonical in Ubuntu Enterprise Cloud <br />
| Data supplied through S3 capable service <br />
| N/A <br />
| NAGIOS-based through mediated service-based probes <br />
| Developed service utilising extended OGF&nbsp;UR&nbsp;schema <br />
| N/A<br />
|-<br />
| Cyfronet (Tomasz Szepieniec, Marcin Radecki) <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with Zabbix <br />
| planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A<br />
|-<br />
| SARA (Floris Sluiter, Maurice Bouwhuis) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH (Zeeshan Ali Shah) <br />
| Initially 2 servers with a total of 4 cores, 16 GB RAM and 1 TB storage<br />
| OpenNebula<br />
| Possibility to mount NFS storage<br />
| OpenNebula Web interface with OCCI and OCA API<br />
| Ganglia (need to experiment)<br />
| N/A<br />
| N/A<br />
|-<br />
| CloudSigma (Micheal Higgins) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Max-Planck-Institut (Ramin Yahyapour) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Trinity College Dublin (David O'Callaghan, Stuart Kenny) <br />
| 6 servers<br />
| StratusLab, OpenNebula<br />
| Shared NFS filesystem<br />
| n/a<br />
| Nagios<br />
| n/a<br />
| n/a<br />
|}<br />
<br />
== Technology Provider inventory ==<br />
<br />
Likewise, an inventory survey for Technology Providers to fill in below: <br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Technology Provider <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username/password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration Capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of a very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability/Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|}<br />
<br />
--[[User:Michel|Michel]] 16:54, 13 September 2011 (UTC) <br />
<br />
[[Category:Fedcloud-tf]]</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:InventoryOfAvailableSoftware&diff=25408Fedcloud-tf:InventoryOfAvailableSoftware2011-10-04T13:25:10Z<p>Zashah: /* Resource Centre inventory */</p>
<hr />
<div>These two surveys aim to provide an inventory and a quick overview of the software used to provision existing virtualised infrastructures, of the software actually available that implements specific Cloud Capabilities, and of the standardised access interfaces it exposes. <br />
<br />
These surveys are intended to be filled out by the participating Resource Centres and Technology Providers as indicated and as appropriate. <br />
<br />
== Resource Centre inventory ==<br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Resource Centre <br />
! scope="col" | committed test bed capacity <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| Happy Cloud, Bratislava <br />
| <br />
*10 servers each with the following capacity <br />
**2 x 8 core Intel <br />
**64 GB RAM <br />
**2 TB local physical disk space each <br />
**4 x dual port 10 Gb copper Ethernet(Intel® 82598EB)<br />
<br />
| OpenNebula, OCCI based access interface, OVF based VM Image management and provisioning<br> <br />
| FTP over TLS to external storage providers, no local data storage <br />
| BDII, using LDAP query interface and STOMP based data feed (ApacheMQ) <br />
| Nagios based with custom probes <br />
| n/a <br />
| Messaging infrastructure based on STOMP, also available as a service to our users<br />
|-<br />
| CESNET (Miroslav Luda) <br />
| 10 servers quickly, more can be added later this year <br />
| Current plan is to use OpenNebula 3.0; to increase heterogeneity we could add a Eucalyptus 2.0 or Nimbus+Cumulus interface too <br />
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too <br />
| N/A? OpenNebula web interface? <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly <br />
| N/A <br />
| N/A<br />
|-<br />
| UK NGS (David Wallom) <br />
| <br />
*&gt;10 servers<br />
<br />
| Currently open Eucalyptus 2.0, moving to 3.0 or OpenStack, both supplied by Canonical in Ubuntu Enterprise Cloud <br />
| Data supplied through S3 capable service <br />
| N/A <br />
| NAGIOS-based through mediated service-based probes <br />
| Developed service utilising extended OGF&nbsp;UR&nbsp;schema <br />
| N/A<br />
|-<br />
| Cyfronet (Tomasz Szepieniec, Marcin Radecki) <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration, experimenting with Zabbix <br />
| planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components <br />
| N/A<br />
|-<br />
| SARA (Floris Sluiter, Maurice Bouwhuis) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH (Zeeshan Ali Shah) <br />
| 2 nodes with a total of 4 cores, 16 GB RAM and 1 TB storage<br />
| OpenNebula<br />
| Possibility to mount NFS storage<br />
| OpenNebula Web interface with OCCI and OCA API<br />
| Ganglia (need to experiment)<br />
| N/A<br />
| N/A<br />
|-<br />
| CloudSigma (Micheal Higgins) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Max-Planck-Institut (Ramin Yahyapour) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Trinity College Dublin (David O'Callaghan, Stuart Kenny) <br />
| 6 servers<br />
| StratusLab, OpenNebula<br />
| Shared NFS filesystem<br />
| n/a<br />
| Nagios<br />
| n/a<br />
| n/a<br />
|}<br />
<br />
== Technology Provider inventory ==<br />
<br />
Likewise, an inventory survey for Technology Providers to fill in below: <br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Technology Provider <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username/password, grid certificates and VOMS proxies, others methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a NAGIOS probe integration Capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of a very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems. <br />
<br />
Availability/Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|}<br />
<br />
--[[User:Michel|Michel]] 16:54, 13 September 2011 (UTC) <br />
<br />
[[Category:Fedcloud-tf]]</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:InventoryOfAvailableSoftware&diff=25407Fedcloud-tf:InventoryOfAvailableSoftware2011-10-04T13:19:46Z<p>Zashah: /* Resource Centre inventory */</p>
<hr />
<div>These two surveys aim to provide an inventory and a quick overview of the software used to provision existing virtualised infrastructures, of the software actually available that implements specific Cloud Capabilities, and of the standardised access interfaces it exposes. <br />
<br />
These surveys are intended to be filled out by the participating Resource Centres and Technology Providers as indicated and as appropriate. <br />
<br />
== Resource Centre inventory ==<br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make note of any standards that you know are implemented by the software you deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Resource Centre <br />
! scope="col" | committed test bed capacity <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| Happy Cloud, Bratislava <br />
| <br />
*10 servers each with the following capacity <br />
**2 x 8 core Intel <br />
**64 GB RAM <br />
**2 TB local physical disk space each <br />
**4 x dual port 10 Gb copper Ethernet(Intel® 82598EB)<br />
<br />
| OpenNebula, OCCI based access interface, OVF based VM Image management and provisioning<br> <br />
| FTP over TLS to external storage providers, no local data storage <br />
| BDII, using an LDAP query interface and a STOMP-based data feed (Apache ActiveMQ) <br />
| Nagios based with custom probes <br />
| n/a <br />
| Messaging infrastructure based on STOMP, also available as a service to our users<br />
|-<br />
| CESNET (Miroslav Luda) <br />
| 10 servers available quickly; more can be added later this year <br />
| The current plan is to use OpenNebula 3.0; to increase heterogeneity we could also add Eucalyptus 2.0 or a Nimbus+Cumulus interface <br />
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too <br />
| N/A? OpenNebula web interface? <br />
| Nagios infrastructure is ready, custom probes from other groups can be added quickly <br />
| N/A <br />
| N/A<br />
|-<br />
| UK NGS (David Wallom) <br />
| <br />
*&gt;10 servers<br />
<br />
| Currently open-source Eucalyptus 2.0, moving to 3.0 or OpenStack, both supplied by Canonical as part of Ubuntu Enterprise Cloud <br />
| Data supplied through S3 capable service <br />
| N/A <br />
| Nagios-based, through mediated service-based probes <br />
| Developed service utilising extended OGF&nbsp;UR&nbsp;schema <br />
| N/A<br />
|-<br />
| Cyfronet (Tomasz Szepieniec, Marcin Radecki) <br />
| for initial setup 12 servers ready, extensions depending on usage <br />
| Most likely OpenNebula 3.0 <br />
| Possibility for mounting iSCSI devices in VMs, others to be defined <br />
| Web interface integrated with PL-Grid User Portal <br />
| Nagios integration; experimenting with Zabbix <br />
| Integration with PL-Grid Accounting, using OpenNebula 3 accounting components, is planned for early 2012 <br />
| N/A<br />
|-<br />
| SARA (Floris Sluiter, Maurice Bouwhuis) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| KTH (Zeeshan Ali Shah) <br />
| 2 nodes with a total of 4 cores, 16 GB RAM and 1 TB storage<br />
| OpenNebula<br />
| Possibility to mount NFS storage<br />
| OpenNebula web interface with OCCI and OCA APIs<br />
| N/A<br />
| N/A<br />
| N/A<br />
|-<br />
| CloudSigma (Micheal Higgins) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Max-Planck-Institut (Ramin Yahyapour) <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| Trinity College Dublin (David O'Callaghan, Stuart Kenny) <br />
| 6 servers<br />
| StratusLab, OpenNebula<br />
| Shared NFS filesystem<br />
| n/a<br />
| Nagios<br />
| n/a<br />
| n/a<br />
|}<br />
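Several of the Resource Centres above expose VM management through OCCI. As a rough illustration of what such an interface accepts, the sketch below renders the headers of a hypothetical OCCI 1.1 "create compute" request in the text/occi rendering; the template scheme and attribute values are invented for the example and are not tied to any provider listed here.

```python
def occi_create_compute(cores, memory_gb, os_tpl):
    """Render headers for a hypothetical OCCI 1.1 'create compute' request
    (text/occi rendering): one Category header naming the compute kind plus
    an OS template mixin, and X-OCCI-Attribute entries for sizing."""
    infra = "http://schemas.ogf.org/occi/infrastructure#"
    tpl_scheme = "http://example.org/occi/templates#"  # invented scheme for illustration
    return {
        "Category": (f'compute; scheme="{infra}"; class="kind", '
                     f'{os_tpl}; scheme="{tpl_scheme}"; class="mixin"'),
        "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                             f"occi.compute.memory={memory_gb}"),
    }
```

An actual deployment would POST these headers to the provider's compute collection URL; endpoint paths and available mixins differ per site.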
<br />
== Technology Provider inventory ==<br />
<br />
Likewise, an inventory survey for Technology Providers is provided below. <br />
<br />
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435] <br />
<br />
To indicate the type of information we are after, a sample hypothetical Resource Centre is provided for guidance. Note that, wherever suitable and possible, you should make a note of any standards that you know are implemented by the software you have deployed for Cloud infrastructure management. <br />
<br />
<br> <br />
<br />
{| cellspacing="1" cellpadding="1" border="1"<br />
|-<br />
! scope="col" | Technology Provider <br />
! scope="col" | Capability:<br>VM Management <br />
! scope="col" | Capability:<br>Data <br />
! scope="col" | Capability:<br>Information <br />
! scope="col" | Capability:<br>Monitoring <br />
! scope="col" | Capability:<br>Accounting <br />
! scope="col" | Capability:<br>Notification<br />
|-<br />
| StratusLab (Cal Loomis) <br />
| OpenNebula using XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username/password, grid certificates and VOMS proxies; other methods should be easy to add <br />
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI) <br />
| Planned in architecture, not implemented <br />
| Planned in architecture, not implemented <br />
| Some functionality in OpenNebula; an implementation covering all StratusLab services is planned but not yet in place <br />
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine<br />
|-<br />
| EGI-InSPIRE JRA1 (Daniele Cesini) <br />
| None <br />
| None <br />
| None <br />
| <br />
EGI-JRA1 offers a Nagios probe integration capability. No Nagios probe development is foreseen within JRA1 (with the exception of a very few cases); Technology Providers are expected to produce Nagios probes for their own systems. <br />
<br />
Availability/Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1. <br />
<br />
No information discovery systems are developed within JRA1. <br />
<br />
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details. <br />
| None<br />
|}<br />
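StratusLab's notification prototype above publishes state changes over AMQP when users supply messaging coordinates at VM start. As a minimal sketch of such a message (the field names are assumptions for illustration, not StratusLab's actual format), a state-change payload could be serialised like this:

```python
import json

def vm_state_notification(vm_id, old_state, new_state, timestamp):
    """Build a hypothetical VM state-change message, as might be published
    to a broker over AMQP or STOMP. All field names are illustrative."""
    return json.dumps({
        "vm_id": vm_id,
        "old_state": old_state,
        "new_state": new_state,
        "timestamp": timestamp,
    })
```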
<br />
--[[User:Michel|Michel]] 16:54, 13 September 2011 (UTC) <br />
<br />
[[Category:Fedcloud-tf]]</div>Zashahhttps://wiki.egi.eu/w/index.php?title=Fedcloud-tf:Blueprint_EGI_Federated_Clouds&diff=25385Fedcloud-tf:Blueprint EGI Federated Clouds2011-10-04T11:36:08Z<p>Zashah: /* Scenario 1: Running a pre-defined VM Image */</p>
<hr />
<div>{{TOC_right}} <br />
<br />
= Introduction =<br />
<br />
= Six scenarios for minimal functionality =<br />
<br />
== Scenario 1: Running a pre-defined VM Image ==<br />
The following need to be considered in this scenario: <br />
# The trust level and auditing of the VM image (since it has to run with root access) <br />
# Different VM images are needed depending on the underlying infrastructure, e.g. 64-bit vs. 32-bit, VT-enabled or not, and Xen vs. KVM<br />
# Contextualisation, i.e. how users should log in to the VM, and how their public keys are transferred and activated so that they can log in as root<br />
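The contextualisation point above can be sketched as follows. Assuming the VM image's filesystem is reachable at root_dir (how the key reaches the image, e.g. via a contextualisation ISO or user data, varies by middleware and is not specified here), the user's public key is appended to root's authorized_keys so the user can log in as root:

```python
import os

def authorise_root_key(root_dir, public_key):
    """Minimal contextualisation sketch: append the user's SSH public key to
    root/.ssh/authorized_keys inside a mounted VM image at root_dir."""
    ssh_dir = os.path.join(root_dir, "root", ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    path = os.path.join(ssh_dir, "authorized_keys")
    with open(path, "a") as f:
        f.write(public_key.rstrip("\n") + "\n")  # one key per line
    os.chmod(path, 0o600)  # sshd refuses group/world-readable key files
    return path
```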
<br />
== Scenario 2: Running my data and VM in the Infrastructure ==<br />
<br />
== Scenario 3: Integrating multiple resource providers ==<br />
<br />
== Scenario 4: Accounting across Resource Providers ==<br />
<br />
== Scenario 5: Reliability/Availability of Resource Providers ==<br />
<br />
== Scenario 6: VM/Resource state change notification ==<br />
<br />
= Key Capabilities =<br />
<br />
== VM Management ==<br />
<br />
== Data access ==<br />
<br />
== Information discovery ==<br />
<br />
== Accounting ==<br />
<br />
== Monitoring ==<br />
<br />
== Notification ==<br />
<br />
= References =<br />
<br />
#http://go.egi.eu/435: Draft of Federated Clouds profile <br />
#http://go.egi.eu/803: Task Force presentations at the EGI Technical Forum 2011, Lyon<br />
<br />
<br> <br />
<br />
--[[User:Michel|Michel]] 14:40, 26 September 2011 (UTC) <br />
<br />
[[Category:Fedcloud-tf]]</div>Zashah