Fedcloud-tf:WorkGroups:Workbenches

{{Fedcloud-tf:Menu}} {{Fedcloud-tf:WorkGroups:Menu}} {{TOC_right}}
These two surveys aim at providing an inventory and a quick overview of the software used to provision existing virtualised infrastructures, and of the software actually available that implements specific Cloud Capabilities, noting which standardised access interfaces each exposes.

The Task Force activities are split across work groups. A leader is elected for each work group, and members of the Task Force are free to spend their effort in one or more groups. Each work group investigates one or more capabilities that are required by a federation of clouds. The work done is recorded in the group workbench and, eventually, translated into the Task Force blueprint.

These surveys are intended to be filled out by the participating Resource Centres and Technology Providers as indicated and as appropriate.

As the test bed and the blueprint develop, new capabilities must be investigated and addressed; new work groups are therefore added to the Task Force when required.
 
 
== Resource Centre inventory  ==
 
 
 
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435]
 
 
 
To indicate the type of information we are after, a hypothetical sample Resource Centre entry is provided for guidance. Wherever suitable and possible, make note of any standards that you know are implemented by the software you deploy for Cloud infrastructure management.
 
 
 
<br>
 
 
 
{| cellspacing="1" cellpadding="1" border="1"
 
|-
 
! scope="col" | Resource Centre
 
! scope="col" | Status
 
! scope="col" | committed test bed capacity
 
! scope="col" | Capability:<br>VM Management
 
! scope="col" | Capability:<br>Data
 
! scope="col" | Capability:<br>Information
 
! scope="col" | Capability:<br>Monitoring
 
! scope="col" | Capability:<br>Accounting
 
! scope="col" | Capability:<br>Notification
 
! scope="col" | Management Interface (currently supported)

! scope="col" | Management Interface (future consideration)

! scope="col" | Authentication at Service layer

! scope="col" | Authentication for Login into Cloud Instances
 
|-
 
| CESNET (Miroslav Ruda)
 
|
 
| several servers quickly, more (~10) can be added later this year
 
| OpenNebula 3.0; to increase heterogeneity we could also add a Eucalyptus 2.0 or Nimbus+Cumulus interface
 
| Shared NFS filesystem, GridFTP remote access, can provide S3 Cumulus implementation too
 
| N/A? OpenNebula web interface?
 
| Nagios infrastructure is ready, custom probes from other groups can be added quickly. Ganglia/Munin can be added on request.
 
| N/A? If a reporter for standard usage records is implemented, it can be deployed.
 
| N/A? A STOMP-based EGI messaging infrastructure is available at the site
 
| OCCI and partial EC2 provided by OpenNebula
 
| Open for discussion
 
| X509 certificates preferred; login/password may be possible as a temporary solution
 
| In general up to the user; we plan to support registered users' SSH keys for root access
 
|-
 
| UK NGS (David Wallom)
 
|
 
|
 
&gt;10 servers
 
 
 
| Currently Eucalyptus 2.0, moving to 3.0 or OpenStack, both as supplied by Canonical in Ubuntu Enterprise Cloud
 
| Data supplied through S3 capable service
 
| N/A
 
| Nagios-based, through mediated service-based probes
 
| Developed service utilising extended OGF&nbsp;UR&nbsp;schema
 
| N/A
 
|
 
|
 
|
 
|
 
|-
 
| Cyfronet (Tomasz Szepieniec, Marcin Radecki)
 
|
 
| for initial setup 12 servers ready, extensions depending on usage
 
| Most likely OpenNebula 3.0
 
| Possibility for mounting iSCSI devices in VMs, others to be defined
 
| Web interface integrated with PL-Grid User Portal
 
| Nagios integration; experimenting with Zabbix
 
| Integration with PL-Grid Accounting planned for early 2012, using OpenNebula 3 accounting components
 
| N/A
 
|
 
|
 
|
 
|
 
|- style="background:red; color:white"
 
| SARA (Floris Sluiter, Maurice Bouwhuis)
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|-
 
| KTH (Zeeshan Ali Shah)
 
| Accessible since January, 2011
 
| Initially 2 servers with 4 cores in total, 16 GB RAM and 1 TB storage
 
| OpenNebula
 
| Possibility to mount NFS storage
 
| OpenNebula web interface with OCCI and OCA APIs
 
| Ganglia (need to experiment)
 
| N/A
 
| N/A
 
| OCCI
 
| Open for discussion
 
|
 
User/Pass (current)
 
 
 
X509 (needs consensus within the Task Force)
 
 
 
| SSH Keys
 
|-
 
| CloudSigma (Micheal Higgins)
 
| Available since 24 October 2011
 
| 100 GHz CPU, 50 GB RAM, 5 TB disk
 
| Web Console, Our API and Jclouds
 
| Mountable disks, Mountable S3 (future), SSD drives (future), NFS
 
| KVM
 
| Any user supplied monitoring
 
| Accounting in 5 minute intervals, downloadable in CSV
 
|
 
| Web interface, API, Jclouds
 
| OCCI possible in future
 
| User/password; X.509 in future
 
|
 
|-
 
| GWDG (Philipp Wieder)
 
| Accessible October 23, 2011
 
| As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD; more at the beginning of 2012
 
| OpenNebula 3.0 with OCCI server
 
| tbd
 
| OpenNebula Web interface with OCCI
 
| tbd (most likely Nagios)
 
| N/A
 
| N/A
 
|
 
|
 
|
 
|
 
|-
 
| Trinity College Dublin (David O'Callaghan, Stuart Kenny)
 
|
 
| 6 servers
 
| StratusLab, OpenNebula
 
| Shared NFS filesystem
 
| n/a
 
| Nagios
 
| n/a
 
| n/a
 
|
 
|
 
|
 
|
 
|-
 
| FZ Jülich (B. Hagemeier)
 
|
 
| 1 Server (24 Cores, 24GB RAM, 1.5TB Disk)
 
| OpenStack "Diablo"
 
| To be defined
 
| n/a; depends on the solution above
 
| Nagios
 
| n/a
 
| n/a
 
|
 
|
 
|
 
|
 
|- style="background:red;"
 
| IGI (Giancinto Donvito/Paolo Veronesi)
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|- style="background:red;"
 
| IN2P3 (Helene Cordie/Gille Mathieu)
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|
 
|}
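Several Resource Centres above list OCCI as a supported management interface. As a rough illustration of what such access looks like on the wire, the sketch below shows an OCCI 1.1 HTTP-rendering request for creating a compute resource. The endpoint host and attribute values are hypothetical, and individual deployments (e.g. OpenNebula's OCCI server) may differ in detail.

```http
POST /compute/ HTTP/1.1
Host: occi.example-site.egi.eu
Content-Type: text/occi
Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"
X-OCCI-Attribute: occi.compute.cores=2
X-OCCI-Attribute: occi.compute.memory=4.0
```

On success the server typically answers with a Location header pointing at the URL of the newly created compute resource.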
 
 
 
== Technology Provider inventory  ==
 
 
 
Likewise, an inventory survey for Technology Providers to fill in is given below:
 
 
 
For the description of the Capabilities please refer to the Cloud Integration Profile document available at [https://documents.egi.eu/document/435 https://documents.egi.eu/document/435]
 
 
 
To indicate the type of information we are after, a hypothetical sample entry is provided for guidance. Wherever suitable and possible, make note of any standards that you know are implemented by the software you deploy for Cloud infrastructure management.
 
 
 
<br>
 
 
 
{| cellspacing="1" cellpadding="1" border="1"
 
|-
 
! scope="col" | Technology Provider
 
! scope="col" | Capability:<br>VM Management
 
! scope="col" | Capability:<br>Data
 
! scope="col" | Capability:<br>Information
 
! scope="col" | Capability:<br>Monitoring
 
! scope="col" | Capability:<br>Accounting
 
! scope="col" | Capability:<br>Notification
 
|-
 
| StratusLab (Cal Loomis)
 
| OpenNebula using the XML-RPC interface (eventually OCCI); standard OpenNebula VM description files (eventually OVF); authentication options are username/password, grid certificates and VOMS proxies; other methods should be easy to add
 
| Proprietary Persistent Disk Store with RESTful interface (eventually also CDMI)
 
| Planned in architecture, not implemented
 
| Planned in architecture, not implemented
 
| Some functionality in OpenNebula, implementation for all StratusLab services planned but not yet implemented
 
| Prototype implementation in place, allows notification through AMQP if users provide messaging coordinates when starting a virtual machine
 
|-
 
| EGI-InSPIRE JRA1 (Daniele Cesini)
 
| None
 
| None
 
| None
 
|
 
EGI-JRA1 offers a Nagios probe integration capability. No Nagios probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce Nagios probes for their own systems.
 
 
 
Availability/Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1.
 
 
 
No information discovery systems are developed within JRA1.
 
 
 
| EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI&nbsp;DoW for TJRA1.4 details.
 
| None
 
|-
 
| WNoDeS (Davide Salomoni, Elisabetta Ronchieri)

| WNoDeS, with OCCI interface

|

|

|

| Accounting is done via standard DGAS

|
|}
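Both tables refer to accounting based on the OGF Usage Record (UR) schema: the UK NGS entry extends it, and the EGI-JRA1 task TJRA1.4 builds on it for virtualised resources. As a minimal, hedged sketch of what such a record looks like, the following Python builds a bare UR 1.0 document under the standard namespace. The element subset, helper name, and example values are illustrative assumptions; real records carry many more fields (user identity, start/end times, resource consumption).

```python
# Minimal sketch of an OGF Usage Record (UR 1.0) for a virtualised resource.
# The chosen elements are a small illustrative subset of the schema; deployed
# accounting systems extend the record with site- and VM-specific fields.
import xml.etree.ElementTree as ET

URF_NS = "http://schema.ogf.org/urf/2003/09/usagerecord"


def make_usage_record(record_id, job_id, wall_seconds, status="completed"):
    """Build a bare usage record and return it as an XML string."""
    ET.register_namespace("urf", URF_NS)
    rec = ET.Element(f"{{{URF_NS}}}UsageRecord")

    identity = ET.SubElement(rec, f"{{{URF_NS}}}RecordIdentity")
    identity.set(f"{{{URF_NS}}}recordId", record_id)

    job = ET.SubElement(rec, f"{{{URF_NS}}}JobIdentity")
    ET.SubElement(job, f"{{{URF_NS}}}GlobalJobId").text = job_id

    ET.SubElement(rec, f"{{{URF_NS}}}Status").text = status
    # WallDuration is an ISO 8601 duration in the UR schema.
    ET.SubElement(rec, f"{{{URF_NS}}}WallDuration").text = f"PT{wall_seconds}S"
    return ET.tostring(rec, encoding="unicode")


xml_text = make_usage_record("vm-0001", "one-42", 3600)
print(xml_text)
```

The accounting systems named above (DGAS, the PL-Grid integration) would carry comparable information, though their exact record formats and transport differ per site.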
 
 
 
--[[User:Michel|Michel]] 17:05, 2 November 2011 (UTC)
 

Latest revision as of 10:37, 5 April 2012
