
Fedcloud-tf:Testbed


Revision as of 12:16, 13 March 2012


Technologies Distribution

The federation test bed does not mandate which VMM (virtual machine manager) its resource providers use. Instead, the federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently.


How to obtain an account for the test bed

In the long term a federated AAI will be provided. In the meantime, you may create an account with each of the following providers.

Provider Procedure to request an account
CESGA Send an e-mail requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.
CESNET Send an e-mail asking for an account:
  1. the subject should contain "[FedCloud registration]";
  2. the body should contain a name/organization and a contact e-mail address, and optionally the DN from your X.509 EGI certificate.
FZ Jülich Send an e-mail to Björn Hagemeier stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to our resources.
GRNET Send an e-mail to Panos Louridas requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.
GWDG Send an e-mail requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.
KTH PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customise the system according to their needs, including the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account please follow these instructions.
SARA Send an e-mail to cloud-support requesting an account and mentioning the Federation Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.


The management interface endpoints made available by the TF Resource Providers

Provider Interface type(s)
CESNET Sunstone; OCCI 0.8; OCCI 1.1; X509 OCCI 1.1
Cyfronet StratusLab; X509 OCCI 1.1
GRIF StratusLab
GWDG Sunstone; OCCI 0.8; OCCI 1.1
FZ Jülich OpenStack EC2; OpenStack S3
TCD StratusLab OpenNebula proxy
CC-IN2P3 OpenStack EC2; OpenStack Nova API 1.1; OpenStack S3
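Several of the providers above expose OCCI endpoints. As an illustration of the OCCI 1.1 text rendering such endpoints use, the sketch below parses a "Category" line of the kind a provider's query interface (GET /-/) returns. Both the sample line and the parser are illustrative only, not taken from any listed endpoint.

```python
# Sketch: parsing one OCCI 1.1 text-rendering "Category" line into a dict.
# The sample line is a hypothetical example, not a real provider response.

def parse_category(line):
    """Split 'Category: term; key="value"; ...' into a dict."""
    body = line.split(":", 1)[1].strip()            # drop the "Category" label
    parts = [p.strip() for p in body.split(";")]
    result = {"term": parts[0]}                     # first item is the term
    for attr in parts[1:]:                          # remaining items are key="value"
        key, _, value = attr.partition("=")
        result[key.strip()] = value.strip().strip('"')
    return result

sample = ('Category: compute; '
          'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
          'class="kind"; title="Compute Resource"')
cat = parse_category(sample)
# cat["term"] is "compute"; cat["class"] is "kind"
```

Real endpoints additionally require authentication (username/password or X.509, depending on the provider), which is omitted here.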

Resource Providers inventory

The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for clouds federation. These resources are available to every user community interested in testing or using them.

For the description of the Capabilities please refer to the Cloud Integration Profile document.

Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted.

Provider | Status | Capacity | Capabilities (VM Management; Data; Information; Monitoring; Accounting; Notification) | Management Interface (Supported; Planned) | Authentication (Service layer; VMs)
(Ivan Diaz, Esteban Freire)
Production 33 octo-core servers (264 CPUs) OpenNebula 3.0 Shared NFS/SSH ~ 450GB per server OpenNebula/OCCI Ganglia In-house WIP Development N/A OCCI and partial EC2 provided by OpenNebula
Username and Password X.509 future As chosen by the users
(Miroslav Ruda)
Beta (pre-production) 10x (24 cores, 100GB RAM) + 44TB shared storage OpenNebula 3.0; to increase heterogeneity we could also add Eucalyptus 2.0 or Nimbus + a Cumulus interface Shared NFS filesystem, GridFTP remote access, S3 Cumulus implementation OpenNebula/OCCI Nagios infrastructure is ready; custom probes from other groups can be added quickly. Ganglia / Munin can be added on request. N/A? If a reporter for standard usage records is implemented, it can be deployed. N/A? STOMP-based EGI messaging infrastructure is available on the site OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 Open for discussion Username and password as a temporary solution, X509 certificates in the future In general up to the user; plan to support registered user SSH keys for root access
(Tomasz Szepieniec, Marcin Radecki)

for initial setup 12 servers ready, extensions depending on usage Most likely OpenNebula 3.0 Possibility for mounting iSCSI devices in VMs, others to be defined Web interface integrated with PL-Grid User Portal Nagios integration, experimenting with zabbix Planned for early 2012 integration with PL-Grid Accounting using OpenNebula 3 accounting components N/A

FZ Jülich
(B. Hagemeier)

1 Server (24 Cores, 24GB RAM, 1.5TB Disk) OpenStack 'Diablo' To be defined n/a depends on solution above Nagios n/a n/a

(Michel Jouvin)
Production 10 servers (240 cores) StratusLab iSCSI-based permanent disks n/a n/a n/a n/a Private (StratusLab) OCCI X509 certificates preferred, username and password also possible User SSH keys for root access (configured when VM is launched)
(Panos Louridas, Vangelis Floros)
Alpha 25 servers (200 cores, 48 GB RAM each server), 22 TB storage
Okeanos (GRNET OpenStack implementation) Local disks OpenStack compatible Nagios, Munin, collectd, scripts In house development
OpenStack, also complete web based environment
Shibboleth, invitation tokens User SSH
(Philipp Wieder)
Accessible October 23, 2011 As a start: 4 servers with dual-proc AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HD; more in early 2012 OpenNebula 3.2 with OCCI server Shared NFS OpenNebula Web interface (Sunstone) tbd (most likely Nagios) Currently n/a; usage of OpenNebula 3.2 accounting components planned for late 2011 n/a OCCI
Username and password, additionally X.509 in the future Up to the user, support for preregistered ssh keys in the future
(Giancinto Donvito, Paolo Veronesi)
work in progress 24 cores, 48 GB RAM, 2TB Disk WNoDeS Shared NFS filesystem Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII (work in progress) Nagios Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy Notification based on Nagios for system administrators (not for end users) OCCI CREAM Web Portal (authentication based on X509) expected in the next 2 months. Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4-6 months. GSI (Grid Security Infrastructure, based on X509 personal certificates and VO membership based on VOMS) SSH keys for root access
(Hélène Cordier, Gilles Mathieu, Mathieu Puel)
Testbed 16 x (24 cores, 96GB RAM, 2TB local disk) = 384 cores OpenStack Diablo Local disks undef Nagios, Collectd/Smurf undef undef EC2, OpenStack API 1.1, OCCI when available Username/password, X509 when available OpenSSH
(Zeeshan Ali Shah)
Accessible since January 2011 Initially 2 servers with a total of 4 cores, 16 GB RAM and 1TB storage OpenNebula Possibility to mount NFS storage OpenNebula Web interface with OCCI and OCA API Ganglia (need to experiment) N/A N/A OCCI Open for discussion Username and password (current), X509 (needs consensus within the Task Force) SSH keys
(David Wallom, Matteo Turilli)

10 servers, between 8 and 2 VMs each Deploying OpenStack Data supplied through S3/EBS capable storage services N/A NAGIOS based Developed service utilising extended OGF UR schema N/A Partial EC2 as implemented by OpenStack OCCI when available Username and password as implemented by OpenStack As chosen by the users
(Floris Sluiter, Maurice Bouwhuis, Machiel Jansen)
In production 1 January 2012 609 cores, 4.75 TB RAM OpenNebula 400 TB mountable storage, 10 TB local disk Web interface and Redmine portal Nagios, Ganglia OpenNebula (adapted) Based on Nagios OCCI and partial EC2 provided by OpenNebula Open for discussion Username/password, X509 planned User defines
(Ian Collier)

(David O'Callaghan, Stuart Kenny)

6 servers StratusLab, OpenNebula Shared NFS filesystem n/a Nagios n/a n/a

Technology Provider inventory

The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap.

For the description of the Capabilities please refer to the Cloud Integration Profile document.

Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted.

Provider Capabilities
VM Management Data Information Monitoring Accounting Notification
StratusLab (Cal Loomis) OpenNebula using the XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, and other methods should be easy to add Proprietary Persistent Disk Store with a RESTful interface (eventually also CDMI) Planned in the architecture, not implemented Planned in the architecture, not implemented Some functionality in OpenNebula; implementation for all StratusLab services planned but not yet implemented Prototype implementation in place; allows notification through AMQP if users provide messaging coordinates when starting a virtual machine
EGI-InSPIRE JRA1 (Daniele Cesini) None None None

EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems.

Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1.

No information discovery systems are developed within JRA1.

EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI DoW for TJRA1.4 details. None
WNoDeS (Davide Salomoni, Elisabetta Ronchieri) WNoDeS, with OCCI interface POSIX I/O planned on Lustre, NFS and GPFS as persistent storage Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using BDII Internal monitoring system for hypervisors, not yet integrated with NAGIOS probes Accounting at batch system level (e.g. LSF and PBS) and integration with the DGAS Accounting System used by the Italian Grid infrastructure
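Several providers above mention reporting usage through standard usage records (the OGF UR schema). As a hedged illustration of what such a record looks like, the sketch below emits a minimal UR-style XML document using only the Python standard library; the element names follow the UR schema, but the record ID and values are made up for the example.

```python
# Illustrative sketch only: building a minimal OGF Usage Record (UR) style
# XML document, of the kind an accounting reporter might publish.
# The namespace is the UR 1.0 schema namespace; the values are invented.
import xml.etree.ElementTree as ET

URF = "http://schema.ogf.org/urf/2003/09/urf"
ET.register_namespace("urf", URF)

record = ET.Element(f"{{{URF}}}UsageRecord")
# RecordIdentity carries a unique identifier for this usage record
ET.SubElement(record, f"{{{URF}}}RecordIdentity",
              {f"{{{URF}}}recordId": "vm-0001"})      # hypothetical ID
# CpuDuration is an ISO 8601 duration; PT3600S is one core-hour
ET.SubElement(record, f"{{{URF}}}CpuDuration").text = "PT3600S"

xml_text = ET.tostring(record, encoding="unicode")
```

A real reporter would add job/VM identity, wall-clock duration, and time stamps, and publish the record to the accounting infrastructure.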

Cloud Resources Status

The Task Force is developing a resource monitoring solution for the clouds federation based on Nagios. In the meantime, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. The table is updated weekly by the resource providers.
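A federation probe for such a Nagios-based monitor would follow the standard Nagios plugin convention: exit status 0/1/2 for OK/WARNING/CRITICAL and a one-line status message. The sketch below shows only that convention; the check itself (reachability and latency inputs, the 2-second threshold) is a hypothetical stand-in, not an actual Task Force probe.

```python
# Sketch of the Nagios plugin convention a federation probe would follow.
# Exit status 0/1/2 maps to OK/WARNING/CRITICAL; the first output line
# carries a short human-readable status message.

OK, WARNING, CRITICAL = 0, 1, 2

def check_endpoint(reachable, latency_s, warn_at=2.0):
    """Return a (status, message) pair in Nagios plugin style."""
    if not reachable:
        return CRITICAL, "CRITICAL: endpoint unreachable"
    if latency_s > warn_at:
        return WARNING, f"WARNING: slow response ({latency_s:.1f}s)"
    return OK, f"OK: response in {latency_s:.1f}s"

status, message = check_endpoint(reachable=True, latency_s=0.4)
print(message)
# A real plugin would end with sys.exit(status) so Nagios sees the result.
```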


[Cloud resources status table. In the original page, availability was rendered as icons (available / not available), which are not recoverable here. Columns: User registration, User access, VM availability, Elastic IPs, Object Storage, Persistent Storage. Surviving row details: OCCI v0.8 and OCCI v1.1 access; a "VM suse (storage/16) with NET public (network/4)" instance; Cumulus and GridFTP storage endpoints (addresses elided); rows for GWDG, FZ Jülich (mail to Björn Hagemeier), KTH, and SARA (NGI NL).]