
Fedcloud-tf:Testbed

Revision as of 23:02, 23 March 2012





== Technologies Distribution ==

The federation test bed does not mandate what VMM its resource providers should use. The federation adopts a set of well-defined functionalities and (standard) interfaces that every provider is free to implement independently.
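The idea of a common set of functionalities implemented independently by every provider can be sketched as a thin adapter layer. The class and method names below are purely illustrative and are not part of any actual federation API:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Illustrative sketch: every resource provider exposes the same
    agreed operations, regardless of which VMM it runs underneath."""

    @abstractmethod
    def start_vm(self, image_id: str) -> str: ...

    @abstractmethod
    def stop_vm(self, vm_id: str) -> None: ...

    @abstractmethod
    def list_vms(self) -> list: ...

class OpenNebulaProvider(CloudProvider):
    """Hypothetical adapter; a real one would speak OCA or OCCI to the VMM."""

    def __init__(self):
        self._vms = {}
        self._next = 0

    def start_vm(self, image_id):
        self._next += 1
        vm_id = f"one-{self._next}"
        self._vms[vm_id] = image_id  # record which image the VM was booted from
        return vm_id

    def stop_vm(self, vm_id):
        self._vms.pop(vm_id)

    def list_vms(self):
        return sorted(self._vms)
```

An OpenStack- or StratusLab-backed adapter would implement the same three methods, which is what lets clients target the federation rather than an individual provider.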


== How to obtain an account for the test bed ==

In the long term a federated AAI will be provided. In the meantime, you may create an account with each of the following providers individually.

{| class="wikitable"
! Provider !! Procedure to request an account
|-
| CESGA || Send an e-mail to grid-admin@cesga.es requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.
|-
| CESNET || Send an e-mail to fedcloud@metacentrum.cz asking for an account:
# the subject should contain "[FedCloud registration]";
# the body should contain your name/organisation and a contact e-mail address, optionally the DN from your X.509 EGI certificate.
|-
| FZ Jülich || Send an e-mail to Björn Hagemeier stating that you are a user of the EGI Federated Cloud Task Force and would like to have access to the resources.
|-
| GRNET || Send an e-mail to Panos Louridas requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.
|-
| GRIF || Register at https://register.stratuslab.eu:8444
|-
| GWDG || Send an e-mail to piotr.kasprzak@gwdg.de requesting an invitation, specifying that you are a user of the EGI Clouds Federation Task Force.
|-
| KTH || PDC Cloud (PDC2) is a cloud resource that gives users the flexibility to customise the system according to their needs, including the operating system, libraries and custom software, as long as they abide by the KTH computer usage rules. To apply for an account, please follow these instructions.
|-
| SARA || Send an e-mail to cloud-support requesting an account and mentioning the Federation Clouds test bed project. Users should give an indication of the resources needed (core hours, disk space, memory). By default 5000 core hours are allocated, but further resources can be granted upon review of the use cases.
|-
| CC-IN2P3 || Still in design/validation by the support teams. Likely to be available before the end of March.
|}
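The CESNET procedure, for instance, lends itself to scripting. A minimal sketch using Python's standard library; the recipient address and the required subject line come from the table above, while the helper name and the example sender details are illustrative:

```python
from email.message import EmailMessage

def build_cesnet_registration(name, organization, contact_email, dn=None):
    """Compose the registration e-mail described in the CESNET row above."""
    msg = EmailMessage()
    msg["To"] = "fedcloud@metacentrum.cz"
    msg["Subject"] = "[FedCloud registration]"  # required subject tag
    body = (f"Name/organisation: {name} / {organization}\n"
            f"Contact e-mail: {contact_email}\n")
    if dn:  # optionally include the DN from the X.509 EGI certificate
        body += f"X.509 DN: {dn}\n"
    msg.set_content(body)
    return msg
```

The resulting message can then be handed to `smtplib.SMTP.send_message` for delivery.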

== Endpoints ==

The management interface endpoints made available by the TF [[Fedcloud-tf:Members#Resource_Providers|Resource Providers]]:

{| class="wikitable"
! Provider !! Interface Type !! Endpoint
|-
| BSC || CDMI Proxy || http://bscgrid20.bsc.es:2365/
|-
| rowspan="6" | CESNET || OCA || https://carach5.ics.muni.cz:6443/RPC2
|-
| Sunstone || https://carach5.ics.muni.cz/
|-
| OCCI 0.8 || https://carach5.ics.muni.cz:9443/
|-
| OCCI 1.1 || http://carach5.ics.muni.cz:3333/
|-
| X.509 OCCI 1.1 || https://carach5.ics.muni.cz:10443/
|-
| CDMI Proxy || https://carach3.ics.muni.cz:8080/
|-
| GRNET || ||
|-
| rowspan="4" | CESGA || OCCI 0.8 || http://meghacloud.cesga.es:4569/
|-
| OCCI 1.1 (user/pass) || http://meghacloud.cesga.es:3200
|-
| OCCI 1.1 (X.509 auth) || https://meghacloud.cesga.es:3202
|-
| Sunstone || http://meghacloud.cesga.es:9869/
|-
| rowspan="2" | Cyfronet || OCCI 1.1 (X.509) || https://cloud-lab.grid.cyf-kr.edu.pl:3443/
|-
| OCCI 1.1 (user/pass) || http://cloud-lab.grid.cyf-kr.edu.pl:3200/
|-
| GRIF || StratusLab || cloud-lal.stratuslab.eu
|-
| rowspan="6" | GWDG || Sunstone || https://one.cloud.gwdg.de:8443
|-
| OCCI 0.8 || http://occi.cloud.gwdg.de:3400
|-
| OCCI 1.1 (user/pass) || http://occi.cloud.gwdg.de:3200
|-
| OCCI 1.1 (X.509) || https://occi.cloud.gwdg.de:3100
|-
| CDMI proxy (user/pass) || http://cdmi.cloud.gwdg.de:4001
|-
| CDMI proxy (X.509) || https://cdmi.cloud.gwdg.de:4000
|-
| rowspan="6" | KTH || OCCI 0.8 || http://front.pdc2.pdc.kth.se:4569/
|-
| OVF || https://front.pdc2.pdc.kth.se:8443/ovf4one
|-
| OCA || http://front.pdc2.pdc.kth.se:2633/
|-
| OCCI 1.1 (user/pass) || http://front.redcloud.pdc.kth.se:3000/
|-
| OCCI 1.1 (X.509 auth) || https://front.redcloud.pdc.kth.se:3043/
|-
| CDMI Proxy || http://cdmi.pdc2.pdc.kth.se:3300/
|-
| SARA || ||
|-
| rowspan="2" | FZ Jülich || OpenStack EC2 || http://egi-cloud.zam.kfa-juelich.de:8773
|-
| OpenStack S3 || http://egi-cloud.zam.kfa-juelich.de:3333
|-
| TCD || StratusLab OpenNebula proxy || https://cagnode42.cs.tcd.ie:2634
|-
| rowspan="3" | CC-IN2P3 || OpenStack EC2 || http://ccec2.in2p3.fr:8773/services/Cloud
|-
| OpenStack Nova API 1.1 || http://ccnovaapi.in2p3.fr:8774/v1.1/
|-
| OpenStack S3 || http://ccs3.in2p3.fr:3333
|}
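A quick way to check whether one of the OCCI endpoints above is responding is the OCCI query interface (`GET /-/`), which lists the categories a server supports. A minimal sketch using only the Python standard library; the function names are illustrative, and X.509-protected endpoints will refuse an unauthenticated probe:

```python
import urllib.request
import urllib.error

def occi_query_url(endpoint):
    """Discovery path defined by the OCCI query interface."""
    return endpoint.rstrip("/") + "/-/"

def probe_occi(endpoint, timeout=10):
    """Return (HTTP status, list of Category headers) from an OCCI endpoint."""
    req = urllib.request.Request(occi_query_url(endpoint),
                                 headers={"Accept": "text/occi"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.headers.get_all("Category") or []
    except urllib.error.HTTPError as exc:
        return exc.code, []  # reachable, but the request was refused (e.g. auth)
```

For example, `probe_occi("http://carach5.ics.muni.cz:3333")` would query CESNET's OCCI 1.1 endpoint.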

== Resource Providers inventory ==

The Resource Providers that have joined the Task Force make available a small portion of their cloud infrastructure in order to design and test the technologies described in the blueprint document for cloud federation. These resources are open to any user community interested in testing or using them.

For the description of the Capabilities please refer to the Cloud Integration Profile document.

Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted.

Each provider entry below lists: Status, Capacity, the Capabilities (VM Management, Data, Information, Monitoring, Accounting, Notification), the Management Interfaces (Supported and Planned), and the Authentication methods (Service layer and VMs).
CESGA (IBERgrid)
(Ivan Diaz, Esteban Freire)
Production 33 octo-core servers (264 CPUs) OpenNebula 3.0 Shared NFS/SSH, ~450 GB per server OpenNebula/OCCI Ganglia In-house (work in progress) N/A OCCI and partial EC2 provided by OpenNebula
Username and password, X.509 in the future. As chosen by the users
CESNET
(Miroslav Ruda)
Production 10x (24 cores, 100 GB RAM) + 44 TB shared storage OpenNebula 3.0 Shared NFS filesystem, GridFTP, S3 Cumulus, CDMI Proxy OpenNebula OCA/OCCI/ECONE + OGF-OCCI v1.1 Nagios infrastructure is ready, with custom probes for OpenNebula's OCCI, ECONE, OCA; Ganglia/Munin can be added on request. OpenNebula accounting daemon; if a reporter for standard usage records is implemented, it can be deployed. N/A? STOMP-based EGI messaging infrastructure is available on the site OCCI v0.8 and partial EC2 provided by OpenNebula + OGF-OCCI v1.1 Open for discussion Username and password, X.509 certificates for OGF-OCCI In general up to the user; currently registered SSH keys for root access to the VMs
Cyfronet
(Tomasz Szepieniec, Marcin Radecki)

For the initial setup 12 servers are ready, with extensions depending on usage Most likely OpenNebula 3.0 Possibility of mounting iSCSI devices in VMs, others to be defined Web interface integrated with the PL-Grid User Portal Nagios integration, experimenting with Zabbix Planned for early 2012: integration with PL-Grid Accounting using OpenNebula 3 accounting components N/A



FZ Jülich
(B. Hagemeier)

1 Server (24 Cores, 24GB RAM, 1.5TB Disk) OpenStack 'Diablo' To be defined n/a depends on solution above Nagios n/a n/a Partial EC2 as implemented by OpenStack OCCI when available Username and password as implemented by OpenStack User SSH keys for root access (configured when VM is launched)
GRIF
(Michel Jouvin)
Production 10 servers (240 cores) StratusLab iSCSI-based permanent disks n/a n/a n/a n/a Private (StratusLab) OCCI X509 certificates preferred, username and password also possible User SSH keys for root access (configured when VM is launched)
GRNET
(Panos Louridas, Vangelis Floros)
Alpha 25 servers (200 cores, 48 GB RAM each server), 22 TB storage
Okeanos (GRNET OpenStack implementation) Local disks OpenStack compatible Nagios, Munin, collectd, scripts In house development
OpenStack, also complete web based environment
Shibboleth, invitation tokens User SSH
GWDG
(Philipp Wieder)
Accessible October 23, 2011 As a start: 4 servers with dual-processor AMD quad-core "Barcelona", 2.4 GHz, 16 GB RAM, 250 GB HDD; more at the beginning of 2012 OpenNebula 3.2 with OCCI server Shared NFS OpenNebula Web interface (Sunstone) tbd (most likely Nagios) Currently n/a; usage of OpenNebula 3.2 accounting components planned for late 2011 n/a OCCI
Username and password, additionally X.509 in the future Up to the user, support for preregistered SSH keys in the future
IGI
(Giancinto Donvito, Paolo Veronesi)
Work in progress 24 cores, 48 GB RAM, 2 TB disk WNoDeS Shared NFS filesystem Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using the BDII (work in progress) Nagios Accounting at batch system level (PBS), integrated with the DGAS Accounting System used for the Grid infrastructure in Italy Notification based on Nagios for system administrators (not for end users) OCCI CREAM Web Portal (authentication based on X.509) expected in the next 2 months; a Federated Single Sign-On Authentication Service (based on Shibboleth) should be supported in the next 4 to 6 months. GSI (Grid Security Infrastructure, based on X.509 personal certificates and VO membership based on VOMS) SSH keys for root access
CC-IN2P3
(Helene Cordier, Gille Mathieu, Mattieu Puel)
Testbed 16 x (24 cores, 96 GB RAM, 2 TB local disk) = 384 cores OpenStack Diablo Local disks undef Nagios, Collectd/Smurf undef undef EC2, OpenStack API 1.1 OCCI when available Username/password, X.509 when available OpenSSH
KTH
(Zeeshan Ali Shah)
Accessible since January 2011 Initially 2 servers with a total of 4 cores, 16 GB RAM and 1 TB storage OpenNebula Possibility of mounting NFS storage OpenNebula Web interface with OCCI and OCA APIs Ganglia (experimentation needed) N/A N/A OCCI Open for discussion Username and password, X.509 SSH keys
OeRC (UK NGI)
(David Wallom, Matteo Turilli)

10 servers, between 8 and 2 VMs each Deploying OpenStack Data supplied through S3/EBS capable storage services N/A NAGIOS based Developed service utilising extended OGF UR schema N/A Partial EC2 as implemented by OpenStack OCCI when available Username and password as implemented by OpenStack As chosen by the users
SARA
(Floris Sluiter, Maurice Bouwhuis, Machiel Jansen)
In production since 1 January 2012 609 cores, 4.75 TB RAM OpenNebula 400 TB mountable storage, 10 TB local disk Web interface and Redmine portal Nagios, Ganglia OpenNebula (adapted) Based on Nagios OCCI and partial EC2 provided by OpenNebula Open for discussion Username/password, X.509 planned User-defined
STFC
(Ian Collier)
TCD
(David O'Callaghan, Stuart Kenny)
Testing 5 x dual quad core with 16GB RAM StratusLab, OpenNebula Shared NFS filesystem, 1.5 TB StratusLab web-monitor, Sunstone Nagios n/a n/a StratusLab, OpenNebula
X509, Username and password User SSH key for root access

== Technology Provider inventory ==

The Technology Providers of the Task Force offer support for the technologies that they develop and evaluate further development in accordance with the federation roadmap.

For the description of the Capabilities please refer to the Cloud Integration Profile document.

Please note: wherever suitable and possible, any standards implemented by the adopted cloud software should be noted.

Each Technology Provider entry below lists the Capabilities: VM Management, Data, Information, Monitoring, Accounting and Notification.
StratusLab (Cal Loomis) OpenNebula using the XML-RPC interface (eventually OCCI); standard OpenNebula VM description for files (eventually OVF); authentication options are username and password, grid certificates and VOMS proxies, and other methods should be easy to add Proprietary Persistent Disk Store with a RESTful interface (eventually also CDMI) Planned in the architecture, not implemented Planned in the architecture, not implemented Some functionality in OpenNebula; implementation for all StratusLab services planned but not yet implemented Prototype implementation in place, allowing notification through AMQP if users provide messaging coordinates when starting a virtual machine
EGI-InSPIRE JRA1 (Daniele Cesini) None None None

EGI-JRA1 offers a NAGIOS probe integration capability. No NAGIOS probe development is foreseen within JRA1 (with the exception of very few cases); Technology Providers are expected to produce NAGIOS probes for their own systems.

Availability and Reliability calculation and reporting for sites and services are currently produced outside EGI-JRA1.

No information discovery systems are developed within JRA1.

EGI-JRA1 contains a task (TJRA1.4) responsible for the development of an accounting system capable of encompassing the new resource types that will appear in the EGI infrastructure including virtualised resources. See the EGI DoW for TJRA1.4 details. None
WNoDeS (Davide Salomoni, Elisabetta Ronchieri) WNoDeS, with an OCCI interface POSIX I/O planned on Lustre, NFS and GPFS as persistent storage Usage of the SoftwareRunTimeEnvironment attribute for publishing VM information by using the BDII Internal monitoring system for hypervisors, not yet integrated with NAGIOS probes Accounting at batch system level (such as LSF and PBS) and integration with the DGAS Accounting System used by the Italian Grid infrastructure


== Cloud Resources Status ==

The Task Force is developing a resource monitoring solution for the cloud federation based on Nagios. In the meantime, the table below shows the current status of the cloud resources made available by the resource providers that have joined the Task Force. The table is updated weekly by the resource providers.
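Until the federation-wide monitor exists, a provider can expose its endpoints through a Nagios-compatible check, whose contract is essentially the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal sketch, not an actual Task Force probe; the endpoint to check is passed on the command line:

```python
import sys
import urllib.request
import urllib.error

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3  # standard Nagios plugin exit codes

def check_endpoint(url, timeout=10):
    """Return an (exit_code, message) pair in Nagios plugin style."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return OK, f"OK - {url} answered with HTTP {resp.status}"
    except urllib.error.HTTPError as exc:
        # The service is up but refused the request (e.g. auth required).
        return WARNING, f"WARNING - {url} answered with HTTP {exc.code}"
    except (urllib.error.URLError, OSError) as exc:
        return CRITICAL, f"CRITICAL - {url} unreachable: {exc}"

if __name__ == "__main__":
    code, message = check_endpoint(sys.argv[1])
    print(message)
    sys.exit(code)
```

Nagios schedules such a plugin periodically and raises alarms on the WARNING/CRITICAL exit codes.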

{| class="wikitable"
! Provider !! User registration !! User access !! VM availability !! Elastic IPs !! Object Storage !! Persistent Storage
|-
| CESGA (IBERgrid) || || || || || ||
|-
| CESNET (NGI CZ) || [1] || Sunstone, OCCI v0.8, OCCI v1.1 || VM suse (storage/16) with NET public (network/4) || No || Cumulus at carach3.ics.muni.cz:8888 || GridFTP at carach4.ics.muni.cz:50000
|-
| CYFRONET (NGI PL) || || || || || ||
|-
| GWDG || [2] || || || || ||
|-
| FZ Jülich || Mail to Björn Hagemeier || || || 134.94.32.33 - 134.94.32.40 || ||
|-
| IGI || || || || || ||
|-
| IN2P3 (NGI FR) || || || || || ||
|-
| KTH || [3] || || || || ||
|-
| OerC (UK NGI) || || || || || ||
|-
| SARA (NGI NL) || [4] || || || || ||
|-
| TCD (NGI IE) || || || || || ||
|}